Sample records for model selection maps

  1. Statistical density modification using local pattern matching

    DOEpatents

    Terwilliger, Thomas C.

    2007-01-23

    A computer-implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided, and standard templates of electron density are created from these maps by clustering and averaging values of electron density in a spherical region about each point in the grid that defines each map. Histograms are also created from the selected experimental and model electron density maps; these relate the value of electron density at the center of each spherical region to a correlation coefficient of the density surrounding the corresponding grid point in each of the standard templates. The standard templates and histograms are applied to grid points on the experimental electron density map to form new estimates of electron density at each grid point.

  2. The research of selection model based on LOD in multi-scale display of electronic map

    NASA Astrophysics Data System (ADS)

    Zhang, Jinming; You, Xiong; Liu, Yingzhen

    2008-10-01

    This paper proposes a selection model based on level of detail (LOD) to aid the multi-scale display of electronic maps. The ratio of display scale to map scale is taken as the LOD operator. Rules for setting the LOD operator are also derived: a categorization rule, a classification rule, an elementary rule, and a spatial geometry character rule.

  3. Execution models for mapping programs onto distributed memory parallel computers

    NASA Technical Reports Server (NTRS)

    Sussman, Alan

    1992-01-01

    The problem of exploiting the parallelism available in a program to efficiently employ the resources of the target machine is addressed. The problem is discussed in the context of building a mapping compiler for a distributed memory parallel machine. The paper describes using execution models to drive the process of mapping a program in the most efficient way onto a particular machine. Through analysis of the execution models for several mapping techniques for one class of programs, we show that the selection of the best technique for a particular program instance can make a significant difference in performance. Furthermore, benchmark results from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.

  4. Automating the selection of standard parallels for conic map projections

    NASA Astrophysics Data System (ADS)

    Šavrič, Bojan; Jenny, Bernhard

    2016-05-01

    Conic map projections are appropriate for mapping regions at medium and large scales with east-west extents at intermediate latitudes. Conic projections are appropriate for these cases because they show the mapped area with less distortion than other projections. In order to minimize the distortion of the mapped area, the two standard parallels of conic projections need to be selected carefully. Rules of thumb exist for placing the standard parallels based on the width-to-height ratio of the map. These rules of thumb are simple to apply, but do not result in maps with minimum distortion. There also exist more sophisticated methods that determine standard parallels such that distortion in the mapped area is minimized. These methods are computationally expensive and cannot be used for real-time web mapping and GIS applications where the projection is adjusted automatically to the displayed area. This article presents a polynomial model that quickly provides the standard parallels for the three most common conic map projections: the Albers equal-area, the Lambert conformal, and the equidistant conic projection. The model defines the standard parallels with polynomial expressions based on the spatial extent of the mapped area. The spatial extent is defined by the length of the mapped central meridian segment, the central latitude of the displayed area, and the width-to-height ratio of the map. The polynomial model was derived from 3825 maps, each with a different spatial extent and with computationally determined standard parallels that minimize the mean scale distortion index. The resulting model is computationally simple and can be used for the automatic selection of the standard parallels of conic map projections in GIS software and web mapping applications.

  5. Flood-inundation maps and updated components for a flood-warning system for the City of Marietta, Ohio, and selected communities along the Lower Muskingum River and Ohio River

    USGS Publications Warehouse

    Whitehead, Matthew T.; Ostheimer, Chad J.

    2014-01-01

    Flood profiles for selected reaches were prepared by calibrating steady-state step-backwater models to selected streamgage rating curves. The step-backwater models were used to determine water-surface-elevation profiles for up to 12 flood stages at a streamgage, with corresponding streamflows ranging from approximately the 10- to 0.2-percent annual-exceedance probabilities, for each of the three streamgages that correspond to the flood-inundation maps. Additional hydraulic modeling was used to account for the effects of backwater from the Ohio River on water levels in the Muskingum River. The computed longitudinal profiles of flood levels were used with a Geographic Information System digital elevation model (derived from light detection and ranging) to delineate flood-inundation areas. Digital maps showing flood-inundation areas overlain on digital orthophotographs were prepared for the selected floods.

  6. Mapping of the stochastic Lotka-Volterra model to models of population genetics and game theory

    NASA Astrophysics Data System (ADS)

    Constable, George W. A.; McKane, Alan J.

    2017-08-01

    The relationship between the M -species stochastic Lotka-Volterra competition (SLVC) model and the M -allele Moran model of population genetics is explored via timescale separation arguments. When selection for species is weak and the population size is large but finite, precise conditions are determined for the stochastic dynamics of the SLVC model to be mappable to the neutral Moran model, the Moran model with frequency-independent selection, and the Moran model with frequency-dependent selection (equivalently a game-theoretic formulation of the Moran model). We demonstrate how these mappings can be used to calculate extinction probabilities and the times until a species' extinction in the SLVC model.

  7. Mapping Resource Selection Functions in Wildlife Studies: Concerns and Recommendations

    PubMed Central

    Morris, Lillian R.; Proffitt, Kelly M.; Blackburn, Jason K.

    2018-01-01

    Predicting the spatial distribution of animals is an important and widely used tool with applications in wildlife management, conservation, and population health. Wildlife telemetry technology, coupled with the availability of spatial data and GIS software, has facilitated advancements in species distribution modeling. These advancements also bring challenges, including the accurate and appropriate implementation of species distribution modeling methodology. Resource Selection Function (RSF) modeling is a commonly used approach for understanding species distributions and habitat usage, and mapping the RSF results can enhance study findings and make them more accessible to researchers and wildlife managers. Currently, there is no consensus in the literature on the most appropriate method for mapping RSF results, methods are frequently not described, and mapping approaches are not always related to accuracy metrics. We conducted a systematic review of the RSF literature to summarize the methods used to map RSF outputs, discuss the relationship between mapping approaches and accuracy metrics, perform a case study on the implications of employing different mapping methods, and provide recommendations as to appropriate mapping techniques for RSF studies. We found extensive variability in methodology for mapping RSF results. Our case study revealed that the most commonly used approaches for mapping RSF results led to notable differences in the visual interpretation of those results, and that there is a concerning disconnect between accuracy metrics and mapping methods. We make 5 recommendations for researchers mapping the results of RSF studies, focused on carefully selecting and describing the mapping method used, and on relating mapping approaches to accuracy metrics. PMID:29887652
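    The review's central concern, that the mapping method itself changes how RSF results read, can be illustrated with a toy example. The exponential form w(x) = exp(β·x) is the standard RSF, but the coefficients, covariates, and class counts below are hypothetical illustrations, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: coefficients and covariates are illustrative only.
beta = np.array([0.8, -0.5])               # e.g. forest cover (+), slope (-)
covariates = rng.normal(size=(10_000, 2))  # standardized covariates, one row per map cell
w = np.exp(covariates @ beta)              # exponential RSF: relative selection strength

# Two common choices for turning RSF values into map classes:
edges_equal = np.linspace(w.min(), w.max(), 11)[1:-1]      # 10 equal-interval classes
edges_quant = np.quantile(w, np.linspace(0, 1, 11)[1:-1])  # 10 equal-area (quantile) classes
equal_interval = np.digitize(w, edges_equal)
quantile_bins = np.digitize(w, edges_quant)
```

    Because RSF values are strongly right-skewed, nearly all cells fall into the lowest equal-interval class while the quantile classes stay balanced; this is exactly the kind of visual divergence between mapping methods that the review documents.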

  8. Flood-hazard mapping in Honduras in response to Hurricane Mitch

    USGS Publications Warehouse

    Mastin, M.C.

    2002-01-01

    The devastation in Honduras due to flooding from Hurricane Mitch in 1998 prompted the U.S. Agency for International Development, through the U.S. Geological Survey, to develop a country-wide systematic approach to flood-hazard mapping and a demonstration of the method at selected sites as part of a reconstruction effort. The design discharge chosen for flood-hazard mapping was the flood with an average return interval of 50 years; this selection was based on discussions with the U.S. Agency for International Development and the Honduran Public Works and Transportation Ministry. A regression equation for estimating the 50-year flood discharge, using drainage area and annual precipitation as the explanatory variables, was developed based on data from 34 long-term gaging sites. This equation, which has a standard error of prediction of 71.3 percent, was used in a geographic information system to estimate the 50-year flood discharge at any location on any river in the country. The flood-hazard mapping method was demonstrated at 15 selected municipalities. High-resolution digital elevation models of the floodplain were obtained using an airborne laser-terrain mapping system. Field verification showed that the digital elevation models had mean absolute errors ranging from -0.57 to 0.14 meter in the vertical dimension. From these models, water-surface elevation cross sections were obtained and used in a numerical, one-dimensional, steady-flow step-backwater model to estimate water-surface profiles corresponding to the 50-year flood discharge. From these water-surface profiles, maps of area and depth of inundation were created for 13 of the 15 selected municipalities. At La Lima, only the area and depth of inundation at channel capacity within the city were mapped. At Santa Rosa de Aguán, no numerical model was created; the 50-year flood and the maps of area and depth of inundation are based on the estimated 50-year storm tide.
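    The regression step described above can be sketched in a few lines. The power-law form (linear in log space) is standard for regional flood-frequency equations, but the coefficients below are hypothetical placeholders, not the fitted values from the Honduras study.

```python
def q50_estimate(area_km2, precip_mm, a=0.05, b=0.8, c=0.6):
    """Sketch of a regional 50-year flood regression, Q50 = a * A^b * P^c,
    with drainage area A (km^2) and mean annual precipitation P (mm).
    The coefficients a, b, c are placeholders; in practice they are fitted
    by least squares in log space (log Q50 = log a + b log A + c log P)
    using observations from long-term gaging sites."""
    return a * area_km2 ** b * precip_mm ** c
```

    Applied on a GIS grid of accumulated drainage area and precipitation, such an equation gives a 50-year discharge estimate at any point on any river.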

  9. On Voxel based Iso-Tumor Control Probability and Iso-Complication Maps for Selective Boosting and Selective Avoidance Intensity Modulated Radiotherapy.

    PubMed

    Kim, Yusung; Tomé, Wolfgang A

    2008-01-01

    Voxel based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool, especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an EUD = 84 Gy (equivalent uniform dose) to the entire PTV and selective boosting delivering an EUD = 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over iso-dose maps. Conventional uniform IMRT yielded a more uniform isodose map over the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when employing voxel based iso-TCP maps, selective boosting exhibited a more uniform tumor control probability map than conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that as the need for functional image guided treatment planning grows, voxel based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans.
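    The plans above are compared by equivalent uniform dose (EUD). A common phenomenological form, which may differ in detail from the exact dose-response model used in the paper, is Niemierko's generalized EUD over equal-volume voxels:

```python
def generalized_eud(voxel_doses, a):
    """Niemierko's generalized EUD: (mean of d_i^a)^(1/a) over equal-volume
    voxels. Large negative `a` (tumors) lets cold spots dominate; large
    positive `a` (serial organs) lets hot spots dominate. A textbook form,
    not necessarily the paper's exact model."""
    n = len(voxel_doses)
    return (sum(d ** a for d in voxel_doses) / n) ** (1.0 / a)
```

    A uniform 80 Gy plan has EUD = 80 Gy for any `a`, while a single cold voxel drags a tumor EUD (negative `a`) sharply downward, mirroring the TCP cold spots the abstract describes.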

  10. Using variable rate models to identify genes under selection in sequence pairs: their validity and limitations for EST sequences.

    PubMed

    Church, Sheri A; Livingstone, Kevin; Lai, Zhao; Kozik, Alexander; Knapp, Steven J; Michelmore, Richard W; Rieseberg, Loren H

    2007-02-01

    Using likelihood-based variable selection models, we determined whether positive selection was acting on 523 EST sequence pairs from two lineages of sunflower and lettuce. Variable rate models are generally not used for comparisons of sequence pairs because of the limited information available and the inaccuracy of estimates of site-specific substitution rates. However, previous studies have shown that the likelihood ratio test (LRT) is reliable for detecting positive selection, even with low numbers of sequences. These analyses identified 56 genes that show a signature of selection, of which 75% were not identified by simpler models that average selection across codons. Subsequent mapping studies in sunflower show that four of five positively selected genes identified by these methods map to domestication QTLs. We discuss the validity and limitations of using variable rate models for comparisons of sequence pairs, as well as the limitations of using ESTs for identification of positively selected genes.
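    The LRT referred to above reduces to a one-line computation once the two model fits are in hand. A minimal sketch, assuming log-likelihoods from a codon-model fit and the 2-degree-of-freedom chi-square threshold of the common M1a-versus-M2a comparison:

```python
def likelihood_ratio_test(lnL_null, lnL_alt, critical_value=5.99):
    """Compare a null model (no positive selection) against an alternative
    allowing sites with dN/dS > 1. The default critical value is the
    chi-square 5% threshold for 2 degrees of freedom, as in the common
    M1a-vs-M2a comparison; the log-likelihoods would come from a
    codon-model fit (e.g. with PAML)."""
    stat = 2.0 * (lnL_alt - lnL_null)
    return stat, stat > critical_value
```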

  11. On Voxel based Iso-Tumor Control Probability and Iso-Complication Maps for Selective Boosting and Selective Avoidance Intensity Modulated Radiotherapy

    PubMed Central

    Kim, Yusung; Tomé, Wolfgang A.

    2010-01-01

    Voxel based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool, especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an EUD = 84 Gy (equivalent uniform dose) to the entire PTV and selective boosting delivering an EUD = 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over iso-dose maps. Conventional uniform IMRT yielded a more uniform isodose map over the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when employing voxel based iso-TCP maps, selective boosting exhibited a more uniform tumor control probability map than conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that as the need for functional image guided treatment planning grows, voxel based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans. PMID:21151734

  12. Mapping landslide susceptibility using data-driven methods.

    PubMed

    Zêzere, J L; Pereira, S; Melo, R; Oliveira, S C; Garcia, R A C

    2017-07-01

    Most epistemic uncertainty within data-driven landslide susceptibility assessment results from errors in landslide inventories, difficulty in identifying and mapping landslide causes, and decisions related to the modelling procedure. In this work we evaluate and discuss differences observed on landslide susceptibility maps resulting from: (i) the selection of the statistical method; (ii) the selection of the terrain mapping unit; and (iii) the selection of the feature type used to represent landslides in the model (polygon versus point). The work is performed in a single study area (Silveira Basin, 18.2 km2, Lisbon Region, Portugal) using a unique database of geo-environmental landslide predisposing factors and an inventory of 82 shallow translational slides. Logistic regression, discriminant analysis and two versions of the information value method were used, and we conclude that multivariate statistical methods perform better when computed over heterogeneous terrain units and should be selected to assess landslide susceptibility based on slope terrain units, geo-hydrological terrain units or census terrain units. However, evidence was found that the chosen terrain mapping unit can produce greater differences in the final susceptibility results than the chosen statistical method. Landslide susceptibility should be assessed over grid-cell terrain units whenever the spatial accuracy of the landslide inventory is good. In addition, a single point per landslide proved efficient for generating accurate landslide susceptibility maps, provided the landslides are small, thus minimizing possible heterogeneities of predisposing factors within the landslide boundary. Although in recent years ROC curves have been preferred for evaluating susceptibility model performance, evidence was found that the model with the highest AUC ROC is not necessarily the best landslide susceptibility model, namely when terrain mapping units are heterogeneous in size and reduced in number. Copyright © 2017 Elsevier B.V. All rights reserved.
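    The information value method mentioned above has a compact textbook form. The sketch below shows the general idea for one predisposing-factor class; the paper's two variants are not reproduced exactly.

```python
import math

def information_value(slides_in_class, cells_in_class, slides_total, cells_total):
    """Information value of one factor class: the log ratio of the landslide
    density within the class to the overall landslide density. Positive
    values mark classes that favor sliding."""
    class_density = slides_in_class / cells_in_class
    overall_density = slides_total / cells_total
    if class_density == 0.0:
        return float("-inf")  # no recorded landslides in this class
    return math.log(class_density / overall_density)

# A terrain unit's susceptibility score is then the sum of the information
# values of the classes it falls into, across all predisposing factors.
```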

  13. A Probabilistic Strategy for Understanding Action Selection

    PubMed Central

    Kim, Byounghoon; Basso, Michele A.

    2010-01-01

    Brain regions involved in transforming sensory signals into movement commands are the likely sites where decisions are formed. Once formed, a decision must be read out from the activity of populations of neurons to produce a choice of action. How this occurs remains unresolved. We recorded from four superior colliculus (SC) neurons simultaneously while monkeys performed a target selection task. We implemented three models to gain insight into the computational principles underlying population coding of action selection. We compared the population vector average (PVA), winner-takes-all (WTA) and a Bayesian model, the maximum a posteriori (MAP) estimate, to determine which predicted choices most often. The probabilistic model predicted more trials correctly than both the WTA and the PVA. The MAP model predicted 81.88% of trials, whereas the WTA predicted 71.11% and the PVA and optimal linear estimator (OLE) predicted the fewest, at 55.71% and 69.47%. Recovering MAP estimates using simulated, non-uniform priors that correlated with monkeys' choice performance improved the accuracy of the model by 2.88%. A dynamic analysis revealed that the MAP estimate evolved over time and that the posterior probability of the saccade choice reached a maximum at the time of the saccade. MAP estimates also scaled with choice performance accuracy. Although there was overlap in the prediction abilities of all the models, we conclude that movement choice from populations of neurons may be best understood by considering frameworks based on probability. PMID:20147560
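    The three decoders can be contrasted on a toy population. Everything numeric below (four neurons, the tuning curve, the flat prior, Poisson spike counts) is a hypothetical sketch, not the recorded data or the paper's fitted models.

```python
import numpy as np

preferred = np.array([0.0, 90.0, 180.0, 270.0])  # preferred target angles (deg)
targets = preferred.copy()                        # candidate saccade choices

def tuning(angle_deg):
    """Mean firing rate of each neuron for a saccade to `angle_deg`
    (a von Mises-like bump on a 5 spikes/s baseline)."""
    d = np.deg2rad(angle_deg - preferred)
    return 5.0 + 20.0 * np.exp(np.cos(d) - 1.0)

def decode(rates):
    # Winner-takes-all: the preferred target of the most active neuron.
    wta = preferred[int(np.argmax(rates))]
    # Population vector average: rate-weighted circular mean of preferred angles.
    ang = np.deg2rad(preferred)
    pva = np.rad2deg(np.arctan2((rates * np.sin(ang)).sum(),
                                (rates * np.cos(ang)).sum())) % 360.0
    # MAP with a flat prior and independent Poisson spike counts:
    # log P(target | rates) is, up to a constant, sum_i [r_i log(lambda_i) - lambda_i].
    logpost = [float(np.sum(rates * np.log(tuning(t)) - tuning(t))) for t in targets]
    map_est = targets[int(np.argmax(logpost))]
    return wta, pva, map_est

wta, pva, map_est = decode(tuning(90.0))  # noiseless response to a 90-degree target
```

    On noiseless responses all three decoders agree; their predictions diverge once trial-to-trial spiking noise is added and the prior is made non-uniform, which is the regime the study evaluates.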

  14. Comparing Selections of Environmental Variables for Ecological Studies: A Focus on Terrain Attributes.

    PubMed

    Lecours, Vincent; Brown, Craig J; Devillers, Rodolphe; Lucieer, Vanessa L; Edinger, Evan N

    2016-01-01

    Selecting appropriate environmental variables is a key step in ecology. Terrain attributes (e.g. slope, rugosity) are routinely used as abiotic surrogates of species distribution and to produce habitat maps that can be used in decision-making for conservation or management. Selecting appropriate terrain attributes for ecological studies can be a challenging process that leads users to a subjective, potentially sub-optimal combination of attributes for their applications. The objective of this paper is to assess the impacts of subjectively selecting terrain attributes for ecological applications by comparing the performance of different combinations of terrain attributes in the production of habitat maps and species distribution models. Seven different selections of terrain attributes, alone or in combination with other environmental variables, were used to map benthic habitats of German Bank (off Nova Scotia, Canada). 29 maps of potential habitats based on unsupervised classifications of biophysical characteristics of German Bank were produced, and 29 species distribution models of sea scallops were generated using MaxEnt. The performances of the 58 maps were quantified and compared to evaluate the effectiveness of the various combinations of environmental variables. One combination of terrain attributes, recommended in a related study and comprising a measure of relative position, slope, two measures of orientation, topographic mean, and a measure of rugosity, yielded better results than the other selections for both methodologies, confirming that these attributes together best describe terrain properties. Important differences in performance (up to 47% in accuracy measurement) and spatial outputs (up to 58% in spatial distribution of habitats) highlighted the importance of carefully selecting variables for ecological applications.
This paper demonstrates that making a subjective choice of variables may reduce map accuracy and produce maps that do not adequately represent habitats and species distributions, thus having important implications when these maps are used for decision-making.

  15. Retrieval and Mapping of Heavy Metal Concentration in Soil Using Time Series Landsat 8 Imagery

    NASA Astrophysics Data System (ADS)

    Fang, Y.; Xu, L.; Peng, J.; Wang, H.; Wong, A.; Clausi, D. A.

    2018-04-01

    Heavy metal pollution is a critical global environmental problem and a continuing concern. The traditional approach to obtaining heavy metal concentrations, field sampling followed by laboratory testing, is expensive and time consuming. Many studies instead build relational models between heavy metal concentration and spectrometer measurements and then apply those models to hyperspectral imagery, but discrepancies between spectrometer data and remote sensing imagery make it difficult to map soil metal concentrations over an area quickly and accurately. Taking advantage of the easy accessibility of Landsat 8 data, this study uses Landsat 8 imagery to retrieve soil Cu concentration and map its distribution in the study area. To enlarge the spectral information for more accurate retrieval and mapping, 11 single-date Landsat 8 images from 2013-2017 are combined into a time series. Three regression methods, partial least squares regression (PLSR), artificial neural networks (ANN) and support vector regression (SVR), are used for model construction. After an unbiased comparison of these models, the best model is selected to map the Cu concentration distribution. The produced distribution map shows good spatial autocorrelation and consistency with the locations of the mining areas.

  16. Custom map projections for regional groundwater models

    USGS Publications Warehouse

    Kuniansky, Eve L.

    2017-01-01

    For regional groundwater flow models (areas greater than 100,000 km2), improper choice of map projection parameters can result in model error for boundary conditions dependent on area (recharge or evapotranspiration simulated by applying a rate to the cell area from the model discretization) and length (rivers simulated with a head-dependent flux boundary). Smaller model areas can use local map coordinates, such as State Plane (United States) or Universal Transverse Mercator (correct zone), without introducing large errors. Map projections vary in order to preserve one or more of the following properties: area, shape, distance (length), or direction. Numerous map projections have been developed for different purposes because all four properties cannot be preserved simultaneously. Preservation of area and length is most critical for groundwater models. The Albers equal-area conic projection with custom standard parallels, selected by dividing the north-south extent by 6 and placing the standard parallels one-sixth above the southern limit and one-sixth below the northern limit, preserves both area and length for continental areas in mid latitudes oriented east-west. Custom map projection parameters can also minimize area and length error in non-ideal projections. Additionally, one must use consistent vertical and horizontal datums for all geographic data. The generalized polygon for the Floridan aquifer system study area (306,247.59 km2) is used to provide quantitative examples of the effect of map projections on length and area with different projections and parameter choices. Use of an improper map projection is one model construction problem that is easily avoided.
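    The one-sixth rule described above is a direct computation (latitudes in decimal degrees):

```python
def albers_standard_parallels(lat_south, lat_north):
    """Standard parallels by the one-sixth rule: divide the north-south
    extent by 6 and place the parallels one-sixth above the southern limit
    and one-sixth below the northern limit."""
    sixth = (lat_north - lat_south) / 6.0
    return lat_south + sixth, lat_north - sixth
```

    For a model area spanning roughly 24°N to 48°N, the rule yields standard parallels of 28°N and 44°N, close to the conventional values used for conterminous United States maps.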

  17. Developing a spatial-statistical model and map of historical malaria prevalence in Botswana using a staged variable selection procedure

    PubMed Central

    Craig, Marlies H; Sharp, Brian L; Mabaso, Musawenkosi LH; Kleinschmidt, Immo

    2007-01-01

    Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa) project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Variables correlated with higher-ranking members of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally a Bayesian geo-statistical model using Markov chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country.
Conclusion We have produced a highly plausible and parsimonious model of historical malaria risk for Botswana from point-referenced data from a 1961/2 prevalence survey of malaria infection in 1–14 year old children. After starting with a list of 50 potential variables we ended with three highly plausible predictors, by applying a systematic and repeatable staged variable selection procedure that included a spatial analysis, which has application for other environmentally determined infectious diseases. All this was accomplished using general-purpose statistical software. PMID:17892584
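    The bootstrap selection-frequency stage can be sketched on synthetic data. The "step-wise" selector below is a deliberately crude stand-in (an absolute-correlation threshold) for the automated procedure the authors ran, and the predictors are simulated, since the MARA data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: 3 informative predictors and 3 pure-noise predictors.
n, p = 200, 6
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(size=n)

def stepwise_stub(Xb, yb, threshold=0.3):
    """Crude placeholder for a step-wise selection run: keep predictors whose
    absolute correlation with the outcome exceeds a threshold."""
    return {j for j in range(Xb.shape[1])
            if abs(np.corrcoef(Xb[:, j], yb)[0, 1]) > threshold}

# Rank candidates by how often they are selected across bootstrap resamples.
counts = np.zeros(p)
for _ in range(1000):
    idx = rng.integers(0, n, size=n)        # bootstrap resample with replacement
    for j in stepwise_stub(X[idx], y[idx]):
        counts[j] += 1
ranking = np.argsort(-counts)               # selection-frequency ranking
```

    Informative predictors are selected in nearly every resample while noise predictors are selected rarely, which is what makes selection frequency a usable ranking criterion for the subsequent step-wise model building.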

  18. Generative Topographic Mapping of Conformational Space.

    PubMed

    Horvath, Dragos; Baskin, Igor; Marcou, Gilles; Varnek, Alexandre

    2017-10-01

    Herein, Generative Topographic Mapping (GTM) was challenged to produce planar projections of the high-dimensional conformational space of complex molecules (the 1LE1 peptide). GTM is a probability-based mapping strategy, and its capacity to support property prediction models serves to objectively assess map quality (in terms of regression statistics). The properties to predict were total, non-bonded and contact energies, surface area and fingerprint darkness. Map building and selection were controlled by a previously introduced evolutionary strategy allowed to choose the best-suited conformational descriptors, with options including classical terms and novel atom-centric autocorrelograms. The latter condense interatomic distance patterns into descriptors of rather low dimensionality, yet precise enough to differentiate between close favorable contacts and atom clashes. A subset of 20 K conformers of the 1LE1 peptide, randomly selected from a pool of 2 M geometries (generated by the S4MPLE tool), was employed for map building and cross-validation of property regression models. The GTM build-up challenge reached robust three-fold cross-validated determination coefficients of Q2 = 0.7-0.8 for all modeled properties. Mapping of the full 2 M conformer set produced intuitive and information-rich property landscapes. Functional and folding subspaces appear as well-separated zones, even though RMSD with respect to the PDB structure was never used as a selection criterion for the maps. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Cognitive Processes in Orienteering: A Review.

    ERIC Educational Resources Information Center

    Seiler, Roland

    1996-01-01

    Reviews recent research on information processing and decision making in orienteering. The main cognitive demands investigated were selection of relevant map information for route choice, comparison between map and terrain in map reading and in relocation, and quick awareness of mistakes. Presents a model of map reading based on results. Contains…

  20. A proto-architecture for innate directionally selective visual maps.

    PubMed

    Adams, Samantha V; Harris, Chris M

    2014-01-01

    Self-organizing artificial neural networks are a popular tool for studying visual system development, in particular the cortical feature maps present in real systems that represent properties such as ocular dominance (OD), orientation-selectivity (OR) and direction selectivity (DS). They are also potentially useful in artificial systems, for example robotics, where the ability to extract and learn features from the environment in an unsupervised way is important. In this computational study we explore a DS map that is already latent in a simple artificial network. This latent selectivity arises purely from the cortical architecture without any explicit coding for DS and prior to any self-organising process facilitated by spontaneous activity or training. We find DS maps with local patchy regions that exhibit features similar to maps derived experimentally and from previous modeling studies. We explore the consequences of changes to the afferent and lateral connectivity to establish the key features of this proto-architecture that support DS.

  1. Genetic parameters and signatures of selection in two divergent laying hen lines selected for feather pecking behaviour.

    PubMed

    Grams, Vanessa; Wellmann, Robin; Preuß, Siegfried; Grashorn, Michael A; Kjaer, Jörgen B; Bessei, Werner; Bennewitz, Jörn

    2015-09-30

    Feather pecking (FP) in laying hens is a well-known and multi-factorial behaviour with a genetic background. In a selection experiment, two lines were developed for 11 generations for high (HFP) and low (LFP) feather pecking, respectively. Starting with the second generation of selection, there was a constant difference in mean number of FP bouts between both lines. We used the data from this experiment to perform a quantitative genetic analysis and to map selection signatures. Pedigree and phenotypic data were available for the last six generations of both lines. Univariate quantitative genetic analyses were conducted using mixed linear and generalized mixed linear models assuming a Poisson distribution. Selection signatures were mapped using 33,228 single nucleotide polymorphisms (SNPs) genotyped on 41 HFP and 34 LFP individuals of generation 11. For each SNP, we estimated Wright's fixation index (FST). We tested the null hypothesis that FST is driven purely by genetic drift against the alternative hypothesis that it is driven by genetic drift and selection. The mixed linear model failed to analyze the LFP data because of the large number of 0s in the observation vector. The Poisson model fitted the data well and revealed a small but continuous genetic trend in both lines. Most of the 17 genome-wide significant SNPs were located on chromosomes 3 and 4. Thirteen clusters with at least two significant SNPs within an interval of 3 Mb maximum were identified. Two clusters were mapped on chromosomes 3, 4, 8 and 19. Of the 17 genome-wide significant SNPs, 12 were located within the identified clusters. This indicates a non-random distribution of significant SNPs and points to the presence of selection sweeps. Data on FP should be analysed using generalised linear mixed models assuming a Poisson distribution, especially if the number of FP bouts is small and the distribution is heavily peaked at 0. 
The FST-based approach was suitable for mapping selection signatures, which still need to be confirmed by linkage or association mapping.
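
    The per-SNP statistic described above can be sketched from the allele frequencies of the two lines. This is a minimal illustration of Wright's FST for a single biallelic SNP, not the exact estimator used in the study (weighted estimators such as Weir and Cockerham's are typical in practice):

```python
def fst_two_pops(p1, p2):
    """Wright's FST for one biallelic SNP, from the frequency of the
    reference allele in each of two lines (e.g. HFP and LFP)."""
    p_bar = (p1 + p2) / 2.0
    h_t = 2.0 * p_bar * (1.0 - p_bar)                     # pooled expected heterozygosity
    h_s = (2.0*p1*(1.0-p1) + 2.0*p2*(1.0-p2)) / 2.0       # mean within-line heterozygosity
    return 0.0 if h_t == 0.0 else (h_t - h_s) / h_t
```

A SNP fixed for opposite alleles in the two lines gives FST = 1, while identical frequencies give FST = 0.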

  2. Maximum entropy perception-action space: a Bayesian model of eye movement selection

    NASA Astrophysics Data System (ADS)

    Colas, Francis; Bessière, Pierre; Girard, Benoît

    2011-03-01

In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on the representation. We compare different decision models based on different features of the representation and we show that taking into account uncertainty helps predict the eye movements of subjects recorded in a psychophysics experiment. Finally, based on experimental data, we postulate that the complex logarithmic mapping has functional relevance, as the density of objects in this space is more uniform than expected. This may indicate that the representation space and control strategies are such that the object density is of maximum entropy.
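
    The complex logarithmic mapping the model builds on can be sketched as follows; the foveal shift parameter `a` is an illustrative value, not one taken from the paper:

```python
import cmath

def complex_log_map(x, y, a=1.0):
    """Map a retinal position (x, y) to cortical coordinates via w = log(z + a).
    The shift a (hypothetical value) avoids the singularity at the fovea, z = 0."""
    w = cmath.log(complex(x, y) + a)
    return w.real, w.imag
```

Equal steps far from the fovea compress to ever-smaller steps in the mapped space, which is why object density can become more uniform there.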

  3. Rapid performance modeling and parameter regression of geodynamic models

    NASA Astrophysics Data System (ADS)

    Brown, J.; Duplyakin, D.

    2016-12-01

Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and scientifically-relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian Process Regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
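
    The selection step can be sketched with a toy Gaussian process and uncertainty sampling: pick the candidate configuration where the posterior variance is largest. This is a generic illustration of the technique under a unit-variance RBF prior, not the authors' implementation:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between two 1-D arrays of input locations."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_next_point(x_obs, candidates, noise=1e-6):
    """Uncertainty sampling: return the candidate with the largest GP posterior
    variance (which depends only on input locations, not observed outputs)."""
    k = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k_inv = np.linalg.inv(k)
    k_s = rbf(x_obs, candidates)
    var = 1.0 - np.einsum('ij,ij->j', k_s, k_inv @ k_s)  # prior variance is 1
    return candidates[np.argmax(var)]
```

With observations at 0 and 1, the routine picks the candidate farthest from the data, as expected.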

  4. FragFit: a web-application for interactive modeling of protein segments into cryo-EM density maps.

    PubMed

    Tiemann, Johanna K S; Rose, Alexander S; Ismer, Jochen; Darvish, Mitra D; Hilal, Tarek; Spahn, Christian M T; Hildebrand, Peter W

    2018-05-21

Cryo-electron microscopy (cryo-EM) is a standard method to determine the three-dimensional structures of molecular complexes. However, easy-to-use tools for modeling of protein segments into cryo-EM maps are sparse. Here, we present the FragFit web-application, a web server for interactive modeling of segments of up to 35 amino acids in length into cryo-EM density maps. The fragments are provided by a regularly updated database currently containing about 1 billion entries extracted from PDB structures and can be readily integrated into a protein structure. Fragments are selected based on geometric criteria, sequence similarity and fit into a given cryo-EM density map. Web-based molecular visualization with the NGL Viewer allows interactive selection of fragments. The FragFit web-application, accessible at http://proteinformatics.de/FragFit, is free and open to all users, without any login requirements.

  5. Chapter 5. Using Habitat Models for Habitat Mapping and Monitoring

    Treesearch

    Samuel A. Cushman; Timothy J. Mersmann; Gretchen G. Moisen; Kevin S. McKelvey; Christina D. Vojta

    2013-01-01

    This chapter provides guidance for applying existing habitat models to map and monitor wildlife habitat. Chapter 2 addresses the use of conceptual models to create a solid foundation for selecting habitat attributes to monitor and to translate these attributes into quantifiable and reportable monitoring measures. Most wildlife species, however, require a complex suite...

  6. Habitat selection of Rocky Mountain elk in a nonforested environment

    USGS Publications Warehouse

    Sawyer, H.; Nielson, R.M.; Lindzey, F.G.; Keith, L.; Powell, J.H.; Abraham, A.A.

    2007-01-01

    Recent expansions by Rocky Mountain elk (Cervus elaphus) into nonforested habitats across the Intermountain West have required managers to reconsider the traditional paradigms of forage and cover as they relate to managing elk and their habitats. We examined seasonal habitat selection patterns of a hunted elk population in a nonforested high-desert region of southwestern Wyoming, USA. We used 35,246 global positioning system locations collected from 33 adult female elk to model probability of use as a function of 6 habitat variables: slope, aspect, elevation, habitat diversity, distance to shrub cover, and distance to road. We developed resource selection probability functions for individual elk, and then we averaged the coefficients to estimate population-level models for summer and winter periods. We used the population-level models to generate predictive maps by assigning pixels across the study area to 1 of 4 use categories (i.e., high, medium-high, medium-low, or low), based on quartiles of the predictions. Model coefficients and predictive maps indicated that elk selected for summer habitats characterized by higher elevations in areas of high vegetative diversity, close to shrub cover, northerly aspects, moderate slopes, and away from roads. Winter habitat selection patterns were similar, except elk shifted to areas with lower elevations and southerly aspects. We validated predictive maps by using 528 locations collected from an independent sample of radiomarked elk (n = 55) and calculating the proportion of locations that occurred in each of the 4 use categories. Together, the high- and medium-high use categories of the summer and winter predictive maps contained 92% and 74% of summer and winter elk locations, respectively. Our population-level models and associated predictive maps were successful in predicting winter and summer habitat use by elk in a nonforested environment. 
In the absence of forest cover, elk seemed to rely on a combination of shrubs, topography, and low human disturbance to meet their thermal and hiding cover requirements.
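
    The population-level models averaged above take the form of a resource selection probability function. A minimal sketch as a logistic function of the habitat covariates; the coefficient values would come from the fitted individual-elk models, and the variable ordering here is purely illustrative:

```python
import math

def rsf_probability(coefs, features):
    """Logistic resource selection probability function: P(use) of a pixel
    given habitat covariates (e.g. slope, elevation, distance to road) and
    averaged population-level coefficients."""
    z = sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))
```

Applying this per pixel and cutting the predictions at their quartiles yields the four use categories of the predictive maps.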

  7. Restoration of distorted depth maps calculated from stereo sequences

    NASA Technical Reports Server (NTRS)

    Damour, Kevin; Kaufman, Howard

    1991-01-01

    A model-based Kalman estimator is developed for spatial-temporal filtering of noise and other degradations in velocity and depth maps derived from image sequences or cinema. As an illustration of the proposed procedures, edge information from image sequences of rigid objects is used in the processing of the velocity maps by selecting from a series of models for directional adaptive filtering. Adaptive filtering then allows for noise reduction while preserving sharpness in the velocity maps. Results from several synthetic and real image sequences are given.
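
    The core of such a model-based Kalman estimator is the scalar update applied per pixel of the velocity or depth map; a minimal sketch, omitting the directional model selection described above:

```python
def kalman_update(est, var, meas, meas_var):
    """One scalar Kalman update: blend the model prediction (est, var) with a
    noisy measurement, returning the new estimate and its reduced variance."""
    gain = var / (var + meas_var)
    return est + gain * (meas - est), (1.0 - gain) * var
```

With equal prediction and measurement variances, the update simply splits the difference and halves the uncertainty.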

  8. Meteorological Effects of Land Cover Changes in Hungary during the 20th Century

    NASA Astrophysics Data System (ADS)

    Drüszler, Á.; Vig, P.; Csirmaz, K.

    2012-04-01

Geological, paleontological and geomorphologic studies show that the Earth's climate has always been changing since it came into existence; climate change itself is self-evident. The far more serious question is how much mankind strengthens or weakens these changes beyond the natural fluctuations of climate. The aim of the present study was to reconstruct the historical land cover changes and to simulate their meteorological consequences. Two different land cover maps for Hungary were created in vector data format using GIS technology. The land cover map for 1900 was reconstructed based on statistical data and two different historical maps: the derived map of the 3rd Military Mapping Survey of the Austro-Hungarian Empire and the Synoptic Forestry Map of the Kingdom of Hungary. The land cover map for 2000 was derived from the CORINE land cover database. According to the examination of these maps and statistical databases, significant land cover changes occurred in Hungary during the 20th century. The MM5 non-hydrostatic dynamic model was used to evaluate the meteorological effects of these changes. The lower boundary conditions for this mesoscale model were generated for the two selected time periods (1900 and 2000) based on the reconstructed maps. The dynamic model was run with the same detailed meteorological conditions of selected days from 2006 and 2007, but with modified lower boundary conditions. The set of 26 selected initial conditions represents the whole range of macrosynoptic situations for Hungary. In this way, 2×26 "forecasts" were made with 48 hours of integration. The effects of land cover changes under different weather situations were then weighted by the long-term (1961-1990) mean frequency of the corresponding macrosynoptic types, to estimate the climatic effects from these stratified averages.
The detailed evaluation of the model results was carried out for three meteorological variables (temperature, dew point and precipitation).

  9. Multi-Collinearity Based Model Selection for Landslide Susceptibility Mapping: A Case Study from Ulus District of Karabuk, Turkey

    NASA Astrophysics Data System (ADS)

Sahin, E. K.; Colkesen, I.; Kavzoglu, T.

    2017-12-01

Identification of localities prone to landslides plays an important role in emergency planning, disaster management and recovery planning. Due to its great importance for disaster management, producing accurate and up-to-date landslide susceptibility maps is essential for hazard mitigation purposes and regional planning. The main objective of the present study was to apply a multi-collinearity based model selection approach for the production of a landslide susceptibility map of the Ulus district of Karabuk, Turkey. When factors are highly correlated with each other, the data do not contain enough independent information to describe the problem under consideration. In such cases, choosing a subset of the original features will often lead to better performance. This paper presents a multi-collinearity based model selection approach to deal with the high correlation within the dataset. Two collinearity diagnostics (Tolerance (TOL) and the Variance Inflation Factor (VIF)) are commonly used to identify multi-collinearity. Values of VIF that exceed 10.0 and TOL values less than 0.1 are often regarded as indicating multi-collinearity. Five causative factors (slope length, curvature, plan curvature, profile curvature and topographical roughness index) out of the 15 available for the study area were found to be highly correlated with each other. As a result, the five correlated factors were removed from the model estimation, and the performance of the model including the remaining 10 factors (aspect, drainage density, elevation, lithology, land use/land cover, NDVI, slope, sediment transport index, topographical position index and topographical wetness index) was evaluated using logistic regression. The performance of the prediction model constructed with 10 factors was compared to that of the 15-factor model. The prediction performance of the two susceptibility maps was evaluated by overall accuracy and area under the ROC curve (AUC) values.
Results showed that overall accuracy and AUC were calculated as 77.15% and 96.62%, respectively, for the model with 10 selected factors, whilst they were estimated as 73.45% and 89.45%, respectively, for the model with all 15 factors. It is clear that the multi-collinearity based model outperformed the conventional model in mapping landslide susceptibility.
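
    The two collinearity diagnostics are directly computable: VIF_j = 1/(1 - R_j^2), where R_j^2 comes from regressing factor j on the remaining factors, and TOL is simply 1/VIF. A minimal sketch over a factor matrix (columns = causative factors):

```python
import numpy as np

def vif(x):
    """Variance Inflation Factor for each column of the factor matrix x.
    TOL = 1/VIF; VIF > 10 (equivalently TOL < 0.1) flags multi-collinearity."""
    n, p = x.shape
    out = []
    for j in range(p):
        y = x[:, j]
        others = np.column_stack([np.ones(n), np.delete(x, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ coef
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return out
```

Uncorrelated factors give VIF near 1; strongly inter-correlated factors such as the five curvature/roughness variables drive it far above 10.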

  10. Marine Air Penetration: The Effect of Synoptic-scale Change on Regional Climate

    NASA Astrophysics Data System (ADS)

    Wang, M.; Ullrich, P. A.

    2016-12-01

Marine air penetration (MAP) around the California San Francisco Bay Delta region has a pronounced impact on local temperature and air quality, and is highly correlated with inland wind penetration and hence wind power generation. Observational MAP criteria are defined based on a 900 hPa across-shore wind speed greater than or equal to 3 m/s at the Oakland radiosonde station, and a surface temperature difference greater than or equal to 7 degrees Celsius between two California Irrigation Management Information System (CIMIS) stations at Fresno, CA and Lodi, CA. This choice reflects marine cooling of Lodi, and was found to be highly correlated with inland specific humidity and breeze front activity. The observational MAP criteria were tuned, accounting for small biases in the Climate Forecast System Reanalysis (CFSR), to select MAP days from CFSR and identify synoptic-scale indicators associated with MAP events. A multivariate logistic regression model was constructed based on the five selected synoptic indicators from CFSR and demonstrated good model performance. Two synoptic-scale patterns were identified and analyzed out of the 32 categories from the regression model, suggesting a strong influence of the off-shore trough and the inland thermal ridge on MAP events. Future projections of MAP events were examined using 21st century simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) and the variable-resolution Community Earth System Model (VR-CESM). Both showed no statistically significant trend in MAP events through the end of this century under either Representative Concentration Pathway (RCP) 2.6 or RCP 8.5.
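
    The observational MAP criteria reduce to a simple two-threshold test; a direct transcription of the thresholds given above:

```python
def is_map_event(wind_900hpa_ms, temp_diff_c):
    """Observational MAP criteria: 900 hPa across-shore wind >= 3 m/s at
    Oakland and a Fresno-Lodi surface temperature difference >= 7 C."""
    return wind_900hpa_ms >= 3.0 and temp_diff_c >= 7.0
```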

  11. Autonomous mental development with selective attention, object perception, and knowledge representation

    NASA Astrophysics Data System (ADS)

    Ban, Sang-Woo; Lee, Minho

    2008-04-01

Knowledge-based clustering and autonomous mental development remain high-priority research topics, in which the learning techniques of neural networks are used to achieve optimal performance. In this paper, we present a new framework that can automatically generate a relevance map from sensory data that can represent knowledge regarding objects and infer new knowledge about novel objects. The proposed model is based on an understanding of the visual 'what' pathway in the brain. A stereo saliency map model can selectively decide salient object areas by additionally considering a local symmetry feature. The incremental object perception model makes clusters for the construction of an ontology map in the color and form domains in order to perceive an arbitrary object, which is implemented by the growing fuzzy topology adaptive resonance theory (GFTART) network. Log-polar transformed color and form features for a selected object are used as inputs to the GFTART. The clustered information is relevant for describing specific objects, and the proposed model can automatically infer an unknown object by using the learned information. Experimental results with real data have demonstrated the validity of this approach.

  12. Using remote sensing in support of environmental management: A framework for selecting products, algorithms and methods.

    PubMed

    de Klerk, Helen M; Gilbertson, Jason; Lück-Vogel, Melanie; Kemp, Jaco; Munch, Zahn

    2016-11-01

Traditionally, to map environmental features using remote sensing, practitioners will use training data to develop models on various satellite data sets using a number of classification approaches, and use test data to select a single 'best performer' from which the final map is made. We instead use an omission/commission plot to evaluate the various results and compile a probability map based on consistently strong-performing models across a range of standard accuracy measures. We suggest that this easy-to-use approach can be applied in any study using remote sensing to map natural features for management action. We demonstrate this approach using optical remote sensing products of different spatial and spectral resolution to map the endemic and threatened flora of quartz patches in the Knersvlakte, South Africa. Quartz patches can be mapped using either SPOT 5 (used due to its relatively fine spatial resolution) or Landsat 8 imagery (used because it is freely accessible and has higher spectral resolution). Of the variety of classification algorithms available, we tested maximum likelihood and support vector machine, and applied these to raw spectral data, the first three PCA summaries of the data, and the standard normalised difference vegetation index. We found that there is no 'one size fits all' solution to the choice of a 'best fit' model (i.e. combination of classification algorithm and data set), which is in agreement with the literature that classifier performance varies with data properties. We feel this lends support to our suggestion that, rather than the identification of a single 'best' model and a map based on this result alone, a probability map based on the range of consistently top-performing models provides a rigorous solution to environmental mapping. Copyright © 2016 Elsevier Ltd. All rights reserved.
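
    The probability map built from consistently strong models can be sketched as per-pixel agreement among the selected classifiers; a minimal illustration, not the authors' exact weighting:

```python
import numpy as np

def agreement_probability(prediction_maps):
    """Per-pixel fraction of the selected models that map the feature as
    present, given a list of binary (0/1) classification maps."""
    return np.stack(prediction_maps).astype(float).mean(axis=0)
```

A pixel mapped as quartz patch by all models gets probability 1.0; disagreement among models shows up as intermediate values rather than being discarded.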

  13. GIS-based support vector machine modeling of earthquake-triggered landslide susceptibility in the Jianjiang River watershed, China

    NASA Astrophysics Data System (ADS)

    Xu, Chong; Dai, Fuchu; Xu, Xiwei; Lee, Yuan Hsi

    2012-04-01

    Support vector machine (SVM) modeling is based on statistical learning theory. It involves a training phase with associated input and target output values. In recent years, the method has become increasingly popular. The main purpose of this study is to evaluate the mapping power of SVM modeling in earthquake triggered landslide-susceptibility mapping for a section of the Jianjiang River watershed using a Geographic Information System (GIS) software. The river was affected by the Wenchuan earthquake of May 12, 2008. Visual interpretation of colored aerial photographs of 1-m resolution and extensive field surveys provided a detailed landslide inventory map containing 3147 landslides related to the 2008 Wenchuan earthquake. Elevation, slope angle, slope aspect, distance from seismogenic faults, distance from drainages, and lithology were used as the controlling parameters. For modeling, three groups of positive and negative training samples were used in concert with four different kernel functions. Positive training samples include the centroids of 500 large landslides, those of all 3147 landslides, and 5000 randomly selected points in landslide polygons. Negative training samples include 500, 3147, and 5000 randomly selected points on slopes that remained stable during the Wenchuan earthquake. The four kernel functions are linear, polynomial, radial basis, and sigmoid. In total, 12 cases of landslide susceptibility were mapped. Comparative analyses of landslide-susceptibility probability and area relation curves show that both the polynomial and radial basis functions suitably classified the input data as either landslide positive or negative though the radial basis function was more successful. The 12 generated landslide-susceptibility maps were compared with known landslide centroid locations and landslide polygons to verify the success rate and predictive accuracy of each model. The 12 results were further validated using area-under-curve analysis. 
Group 3, with 5000 randomly selected points in the landslide polygons and 5000 randomly selected points along stable slopes, gave the best results, with a success rate of 79.20% and predictive accuracy of 79.13% under the radial basis function. Of all the results, the sigmoid kernel function was the least skillful when used in concert with the centroid data of all 3147 landslides as positive training samples and 3147 randomly selected points in regions of stable slope as negative training samples (success rate = 54.95%; predictive accuracy = 61.85%). This paper also provides suggestions and reference data for selecting appropriate training samples and kernel function types for earthquake-triggered landslide-susceptibility mapping using SVM modeling. Predictive landslide-susceptibility maps could be useful in hazard mitigation by helping planners understand the probability of landslides in different regions.
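
    The four kernel functions compared in the study have standard forms, sketched below on two feature vectors; the parameter values (gamma, degree, coef0) are illustrative, not those tuned in the paper:

```python
import numpy as np

def kernels(u, v, gamma=0.5, degree=3, coef0=1.0):
    """The four SVM kernels compared in the study, evaluated on two feature
    vectors u and v (hyperparameter values here are illustrative only)."""
    return {
        "linear":     float(u @ v),
        "polynomial": float((gamma * (u @ v) + coef0) ** degree),
        "rbf":        float(np.exp(-gamma * np.sum((u - v) ** 2))),
        "sigmoid":    float(np.tanh(gamma * (u @ v) + coef0)),
    }
```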

  14. A new high resolution permafrost map of Iceland from Earth Observation data

    NASA Astrophysics Data System (ADS)

    Barnie, Talfan; Conway, Susan; Balme, Matt; Graham, Alastair

    2017-04-01

High resolution maps of permafrost are required for ongoing monitoring of environmental change and the resulting hazards to ecosystems, people and infrastructure. However, permafrost maps are difficult to construct: direct observations require maintaining networks of sensors and boreholes in harsh environments and are thus limited in extent in space and time, and indirect observations require models or assumptions relating the measurements (e.g. weather station air temperature, basal snow temperature) to ground temperature. Operationally produced Land Surface Temperature (LST) maps from Earth Observation data can be used to make spatially contiguous estimates of mean annual skin temperature, which has been used as a proxy for the presence of permafrost. However, these maps are subject to biases due to (i) selective sampling during the day owing to limited satellite overpass times, (ii) selective sampling over the year due to seasonally varying cloud cover, (iii) selective sampling of LST only during clear-sky conditions, (iv) errors in cloud masking, (v) errors in temperature-emissivity separation, and (vi) smoothing over spatial variability. In this study we attempt to compensate for some of these problems using a Bayesian modelling approach and high resolution topography-based downscaling.

  15. Predictive models reduce talent development costs in female gymnastics.

    PubMed

    Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle

    2017-04-01

This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures, and at the same time reduce talent development costs, in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years after talent selection, using a linear predictive model (discriminant analysis) and non-linear predictive models (Kohonen feature maps and a multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification rate to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron even classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based upon the different statistical analyses results in a 33.3% decrease in costs, because the pool of selected athletes can be reduced to 92 instead of 138 gymnasts (as selected by the coaches). Reduction of the costs allows the limited resources to be fully invested in the high-potential athletes.

  16. User Guide for the Anvil Threat Corridor Forecast Tool V2.4 for AWIPS

    NASA Technical Reports Server (NTRS)

Barrett, Joe H., III; Bauman, William H., III

    2008-01-01

The Anvil Tool GUI allows users to select a Data Type, toggle the map refresh on/off, place labels, and choose the Profiler Type (source of the KSC 50 MHz profiler data), the Date-Time of the data, the Center of Plot, and the Station (location of the RAOB or 50 MHz profiler). If the Data Type is Models, the user selects a Fcst Hour (forecast hour) instead of a Station. There are menus for User Profiles, Circle Label Options, and Frame Label Options. Labels can be placed near the center circle of the plot and/or at a specified distance and direction from the center of the circle (Center of Plot). The default selection for the map refresh is "ON". When the user creates a new Anvil Tool map with Refresh Map "ON", the plot is automatically displayed in the AWIPS frame. If another Anvil Tool map is already displayed and the user does not change the existing map number shown at the bottom of the GUI, the new Anvil Tool map will overwrite the old one. If the user turns the Refresh Map "OFF", the new Anvil Tool map is created but not automatically displayed. The user can still display the Anvil Tool map through the Maps dropdown menu, as shown in Figure 4.

  17. A visual salience map in the primate frontal eye field.

    PubMed

    Thompson, Kirk G; Bichot, Narcisse P

    2005-01-01

    Models of attention and saccade target selection propose that within the brain there is a topographic map of visual salience that combines bottom-up and top-down influences to identify locations for further processing. The results of a series of experiments with monkeys performing visual search tasks have identified a population of frontal eye field (FEF) visually responsive neurons that exhibit all of the characteristics of a visual salience map. The activity of these FEF neurons is not sensitive to specific features of visual stimuli; but instead, their activity evolves over time to select the target of the search array. This selective activation reflects both the bottom-up intrinsic conspicuousness of the stimuli and the top-down knowledge and goals of the viewer. The peak response within FEF specifies the target for the overt gaze shift. However, the selective activity in FEF is not in itself a motor command because the magnitude of activation reflects the relative behavioral significance of the different stimuli in the visual scene and occurs even when no saccade is made. Identifying a visual salience map in FEF validates the theoretical concept of a salience map in many models of attention. In addition, it strengthens the emerging view that FEF is not only involved in producing overt gaze shifts, but is also important for directing covert spatial attention.
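
    The salience-map concept validated here is commonly formalized as a weighted sum of bottom-up conspicuity and top-down relevance, with the peak specifying the target of the next gaze shift; a minimal sketch with a hypothetical weighting parameter:

```python
import numpy as np

def saccade_target(bottom_up, top_down, w_td=0.5):
    """Combine bottom-up conspicuity and top-down relevance maps into one
    salience map and return the (row, col) of its peak, i.e. the location
    specified for the next gaze shift. w_td is a hypothetical weighting."""
    salience = (1.0 - w_td) * bottom_up + w_td * top_down
    return np.unravel_index(np.argmax(salience), salience.shape)
```

As the abstract notes, in FEF the peak reflects relative behavioral significance even when no saccade is produced, so this readout stage is conceptually separate from the motor command.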

  18. Automatic Generation of Building Models with Levels of Detail 1-3

    NASA Astrophysics Data System (ADS)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start with orienting unsorted image sets employing (Mayer et al., 2012), we compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  19. Near-real-time simulation and internet-based delivery of forecast-flood inundation maps using two-dimensional hydraulic modeling--A pilot study for the Snoqualmie River, Washington

    USGS Publications Warehouse

    Jones, Joseph L.; Fulford, Janice M.; Voss, Frank D.

    2002-01-01

    A system of numerical hydraulic modeling, geographic information system processing, and Internet map serving, supported by new data sources and application automation, was developed that generates inundation maps for forecast floods in near real time and makes them available through the Internet. Forecasts for flooding are generated by the National Weather Service (NWS) River Forecast Center (RFC); these forecasts are retrieved automatically by the system and prepared for input to a hydraulic model. The model, TrimR2D, is a new, robust, two-dimensional model capable of simulating wide varieties of discharge hydrographs and relatively long stream reaches. TrimR2D was calibrated for a 28-kilometer reach of the Snoqualmie River in Washington State, and is used to estimate flood extent, depth, arrival time, and peak time for the RFC forecast. The results of the model are processed automatically by a Geographic Information System (GIS) into maps of flood extent, depth, and arrival and peak times. These maps subsequently are processed into formats acceptable by an Internet map server (IMS). The IMS application is a user-friendly interface to access the maps over the Internet; it allows users to select what information they wish to see presented and allows the authors to define scale-dependent availability of map layers and their symbology (appearance of map features). For example, the IMS presents a background of a digital USGS 1:100,000-scale quadrangle at smaller scales, and automatically switches to an ortho-rectified aerial photograph (a digital photograph that has camera angle and tilt distortions removed) at larger scales so viewers can see ground features that help them identify their area of interest more effectively. For the user, the option exists to select either background at any scale. Similar options are provided for both the map creator and the viewer for the various flood maps. 
This combination of a robust model, emerging IMS software, and application interface programming should allow the technology developed in the pilot study to be applied to other river systems where NWS forecasts are provided routinely.

  20. Implications of allometric model selection for county-level biomass mapping.

    PubMed

    Duncanson, Laura; Huang, Wenli; Johnson, Kristofer; Swatantran, Anu; McRoberts, Ronald E; Dubayah, Ralph

    2017-10-18

Carbon accounting in forests remains a large area of uncertainty in the global carbon cycle. Forest aboveground biomass is therefore an attribute of great interest for the forest management community, but the accuracy of aboveground biomass maps depends on the accuracy of the underlying field estimates used to calibrate models. These field estimates depend on the application of allometric models, which often have unknown and unreported uncertainties outside of the size class or environment in which they were developed. Here, we test three popular allometric approaches to field biomass estimation, and explore the implications of allometric model selection for county-level biomass mapping in Sonoma County, California. We test three allometric models: Jenkins et al. (For Sci 49(1): 12-35, 2003), Chojnacky et al. (Forestry 87(1): 129-151, 2014) and the US Forest Service's Component Ratio Method (CRM). We found that the Jenkins and Chojnacky models perform comparably, but that at both the field plot level and the total county level there was a ~ 20% difference between these estimates and the CRM estimates. Further, we show that discrepancies are greater in high biomass areas with high canopy cover and relatively moderate heights (25-45 m). The CRM models, although on average ~ 20% lower than Jenkins and Chojnacky, produce higher estimates in the tallest forest samples (> 60 m), while Jenkins generally produces higher estimates of biomass in forests < 50 m tall. Discrepancies do not continually increase with increasing forest height, suggesting that inclusion of height in allometric models is not primarily driving discrepancies. Models developed using all three allometric models underestimate high biomass and overestimate low biomass, as expected with random forest biomass modeling. However, these deviations were generally larger using the Jenkins and Chojnacky allometries, suggesting that the CRM approach may be more appropriate for biomass mapping with lidar.
These results confirm that allometric model selection considerably impacts biomass maps and estimates, and that allometric model errors remain poorly understood. Our findings that allometric model discrepancies are not explained by lidar heights suggests that allometric model form does not drive these discrepancies. A better understanding of the sources of allometric model errors, particularly in high biomass systems, is essential for improved forest biomass mapping.
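
    The Jenkins et al. (2003) allometry takes the national-scale form biomass = exp(b0 + b1 ln(dbh)). A minimal sketch; the default coefficients below are quoted from memory for the mixed-hardwood species group and should be verified against the paper before use:

```python
import math

def jenkins_biomass_kg(dbh_cm, b0=-2.4800, b1=2.4835):
    """Jenkins et al. (2003) form: aboveground biomass (kg) =
    exp(b0 + b1 * ln(dbh in cm)). Default coefficients are assumed values
    for the mixed-hardwood species group; check against the paper."""
    return math.exp(b0 + b1 * math.log(dbh_cm))
```

Because the form is a power law in dbh, small coefficient differences between allometric families compound quickly in large trees, which is consistent with the high-biomass discrepancies reported above.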

  1. A first generation BAC-based physical map of the rainbow trout genome

    PubMed Central

    Palti, Yniv; Luo, Ming-Cheng; Hu, Yuqin; Genet, Carine; You, Frank M; Vallejo, Roger L; Thorgaard, Gary H; Wheeler, Paul A; Rexroad, Caird E

    2009-01-01

    Background: Rainbow trout (Oncorhynchus mykiss) are the most widely cultivated cold freshwater fish in the world and an important model species for many research areas. Coupling great interest in this species as a research model with the need for genetic improvement of aquaculture production efficiency traits justifies the continued development of genomics research resources. Many quantitative trait loci (QTL) have been identified for production and life-history traits in rainbow trout. A bacterial artificial chromosome (BAC) physical map is needed to facilitate fine mapping of QTL and the selection of positional candidate genes for incorporation in marker-assisted selection (MAS) for improving rainbow trout aquaculture production. This resource will also facilitate efforts to obtain and assemble a whole-genome reference sequence for this species. Results: The physical map was constructed from DNA fingerprinting of 192,096 BAC clones using the 4-color high-information-content fingerprinting (HICF) method. The clones were assembled into physical map contigs using the FingerPrinted Contigs (FPC) program. The map is composed of 4,173 contigs and 9,379 singletons. The total number of unique fingerprinting fragments (consensus bands) in contigs is 1,185,157, which corresponds to an estimated physical length of 2.0 Gb. The map assembly was validated by 1) comparison with probe hybridization results and agarose gel fingerprinting contigs; and 2) anchoring large contigs to the microsatellite-based genetic linkage map. Conclusion: The production and validation of the first BAC physical map of the rainbow trout genome is described in this paper. We are currently integrating this map with the NCCCWA genetic map using more than 200 microsatellites isolated from BAC end sequences and by identifying BACs that harbor more than 300 previously mapped markers.
The availability of an integrated physical and genetic map will enable detailed comparative genome analyses, fine mapping of QTL, positional cloning, selection of positional candidate genes for economically important traits and the incorporation of MAS into rainbow trout breeding programs. PMID:19814815

  2. Natural speech reveals the semantic maps that tile human cerebral cortex

    PubMed Central

    Huth, Alexander G.; de Heer, Wendy A.; Griffiths, Thomas L.; Theunissen, Frédéric E.; Gallant, Jack L.

    2016-01-01

    The meaning of language is represented in regions of the cerebral cortex collectively known as the “semantic system”. However, little of the semantic system has been mapped comprehensively, and the semantic selectivity of most regions is unknown. Here we systematically map semantic selectivity across the cortex using voxel-wise modeling of fMRI data collected while subjects listened to hours of narrative stories. We show that the semantic system is organized into intricate patterns that appear consistent across individuals. We then use a novel generative model to create a detailed semantic atlas. Our results suggest that most areas within the semantic system represent information about specific semantic domains, or groups of related concepts, and our atlas shows which domains are represented in each area. This study demonstrates that data-driven methods—commonplace in studies of human neuroanatomy and functional connectivity—provide a powerful and efficient means for mapping functional representations in the brain. PMID:27121839

  3. Hydraulic model and flood-inundation maps developed for the Pee Dee National Wildlife Refuge, North Carolina

    USGS Publications Warehouse

    Smith, Douglas G.; Wagner, Chad R.

    2016-04-08

    A series of digital flood-inundation maps were developed on the basis of the water-surface profiles produced by the model. The inundation maps, which can be accessed through the USGS Flood Inundation Mapping Program Web site at http://water.usgs.gov/osw/flood_inundation, depict estimates of the areal extent and depth of flooding corresponding to selected water levels at the USGS streamgage Pee Dee River at Pee Dee Refuge near Ansonville, N.C. These maps, when combined with real-time water-level information from USGS streamgages, provide managers with critical information to help plan flood-response activities and resource protection efforts.

  4. Optimization of Causative Factors for Landslide Susceptibility Evaluation Using Remote Sensing and GIS Data in Parts of Niigata, Japan.

    PubMed

    Dou, Jie; Tien Bui, Dieu; Yunus, Ali P; Jia, Kun; Song, Xuan; Revhaug, Inge; Xia, Huan; Zhu, Zhongfan

    2015-01-01

    This paper assesses the potential of the certainty factor (CF) model for extracting the most suitable causative factors for landslide susceptibility mapping on Sado Island, Niigata Prefecture, Japan. To test the applicability of CF, a landslide inventory map provided by the National Research Institute for Earth Science and Disaster Prevention (NIED) was split into two subsets: (i) 70% of the landslides in the inventory were used for building the CF-based model; (ii) the remaining 30% were used for validation. A spatial database with fifteen landslide causative factors was then constructed by processing ALOS satellite images, aerial photos, and topographical and geological maps. The CF model was then applied to select the best subset from the fifteen factors. Using all fifteen factors and the best subset of factors, landslide susceptibility maps were produced using statistical index (SI) and logistic regression (LR) models. The susceptibility maps were validated and compared using landslide locations in the validation data. The prediction performance of the two susceptibility maps was estimated using the Receiver Operating Characteristic (ROC). The results show that the area under the ROC curve (AUC) for the LR model (AUC = 0.817) is slightly higher than that obtained from the SI model (AUC = 0.801). Further, the SI and LR models using the best subset outperform the models using the fifteen original factors. Therefore, we conclude that the optimized factor model using CF is more accurate in predicting landslide susceptibility and yields a more homogeneous classification map. Our findings indicate that in mountainous regions suffering from data scarcity, it is possible to select the key factors related to landslide occurrence based on CF models in a GIS platform. Hence, scenarios for future risk-mitigation planning can be developed efficiently.
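
The validation step described here, scoring a susceptibility map against held-out landslide locations with the AUC, can be sketched in a few lines: the AUC equals the Mann-Whitney probability that a randomly chosen landslide cell receives a higher susceptibility score than a randomly chosen non-landslide cell. A pure-Python sketch (the scores in the test data are hypothetical):

```python
def roc_auc(landslide_scores, stable_scores):
    """AUC via the Mann-Whitney statistic: the probability that a random
    landslide cell outscores a random non-landslide cell, counting ties
    as half. The O(n*m) double loop is fine for a sketch, not for a
    full raster of susceptibility values."""
    wins = ties = 0
    for s_pos in landslide_scores:
        for s_neg in stable_scores:
            if s_pos > s_neg:
                wins += 1
            elif s_pos == s_neg:
                ties += 1
    return (wins + 0.5 * ties) / (len(landslide_scores) * len(stable_scores))
```

A model that ranks every landslide cell above every stable cell scores 1.0; a model no better than chance scores 0.5, which is the baseline the reported AUCs of 0.817 and 0.801 are measured against.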

  6. Detecting and Quantifying Topography in Neural Maps

    PubMed Central

    Yarrow, Stuart; Razak, Khaleel A.; Seitz, Aaron R.; Seriès, Peggy

    2014-01-01

    Topographic maps are an often-encountered feature in the brains of many species, yet there are no standard, objective procedures for quantifying topography. Topographic maps are typically identified and described subjectively, but in cases where the scale of the map is close to the resolution limit of the measurement technique, identifying the presence of a topographic map can be a challenging subjective task. In such cases, an objective topography detection test would be advantageous. To address these issues, we assessed seven measures (Pearson distance correlation, Spearman distance correlation, Zrehen's measure, topographic product, topological correlation, path length and wiring length) by quantifying topography in three classes of cortical map model: linear, orientation-like, and clusters. We found that all but one of these measures were effective at detecting statistically significant topography even in weakly-ordered maps, based on simulated noisy measurements of neuronal selectivity and sparse sampling of the maps. We demonstrate the practical applicability of these measures by using them to examine the arrangement of spatial cue selectivity in pallid bat A1. This analysis shows that significantly topographic arrangements of interaural intensity difference and azimuth selectivity exist at the scale of individual binaural clusters. PMID:24505279
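
Of the seven measures, the Pearson distance correlation is the simplest to state: correlate pairwise anatomical distances with pairwise distances in preference space. A one-dimensional sketch (scalar positions and preferences are an assumption for brevity; in practice a permutation test on this score gives the topography-detection p-value):

```python
from itertools import combinations

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def topography_score(positions, preferences):
    """Pearson distance correlation: correlate pairwise cortical distances
    with pairwise distances in stimulus-preference space. Scores near 1
    indicate a smooth topographic map; a scrambled map scores near 0."""
    d_pos, d_pref = [], []
    for (p1, f1), (p2, f2) in combinations(zip(positions, preferences), 2):
        d_pos.append(abs(p1 - p2))
        d_pref.append(abs(f1 - f2))
    return pearson(d_pos, d_pref)
```

The other measures in the paper (Zrehen's measure, topographic product, wiring length, etc.) capture different notions of order, but all reduce in the same way to a scalar statistic that can be tested against shuffled maps.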

  7. Construction of adhesion maps for contacts between a sphere and a half-space: Considering size effects of the sphere.

    PubMed

    Zhang, Yuyan; Wang, Xiaoli; Li, Hanqing; Yang, Weixu

    2015-11-15

    Previous adhesion maps, such as the JG (Johnson-Greenwood) and YCG (Yao-Ciavarella-Gao) maps, are used to guide the selection of the Bradley, DMT, M-D, JKR and Hertz models. However, when the size of the contact sphere decreases to small scales, the applicability of the JG and YCG maps is limited because the assumptions regarding the contact region profile, the interaction between contact bodies and the sphere shape in the classical models constituting these two maps are no longer valid. To overcome this limitation, a new numerical model considering size effects of the sphere is first established and then introduced into new adhesion maps together with the YGG (Yao-Guduru-Gao) model and the Hertz model. Regimes of these models in the new map under a given sphere radius are demarcated by criteria related to the relative force differences and the ratio of contact radius to sphere radius. In addition, the approaches at pull-off, jump-in and jump-out for different Tabor parameters and sphere radii are provided in the new maps. Finally, to make the new maps more practical, the numerical results for the approaches, force and contact radius involved in the maps are formularized using piecewise fitting.

  8. Implications of allometric model selection for county-level biomass mapping

    Treesearch

    Laura Duncanson; Wenli Huang; Kristofer Johnson; Anu Swatantran; Ronald E. McRoberts; Ralph Dubayah

    2017-01-01

    Background: Carbon accounting in forests remains a large area of uncertainty in the global carbon cycle. Forest aboveground biomass is therefore an attribute of great interest for the forest management community, but the accuracy of aboveground biomass maps depends on the accuracy of the underlying field estimates used to calibrate models. These field estimates depend...

  9. Analysis of microarray leukemia data using an efficient MapReduce-based K-nearest-neighbor classifier.

    PubMed

    Kumar, Mukesh; Rath, Nitish Kumar; Rath, Santanu Kumar

    2016-04-01

    Microarray-based gene expression profiling has emerged as an efficient technique for classification, prognosis, diagnosis, and treatment of cancer. Frequent changes in the behavior of this disease generate an enormous volume of data. Microarray data satisfy both the veracity and velocity properties of big data, as they keep changing with time. Therefore, analysis of microarray datasets in a small amount of time is essential. These datasets often contain expression values for a large number of genes, but only a fraction of those genes are significantly expressed. The precise identification of the genes responsible for causing cancer is imperative in microarray data analysis. Most existing schemes employ a two-phase process: feature selection/extraction followed by classification. In this paper, various statistical methods (tests) based on MapReduce are proposed for selecting relevant features. After feature selection, a MapReduce-based K-nearest-neighbor (mrKNN) classifier is also employed to classify microarray data. These algorithms are implemented in a Hadoop framework. A comparative analysis is performed on these MapReduce-based models using microarray datasets of various dimensions. From the obtained results, it is observed that these models consume much less execution time than conventional models in processing big data.
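
The mrKNN idea maps cleanly onto the MapReduce pattern: each mapper returns the k nearest candidates from its local data partition, and the reducer merges the candidate lists and takes a majority vote. A single-process sketch with one-dimensional features (the partitions, class labels and squared-distance metric are toy stand-ins for the Hadoop implementation):

```python
import heapq
from collections import Counter

def mapper(partition, query, k):
    """Map phase: return the k nearest (distance, label) candidates
    found in this node's local slice of the training data."""
    dists = [((x - query) ** 2, label) for x, label in partition]
    return heapq.nsmallest(k, dists)

def reducer(partial_results, k):
    """Reduce phase: merge per-partition candidate lists, keep the
    global k nearest, and vote on the class label."""
    merged = heapq.nsmallest(k, (c for part in partial_results for c in part))
    return Counter(label for _, label in merged).most_common(1)[0][0]

def mr_knn(partitions, query, k=3):
    return reducer([mapper(p, query, k) for p in partitions], k)
```

The key property is that each mapper only ships k candidates to the reducer rather than its whole partition, which is what makes the scheme scale to large expression matrices.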

  10. Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data

    NASA Astrophysics Data System (ADS)

    Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej

    2016-04-01

    GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. The TPS is a closed-form solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolutions - 0.2 x 0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared with earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals of calibrated carrier-phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The obtained post-fit residuals for the UWM maps are lower by one order of magnitude compared with the IGS maps. The accuracy of the UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
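
The variational problem behind the TPS can be stated explicitly. In standard notation (not necessarily that of the UWM-rt1 authors), the smoothing function f over map coordinates (x, y) minimizes a sum of data misfit and bending energy, and has a known closed-form minimizer:

```latex
J(f) = \sum_{i=1}^{n} \bigl( \mathrm{TEC}_i - f(x_i, y_i) \bigr)^2
     + \lambda \iint \bigl( f_{xx}^2 + 2 f_{xy}^2 + f_{yy}^2 \bigr)\, dx\, dy,
\qquad
f(x, y) = a_0 + a_1 x + a_2 y + \sum_{i=1}^{n} w_i \, U\!\bigl( \lVert (x,y) - (x_i, y_i) \rVert \bigr),
\quad U(r) = r^2 \ln r .
```

The smoothing parameter λ trades deviation from the TEC observations against bending energy; λ → 0 recovers exact interpolation through the data points.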

  11. Universal transition from unstructured to structured neural maps

    PubMed Central

    Sartori, Fabio; Cuntz, Hermann

    2017-01-01

    Neurons sharing similar features are often selectively connected with a higher probability and should be located in close vicinity to save wiring. Selective connectivity has, therefore, been proposed to be the cause for spatial organization in cortical maps. Interestingly, orientation preference (OP) maps in the visual cortex are found in carnivores, ungulates, and primates but are not found in rodents, indicating fundamental differences in selective connectivity that seem unexpected for closely related species. Here, we investigate this finding by using multidimensional scaling to predict the locations of neurons based on minimizing wiring costs for any given connectivity. Our model shows a transition from an unstructured salt-and-pepper organization to a pinwheel arrangement when increasing the number of neurons, even without changing the selectivity of the connections. Increasing neuronal numbers also leads to the emergence of layers, retinotopy, or ocular dominance columns for the selective connectivity corresponding to each arrangement. We further show that neuron numbers impact overall interconnectivity as the primary reason for the appearance of neural maps, which we link to a known phase transition in an Ising-like model from statistical mechanics. Finally, we curated biological data from the literature to show that neural maps appear as the number of neurons in visual cortex increases over a wide range of mammalian species. Our results provide a simple explanation for the existence of salt-and-pepper arrangements in rodents and pinwheel arrangements in the visual cortex of primates, carnivores, and ungulates without assuming differences in the general visual cortex architecture and connectivity. PMID:28468802

  12. Dam-breach analysis and flood-inundation mapping for selected dams in Oklahoma City, Oklahoma, and near Atoka, Oklahoma

    USGS Publications Warehouse

    Shivers, Molly J.; Smith, S. Jerrod; Grout, Trevor S.; Lewis, Jason M.

    2015-01-01

    Digital-elevation models, field survey measurements, hydraulic data, and hydrologic data (U.S. Geological Survey streamflow-gaging stations North Canadian River below Lake Overholser near Oklahoma City, Okla. [07241000], and North Canadian River at Britton Road at Oklahoma City, Okla. [07241520]) were used as inputs to one-dimensional dynamic (unsteady-flow) models using the Hydrologic Engineering Center's River Analysis System (HEC-RAS) software. The modeled flood elevations were exported to a geographic information system to produce flood-inundation maps. Water-surface profiles were developed for a 75-percent probable maximum flood dam-breach scenario and a sunny-day dam-breach scenario, as well as for maximum flood-inundation elevations and flood-wave arrival times at selected bridge crossings. Points of interest such as community-services offices, recreational areas, water-treatment plants, and wastewater-treatment plants were identified on the flood-inundation maps.

  13. Key issues in decomposing fMRI during naturalistic and continuous music experience with independent component analysis.

    PubMed

    Cong, Fengyu; Puoliväli, Tuomas; Alluri, Vinoo; Sipola, Tuomo; Burunat, Iballa; Toiviainen, Petri; Nandi, Asoke K; Brattico, Elvira; Ristaniemi, Tapani

    2014-02-15

    Independent component analysis (ICA) has often been used to decompose fMRI data, mostly for resting-state, block and event-related designs, owing to its advantages as a data-driven method. For fMRI data from free-listening experiences, only a few exploratory studies have applied ICA. To process the fMRI data elicited by a 512-s modern tango, an FFT-based band-pass filter was used to further pre-process the fMRI data to remove sources of no interest and noise. Then, a fast model order selection method was applied to estimate the number of sources. Next, both individual ICA and group ICA were performed. Subsequently, ICA components whose temporal courses were significantly correlated with musical features were selected. Finally, for individual ICA, components common across the majority of participants were found by diffusion maps and spectral clustering. The extracted spatial maps (by the new ICA approach) common across most participants evidenced slightly right-lateralized activity within and surrounding the auditory cortices, and were found to be associated with the musical features. Compared with the conventional ICA approach, more participants were found to have the common spatial maps extracted by the new ICA approach. Conventional model order selection methods underestimated the true number of sources in the conventionally pre-processed fMRI data for individual ICA. Pre-processing the fMRI data with a reasonable band-pass digital filter can greatly benefit the subsequent model order selection and ICA of fMRI data from naturalistic paradigms. Diffusion maps and spectral clustering are straightforward tools for finding common ICA spatial maps.

  14. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data, and derive the length necessary for parameter learning in hidden Markov models.

  15. Modeling soil organic matter (SOM) from satellite data using VISNIR-SWIR spectroscopy and PLS regression with step-down variable selection algorithm: case study of Campos Amazonicos National Park savanna enclave, Brazil

    NASA Astrophysics Data System (ADS)

    Rosero-Vlasova, O.; Borini Alves, D.; Vlassova, L.; Perez-Cabello, F.; Montorio Lloveria, R.

    2017-10-01

    Deforestation in the Amazon basin due, among other factors, to frequent wildfires demands continuous post-fire monitoring of soil and vegetation. Thus, the study posed two objectives: (1) evaluate the capacity of Visible - Near InfraRed - ShortWave InfraRed (VIS-NIR-SWIR) spectroscopy to estimate soil organic matter (SOM) in fire-affected soils, and (2) assess the feasibility of SOM mapping from satellite images. For this purpose, 30 soil samples (surface layer) were collected in 2016 in areas of grass and riparian vegetation of Campos Amazonicos National Park, Brazil, repeatedly affected by wildfires. Standard laboratory procedures were applied to determine SOM. Reflectance spectra of soils were obtained in controlled laboratory conditions using a Fieldspec4 spectroradiometer (spectral range 350-2500 nm). Measured spectra were resampled to simulate reflectances for Landsat-8, Sentinel-2 and EnMap spectral bands, used as predictors in SOM models developed using Partial Least Squares regression and a step-down variable selection algorithm (PLSR-SD). The best fit was achieved with models based on reflectances simulated for EnMap bands (R2=0.93; R2cv=0.82 and NMSE=0.07; NMSEcv=0.19). The model uses only 8 out of 244 predictors (bands) chosen by the step-down variable selection algorithm. The least reliable estimates (R2=0.55 and R2cv=0.40 and NMSE=0.43; NMSEcv=0.60) resulted from the Landsat model, while the Sentinel-2 model showed R2=0.68 and R2cv=0.63; NMSE=0.31 and NMSEcv=0.38. The results confirm the high potential of VIS-NIR-SWIR spectroscopy for SOM estimation. Application of step-down produces sparser and better-fit models. Finally, SOM can be estimated with an acceptable accuracy (NMSE 0.35) from EnMap and Sentinel-2 data enabling mapping and analysis of impacts of repeated wildfires on soils in the study area.
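
A step-down (backward-elimination) variable selector of the kind used with the PLSR models can be sketched generically: start from all predictors and repeatedly drop the variable whose removal does not degrade a supplied (ideally cross-validated) score. This is a generic sketch, not the authors' exact PLSR-SD algorithm; the toy scoring function is invented for illustration:

```python
def step_down_select(predictors, score):
    """Greedy backward elimination. `score` maps a list of predictor
    names to a goodness-of-fit value (higher is better); a variable is
    dropped whenever the reduced set scores at least as well, which
    also favours sparser models on ties."""
    selected = list(predictors)
    best = score(selected)
    improved = True
    while improved and len(selected) > 1:
        improved = False
        for v in list(selected):
            trial = [p for p in selected if p != v]
            s = score(trial)
            if s >= best:
                best, selected, improved = s, trial, True
                break
    return selected, best

def toy_score(sel):
    """Invented scoring function: rewards informative 'band' variables
    and penalises 'noise' variables, mimicking a cross-validated R2."""
    good = sum(1.0 for v in sel if v.startswith('band'))
    bad = sum(1.0 for v in sel if v.startswith('noise'))
    return good - 0.5 * bad
```

With a real PLSR model, `score` would refit the regression on each trial subset and return the cross-validated fit, which is how the published model arrives at 8 of 244 bands.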

  16. Trajectories of Attentional Development: An Exploration with the Master Activation Map Model

    ERIC Educational Resources Information Center

    Michael, George A.; Lete, Bernard; Ducrot, Stephanie

    2013-01-01

    The developmental trajectories of several attention components, such as orienting, inhibition, and the guidance of selection by relevance (i.e., advance knowledge relevant to the task) were investigated in 498 participants (ages 7, 8, 9, 10, 11, and 20). The paradigm was based on Michael et al.'s (2006) master activation map model and consisted of…

  17. Implementation of Multi-Agent Object Attention System Based on Biologically Inspired Attractor Selection

    NASA Astrophysics Data System (ADS)

    Hashimoto, Ryoji; Matsumura, Tomoya; Nozato, Yoshihiro; Watanabe, Kenji; Onoye, Takao

    A multi-agent object attention system is proposed, which is based on a biologically inspired attractor selection model. Object attention is facilitated by using a video sequence and a depth map obtained through a compound-eye image sensor TOMBO. Robustness of the multi-agent system to environmental changes is enhanced by utilizing the biological model of adaptive response by attractor selection. To implement the proposed system, an efficient VLSI architecture is employed, reducing the enormous computational costs and memory accesses required for depth map processing and the multi-agent attractor selection process. According to the FPGA implementation result of the proposed object attention system, which occupies 7,063 slices, 640×512-pixel input images can be processed in real time with three agents at a rate of 9 fps in 48 MHz operation.
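
The adaptive-response-by-attractor-selection model couples deterministic attractor dynamics with noise, gated by an activity term: when activity is high the deterministic term pulls the state into an attractor, and when it is low, noise dominates and drives a random search for a new attractor. A one-dimensional numerical sketch (all parameter values are illustrative, not taken from the paper):

```python
import random

def attractor_selection(steps=4000, dt=0.01, sigma=0.1,
                        activity=1.0, x0=0.05, seed=0):
    """Euler-Maruyama simulation of a 1-D attractor-selection sketch:
        dx = activity * (x - x**3) * dt + sigma * dW
    The drift term x - x**3 has stable attractors at x = +1 and x = -1;
    with high `activity` the state settles near one of them, while
    activity -> 0 leaves only the noise-driven random walk."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x += activity * (x - x ** 3) * dt
        x += sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
    return x
```

In the multi-agent system each agent would carry such a state, with `activity` tied to how well its currently attended object matches the depth-map evidence.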

  18. Polder maps: Improving OMIT maps by excluding bulk solvent

    DOE PAGES

    Liebschner, Dorothee; Afonine, Pavel V.; Moriarty, Nigel W.; ...

    2017-02-01

    The crystallographic maps that are routinely used during the structure-solution workflow are almost always model-biased because model information is used for their calculation. As these maps are also used to validate the atomic models that result from model building and refinement, this constitutes an immediate problem: anything added to the model will manifest itself in the map and thus hinder the validation. OMIT maps are a common tool to verify the presence of atoms in the model. The simplest way to compute an OMIT map is to exclude the atoms in question from the structure, update the corresponding structure factors and compute a residual map. It is then expected that if these atoms are present in the crystal structure, the electron density for the omitted atoms will be seen as positive features in this map. This, however, is complicated by the flat bulk-solvent model which is almost universally used in modern crystallographic refinement programs. This model postulates constant electron density at any voxel of the unit-cell volume that is not occupied by the atomic model. Consequently, if the density arising from the omitted atoms is weak then the bulk-solvent model may obscure it further. A possible solution to this problem is to prevent bulk solvent from entering the selected OMIT regions, which may improve the interpretative power of residual maps. This approach is called a polder (OMIT) map. Polder OMIT maps can be particularly useful for displaying weak densities of ligands, solvent molecules, side chains, alternative conformations and residues both in terminal regions and in loops. As a result, the tools described in this manuscript have been implemented and are available in PHENIX.

  19. Best Design for Multidimensional Computerized Adaptive Testing With the Bifactor Model

    PubMed Central

    Seo, Dong Gi; Weiss, David J.

    2015-01-01

    Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm (MCAT) with a bifactor model using simulated data. Four item selection methods in MCAT were examined for three bifactor pattern designs using two multidimensional item response theory models. To compare MCAT item selection and estimation methods, a fixed test length was used. Ds-optimality item selection improved θ estimates with respect to the general factor, and either D- or A-optimality improved estimates of the group factors in the three bifactor pattern designs under both multidimensional item response theory models. The MCAT model without a guessing parameter functioned better than the MCAT model with a guessing parameter. The MAP (maximum a posteriori) estimation method provided more accurate θ estimates than the EAP (expected a posteriori) method and showed lower observed standard errors under most conditions, except for a general factor condition using Ds-optimality item selection. PMID:29795848
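
D-optimality item selection, one of the methods compared here, picks the next item so as to maximize the determinant of the accumulated Fisher information matrix. A two-dimensional sketch (toy item parameters; a real MCAT would derive each item's information matrix from the IRT model at the current θ estimate):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def d_optimal_item(info_acc, items):
    """D-optimality rule for a 2-D latent space (e.g. a general factor
    plus one group factor). Each candidate item is (name, a, w): a
    discrimination direction `a` and a scalar information weight `w`,
    contributing the rank-1 information matrix w * a a^T. The rule
    naturally favours items measuring whichever dimension is currently
    least well measured, since those enlarge the determinant most."""
    def candidate_det(item):
        _, a, w = item
        trial = [[info_acc[r][c] + w * a[r] * a[c] for c in range(2)]
                 for r in range(2)]
        return det2(trial)
    return max(items, key=candidate_det)[0]
```

Ds-optimality is the variant that maximizes information for a subset of the dimensions (e.g. the general factor alone) rather than the full determinant.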

  20. Geospatial Predictive Modelling for Climate Mapping of Selected Severe Weather Phenomena Over Poland: A Methodological Approach

    NASA Astrophysics Data System (ADS)

    Walawender, Ewelina; Walawender, Jakub P.; Ustrnul, Zbigniew

    2017-02-01

    The main purpose of the study is to introduce methods for mapping the spatial distribution of the occurrence of selected atmospheric phenomena (thunderstorms, fog, glaze and rime) over Poland from 1966 to 2010 (45 years). Limited in situ observations as well as the discontinuous and location-dependent nature of these phenomena make traditional interpolation inappropriate. Spatially continuous maps were created with the use of geospatial predictive modelling techniques. For each given phenomenon, an algorithm identifying its favourable meteorological and environmental conditions was created on the basis of observations recorded at 61 weather stations in Poland. Annual frequency maps presenting the probability of a day with a thunderstorm, fog, glaze or rime were created from a modelled, gridded dataset by implementing the predefined algorithms. Relevant explanatory variables were derived from the NCEP/NCAR reanalysis and downscaled with the use of a Regional Climate Model. The resulting maps of favourable meteorological conditions were found to be valuable and representative on the country scale, but with different correlation (r) strength against in situ data (from r = 0.84 for thunderstorms to r = 0.15 for fog). The weak correlation between gridded estimates of fog occurrence and observational data indicates the very local nature of this phenomenon. For this reason, additional environmental predictors of fog occurrence were also examined. Topographic parameters derived from the SRTM elevation model and reclassified CORINE Land Cover data were used as external explanatory variables for the multiple linear regression kriging used to obtain the final map. The regression model explained 89% of the annual frequency of fog variability in the study area. Regression residuals were interpolated via simple kriging.
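
Regression kriging as used for the fog map, a trend fitted on environmental covariates plus spatial interpolation of the trend residuals, can be sketched as follows. For brevity this sketch uses a single covariate and inverse-distance weighting as a stand-in for the simple-kriging step described in the paper; the covariate function is hypothetical:

```python
def fit_trend(xs, ys):
    """Ordinary least squares for one covariate: y = b0 + b1 * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

def regression_krige(samples, covariate_at, target_loc, power=2):
    """Regression-kriging sketch: fit a linear trend on a covariate
    (e.g. SRTM elevation), then add the spatially interpolated trend
    residual. IDW replaces simple kriging here for simplicity; at a
    sampled location the prediction reproduces the observation."""
    locs = [loc for loc, _ in samples]
    obs = [v for _, v in samples]
    covs = [covariate_at(loc) for loc in locs]
    b0, b1 = fit_trend(covs, obs)
    residuals = [y - (b0 + b1 * c) for y, c in zip(obs, covs)]
    num = den = 0.0
    for loc, r in zip(locs, residuals):
        d2 = (loc[0] - target_loc[0]) ** 2 + (loc[1] - target_loc[1]) ** 2
        if d2 == 0.0:
            return b0 + b1 * covariate_at(target_loc) + r  # exact at sample
        w = 1.0 / d2 ** (power / 2)
        num += w * r
        den += w
    return b0 + b1 * covariate_at(target_loc) + num / den
```

Replacing the IDW step with a variogram-based kriging of the residuals recovers the full regression-kriging estimator used in the study.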

  1. Ensemble Learning of QTL Models Improves Prediction of Complex Traits

    PubMed Central

    Bian, Yang; Holland, James B.

    2015-01-01

    Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve predictive power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383
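
A minimal sketch of the thinning-and-aggregating idea: a dense, collinear marker map is thinned into several offset subsets, a model is fit on each thinned map, and the predictions are ensembled by averaging. The genotypes and effects below are simulated, and plain least squares stands in for the paper's actual QTL-mapping step:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 60                        # individuals, dense markers
G = rng.integers(0, 2, size=(n, m)).astype(float)
# Make neighbouring markers tightly linked: each column is a copy of the
# previous one with a small "recombination" probability.
for j in range(1, m):
    swap = rng.random(n) < 0.05
    G[:, j] = np.where(swap, 1 - G[:, j - 1], G[:, j - 1])
effects = np.zeros(m)
effects[[10, 40]] = [1.5, -1.0]       # two simulated QTL
y = G @ effects + rng.normal(scale=0.5, size=n)

train, test = np.arange(150), np.arange(150, 200)
k = 5                                 # thinning interval

preds = []
for offset in range(k):               # one thinned map per offset
    cols = np.arange(offset, m, k)
    Xtr = np.column_stack([np.ones(len(train)), G[train][:, cols]])
    Xte = np.column_stack([np.ones(len(test)), G[test][:, cols]])
    b, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
    preds.append(Xte @ b)

ensemble = np.mean(preds, axis=0)     # aggregate across thinned maps
r = np.corrcoef(ensemble, y[test])[0, 1]
print(f"prediction accuracy r = {r:.2f}")
```

Each thinned map sees only every k-th marker, which breaks up the worst collinearity, while averaging over the k offsets recovers information from all marker-trait associations.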

  2. Mapping and energization in the magnetotail. II - Particle acceleration

    NASA Technical Reports Server (NTRS)

    Kaufmann, Richard L.; Larson, Douglas J.; Lu, Chen

    1993-01-01

    Mapping with the Tsyganenko (1989) or T89 magnetosphere model has been examined previously. In the present work, an attempt is made to evaluate quantitatively what the selection of T89 implies for steady-state particle energization. The Heppner and Maynard (1987) or HM87 electric field model is mapped from the ionosphere to the equatorial plane, and the electric currents associated with T89 are evaluated. Consideration is also given to the nature of the acceleration that occurs when cross-tail current is suddenly diverted to the ionosphere.

  3. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    USGS Publications Warehouse

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets (<40%) between the two methods. Despite these differences in variable sets (expert versus statistical), models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from the different variable selection techniques. 
Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable selection is a useful first step, especially when there is a need to model a large number of species or expert knowledge of the species is limited. Expert input can then be used to refine models that seem unrealistic or for species that experts believe are particularly sensitive to change. It also emphasizes the importance of using multiple models to reduce uncertainty and improve map outputs for conservation planning. Where outputs overlap or show the same direction of change there is greater certainty in the predictions. Areas of disagreement can be used for learning by asking why the models do not agree, and may highlight areas where additional on-the-ground data collection could improve the models.
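
Two of the comparison measures mentioned above, spatial overlap between prediction maps and the true skill statistic (TSS), have simple forms that can be sketched on toy binary grids. The maps below are random stand-ins, not real model output, and the overlap definition (shared presence relative to the smaller predicted area) is one common choice among several:

```python
import numpy as np

rng = np.random.default_rng(2)
expert_map = rng.random((50, 50)) > 0.6      # presence/absence grids
stats_map = rng.random((50, 50)) > 0.6
observed = rng.random((50, 50)) > 0.6

def spatial_overlap(a, b):
    """Shared presence cells relative to the smaller predicted area."""
    inter = np.logical_and(a, b).sum()
    return inter / min(a.sum(), b.sum())

def tss(pred, obs):
    """True skill statistic: sensitivity + specificity - 1."""
    tp = np.logical_and(pred, obs).sum()
    tn = np.logical_and(~pred, ~obs).sum()
    fp = np.logical_and(pred, ~obs).sum()
    fn = np.logical_and(~pred, obs).sum()
    return tp / (tp + fn) + tn / (tn + fp) - 1

print(f"overlap = {spatial_overlap(expert_map, stats_map):.2f}")
print(f"TSS     = {tss(expert_map, observed):.2f}")
```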

  4. Substituted N-aryl-6-pyrimidinones: A new class of potent, selective, and orally active p38 MAP kinase inhibitors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devadas, Balekudru; Selness, Shaun R.; Xing, Li

    2012-02-28

    A novel series of highly potent and selective p38 MAP kinase inhibitors was developed originating from a substituted N-aryl-6-pyrimidinone scaffold. SAR studies coupled with in vivo evaluations in a rat arthritis model culminated in the identification of 10 with excellent oral efficacy. Compound 10 exhibited a significantly enhanced dissolution rate compared to 1, translating to high oral bioavailability (>90%) in rats. In animal studies, 10 inhibited LPS-stimulated production of tumor necrosis factor-α in a dose-dependent manner and demonstrated robust efficacy comparable to dexamethasone in a rat streptococcal cell wall-induced arthritis model.

  5. The Lunar Mapping and Modeling Project

    NASA Technical Reports Server (NTRS)

    Noble, Sarah K.; French, R. A.; Nall, M. E.; Muery, K. G.

    2009-01-01

    The Lunar Mapping and Modeling Project (LMMP) has been created to manage the development of a suite of lunar mapping and modeling products that support the Constellation Program (CxP) and other lunar exploration activities, including the planning, design, development, test and operations associated with lunar sortie missions, crewed and robotic operations on the surface, and the establishment of a lunar outpost. The information provided through LMMP will assist CxP in: planning tasks in the areas of landing site evaluation and selection, design and placement of landers and other stationary assets, design of rovers and other mobile assets, developing terrain-relative navigation (TRN) capabilities, and assessment and planning of science traverses.

  6. Evaluation of bias associated with capture maps derived from nonlinear groundwater flow models

    USGS Publications Warehouse

    Nadler, Cara; Allander, Kip K.; Pohll, Greg; Morway, Eric D.; Naranjo, Ramon C.; Huntington, Justin

    2018-01-01

    The impact of groundwater withdrawal on surface water is a concern of water users and water managers, particularly in the arid western United States. Capture maps are useful tools to spatially assess the impact of groundwater pumping on water sources (e.g., streamflow depletion) and are being used more frequently for conjunctive management of surface water and groundwater. Capture maps have been derived using linear groundwater flow models and rely on the principle of superposition to demonstrate the effects of pumping in various locations on resources of interest. However, nonlinear models are often necessary to simulate head-dependent boundary conditions and unconfined aquifers. Capture maps developed using nonlinear models with the principle of superposition may over- or underestimate capture magnitude and spatial extent. This paper presents new methods for generating capture difference maps, which assess spatial effects of model nonlinearity on capture fraction sensitivity to pumping rate, and for calculating the bias associated with capture maps. The sensitivity of capture map bias to selected parameters related to model design and conceptualization for the arid western United States is explored. This study finds that the simulation of stream continuity, pumping rates, stream incision, well proximity to capture sources, aquifer hydraulic conductivity, and groundwater evapotranspiration extinction depth substantially affect capture map bias. Capture difference maps demonstrate that regions with large capture fraction differences are indicative of greater potential capture map bias. Understanding both spatial and temporal bias in capture maps derived from nonlinear groundwater flow models improves their utility and defensibility as conjunctive-use management tools.
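
The central quantity above, the capture fraction, is streamflow depletion divided by pumping rate. For a linear model it is independent of pumping rate (the basis of superposition); the toy nonlinear depletion function below (purely illustrative, not a groundwater model) shows how comparing capture fractions at two rates yields the kind of "capture difference" the paper uses to flag potential bias:

```python
def depletion(q_pump):
    """Hypothetical nonlinear streamflow depletion (same units as q_pump).
    The saturating form mimics a head-dependent stream boundary that
    cannot supply water faster than some limit."""
    return 100.0 * q_pump / (q_pump + 100.0)

def capture_fraction(q_pump):
    return depletion(q_pump) / q_pump

q_small, q_large = 10.0, 100.0
c_small = capture_fraction(q_small)      # ~0.91: nearly linear regime
c_large = capture_fraction(q_large)      # 0.50: boundary effects dominate
capture_difference = c_small - c_large   # > 0 here: a superposition-based
                                         # map would overestimate capture
                                         # at the high pumping rate
print(f"C({q_small}) = {c_small:.3f}, C({q_large}) = {c_large:.3f}, "
      f"difference = {capture_difference:.3f}")
```

In the paper, the depletion values come from MODFLOW-type simulations at many well locations, producing a gridded capture difference map rather than a single number.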

  7. Visual attention based bag-of-words model for image classification

    NASA Astrophysics Data System (ADS)

    Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che

    2014-04-01

    Bag-of-words is a classical method for image classification. The core problems are how to count the frequency of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model utilizes a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the counting of visual-word frequencies. In addition, the VABOW model combines shape, color and texture cues and uses L1-regularized logistic regression to select the most relevant and most efficient features. We compare our approach with the traditional bag-of-words method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
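
The saliency-weighted histogram at the core of this idea can be sketched as follows: each local descriptor votes for its nearest visual word, but the vote is weighted by the saliency value at the descriptor's image location. The codebook, descriptors, and saliency map below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
codebook = rng.normal(size=(8, 16))          # 8 visual words, 16-D each
descriptors = rng.normal(size=(100, 16))     # local features of one image
locs = rng.integers(0, 32, size=(100, 2))    # (row, col) of each feature
saliency = rng.random((32, 32))              # saliency map in [0, 1]

# Assign each descriptor to its nearest visual word.
d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
words = d2.argmin(axis=1)

# Saliency-weighted bag-of-words histogram, L1-normalised.
hist = np.zeros(len(codebook))
for w, (r, c) in zip(words, locs):
    hist[w] += saliency[r, c]
hist /= hist.sum()

print(np.round(hist, 3))
```

With uniform saliency this reduces to the ordinary bag-of-words count; salient regions simply contribute more to the image signature.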

  8. Behavior Selection of Mobile Robot Based on Integration of Multimodal Information

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kaneko, Masahide

    Recently, biologically inspired robots have been developed to acquire the capacity for directing visual attention to salient stimuli generated from the audiovisual environment. To realize this behavior, a general method is to calculate saliency maps representing how much the external information attracts the robot's visual attention, where the audiovisual information and the robot's motion status should be involved. In this paper, we present a visual attention model in which three modalities are considered: audio information, visual information, and the robot's motor status; previous research has not considered all three. Firstly, we introduce a 2-D density map, on which the value denotes how much attention the robot pays to each spatial location. We then model the attention density using a Bayesian network in which the robot's motion statuses are involved. Secondly, the information from both the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons. The robot directs its attention to the locations where the integrate-and-fire neurons are fired. Finally, the visual attention model is applied to make the robot select visual information from the environment and react to the selected content. Experimental results show that it is possible for robots to acquire the visual information related to their behaviors by using the attention model that considers motion statuses. The robot can select its behaviors to adapt to the dynamic environment as well as switch to another task according to the recognition results of visual attention.

  9. Combining non selective gas sensors on a mobile robot for identification and mapping of multiple chemical compounds.

    PubMed

    Bennetts, Victor Hernandez; Schaffernicht, Erik; Pomareda, Victor; Lilienthal, Achim J; Marco, Santiago; Trincavelli, Marco

    2014-09-17

    In this paper, we address the task of gas distribution modeling in scenarios where multiple heterogeneous compounds are present. Gas distribution modeling is particularly useful in emission monitoring applications, where spatial representations of gaseous patches can be used to identify emission hot spots. In realistic environments, the presence of multiple chemicals is expected, and therefore gas discrimination has to be incorporated into the modeling process. The approach presented in this work addresses the task of gas distribution modeling by combining different non-selective gas sensors. Gas discrimination is addressed with an open sampling system composed of an array of metal oxide sensors and a probabilistic algorithm tailored to uncontrolled environments. For each of the identified compounds, the mapping algorithm generates a calibrated gas distribution model using the classification uncertainty and the concentration readings acquired with a photoionization detector. The meta-parameters of the proposed modeling algorithm are automatically learned from the data. The approach was validated with a gas-sensitive robot patrolling outdoor and indoor scenarios where two different chemicals were released simultaneously. The experimental results show that the generated multi-compound maps can be used to accurately predict the location of emitting gas sources.

  10. Index-based groundwater vulnerability mapping models using hydrogeological settings: A critical evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Prashant, E-mail: prashantkumar@csio.res.in; Academy of Scientific and Innovative Research—CSIO, Chandigarh 160030; Bansod, Baban K.S.

    2015-02-15

    Groundwater vulnerability maps are useful for decision making in land use planning and water resource management. This paper reviews the various groundwater vulnerability assessment models developed across the world. Each model has been evaluated in terms of its pros and cons and the environmental conditions of its application. The paper further discusses the validation techniques used for the vulnerability maps generated by the various models. Implicit challenges associated with the development of groundwater vulnerability assessment models have also been identified, with scientific consideration of the parameter relations and their selection. - Highlights: • Various index-based groundwater vulnerability assessment models have been discussed. • A comparative analysis of the models and their applicability in different hydrogeological settings has been discussed. • Research problems of the underlying vulnerability assessment models are also reported in this review paper.

  11. Bouguer gravity anomaly and isostatic residual gravity maps of the Tonopah 1 degree by 2 degrees Quadrangle, central Nevada

    USGS Publications Warehouse

    Plouff, Donald

    1992-01-01

    A residual isostatic gravity map (sheet 2) was prepared so that the regional effect of isostatic compensation present on the Bouguer gravity anomaly map (sheet 1) would be minimized. Isostatic corrections based on the Airy-Heiskanen system (Heiskanen and Vening Meinesz, 1958, p. 135-137) were estimated by using 3-minute topographic digitization and applying the method of Jachens and Roberts (1981). Parameters selected for the isostatic model were 25 km for the normal crustal thickness at sea level, 2.67 g/cm3 for the density of the crust, and 0.4 g/cm3 for the contrast in density between the crust and the upper mantle. These parameters were selected so that the isostatic residual gravity map would be consistent with isostatic residual gravity maps of the adjacent Walker Lake quadrangle (Plouff, 1987) and the state of Nevada (Saltus, 1988c).
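
The Airy-Heiskanen compensation used above has a simple closed form: a topographic load of height h is balanced by a crustal root of thickness r = h × (crustal density / density contrast). With the parameters quoted (2.67 g/cm3 crust, 0.4 g/cm3 contrast, 25 km normal crust), each kilometre of topography implies roughly 6.7 km of root:

```python
RHO_CRUST = 2.67      # g/cm3, density of the crust
DELTA_RHO = 0.40      # g/cm3, crust-mantle density contrast
T_NORMAL = 25.0       # km, normal crustal thickness at sea level

def airy_root(h_km):
    """Root thickness (km) beneath topography of height h_km."""
    return h_km * RHO_CRUST / DELTA_RHO

def crustal_thickness(h_km):
    """Total crustal thickness: reference crust + topography + root."""
    return T_NORMAL + h_km + airy_root(h_km)

print(f"root under 1 km of topography: {airy_root(1.0):.3f} km")
print(f"total crustal thickness there: {crustal_thickness(1.0):.3f} km")
```

The isostatic correction then removes the gravitational effect of these model roots from the Bouguer anomaly, which is what suppresses the long-wavelength regional field on sheet 2.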

  12. Procedures for adjusting regional regression models of urban-runoff quality using local data

    USGS Publications Warehouse

    Hoos, A.B.; Sisolak, J.K.

    1993-01-01

    Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model, P. The four MAPs examined in this study were: single-factor regression against the regional model prediction P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. 
As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
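
The simplest of the procedures above, MAP-1F-P, is a single-factor regression of locally observed storm loads against the regional model's predictions P, giving an adjusted model of the form yhat = b0 + b1·P. A sketch on synthetic data (the coefficients and noise level are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
P = rng.lognormal(mean=2.0, sigma=0.5, size=30)         # regional predictions
local = 0.7 * P + 1.5 + rng.normal(scale=0.8, size=30)  # local observed loads

# Calibrate the adjustment: ordinary least squares of local loads on P.
A = np.column_stack([np.ones_like(P), P])
(b0, b1), *_ = np.linalg.lstsq(A, local, rcond=None)

def adjusted(p_regional):
    """Locally adjusted prediction at an unmonitored site."""
    return b0 + b1 * p_regional

print(f"adjusted model: yhat = {b0:.2f} + {b1:.2f} * P")
```

The other MAPs extend this same calibration regression with additional local explanatory variables (MAP-R-P+nV) or blend it with a purely local regression (MAP-W).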

  13. An atlas of ShakeMaps for selected global earthquakes

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul S.; Marano, Kristin D.

    2008-01-01

    An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. 
The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.

  14. New ShakeMaps for Georgia Resulting from Collaboration with EMME

    NASA Astrophysics Data System (ADS)

    Kvavadze, N.; Tsereteli, N. S.; Varazanashvili, O.; Alania, V.

    2015-12-01

    Correct assessment of probabilistic seismic hazard and risk maps is the first step in advance planning and action to reduce seismic risk. Seismic hazard maps for Georgia were calculated based on a modern approach developed in the frame of the EMME (Earthquake Model of the Middle East region) project. EMME was one of GEM's successful endeavors at the regional level. With EMME and GEM assistance, regional models were analyzed to identify the information and additional work needed for the preparation of national hazard models. A probabilistic seismic hazard (PSH) map provides the critical basis for improved building codes and construction. The most serious deficiency in PSH assessment for the territory of Georgia is the lack of high-quality ground motion data. Because of this, an initial hybrid empirical ground motion model was developed for PGA and SA at selected periods, and these ground motion models were applied in the probabilistic seismic hazard assessment. The resulting seismic hazard maps show evidence that there were gaps in previous seismic hazard assessments and that the present normative seismic hazard map needed a careful recalculation.

  15. ShakeMap Atlas 2.0: an improved suite of recent historical earthquake ShakeMaps for global hazard analyses and loss model calibration

    USGS Publications Warehouse

    Garcia, D.; Mah, R.T.; Johnson, K.L.; Hearne, M.G.; Marano, K.D.; Lin, K.-W.; Wald, D.J.

    2012-01-01

    We introduce the second version of the U.S. Geological Survey ShakeMap Atlas, which is an openly available compilation of nearly 8,000 ShakeMaps of the most significant global earthquakes between 1973 and 2011. This revision of the Atlas includes: (1) a new version of the ShakeMap software that improves data usage and uncertainty estimations; (2) an updated earthquake source catalogue that includes regional locations and finite fault models; (3) a refined strategy to select prediction and conversion equations based on a new seismotectonic regionalization scheme; and (4) vastly more macroseismic intensity and ground-motion data from regional agencies. All these changes make the new Atlas a self-consistent, calibrated ShakeMap catalogue that constitutes an invaluable resource for investigating near-source strong ground-motion, as well as for seismic hazard, scenario, risk, and loss-model development. To this end, the Atlas will provide a hazard base layer for PAGER loss calibration and for the Earthquake Consequences Database within the Global Earthquake Model initiative.

  16. The discovery of student experiences using the Frayer model map as a Tier 2 intervention in secondary science

    NASA Astrophysics Data System (ADS)

    Miller, Cory D.

    The purpose of this study was to discover the student experiences of using the Frayer model map as a Tier 2 intervention in science. As a response to the criticized discrepancy model and the mandates contained in NCLB and the IDEA, response to intervention (RtI) has been implemented in education to increase achievement for all students and to discover which students need further interventions. Based on Cronbach's (1957) aptitude X treatment interaction theory, RtI assumes that progress over time can be measured when interventions are applied. While RtI has been actively implemented in reading and math, it has not been implemented in science. Therefore, it was not known what the experiences were of students using the Frayer model map as a Tier 2 intervention to impact science achievement. The multiple case study used a qualitative methodology that included pre-intervention and post-intervention web-based surveys, field notes during observations, and student work collected during the course of the study. The population studied was seventh- and eighth-grade students considered at-risk who attend a Title I school in Florida. The sample of the studied population was purposively selected according to a set of criteria similar to Tier 2 selection in RtI. The research question was, "What are the experiences of middle grades students using the Frayer model map as an instructional intervention in science?" The answer to the research question was that participants perceived the Frayer model map as a tool to organize tasks and create meaning while they completed the work independently and with accuracy. Even though there were limitations to the quantity of data, the research question was adequately answered. Overall, the study fills a gap in the literature related to RtI and science education.

  17. Hierarchical image-based rendering using texture mapping hardware

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Max, N

    1999-01-15

    Multi-layered depth images containing color and normal information for subobjects in a hierarchical scene model are precomputed with standard z-buffer hardware for six orthogonal views. These are adaptively selected according to the proximity of the viewpoint, and combined using hardware texture mapping to create "reprojected" output images for new viewpoints. (If a subobject is too close to the viewpoint, the polygons in the original model are rendered.) Specific z-ranges are selected from the textures with the hardware alpha test to give accurate 3D reprojection. The OpenGL color matrix is used to transform the precomputed normals into their orientations in the final view, for hardware shading.

  18. ShapeSelectForest: a new r package for modeling landsat time series

    Treesearch

    Mary Meyer; Xiyue Liao; Gretchen Moisen; Elizabeth Freeman

    2015-01-01

    We present a new R package called ShapeSelectForest recently posted to the Comprehensive R Archival Network. The package was developed to fit nonparametric shape-restricted regression splines to time series of Landsat imagery for the purpose of modeling, mapping, and monitoring annual forest disturbance dynamics over nearly three decades. For each pixel and spectral...

  19. Selective Cutting Impact on Carbon Storage in Fremont-Winema National Forest, Oregon

    NASA Astrophysics Data System (ADS)

    Huybrechts, C.; Cleve, C. T.

    2004-12-01

    Management personnel of the Fremont-Winema National Forest in southern Oregon were interested in investigating how selective cutting or fuel load reduction treatments affect forest carbon sinks and, as an ancillary product, fire risk. This study was constructed with the objective of providing this information to the forest administrators, as well as to satisfy a directive to study carbon management, a component of the 2004 NASA Applications Division Program Plan. During the summer of 2004, a request for decision support tools by the forest management was addressed by a NASA-sponsored, student-led, student-run internship group called DEVELOP. This full-time, 10-week program was designed to be an introduction to the work done by earth scientists, professional business/client relationships, and the facilities available at NASA Ames. Four college and graduate students from varying educational backgrounds designed the study and implementation plan. The team collected data for five consecutive days in Oregon throughout the Fremont-Winema forest and the surrounding terrain, consisting of soil sampling for underground carbon dynamics, fire model validation, and vegetation map validation. The goal of the carbon management component of the project was to model current carbon levels and then to gauge the effect of fuel load reduction treatments. To study carbon dynamics, MODIS-derived fraction of photosynthetically active radiation (FPAR) maps, regional climate data, and Landsat 5-generated dominant vegetation species and land cover maps were used in conjunction with the NASA Carnegie-Ames-Stanford Approach (CASA) model. To address fire risk, the dominant vegetation species map was used to estimate fuel load based on species biomass, in conjunction with a mosaic of digital elevation models (DEMs), as components of the creation of an Anderson-inspired fuel map, a rate-of-spread (meters/minute) map, and a flame-length map using ArcMap 9 and FlamMap. 
Fire-risk results are to be viewed qualitatively, as the maps present the spatial distribution of the data rather than a quantitative assessment of risk. For the first time, the resource managers at the Fremont-Winema forest will be taking the value of carbon as a resource into consideration in their decision-making process for the 2005 Fremont-Winema forest management plan.

  20. Binational digital soils map of the Ambos Nogales watershed, southern Arizona and northern Sonora, Mexico

    USGS Publications Warehouse

    Norman, Laura

    2004-01-01

    We have prepared a digital map of soil parameters for the international Ambos Nogales watershed to use as input for selected soil-erosion models. The Ambos Nogales watershed in southern Arizona and northern Sonora, Mexico, contains the Nogales wash, a tributary of the Upper Santa Cruz River. The watershed covers an area of 235 km2, just under half of which is in Mexico. Preliminary investigations of potential erosion revealed a discrepancy in soils data and mapping across the United States-Mexican border due to issues including different mapping resolutions, incompatible formatting, and varying nomenclature and classification systems. To prepare a digital soils map appropriate for input to a soil-erosion model, the historical analog soils maps for Nogales, Ariz., were scanned and merged with the larger-scale digital soils data available for Nogales, Sonora, Mexico, using a geographic information system.

  1. Development of a flood-warning system and flood-inundation mapping in Licking County, Ohio

    USGS Publications Warehouse

    Ostheimer, Chad J.

    2012-01-01

    Digital flood-inundation maps for selected reaches of South Fork Licking River, Raccoon Creek, North Fork Licking River, and the Licking River in Licking County, Ohio, were created by the U.S. Geological Survey (USGS), in cooperation with the Ohio Department of Transportation; U.S. Department of Transportation, Federal Highway Administration; Muskingum Watershed Conservancy District; U.S. Department of Agriculture, Natural Resources Conservation Service; and the City of Newark and Village of Granville, Ohio. The inundation maps depict estimates of the areal extent of flooding corresponding to water levels (stages) at the following USGS streamgages: South Fork Licking River at Heath, Ohio (03145173); Raccoon Creek below Wilson Street at Newark, Ohio (03145534); North Fork Licking River at East Main Street at Newark, Ohio (03146402); and Licking River near Newark, Ohio (03146500). The maps were provided to the National Weather Service (NWS) for incorporation into a Web-based flood-warning system that can be used in conjunction with NWS flood-forecast data to show areas of predicted flood inundation associated with forecasted flood-peak stages. As part of the flood-warning streamflow network, the USGS re-installed one streamgage on North Fork Licking River, and added three new streamgages, one each on North Fork Licking River, South Fork Licking River, and Raccoon Creek. Additionally, the USGS upgraded a lake-level gage on Buckeye Lake. Data from the streamgages and lake-level gage can be used by emergency-management personnel, in conjunction with the flood-inundation maps, to help determine a course of action when flooding is imminent. Flood profiles for selected reaches were prepared by calibrating steady-state step-backwater models to selected, established streamgage rating curves. 
The step-backwater models then were used to determine water-surface-elevation profiles for up to 10 flood stages, with corresponding streamflows ranging from approximately the 50-percent to the 0.2-percent annual exceedance probability, at each of the 4 streamgages that correspond to the flood-inundation maps. The computed flood profiles were used in combination with digital elevation data to delineate flood-inundation areas. Maps of Licking County showing flood-inundation areas overlain on digital orthophotographs are presented for the selected floods. The USGS also developed an unsteady-flow model for a reach of South Fork Licking River for use by the NWS to enhance its ability to provide advance flood warning in the region north of Buckeye Lake, Ohio. The unsteady-flow model was calibrated using data from four flooding events that occurred from June 2008 to December 2011. Model calibration was approximate because unmeasured inflows to the river could not be accounted for during calibration. Information on unmeasured inflow derived from NWS hydrologic models and additional flood-event data could enable the NWS to further refine the unsteady-flow model.
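The rating-curve step described above can be sketched numerically: given a stage-discharge rating for a streamgage, the stage for each flood discharge is read off by interpolation. This is a minimal illustration with a hypothetical rating (real curves come from established USGS ratings), and linear interpolation stands in for the calibrated step-backwater computation.

```python
import numpy as np

# Hypothetical rating curve for one streamgage: paired stage (ft) and
# discharge (ft3/s) values. Real ratings come from measured data.
rating_stage = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
rating_discharge = np.array([150.0, 900.0, 2800.0, 6500.0, 12000.0])

def stage_for_discharge(q):
    """Interpolate the water-surface stage for a given discharge."""
    return float(np.interp(q, rating_discharge, rating_stage))

# Stages for a range of flood discharges, e.g. spanning the 50- to the
# 0.2-percent annual exceedance probability floods (values invented).
flood_flows = [500.0, 4000.0, 10000.0]
stages = [stage_for_discharge(q) for q in flood_flows]
```

Each interpolated stage, combined with digital elevation data, then bounds one flood-inundation polygon.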

  2. A tool for teaching three-dimensional dermatomes combined with distribution of cutaneous nerves on the limbs.

    PubMed

    Kooloos, Jan G M; Vorstenbosch, Marc A T M

    2013-01-01

    A teaching tool that facilitates student understanding of a three-dimensional (3D) integration of dermatomes with peripheral cutaneous nerve field distributions is described. This model is inspired by the confusion in novice learners between dermatome maps and nerve field distribution maps. This confusion leads to the misconception that these two distribution maps fully overlap, and may stem from three sources: (1) the differences in dermatome maps in anatomical textbooks, (2) the limited views in the figures of dermatome maps and cutaneous nerve field maps, hampering the acquisition of a 3D picture, and (3) the lack of figures showing both maps together. To clarify this concept, the learning process can be facilitated by transforming the 2D drawings in textbooks to a 3D hands-on model and by merging the information from the separate maps. Commercially available models were covered with white cotton pantyhose, and borders between dermatomes were marked using the drawings from the students' required study material. Distribution maps of selected peripheral nerves were cut out from color transparencies. Both the model and the cut-out nerve fields were then at the students' disposal during a laboratory exercise. The students were instructed to affix the transparencies in the right place according to the textbook's figures. This model facilitates integrating the spatial relationships of the two types of nerve distributions. By highlighting the spatial relationship and aiming to provoke student enthusiasm, this model follows the advantages of other low-fidelity models. © 2013 American Association of Anatomists.

  3. A radiation hybrid map of the European sea bass (Dicentrarchus labrax) based on 1581 markers: Synteny analysis with model fish genomes.

    PubMed

    Guyon, Richard; Senger, Fabrice; Rakotomanga, Michaelle; Sadequi, Naoual; Volckaert, Filip A M; Hitte, Christophe; Galibert, Francis

    2010-10-01

    The selective breeding of fish for aquaculture purposes requires the understanding of the genetic basis of traits such as growth, behaviour, resistance to pathogens and sex determinism. Access to well-developed genomic resources is a prerequisite to improve the knowledge of these traits. Having this aim in mind, a radiation hybrid (RH) panel of European sea bass (Dicentrarchus labrax) was constructed from splenocytes irradiated at 3000 rad, allowing the construction of a 1581 marker RH map. A total of 1440 gene markers providing ~4400 anchors with the genomes of three-spined stickleback, medaka, pufferfish and zebrafish, helped establish synteny relationships with these model species. The identification of Conserved Segments Ordered (CSO) between sea bass and model species allows the anticipation of the position of any sea bass gene from its location in model genomes. Synteny relationships between sea bass and gilthead seabream were addressed by mapping 37 orthologous markers. The sea bass genetic linkage map was integrated in the RH map through the mapping of 141 microsatellites. We are thus able to present the first complete gene map of sea bass. It will facilitate linkage studies and the identification of candidate genes and Quantitative Trait Loci (QTL). The RH map further positions sea bass as a genetic and evolutionary model of Perciformes and supports their ongoing aquaculture expansion. Copyright © 2010 Elsevier Inc. All rights reserved.

  4. Streamflow distribution maps for the Cannon River drainage basin, southeast Minnesota, and the St. Louis River drainage basin, northeast Minnesota

    USGS Publications Warehouse

    Smith, Erik A.; Sanocki, Chris A.; Lorenz, David L.; Jacobsen, Katrin E.

    2017-12-27

    Streamflow distribution maps for the Cannon River and St. Louis River drainage basins were developed by the U.S. Geological Survey, in cooperation with the Legislative-Citizen Commission on Minnesota Resources, to illustrate relative and cumulative streamflow distributions. The Cannon River was selected to provide baseline data to assess the effects of potential surficial sand mining, and the St. Louis River was selected to determine the effects of ongoing Mesabi Iron Range mining. Each drainage basin (Cannon, St. Louis) was subdivided into nested drainage basins: the Cannon River was subdivided into 152 nested drainage basins, and the St. Louis River was subdivided into 353 nested drainage basins. For each smaller drainage basin, the estimated volumes of groundwater discharge (as base flow) and surface runoff flowing into all surface-water features were displayed under the following conditions: (1) extreme low-flow conditions, comparable to an exceedance-probability quantile of 0.95; (2) low-flow conditions, comparable to an exceedance-probability quantile of 0.90; (3) a median condition, comparable to an exceedance-probability quantile of 0.50; and (4) a high-flow condition, comparable to an exceedance-probability quantile of 0.02. Streamflow distribution maps were developed using flow-duration curve exceedance-probability quantiles in conjunction with Soil-Water-Balance model outputs; both the flow-duration curve and Soil-Water-Balance models were built upon previously published U.S. Geological Survey reports. The selected streamflow distribution maps provide a proactive water management tool for State cooperators by illustrating flow rates during a range of hydraulic conditions. 
Furthermore, after the nested drainage basins are highlighted in terms of surface-water flows, the streamflows can be evaluated in the context of meeting specific ecological flows under different flow regimes and potentially assist with decisions regarding groundwater and surface-water appropriations. Presented streamflow distribution maps are foundational work intended to support the development of additional streamflow distribution maps that include statistical constraints on the selected flow conditions.
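The flow-duration-curve quantiles named above can be computed directly from a daily streamflow record. A minimal sketch on synthetic flows (the exceedance probabilities 0.95, 0.90, 0.50, and 0.02 are the four conditions in the report; the lognormal flows are an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily streamflows (ft3/s) for one nested drainage basin.
flows = rng.lognormal(mean=3.0, sigma=1.0, size=3650)

def exceedance_quantile(flows, p_exceed):
    """Flow exceeded with probability p_exceed (one flow-duration curve point)."""
    return float(np.quantile(flows, 1.0 - p_exceed))

# The four report conditions: extreme low flow, low flow, median, high flow.
fdc = {p: exceedance_quantile(flows, p) for p in (0.95, 0.90, 0.50, 0.02)}
```

By construction, a higher exceedance probability corresponds to a lower flow, so the four values decrease from the 0.02 to the 0.95 condition.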

  5. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task

    PubMed Central

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-01-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
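The priority-map computation described above can be sketched numerically. This is a toy illustration, not the authors' model: random arrays stand in for the retinotopic shape-feature responses and the top-down target signature, and a box average stands in for the divisive-normalization pool.

```python
import numpy as np

rng = np.random.default_rng(1)
features = rng.random((32, 32, 8))  # stand-in local shape-feature responses
target = rng.random(8)              # stand-in top-down target signature

# Target-specific modulation: local responses weighted by the target features.
drive = features @ target

def box_pool(a, r=2):
    """Mean of a over a (2r+1) x (2r+1) neighborhood (edge-padded)."""
    p = np.pad(a, r, mode="edge")
    out = np.zeros_like(a)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

# Divisive normalization: each location is divided by pooled local activity,
# so a single highly salient location cannot monopolize the map.
priority = drive / (0.1 + box_pool(drive))

# The maximum of the priority map is selected as the locus of attention.
locus = np.unravel_index(np.argmax(priority), priority.shape)
```

In the full model, the visual input would then be enhanced around `locus` and passed to an object-selective stage for target verification.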

  6. MHC class I-associated peptides derive from selective regions of the human genome.

    PubMed

    Pearson, Hillary; Daouda, Tariq; Granados, Diana Paola; Durette, Chantal; Bonneil, Eric; Courcelles, Mathieu; Rodenbrock, Anja; Laverdure, Jean-Philippe; Côté, Caroline; Mader, Sylvie; Lemieux, Sébastien; Thibault, Pierre; Perreault, Claude

    2016-12-01

    MHC class I-associated peptides (MAPs) define the immune self for CD8+ T lymphocytes and are key targets of cancer immunosurveillance. Here, the goals of our work were to determine whether the entire set of protein-coding genes could generate MAPs and whether specific features influence the ability of discrete genes to generate MAPs. Using proteogenomics, we have identified 25,270 MAPs isolated from the B lymphocytes of 18 individuals who collectively expressed 27 high-frequency HLA-A,B allotypes. The entire MAP repertoire presented by these 27 allotypes covered only 10% of the exomic sequences expressed in B lymphocytes. Indeed, 41% of expressed protein-coding genes generated no MAPs, while 59% of genes generated up to 64 MAPs, often derived from adjacent regions and presented by different allotypes. We next identified several features of transcripts and proteins associated with efficient MAP production. From these data, we built a logistic regression model that predicts with good accuracy whether a gene generates MAPs. Our results show preferential selection of MAPs from a limited repertoire of proteins with distinctive features. The notion that the MHC class I immunopeptidome presents only a small fraction of the protein-coding genome for monitoring by the immune system has profound implications in autoimmunity and cancer immunology.
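A model of the kind described, predicting from transcript and protein features whether a gene generates MAPs, can be sketched as a plain logistic regression fitted by gradient descent. The two features and the synthetic labels below are placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical standardized gene features (e.g., transcript abundance,
# protein length); the study's actual feature set is richer.
n = 400
X = rng.normal(size=(n, 2))
# Synthetic labels: whether a gene generates MAPs, driven by the features
# plus noise. Placeholder data for illustration only.
y = (X @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Fit logistic regression by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted MAP probability
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * (p - y).mean()

accuracy = float(((p > 0.5) == y).mean())
```

On these synthetic, nearly separable data the fitted model classifies most genes correctly, mirroring the "good accuracy" the authors report for their richer feature set.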

  7. MHC class I–associated peptides derive from selective regions of the human genome

    PubMed Central

    Pearson, Hillary; Granados, Diana Paola; Durette, Chantal; Bonneil, Eric; Courcelles, Mathieu; Rodenbrock, Anja; Laverdure, Jean-Philippe; Côté, Caroline; Thibault, Pierre

    2016-01-01

    MHC class I–associated peptides (MAPs) define the immune self for CD8+ T lymphocytes and are key targets of cancer immunosurveillance. Here, the goals of our work were to determine whether the entire set of protein-coding genes could generate MAPs and whether specific features influence the ability of discrete genes to generate MAPs. Using proteogenomics, we have identified 25,270 MAPs isolated from the B lymphocytes of 18 individuals who collectively expressed 27 high-frequency HLA-A,B allotypes. The entire MAP repertoire presented by these 27 allotypes covered only 10% of the exomic sequences expressed in B lymphocytes. Indeed, 41% of expressed protein-coding genes generated no MAPs, while 59% of genes generated up to 64 MAPs, often derived from adjacent regions and presented by different allotypes. We next identified several features of transcripts and proteins associated with efficient MAP production. From these data, we built a logistic regression model that predicts with good accuracy whether a gene generates MAPs. Our results show preferential selection of MAPs from a limited repertoire of proteins with distinctive features. The notion that the MHC class I immunopeptidome presents only a small fraction of the protein-coding genome for monitoring by the immune system has profound implications in autoimmunity and cancer immunology. PMID:27841757

  8. Neural network models for spatial data mining, map production, and cortical direction selectivity

    NASA Astrophysics Data System (ADS)

    Parsons, Olga

    A family of ARTMAP neural networks for incremental supervised learning has been developed over the last decade. The Sensor Exploitation Group of MIT Lincoln Laboratory (LL) has incorporated an early version of this network as the recognition engine of a hierarchical system for fusion and data mining of multiple registered geospatial images. The LL system has been successfully fielded, but it is limited to target vs. non-target identifications and does not produce whole maps. This dissertation expands the capabilities of the LL system so that it learns to identify arbitrarily many target classes at once and can thus produce a whole map. This new spatial data mining system is designed particularly to cope with the highly skewed class distributions of typical mapping problems. Specification of a consistent procedure and a benchmark testbed has permitted the evaluation of candidate recognition networks as well as pre- and post-processing and feature extraction options. The resulting default ARTMAP network and mapping methodology set a standard for a variety of related mapping problems and application domains. The second part of the dissertation investigates the development of cortical direction selectivity. The possible role of visual experience and oculomotor behavior in the maturation of cells in the primary visual cortex is studied. The responses of neurons in the thalamus and cortex of the cat are modeled when natural scenes are scanned by several types of eye movements. Inspired by the Hebbian-like synaptic plasticity, which is based upon correlations between cell activations, the second-order statistical structure of thalamo-cortical activity is examined. In the simulations, patterns of neural activity that lead to a correct refinement of cell responses are observed during visual fixation, when small ocular movements occur, but are not observed in the presence of large saccades. 
Simulations also replicate experiments in which kittens are reared under stroboscopic illumination. The abnormal fixational eye movements of these cats may account for the puzzling finding of a specific loss of cortical direction selectivity but preservation of orientation selectivity. This work indicates that the oculomotor behavior of visual fixation may play an important role in the refinement of cell response selectivity.

  9. Relationships Among Peripheral and Central Electrophysiological Measures of Spatial and Spectral Selectivity and Speech Perception in Cochlear Implant Users.

    PubMed

    Scheperle, Rachel A; Abbas, Paul J

    2015-01-01

    The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. 
Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Speech-in-Noise test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. All electrophysiological measures were significantly correlated with each other and with speech scores for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech scores; spectral auditory change complex amplitude was the strongest predictor. The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be most useful for within-subject applications when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered.

  10. Predicting roadkill hotspots based on the spatial distribution of Korean water deer (Hydropotes inermis argyropus) using a Maxent model on South Korean expressways: the case of the Cheongju-Sangju Expressway

    NASA Astrophysics Data System (ADS)

    Park, Hyomin; Lee, Sangdon

    2016-04-01

    Road construction has direct and indirect effects on ecosystems. In particular, wildlife-vehicle collisions (roadkills) are a considerable threat to the populations of many species. This study aims to identify how topographic characteristics affect the spatial distribution of the Korean water deer (Hydropotes inermis argyropus), a species native to Korea and listed as LC (least concern) in the IUCN Red List. The Korean water deer population is growing every year, and the species accounts for most roadkills (>70%) on Korean expressways. To predict the distribution of the Korean water deer, we selected the factors that most affect its habitat. The major habitats of the water deer are known to be agricultural areas, forests, and water. On this basis, eight factors were selected (land cover map, vegetation map, age class of forest, diameter class of trees, population, slope of the study site, elevation of the study site, and distance to rivers), and thematic maps were produced using GIS software (ESRI ArcGIS 10.3.1). To analyze the factors affecting water deer distribution, GPS data and the thematic maps of the study area were entered into the Maxent model (Maxent 3.3.3k). The results were verified using the AUC (Area Under the Curve) of the ROC (Receiver Operating Characteristic) curve. The ROC curve uses sensitivity and specificity to assess the prediction efficiency of the model; the closer the AUC is to 1, the higher the prediction efficiency. The factors that most affected the distribution of the water deer were the land cover map, the diameter class of trees, and the elevation of the study site; the AUC value was 0.623. To predict water deer roadkill hotspots on the Cheongju-Sangju Expressway, a thematic map was prepared based on GPS data of roadkill locations. The topographic factors that most affected water deer roadkills were the land cover map, the actual vegetation map, and the age class of forest; the AUC value was 0.854. 
Through this study, we identified the sites and hotspots that water deer are expected to use frequently, based on quantitative data on spatial and topographic factors. We can therefore suggest ways to minimize roadkills by targeting these hotspots and constructing eco-corridors. Identifying key habitat areas for wild mammals in this way can significantly reduce human-wildlife conflicts.

  11. Predictive Modeling and Mapping of Fish Distributions in Small Streams of the Canadian Rocky Mountain Foothills

    NASA Astrophysics Data System (ADS)

    McCleary, R. J.; Hassan, M. A.

    2006-12-01

    An automated procedure was developed to model spatial fish distributions within small streams in the Foothills of Alberta. Native fish populations and their habitats are susceptible to impacts arising from both industrial forestry and rapid development of petroleum resources in the region. Knowledge of fish distributions and the effects of industrial activities on their habitats is required to help conserve native fish populations. Resource selection function (RSF) models were used to explain presence/absence of fish in small streams. Target species were bull trout, rainbow trout and non-native brook trout. Using GIS, the drainage network was divided into reaches with uniform slope and drainage area and then polygons for each reach were created. Predictor variables described stream size, stream energy, climate and land-use. We identified a set of candidate models and selected the best model using a standard Akaike Information Criteria approach. The best models were validated with two external data sets. Drainage area and basin slope parameters were included in all best models. This finding emphasizes the importance of controlling for the energy dimension at the basin scale in investigations into the effects of land-use on aquatic resources in this transitional landscape between the mountains and plains. The best model for bull trout indicated a relation between the presence of artificial migration barriers in downstream areas and the extirpation of the species from headwater reaches. We produced reach-scale maps by species and summarized this information within all small catchments across the 12,000 km2 study area. These maps included three categories based on predicted probability of capture for individual reaches. The high probability category had a 78 percent accuracy for correctly predicting both fish-present and fish-absent reaches. 
Basin scale maps highlight specific watersheds likely to support both native bull trout and invasive brook trout, while reach-scale maps indicate specific reaches where interactions between these two species are likely to occur. With regional calibration, this automated modeling and mapping procedure could apply in headwater catchments throughout the Rocky Mountain Foothills and other areas where sporadic waterfalls or other natural migration barriers are not an important feature limiting fish distribution.
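The "standard Akaike Information Criteria approach" mentioned above amounts to scoring each candidate RSF model and keeping the minimum. A minimal sketch with hypothetical log-likelihoods and parameter counts (the candidate sets and all numbers are invented for illustration; only the convention that drainage area and basin slope appear in every candidate follows the study):

```python
import numpy as np

def aic(log_likelihood, k):
    """Akaike information criterion: lower is better."""
    return 2 * k - 2 * log_likelihood

# Hypothetical candidate RSF (presence/absence) models, each with its
# maximized log-likelihood and number of fitted parameters k.
candidates = {
    "area + slope":            aic(-210.3, 3),
    "area + slope + climate":  aic(-207.9, 4),
    "area + slope + land use": aic(-209.1, 5),
}
best = min(candidates, key=candidates.get)
```

AIC rewards fit (the log-likelihood term) while penalizing parameters, so a richer model wins only when its likelihood gain exceeds the penalty.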

  12. A new capture fraction method to map how pumpage affects surface water flow.

    PubMed

    Leake, Stanley A; Reeves, Howard W; Dickinson, Jesse E

    2010-01-01

    All groundwater pumped is balanced by removal of water somewhere, initially from storage in the aquifer and later from capture in the form of increase in recharge and decrease in discharge. Capture that results in a loss of water in streams, rivers, and wetlands now is a concern in many parts of the United States. Hydrologists commonly use analytical and numerical approaches to study temporal variations in sources of water to wells for select points of interest. Much can be learned about coupled surface/groundwater systems, however, by looking at the spatial distribution of theoretical capture for select times of interest. Development of maps of capture requires (1) a reasonably well-constructed transient or steady state model of an aquifer with head-dependent flow boundaries representing surface water features or evapotranspiration and (2) an automated procedure to run the model repeatedly and extract results, each time with a well in a different location. This paper presents new methods for simulating and mapping capture using three-dimensional groundwater flow models and presents examples from Arizona, Oregon, and Michigan.
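The automated procedure in item (2) can be sketched as a loop that "re-runs the model" with the well moved cell by cell and maps the resulting capture. Here a simple analytic decay stands in for a MODFLOW-type model run; in a real application each candidate cell would trigger a full simulation and a comparison of stream leakage with and without the well.

```python
import numpy as np

# Toy capture-fraction map: for each candidate well cell, record what
# fraction of the pumpage is captured from the stream. The exponential
# decay with distance is an invented stand-in for an actual model run.
nx, ny = 20, 20
stream_col = 0  # stream along the left edge of the grid

def capture_fraction(row, col):
    """Stand-in for one model run with the well placed at (row, col)."""
    d = abs(col - stream_col)
    return float(np.exp(-d / 5.0))

capture_map = np.array([[capture_fraction(r, c) for c in range(nx)]
                        for r in range(ny)])
```

The resulting grid is exactly the kind of map the paper describes: contours of theoretical capture for a selected time of interest.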

  13. An optimal strategy for functional mapping of dynamic trait loci.

    PubMed

    Jin, Tianbo; Li, Jiahan; Guo, Ying; Zhou, Xiaojing; Yang, Runqing; Wu, Rongling

    2010-02-01

    As an emerging powerful approach for mapping quantitative trait loci (QTLs) responsible for dynamic traits, functional mapping models the time-dependent mean vector with biologically meaningful equations and is likely to generate biologically relevant and interpretable results. Given the autocorrelated nature of a dynamic trait, functional mapping also requires models for the structure of the covariance matrix. In this article, we provide a comprehensive set of approaches for modelling the covariance structure and incorporate each of these approaches into the framework of functional mapping. Bayesian information criterion (BIC) values are used as a model selection criterion to choose the optimal combination of the submodels for the mean vector and covariance structure. In an example of leaf age growth from a rice molecular genetic project, the best submodel combination was found to be the Gaussian model for the correlation structure, a power equation of order 1 for the variance, and the power curve for the mean vector. Under this combination, several significant QTLs for leaf age growth trajectories were detected on different chromosomes. Our model is well suited to studying the genetic architecture of dynamic traits of agricultural value.
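The BIC-based selection over submodel combinations can be sketched directly. The log-likelihoods and parameter counts below are hypothetical; the combination labels merely mirror the article's finding that a power mean curve with a Gaussian correlation structure won out:

```python
import numpy as np

def bic(log_likelihood, k, n):
    """Bayesian information criterion: lower is better."""
    return k * np.log(n) - 2 * log_likelihood

# Hypothetical fits of (mean-curve model, covariance-structure model)
# combinations, each with its maximized log-likelihood and parameter
# count k, for n observations. All numbers are invented.
n = 180
fits = {
    ("logistic mean", "AR(1)"):         bic(-512.4, 6, n),
    ("power mean", "Gaussian corr"):    bic(-498.7, 7, n),
    ("power mean", "unstructured"):     bic(-495.2, 15, n),
}
best = min(fits, key=fits.get)
```

Note how the unstructured covariance fits best in raw likelihood but loses under BIC: its 15 parameters incur a log(n)-scaled penalty that the parsimonious Gaussian structure avoids.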

  14. Updating flood maps efficiently using existing hydraulic models, very-high-accuracy elevation data, and a geographic information system; a pilot study on the Nisqually River, Washington

    USGS Publications Warehouse

    Jones, Joseph L.; Haluska, Tana L.; Kresch, David L.

    2001-01-01

    A method of updating flood inundation maps at a fraction of the expense of using traditional methods was piloted in Washington State as part of the U.S. Geological Survey Urban Geologic and Hydrologic Hazards Initiative. Large savings in expense may be achieved by building upon previous Flood Insurance Studies and automating the process of flood delineation with a Geographic Information System (GIS); increases in accuracy and detail result from the use of very-high-accuracy elevation data and automated delineation; and the resulting digital data sets contain valuable ancillary information such as flood depth, as well as greatly facilitating map storage and utility. The method consists of creating stage-discharge relations from the archived output of the existing hydraulic model, using these relations to create updated flood stages for recalculated flood discharges, and using a GIS to automate the map generation process. Many of the effective flood maps were created in the late 1970s and early 1980s, and suffer from a number of well recognized deficiencies such as out-of-date or inaccurate estimates of discharges for selected recurrence intervals, changes in basin characteristics, and relatively low quality elevation data used for flood delineation. FEMA estimates that 45 percent of effective maps are over 10 years old (FEMA, 1997). Consequently, Congress has mandated the updating and periodic review of existing maps, which have cost the Nation almost 3 billion (1997) dollars. The need to update maps and the cost of doing so were the primary motivations for piloting a more cost-effective and efficient updating method. New technologies such as Geographic Information Systems and LIDAR (Light Detection and Ranging) elevation mapping are key to improving the efficiency of flood map updating, but they also improve the accuracy, detail, and usefulness of the resulting digital flood maps. 
GISs produce digital maps without manual estimation of inundated areas between cross sections, and can generate working maps across a broad range of scales, for any selected area, and overlaid with easily updated cultural features. Local governments are aggressively collecting very-high-accuracy elevation data for numerous reasons; this not only lowers the cost and increases accuracy of flood maps, but also inherently boosts the level of community involvement in the mapping process. These elevation data are also ideal for hydraulic modeling, should an existing model be judged inadequate.

  15. Spatio-temporal Bayesian model selection for disease mapping

    PubMed Central

    Carroll, R; Lawson, AB; Faes, C; Kirby, RS; Aregay, M; Watjou, K

    2016-01-01

    Spatio-temporal analysis of small area health data often involves choosing a fixed set of predictors prior to the final model fit. In this paper, we propose a spatio-temporal approach of Bayesian model selection to implement model selection for certain areas of the study region as well as certain years in the study time line. Here, we examine the usefulness of this approach by way of a large-scale simulation study accompanied by a case study. Our results suggest that a special case of the model selection methods, a mixture model allowing a weight parameter to indicate if the appropriate linear predictor is spatial, spatio-temporal, or a mixture of the two, offers the best option to fitting these spatio-temporal models. In addition, the case study illustrates the effectiveness of this mixture model within the model selection setting by easily accommodating lifestyle, socio-economic, and physical environmental variables to select a predominantly spatio-temporal linear predictor. PMID:28070156

  16. Evaluating the Variations in the Flood Susceptibility Maps Accuracies due to the Alterations in the Type and Extent of the Flood Inventory

    NASA Astrophysics Data System (ADS)

    Tehrany, M. Sh.; Jones, S.

    2017-10-01

    This paper explores the influence of the format and extent of the flood inventory data on the final susceptibility map. The extreme 2011 Brisbane flood event was used as the case study. Logistic Regression (LR) was selected to perform the modelling, as it is a well-known algorithm in natural hazard modelling owing to its interpretability, rapid processing time, and accurate measurement approach. The LR model was applied using polygon and point formats of the inventory data. Random samples of 1000, 700, 500, 300, 100 and 50 points were selected, and susceptibility mapping was undertaken using each group of random points. The resultant maps were assessed visually and statistically using the Area Under the Curve (AUC) method. The prediction rates measured for the susceptibility maps produced by the polygon data and by 1000, 700, 500, 300, 100 and 50 random points were 63 %, 76 %, 88 %, 80 %, 74 %, 71 % and 65 %, respectively. Evidently, using the polygon format of the inventory data did not lead to reasonable outcomes. In the case of random points, raising the number of points increased the prediction rates, except for 1000 points. Hence, minimum and maximum thresholds for the extent of the inventory must be set prior to the analysis. It is concluded that the extent and format of the inventory data are two of the influential components in the precision of the modelling.
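The AUC assessment used above can be computed without any plotting: it is the probability that a randomly chosen flooded location receives a higher susceptibility score than a randomly chosen non-flooded one. A minimal sketch on synthetic scores (the scores, and the assumed improvement of separation with inventory size, are illustrative inventions, not the paper's data):

```python
import numpy as np

def auc(pos, neg):
    """P(flooded point outranks a non-flooded point), ties counted half."""
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(wins + 0.5 * ties)

rng = np.random.default_rng(3)
rates = {}
for n_points in (50, 100, 300, 500):
    # Assumed effect: more inventory points give the model better
    # separation between flooded and non-flooded scores.
    sep = 0.3 + 0.002 * n_points
    flooded = rng.normal(loc=sep, size=n_points)
    dry = rng.normal(loc=0.0, size=n_points)
    rates[n_points] = auc(flooded, dry)
```

An AUC of 0.5 means the scores are uninformative; the paper's 63 %-88 % prediction rates correspond to AUC values of 0.63-0.88 on this scale.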

  17. The Lunar Mapping and Modeling Project

    NASA Technical Reports Server (NTRS)

    Nall, M.; French, R.; Noble, S.; Muery, K.

    2010-01-01

    The Lunar Mapping and Modeling Project (LMMP) is managing a suite of lunar mapping and modeling tools and data products that support lunar exploration activities, including the planning, design, development, test, and operations associated with crewed and/or robotic operations on the lunar surface. Although the project was initiated primarily to serve the needs of the Constellation program, it is equally suited for supporting landing site selection and planning for a variety of robotic missions, including NASA science and/or human precursor missions and commercial missions such as those planned by the Google Lunar X-Prize participants. In addition, LMMP should prove to be a convenient and useful tool for scientific analysis and for education and public outreach (E/PO) activities.

  18. Bifurcation structures of a cobweb model with memory and competing technologies

    NASA Astrophysics Data System (ADS)

    Agliari, Anna; Naimzada, Ahmad; Pecora, Nicolò

    2018-05-01

    In this paper we study a simple model based on the cobweb demand-supply framework with costly innovators and free imitators. The evolutionary selection between technologies depends on a performance measure which is related to the degree of memory. The resulting dynamics is described by a two-dimensional map. The map has a fixed point which may lose stability either via supercritical Neimark-Sacker bifurcation or flip bifurcation and several multistability situations exist. We describe some sequences of global bifurcations involving attracting and repelling closed invariant curves. These bifurcations, characterized by the creation of homoclinic connections or homoclinic tangles, are described through several numerical simulations. Particular bifurcation phenomena are also observed when the parameters are selected inside a periodicity region.

  19. Interpreting fMRI data: maps, modules and dimensions

    PubMed Central

    Op de Beeck, Hans P.; Haushofer, Johannes; Kanwisher, Nancy G.

    2009-01-01

    Neuroimaging research over the past decade has revealed a detailed picture of the functional organization of the human brain. Here we focus on two fundamental questions that are raised by the detailed mapping of sensory and cognitive functions and illustrate these questions with findings from the object-vision pathway. First, are functionally specific regions that are located close together best understood as distinct cortical modules or as parts of a larger-scale cortical map? Second, what functional properties define each cortical map or module? We propose a model in which overlapping continuous maps of simple features give rise to discrete modules that are selective for complex stimuli. PMID:18200027

  20. Use of ocean color scanner data in water quality mapping

    NASA Technical Reports Server (NTRS)

    Khorram, S.

    1981-01-01

    Remotely sensed data, in combination with in situ data, are used in assessing water quality parameters within the San Francisco Bay-Delta. The parameters include suspended solids, chlorophyll, and turbidity. Regression models are developed between each of the water quality parameter measurements and the Ocean Color Scanner (OCS) data. The models are then extended to the entire study area for mapping water quality parameters. The results include a series of color-coded maps, each pertaining to one of the water quality parameters, and the statistical analysis of the OCS data and regression models. It is found that concurrently collected OCS data and surface truth measurements are highly useful in mapping the selected water quality parameters and locating areas having relatively high biological activity. In addition, it is found to be virtually impossible, at least within this test site, to locate such areas on U-2 color and color-infrared photography.

  1. Selective 4D modelling framework for spatial-temporal land information management system

    NASA Astrophysics Data System (ADS)

    Doulamis, Anastasios; Soile, Sofia; Doulamis, Nikolaos; Chrisouli, Christina; Grammalidis, Nikos; Dimitropoulos, Kosmas; Manesis, Charalambos; Potsiou, Chryssy; Ioannidis, Charalabos

    2015-06-01

    This paper introduces a predictive (selective) 4D modelling framework in which only the spatial 3D differences are modelled at forthcoming time instances, while regions with no significant spatial-temporal alterations remain intact. To accomplish this, spatial-temporal analysis is first applied between 3D digital models captured at different time instances, producing dynamic change history maps. Change history maps indicate the spatial probability that a region will need further 3D modelling at forthcoming instances. They therefore support a predictive assessment, that is, they localize surfaces within the objects where a high-accuracy reconstruction process needs to be activated at forthcoming time instances. The proposed 4D Land Information Management System (LIMS) is implemented using open interoperable standards based on the CityGML framework. CityGML allows the description of the semantic metadata information and the rights of the land resources. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 4D LIMS digital parcels and the respective semantic information. The open source 3DCityDB incorporating a PostgreSQL geo-database is used to manage and manipulate 3D data and their semantics. The framework is applied to detect change through time in a 3D block of plots in an urban area of Athens, Greece. Starting with an accurate 3D model of the buildings in 1983, a change history map is created using automated dense image matching on aerial photos of 2010. Meshes are created for both time instances, and the changes are detected through their comparison.
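The core change-history-map idea, flagging grid cells whose 3D geometry changed enough between epochs to warrant re-modelling, can be illustrated with a toy height-grid comparison. The grids, noise level, and change threshold below are invented for the sketch and are not taken from the Athens case study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two height grids for the same parcel block at different epochs
# (stand-ins for a 1983 3D model and a 2010 dense-matching surface).
dsm_t0 = rng.normal(10.0, 0.05, size=(50, 50))
dsm_t1 = dsm_t0.copy()
dsm_t1[10:20, 10:20] += 4.0  # a new structure appears between the epochs

def change_history_map(before, after, threshold=0.5):
    """Flag cells whose height change exceeds a tolerance; flagged cells
    are the candidates for high-accuracy re-modelling at the next epoch."""
    return np.abs(after - before) > threshold

changed = change_history_map(dsm_t0, dsm_t1)
```

Only the flagged region would be re-reconstructed at the next epoch; the rest of the model is carried forward unchanged, which is the "selective" part of the framework.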

  2. SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model

    NASA Astrophysics Data System (ADS)

    Grelle, Gerardo; Bonito, Laura; Lampasi, Alessandro; Revellino, Paola; Guerriero, Luigi; Sappa, Giuseppe; Guadagno, Francesco Maria

    2016-04-01

    The SiSeRHMap (simulator for mapped seismic response using a hybrid model) is a computerized methodology capable of elaborating prediction maps of seismic response in terms of acceleration spectra. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (geographic information system) cubic model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A meta-modelling process confers a hybrid nature to the methodology. In this process, the one-dimensional (1-D) linear equivalent analysis produces acceleration response spectra for a specified number of site profiles using one or more input motions. The shear wave velocity-thickness profiles, defined as trainers, are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Emul-spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated evolutionary algorithm (EA) and the Levenberg-Marquardt algorithm (LMA) as the final optimizer. In the final step, the GCM maps executor module produces a serial map set of the stratigraphic seismic response at different periods, grid-solving the calibrated Emul-spectra model. In addition, the spectral topographic amplification is also computed by means of a 3-D validated numerical prediction model. This model is built to match the results of numerical simulations related to isolated reliefs, using GIS morphometric data. In this way, different sets of seismic response maps are developed, on which maps of design acceleration response spectra are also defined by means of an enveloping technique.

  3. Application of Landsat 5-TM and GIS data to elk habitat studies in northern Idaho

    NASA Astrophysics Data System (ADS)

    Hayes, Stephen Gordon

    1999-12-01

    An extensive geographic information system (GIS) database and a large radiotelemetry sample of elk (n = 153) were used to study habitat use and selection differences between cow and bull elk (Cervus elaphus) in the Coeur d'Alene Mountains of Idaho. Significant sex differences in 40 ha area use, and interactive effects of sex and season on selection of 40 ha areas from home ranges were found. In all seasons, bulls used habitats with more closed canopy forest, more hiding cover, and less shrub and graminoid cover, than cows. Cows selected areas with shrub and graminoid cover in winter and avoided areas with closed canopy forest and hiding cover in winter and summer seasons. Both sexes selected 40 ha areas of unfragmented hiding cover and closed canopy forest during the hunting season. Bulls also avoided areas with high open road densities during the rut and hunting season. These results support present elk management recommendations, but our observations of sexual segregation provide biologists with an opportunity to refine habitat management plans to target bulls and cows specifically. Furthermore, the results demonstrate that hiding cover and canopy closure can be accurately estimated from Landsat 5-TM imagery and GIS soil data at a scale and resolution to which elk respond. As a result, our habitat mapping methods can be applied to large areas of private and public land with consistent, cost-efficient results. Non-Lambertian correction models of Landsat 5-TM imagery were compared to an uncorrected image to determine if topographic normalization increased the accuracy of elk habitat maps of forest structure in northern Idaho. The non-Lambertian models produced elk habitat maps with overall and kappa statistic accuracies as much as 21.3% higher (p < 0.0192) than the uncorrected image. 
Log-linear models and power analysis were used to study the dependence of commission and omission error rates on topographic normalization, vegetation type, and solar incidence angle. Examination of Type I and Type II likelihood ratio test error rates indicated that topographic normalization increased accuracy in sapling/pole closed forest, clearcuts, open forest, and shrubfields. Non-Lambertian models that allowed the Minnaert constant (k) to vary as a function of solar incidence and vegetation type offered no improvement in accuracy over the non-Lambertian model with k estimated for each TM band. The bias of habitat use proportion estimates, derived from the most accurate map, was quantified and the applicability of the non-Lambertian model to elk habitat mapping is discussed.
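The non-Lambertian topographic normalization compared in this study is commonly written as the Minnaert model L = L_n · cos^k(i) · cos^(k-1)(e), with k estimated per band. A minimal sketch of that correction, assuming this standard form (the study's band- and vegetation-specific estimates of k are not reproduced here):

```python
import numpy as np

def minnaert_correct(radiance, cos_i, cos_e, k):
    """One common form of the Minnaert non-Lambertian model,
    L = L_n * cos(i)**k * cos(e)**(k - 1), solved for the normalized
    radiance L_n.  cos_i is the cosine of the solar incidence angle on
    the slope, cos_e the cosine of the exitance (slope) angle, and k the
    band-specific Minnaert constant; k = 1 reduces to the Lambertian
    cosine correction."""
    return radiance / (cos_i ** k * cos_e ** (k - 1))

# Correct a toy band of shaded-slope pixels with an assumed k.
band = np.array([80.0, 100.0, 120.0])
corrected = minnaert_correct(band, cos_i=0.5, cos_e=0.9, k=0.7)
```

Applying such a correction band by band, with k fitted to each TM band, is what produced the normalized imagery whose classification accuracies are compared above.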

  4. Dynamical properties of maps fitted to data in the noise-free limit

    PubMed Central

    Lindström, Torsten

    2013-01-01

    We argue that any attempt to classify dynamical properties from nonlinear finite time-series data requires a mechanistic model fitting the data better than piecewise linear models according to standard model selection criteria. Such a procedure seems necessary but still not sufficient. PMID:23768079

  5. The influence of uncertain map features on risk beliefs and perceived ambiguity for maps of modeled cancer risk from air pollution

    PubMed Central

    Myers, Jeffrey D.

    2012-01-01

    Maps are often used to convey information generated by models, for example, modeled cancer risk from air pollution. The concrete nature of images, such as maps, may convey more certainty than warranted for modeled information. Three map features were selected to communicate the uncertainty of modeled cancer risk: (a) map contours appeared in or out of focus, (b) one or three colors were used, and (c) a verbal-relative or numeric risk expression was used in the legend. Study aims were to assess how these features influenced risk beliefs and the ambiguity of risk beliefs at four assigned map locations that varied by risk level. We applied an integrated conceptual framework to conduct this full factorial experiment with 32 maps that varied by the three dichotomous features and four risk levels; 826 university students participated. Data were analyzed using structural equation modeling. Unfocused contours and the verbal-relative risk expression generated more ambiguity than their counterparts. Focused contours generated stronger risk beliefs for higher risk levels and weaker beliefs for lower risk levels. Number of colors had minimal influence. The magnitude of risk level, conveyed using incrementally darker shading, had a substantial dose-response influence on the strength of risk beliefs. Personal characteristics of prior beliefs and numeracy also had substantial influences. Bottom-up and top-down information processing suggest why iconic visual features of incremental shading and contour focus had the strongest visual influences on risk beliefs and ambiguity. Variations in contour focus and risk expression show promise for fostering appropriate levels of ambiguity. PMID:22985196

  6. The EarthServer Geology Service: web coverage services for geosciences

    NASA Astrophysics Data System (ADS)

    Laxton, John; Sen, Marcus; Passmore, James

    2014-05-01

    The EarthServer FP7 project is implementing web coverage services using the OGC WCS and WCPS standards for a range of earth science domains: cryospheric; atmospheric; oceanographic; planetary; and geological. BGS is providing the geological service (http://earthserver.bgs.ac.uk/). Geoscience has used remote sensed data from satellites and planes for some considerable time, but other areas of geosciences are less familiar with the use of coverage data. This is rapidly changing with the development of new sensor networks and the move from geological maps to geological spatial models. The BGS geology service is designed initially to address two coverage data use cases and three levels of data access restriction. Databases of remote sensed data are typically very large and commonly held offline, making it time-consuming for users to assess and then download data. The service is designed to allow the spatial selection, editing and display of Landsat and aerial photographic imagery, including band selection and contrast stretching. This enables users to rapidly view data, assess its usefulness for their purposes, and then enhance and download it if it is suitable. At present the service contains six band Landsat 7 (Blue, Green, Red, NIR 1, NIR 2, MIR) and three band false colour aerial photography (NIR, green, blue), totalling around 1 TB. Increasingly 3D spatial models are being produced in place of traditional geological maps. Models make explicit the spatial information that is implicit on maps and thus are seen as a better way of delivering geosciences information to non-geoscientists. However, web delivery of models, including the provision of suitable visualisation clients, has proved more challenging than delivering maps. The EarthServer geology service is delivering 35 surfaces as coverages, comprising the modelled superficial deposits of the Glasgow area. These can be viewed using a 3D web client developed in the EarthServer project by Fraunhofer. 
As well as remote sensed imagery and 3D models, the geology service is also delivering DTM coverages which can be viewed in the 3D client in conjunction with both imagery and models. The service is accessible through a web GUI which allows the imagery to be viewed against a range of background maps and DTMs, and in the 3D client; spatial selection to be carried out graphically; the results of image enhancement to be displayed; and selected data to be downloaded. The GUI also provides access to the Glasgow model in the 3D client, as well as tutorial material. In the final year of the project it is intended to increase the volume of data to 20 TB and enhance the WCPS processing, including depth and thickness querying of 3D models. We have also investigated the use of GeoSciML, developed to describe and interchange the information on geological maps, to describe model surface coverages. EarthServer is developing a combined WCPS and xQuery query language, and we will investigate applying this to the GeoSciML-described surfaces to answer questions such as 'find all units with a predominant sand lithology within 25m of the surface'.

  7. Modeling eye movements in visual agnosia with a saliency map approach: bottom-up guidance or top-down strategy?

    PubMed

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2011-08-01

    Two recent papers (Foulsham, Barton, Kingstone, Dewhurst, & Underwood, 2009; Mannan, Kennard, & Husain, 2009) report that neuropsychological patients with a profound object recognition problem (visual agnosic subjects) show differences from healthy observers in the way their eye movements are controlled when looking at images. The interpretation of these papers is that eye movements can be modeled as the selection of points on a saliency map, and that agnosic subjects show an increased reliance on visual saliency, i.e., brightness and contrast in low-level stimulus features. Here we review this approach and present new data from our own experiments with an agnosic patient that quantify the relationship between saliency and fixation location. In addition, we consider whether the perceptual difficulties of individual patients might be modeled by selectively weighting the different features involved in a saliency map. Our data indicate that saliency is not always a good predictor of fixation in agnosia: even for our agnosic subject, as for normal observers, the saliency-fixation relationship varied as a function of the task. This means that top-down processes still have a significant effect on the earliest stages of scanning in the setting of visual agnosia, indicating severe limitations for the saliency map model. Top-down, active strategies, which are the hallmark of the human visual system, play a vital role in eye movement control, whether we know what we are looking at or not. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
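Of the three inverse solvers listed, the Tikhonov-regularized inverse is the easiest to sketch, because the field-to-susceptibility relation diagonalizes in k-space. The toy round trip below assumes the standard k-space dipole kernel D(k) = 1/3 - kz^2/|k|^2 with B0 along z; the split Bregman TV solver the article prefers is more involved and is not shown.

```python
import numpy as np

def dipole_kernel(shape):
    """k-space dipole kernel D(k) = 1/3 - kz^2 / |k|^2 (B0 along z)."""
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                 # avoid 0/0 at the DC term
    D = 1.0 / 3.0 - kz**2 / k2
    D[0, 0, 0] = 0.0
    return D

def tikhonov_inverse(field, lam=1e-2):
    """chi = F^-1[ D * F[b] / (D^2 + lam) ]: Tikhonov-regularized
    inversion of the ill-posed field-to-susceptibility deconvolution."""
    D = dipole_kernel(field.shape)
    chi_hat = D * np.fft.fftn(field) / (D**2 + lam)
    return np.real(np.fft.ifftn(chi_hat))

# Round trip on a predefined synthetic susceptibility source.
chi_true = np.zeros((16, 16, 16))
chi_true[6:10, 6:10, 6:10] = 1.0
field = np.real(np.fft.ifftn(dipole_kernel(chi_true.shape) * np.fft.fftn(chi_true)))
chi_rec = tikhonov_inverse(field)
corr = np.corrcoef(chi_true.ravel(), chi_rec.ravel())[0, 1]
```

The regularizer `lam` plays the same role as the calibration parameter discussed above: too small and the near-zero cone of D amplifies noise, too large and the reconstruction is over-smoothed.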

  9. Relationships Among Peripheral and Central Electrophysiological Measures of Spatial and Spectral Selectivity and Speech Perception in Cochlear Implant Users

    PubMed Central

    Scheperle, Rachel A.; Abbas, Paul J.

    2014-01-01

    Objectives The ability to perceive speech is related to the listener’s ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Design Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every-other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex (ACC) with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. 
Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Sentence-in-Noise (BKB-SIN) test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. Results All electrophysiological measures were significantly correlated with each other and with speech perception for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor of speech perception. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech; spectral ACC amplitude was the strongest predictor. Conclusions The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be the most useful for within-subject applications, when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered. PMID:25658746

  10. Embedded Multiprocessor Technology for VHSIC Insertion

    NASA Technical Reports Server (NTRS)

    Hayes, Paul J.

    1990-01-01

    Viewgraphs on embedded multiprocessor technology for VHSIC insertion are presented. The objective was to develop multiprocessor system technology providing user-selectable fault tolerance, increased throughput, and ease of application representation for concurrent operation. The approach was to develop graph management mapping theory for proper performance, model multiprocessor performance, and demonstrate performance in selected hardware systems.

  11. NASA's Solar System Treks: Online Portals for Planetary Mapping and Modeling

    NASA Technical Reports Server (NTRS)

    Day, Brian

    2017-01-01

    NASA's Solar System Treks are a suite of web-based lunar and planetary mapping and modeling portals providing interactive visualization and analysis tools enabling mission planners, planetary scientists, students, and the general public to access mapped lunar data products from past and current missions for the Moon, Mars, Vesta, and more. New portals for additional planetary bodies are being planned. This presentation will recap significant enhancements to these toolsets during the past year and look ahead to future features and releases. Moon Trek is a new portal replacing its predecessor, the Lunar Mapping and Modeling Portal (LMMP), that significantly upgrades and builds upon the capabilities of LMMP. It features greatly improved navigation, 3D visualization, fly-overs, performance, and reliability. Additional data products and tools continue to be added. These include both generalized products as well as polar data products specifically targeting potential sites for NASA's Resource Prospector mission as well as for missions being planned by NASA's international partners. The latest release of Mars Trek includes new tools and data products requested by NASA's Planetary Science Division to support site selection and analysis for Mars Human Landing Exploration Zone Sites. Also being given very high priority by NASA Headquarters is Mars Trek's use as a means to directly involve the public in upcoming missions, letting them explore the areas the agency is focusing upon, understand what makes these sites so fascinating, follow the selection process, and get caught up in the excitement of exploring Mars. Phobos Trek, the latest effort in the Solar System Treks suite, is being developed in coordination with the International Phobos/Deimos Landing Site Working Group, with landing site selection and analysis for JAXA's MMX (Martian Moons eXploration) mission as a primary driver.

  12. NASA's Solar System Treks: Online Portals for Planetary Mapping and Modeling

    NASA Astrophysics Data System (ADS)

    Day, B. H.; Law, E.

    2017-12-01

    NASA's Solar System Treks are a suite of web-based lunar and planetary mapping and modeling portals providing interactive visualization and analysis tools enabling mission planners, planetary scientists, students, and the general public to access mapped lunar data products from past and current missions for the Moon, Mars, Vesta, and more. New portals for additional planetary bodies are being planned. This presentation will recap significant enhancements to these toolsets during the past year and look ahead to future features and releases. Moon Trek is a new portal replacing its predecessor, the Lunar Mapping and Modeling Portal (LMMP), that significantly upgrades and builds upon the capabilities of LMMP. It features greatly improved navigation, 3D visualization, fly-overs, performance, and reliability. Additional data products and tools continue to be added. These include both generalized products as well as polar data products specifically targeting potential sites for NASA's Resource Prospector mission as well as for missions being planned by NASA's international partners. The latest release of Mars Trek includes new tools and data products requested by NASA's Planetary Science Division to support site selection and analysis for Mars Human Landing Exploration Zone Sites. Also being given very high priority by NASA Headquarters is Mars Trek's use as a means to directly involve the public in upcoming missions, letting them explore the areas the agency is focusing upon, understand what makes these sites so fascinating, follow the selection process, and get caught up in the excitement of exploring Mars. Phobos Trek, the latest effort in the Solar System Treks suite, is being developed in coordination with the International Phobos/Deimos Landing Site Working Group, with landing site selection and analysis for JAXA's MMX mission as a primary driver.

  13. Evaluation of parallel reduction strategies for fusion of sensory information from a robot team

    NASA Astrophysics Data System (ADS)

    Lyons, Damian M.; Leroy, Joseph

    2015-05-01

    The advantage of using a team of robots to search or to map an area is that by navigating the robots to different parts of the area, searching or mapping can be completed more quickly. A crucial aspect of the problem is the combination, or fusion, of data from team members to generate an integrated model of the search/mapping area. In prior work we looked at the issue of removing mutual robot views from an integrated point cloud model built from laser and stereo sensors, leading to a cleaner and more accurate model. This paper addresses a further challenge: even with mutual views removed, the stereo data from a team of robots can quickly swamp a WiFi connection. This paper proposes and evaluates a communication and fusion approach based on the parallel reduction operation, where data is combined in a series of steps over increasing subsets of the team. Eight different strategies for selecting the subsets are evaluated for bandwidth requirements using three robot missions, each carried out with teams of four Pioneer 3-AT robots. Our results indicate that selecting groups to combine based on similar pose but distant location yields the best results.
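The parallel reduction scheme, merging partial models in pairwise steps so the number of intermediate models halves each round, can be sketched with toy point clouds. The adjacent-index pairing below is only one of the eight grouping strategies the paper evaluates, and the set-union "fusion" is a stand-in for the real mutual-view removal.

```python
def merge(cloud_a, cloud_b):
    """Fuse two point clouds; the set union stands in for the real
    fusion step that removes mutually observed points."""
    return sorted(set(cloud_a) | set(cloud_b))

def parallel_reduce(clouds):
    """Combine a team's clouds in pairwise rounds: each round halves
    the number of partial models, as in a parallel reduction."""
    while len(clouds) > 1:
        merged = [merge(clouds[i], clouds[i + 1])
                  for i in range(0, len(clouds) - 1, 2)]
        if len(clouds) % 2:           # odd robot out advances unchanged
            merged.append(clouds[-1])
        clouds = merged
    return clouds[0]

# Four robots with overlapping views of a corridor of landmarks.
team = [[(0, 0), (1, 1)], [(1, 1), (2, 2)], [(2, 2), (3, 3)], [(3, 3), (4, 4)]]
model = parallel_reduce(team)
```

Because overlapping points are merged early, each robot transmits a deduplicated partial model rather than its raw cloud, which is where the bandwidth saving comes from.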

  14. A new capture fraction method to map how pumpage affects surface water flow

    USGS Publications Warehouse

    Leake, S.A.; Reeves, H.W.; Dickinson, J.E.

    2010-01-01

    All groundwater pumped is balanced by removal of water somewhere, initially from storage in the aquifer and later from capture in the form of increase in recharge and decrease in discharge. Capture that results in a loss of water in streams, rivers, and wetlands now is a concern in many parts of the United States. Hydrologists commonly use analytical and numerical approaches to study temporal variations in sources of water to wells for select points of interest. Much can be learned about coupled surface/groundwater systems, however, by looking at the spatial distribution of theoretical capture for select times of interest. Development of maps of capture requires (1) a reasonably well-constructed transient or steady state model of an aquifer with head-dependent flow boundaries representing surface water features or evapotranspiration and (2) an automated procedure to run the model repeatedly and extract results, each time with a well in a different location. This paper presents new methods for simulating and mapping capture using three-dimensional groundwater flow models and presents examples from Arizona, Oregon, and Michigan. Journal compilation © 2010 National Ground Water Association. No claim to original US government works.
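The automated mapping procedure, placing a hypothetical well in each model cell in turn, rerunning the model, and recording the fraction of pumpage supplied by capture, can be sketched as a simple loop. `run_model` below is an invented analytic stand-in, not a call into MODFLOW or any real groundwater model.

```python
def run_model(well_row, well_col, pumpage=1.0):
    """Invented analytic stand-in for a groundwater-model run: capture
    (stream depletion) decays with the well's distance from a stream
    that runs along column 0."""
    return pumpage * 0.9 ** well_col

def capture_fraction_map(nrows, ncols, pumpage=1.0):
    """Re-run the model once per cell, each time with the well moved,
    and record depletion as a fraction of the pumpage."""
    return [[run_model(r, c, pumpage) / pumpage for c in range(ncols)]
            for r in range(nrows)]

cmap = capture_fraction_map(3, 5)
```

Contouring such a grid gives the capture map: values near 1 mark well locations whose pumpage comes almost entirely from surface water, values near 0 mark locations drawing mostly on storage.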

  15. Hyperspectral image visualization based on a human visual model

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Peng, Honghong; Fairchild, Mark D.; Montag, Ethan D.

    2008-02-01

    Hyperspectral image data can provide very fine spectral resolution with more than 200 bands, yet it presents challenges for visualization techniques that must display such rich information on a tristimulus monitor. This study developed a visualization technique by taking advantage of both the consistent natural appearance of a true color image and the feature separation of a PCA image, based on a biologically inspired visual attention model. The key part is to extract the informative regions in the scene. The model takes into account human contrast sensitivity functions and generates a topographic saliency map for both images. This is accomplished using a set of linear "center-surround" operations simulating visual receptive fields as the difference between fine and coarse scales. A difference map between the saliency map of the true color image and that of the PCA image is derived and used as a mask on the true color image to select a small number of interesting locations where the PCA image has more salient features than are available in the visible bands. The resulting representations preserve hue for vegetation, water, roads, etc., while the selected attentional locations may be analyzed by more advanced algorithms.
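The "center-surround" operation at the heart of the saliency computation is the difference between fine- and coarse-scale smoothings of the image. A minimal sketch, using box blurs in place of the Gaussian pyramids of the full attention model (the radii and test image are invented for illustration):

```python
import numpy as np

def blur(img, radius):
    """Separable box blur; a cheap stand-in for Gaussian smoothing."""
    k = 2 * radius + 1
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda v: np.convolve(v, kern, mode="same"),
                              0, img.astype(float))
    return np.apply_along_axis(lambda v: np.convolve(v, kern, mode="same"),
                               1, out)

def center_surround_saliency(img, fine=1, coarse=6):
    """Difference between fine- and coarse-scale smoothings: large
    where a region stands out from its surround."""
    return np.abs(blur(img, fine) - blur(img, coarse))

image = np.zeros((40, 40))
image[18:22, 18:22] = 1.0             # one conspicuous bright patch
sal = center_surround_saliency(image)
peak = np.unravel_index(np.argmax(sal), sal.shape)  # most salient location
```

Computing such a map for both the true color and PCA renderings and differencing them yields the mask described above, selecting the locations where the PCA image carries features the visible bands do not.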

  16. Selection of suitable fertilizer draw solute for a novel fertilizer-drawn forward osmosis-anaerobic membrane bioreactor hybrid system.

    PubMed

    Kim, Youngjin; Chekli, Laura; Shim, Wang-Geun; Phuntsho, Sherub; Li, Sheng; Ghaffour, Noreddine; Leiknes, TorOve; Shon, Ho Kyong

    2016-06-01

    In this study, a protocol for selecting a suitable fertilizer draw solute for an anaerobic fertilizer-drawn forward osmosis membrane bioreactor (AnFDFOMBR) was proposed. Among eleven commercial fertilizer candidates, six fertilizers were further screened through FO performance tests and evaluated in terms of water flux and reverse salt flux. Using the selected fertilizers, bio-methane potential experiments were conducted to examine the effect of fertilizers on anaerobic activity due to reverse diffusion. Mono-ammonium phosphate (MAP) showed the highest biogas production, while the other fertilizers exhibited an inhibition effect on anaerobic activity with solute accumulation. Salt accumulation in the bioreactor was also simulated using mass balance simulation models. Results showed that ammonium sulfate and MAP were the most appropriate for AnFDFOMBR since they demonstrated less salt accumulation, relatively higher water flux, and higher dilution capacity of draw solution. Given the toxicity of sulfate to anaerobic microorganisms, MAP appears to be the most suitable draw solution for AnFDFOMBR. Copyright © 2016 Elsevier Ltd. All rights reserved.
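
    The salt-accumulation simulation can be illustrated with a one-line mass balance: reverse salt flux adds draw solute to the reactor while sludge wasting removes it, so concentration relaxes toward RSF·A/Q_w. All parameter values below are illustrative assumptions, not values from the study.

```python
# Minimal mass-balance sketch of draw-solute accumulation in the bioreactor
# of an FO-MBR: reverse salt flux adds solute, sludge wasting removes it.
V = 0.01          # reactor volume [m^3] (assumed)
A = 0.05          # membrane area [m^2] (assumed)
rsf = 2.0         # reverse salt flux [g/m^2/h] (assumed)
Qw = 1e-4         # sludge wasting rate [m^3/h] (assumed)

C, dt, hours = 0.0, 0.1, 2000
for _ in range(int(hours / dt)):
    C += dt * (rsf * A - Qw * C) / V   # dC/dt for a well-mixed reactor [g/m^3]

C_steady = rsf * A / Qw                # analytical steady-state concentration
```

    A fertilizer with low reverse salt flux therefore settles at a proportionally lower steady-state concentration, which is why RSF drives the salt-accumulation ranking.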

  17. Development of a transformation model to derive general population-based utility: Mapping the pruritus-visual analog scale (VAS) to the EQ-5D utility.

    PubMed

    Park, Sun-Young; Park, Eun-Ja; Suh, Hae Sun; Ha, Dongmun; Lee, Eui-Kyung

    2017-08-01

    Although nonpreference-based disease-specific measures are widely used in clinical studies, they cannot generate utilities for economic evaluation. A solution to this problem is to estimate utilities from disease-specific instruments using the mapping function. This study aimed to develop a transformation model for mapping the pruritus-visual analog scale (VAS) to the EuroQol 5-Dimension 3-Level (EQ-5D-3L) utility index in pruritus. A cross-sectional survey was conducted with a sample (n = 268) drawn from the general population of South Korea. Data were randomly divided into 2 groups, one for estimating and the other for validating mapping models. To select the best model, we developed and compared 3 separate models using demographic information and the pruritus-VAS as independent variables. The predictive performance was assessed using the mean absolute deviation and root mean square error in a separate dataset. Among the 3 models, model 2 using age, age squared, sex, and the pruritus-VAS as independent variables had the best performance based on the goodness of fit and model simplicity, with a log likelihood of 187.13. The 3 models had similar precision errors based on mean absolute deviation and root mean square error in the validation dataset. No statistically significant difference was observed between the mean observed and predicted values in all models. In conclusion, model 2 was chosen as the preferred mapping model. Outcomes measured as the pruritus-VAS can be transformed into the EQ-5D-3L utility index using this mapping model, which makes an economic evaluation possible when only pruritus-VAS data are available. © 2017 John Wiley & Sons, Ltd.
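
    The mapping model itself is an ordinary regression of the utility index on age, age squared, sex and the VAS score, validated with MAE and RMSE on a hold-out set. The sketch below uses synthetic data; the coefficients are not those of the published model.

```python
import numpy as np

# OLS mapping of pruritus-VAS (plus demographics) onto a utility index,
# with hold-out MAE/RMSE as predictive-performance measures.
rng = np.random.default_rng(1)
n = 268
age = rng.uniform(20, 80, n)
sex = rng.integers(0, 2, n).astype(float)
vas = rng.uniform(0, 10, n)
utility = 0.95 - 0.0005 * age - 0.03 * vas + 0.01 * sex + rng.normal(0, 0.02, n)

X = np.column_stack([np.ones(n), age, age**2, sex, vas])   # model 2 covariates
train, test = slice(0, 200), slice(200, n)
beta, *_ = np.linalg.lstsq(X[train], utility[train], rcond=None)

pred = X[test] @ beta
mae = np.mean(np.abs(pred - utility[test]))
rmse = np.sqrt(np.mean((pred - utility[test]) ** 2))
```

    Once `beta` is estimated, any study that collected only VAS (plus age and sex) can be mapped onto utilities for economic evaluation.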

  18. SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model

    NASA Astrophysics Data System (ADS)

    Grelle, G.; Bonito, L.; Lampasi, A.; Revellino, P.; Guerriero, L.; Sappa, G.; Guadagno, F. M.

    2015-06-01

    SiSeRHMap is a computerized methodology capable of drawing up prediction maps of seismic response. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (Geographic Information System) Cubic Model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A metamodeling process confers a hybrid nature to the methodology. In this process, the one-dimensional linear equivalent analysis produces acceleration response spectra of shear wave velocity-thickness profiles, defined as trainers, which are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated Evolutionary Algorithm (EA) and the Levenberg-Marquardt Algorithm (LMA) as the final optimizer. In the final step, the GCM Maps Executor module produces a serial map-set of stratigraphic seismic response at different periods, grid-solving the calibrated Spectra model. In addition, topographic amplification spectra are also computed by means of a numerical prediction model. The latter is built to match the results of numerical simulations related to isolated reliefs, using GIS topographic attributes. In this way, different sets of seismic response maps are developed, from which maps of seismic design response spectra are also derived by means of an enveloping technique.
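
    The role of the Levenberg-Marquardt Algorithm as final optimizer can be illustrated with a minimal LM loop that fits a parametric curve to a "trainer" spectrum. The model form, data and damping schedule below are illustrative assumptions, not the actual Spectra model.

```python
import numpy as np

def model(p, x):
    """Simple parametric spectral-shape surrogate: a*x*exp(-b*x) + c."""
    a, b, c = p
    return a * x * np.exp(-b * x) + c

def lm_fit(x, y, p0, iters=100, lam=1e-2):
    """Levenberg-Marquardt with numerical Jacobian and simple damping control."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - model(p, x)
        J = np.empty((x.size, p.size))
        for k in range(p.size):                      # forward-difference Jacobian
            dp = np.zeros_like(p); dp[k] = 1e-6
            J[:, k] = (model(p + dp, x) - model(p, x)) / 1e-6
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        if np.sum((y - model(p + step, x)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5             # accept step, relax damping
        else:
            lam *= 2.0                               # reject step, increase damping
    return p

x = np.linspace(0.05, 2.0, 50)                       # periods [s]
true_p = np.array([2.0, 1.5, 0.2])
y = model(true_p, x)                                 # noiseless "trainer" spectrum
p_fit = lm_fit(x, y, p0=[1.0, 1.0, 0.0])
```

    In SiSeRHMap the EA supplies a good starting point and LMA performs this kind of final refinement; the calibrated model is then grid-solved to produce the maps.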

  19. The time course of saccadic decision making: dynamic field theory.

    PubMed

    Wilimzig, Claudia; Schneider, Stefan; Schöner, Gregor

    2006-10-01

    Making a saccadic eye movement involves two decisions, the decision to initiate the saccade and the selection of the visual target of the saccade. Here we provide a theoretical account for the time-courses of these two processes, whose instabilities are the basis of decision making. We show how the cross-over from spatial averaging for fast saccades to selection for slow saccades arises from the balance between excitatory and inhibitory processes. Initiating a saccade involves overcoming fixation, as can be observed in the countermanding paradigm, which we model accounting both for the temporal evolution of the suppression probability and its dependence on fixation activity. The interaction between the two forms of decision making is demonstrated by predicting how the cross-over from averaging to selection depends on the fixation stimulus in gap-step-overlap paradigms. We discuss how the activation dynamics of our model may be mapped onto neuronal structures including the motor map and the fixation cells in superior colliculus.
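
    The averaging-versus-selection crossover can be caricatured with a two-node reduction of the field: two stimulated sites inhibit each other through their output. Weak inhibition leaves both sites active (the averaging regime), while strong inhibition suppresses the weaker stimulus (selection). Parameters below are illustrative, not fitted to saccade data.

```python
# Two-node sketch of competitive decision dynamics in a neural field.
def simulate(w, s1=1.0, s2=0.9, dt=0.1, steps=500):
    """Euler-integrate u' = -u + s - w * g(other), with g(u) = max(u, 0)."""
    u1 = u2 = 0.0
    g = lambda u: max(u, 0.0)
    for _ in range(steps):
        du1 = -u1 + s1 - w * g(u2)
        du2 = -u2 + s2 - w * g(u1)
        u1, u2 = u1 + dt * du1, u2 + dt * du2
    return u1, u2

both_active = simulate(w=0.3)   # weak inhibition: both sites stay active
winner_only = simulate(w=2.0)   # strong inhibition: weaker site suppressed
```

    In the full field model the same instability governs whether activity peaks merge between two targets (fast, averaged saccades) or one target wins (slow, selective saccades).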

  20. Kalman/Map filtering-aided fast normalized cross correlation-based Wi-Fi fingerprinting location sensing.

    PubMed

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-11-13

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
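
    The core matching idea, scoring each reference fingerprint by normalized cross correlation instead of Euclidean distance to RSS means, can be sketched as follows. The RSS values are illustrative; the paper's FNCC additionally exploits all on-line samples and reference-point RSS variations, plus a fast computation scheme.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two RSS vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# RSS fingerprints [dBm] over 4 APs at 3 reference points (illustrative)
radio_map = {
    (0.0, 0.0): np.array([-40.0, -60.0, -75.0, -80.0]),
    (5.0, 0.0): np.array([-55.0, -45.0, -70.0, -72.0]),
    (5.0, 5.0): np.array([-70.0, -58.0, -48.0, -60.0]),
}
online = np.array([-42.0, -61.0, -74.0, -79.0])   # measured near (0, 0)

best = max(radio_map, key=lambda rp: ncc(online, radio_map[rp]))
```

    The reference point whose RSS pattern correlates best with the on-line vector is taken as the location estimate; the KMF then smooths the sequence of such estimates using the indoor map.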

  1. Kalman/Map Filtering-Aided Fast Normalized Cross Correlation-Based Wi-Fi Fingerprinting Location Sensing

    PubMed Central

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-01-01

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results. PMID:24233027

  2. Application of Geographic Information System (GIS) to Model the Hydrocarbon Migration: Case Study from North-East Malay Basin, Malaysia

    NASA Astrophysics Data System (ADS)

    Rudini; Nasir Matori, Abd; Talib, Jasmi Ab; Balogun, Abdul-Lateef

    2018-03-01

    The purpose of this study is to model the migration of hydrocarbon using a Geographic Information System (GIS). Understanding hydrocarbon migration is important since it can mean the difference between success and failure in an oil and gas exploration project. Hydrocarbon migration modeling using geophysical methods alone is still not accurate due to the limitations of available data. In recent years, GIS has emerged as a powerful tool for subsurface mapping and analysis, and recent studies have examined its ability to model hydrocarbon migration; recent advances in GIS support the establishment and monitoring of hydrocarbon migration predictions. The concept, model, and calculation are based on the current geological situation. The spatial data of hydrocarbon reservoirs are determined by reservoir geometry, lithology, and geophysical attributes. The Top of Group E horizon of the north-east Malay basin was selected as the study area due to the occurrence of hydrocarbon migration. Spatial data and attribute data such as seismic data, well log data and lithology were acquired and processed. A Digital Elevation Model (DEM) was constructed from the selected horizon as a result of seismic interpretation using the Petrel software. Furthermore, the DEM was processed in ArcGIS as a base map to show hydrocarbon migration in the north-east Malay Basin. Finally, all the data layers were overlaid to produce a map of hydrocarbon migration, and known data were imported to verify that the model is correct.

  3. Example-based learning: comparing the effects of additionally providing three different integrative learning activities on physiotherapy intervention knowledge.

    PubMed

    Dyer, Joseph-Omer; Hudon, Anne; Montpetit-Tourangeau, Katherine; Charlin, Bernard; Mamede, Sílvia; van Gog, Tamara

    2015-03-07

    Example-based learning using worked examples can foster clinical reasoning. Worked examples are instructional tools that learners can use to study the steps needed to solve a problem. Studying worked examples paired with completion examples promotes acquisition of problem-solving skills more than studying worked examples alone. Completion examples are worked examples in which some of the solution steps remain unsolved for learners to complete. Providing learners engaged in example-based learning with self-explanation prompts has been shown to foster increased meaningful learning compared to providing no self-explanation prompts. Concept mapping and concept map study are other instructional activities known to promote meaningful learning. This study compares the effects of self-explaining, completing a concept map and studying a concept map on conceptual knowledge and problem-solving skills among novice learners engaged in example-based learning. Ninety-one physiotherapy students were randomized into three conditions. They performed a pre-test and a post-test to evaluate their gains in conceptual knowledge and problem-solving skills (transfer performance) in intervention selection. They studied three pairs of worked/completion examples in a digital learning environment. Worked examples consisted of a written reasoning process for selecting an optimal physiotherapy intervention for a patient. The completion examples were partially worked out, with the last few problem-solving steps left blank for students to complete. The students then had to engage in additional self-explanation, concept map completion or model concept map study in order to synthesize and deepen their knowledge of the key concepts and problem-solving steps. Pre-test performance did not differ among conditions. 
Post-test conceptual knowledge was higher (P < .001) in the concept map study condition (68.8 ± 21.8%) compared to the concept map completion (52.8 ± 17.0%) and self-explanation (52.2 ± 21.7%) conditions. Post-test problem-solving performance was higher (P < .05) in the self-explanation (63.2 ± 16.0%) condition compared to the concept map study (53.3 ± 16.4%) and concept map completion (51.0 ± 13.6%) conditions. Students in the self-explanation condition also invested less mental effort in the post-test. Studying model concept maps led to greater conceptual knowledge, whereas self-explanation led to higher transfer performance. Self-explanation and concept map study can be combined with worked example and completion example strategies to foster intervention selection.

  4. Preliminary metallogenic belt and mineral deposit maps for northeast Asia

    USGS Publications Warehouse

    Obolenskiy, Alexander A.; Rodionov, Sergey M.; Dejidmaa, Gunchin; Gerel, Ochir; Hwang, Duk-Hwan; Distanov, Elimir G.; Badarch, Gombosuren; Khanchuk, Alexander I.; Ogasawara, Masatsugu; Nokleberg, Warren J.; Parfenov, Leonid M.; Prokopiev, Andrei V.; Seminskiy, Zhan V.; Smelov, Alexander P.; Yan, Hongquan; Birul'kin, Gennandiy V.; Davydov, Yuriy V.V.; Fridovskiy, Valeriy Yu.; Gamyanin, Gennandiy N.; Kostin, Alexei V.; Letunov, Sergey A.; Li, Xujun; Nikitin, Valeriy M.; Sotnikov, Sadahisa; Sudo, Vitaly I.; Spiridonov, Alexander V.; Stepanov, Vitaly A.; Sun, Fengyue; Sun, Jiapeng; Sun, Weizhi; Supletsov, Valeriy M.; Timofeev, Vladimir F.; Tyan, Oleg A.; Vetluzhskikh, Valeriy G.; Wakita, Koji; Yakovlev, Yakov V.; Zorina, Lydia M.

    2003-01-01

    The metallogenic belts and locations of major mineral deposits of Northeast Asia are portrayed on Sheets 1-4. Sheet 1 portrays the location of significant lode deposits and placer districts at a scale of 1:7,500,000. Sheets 2-4 portray the metallogenic belts of the region in a series of 12 time-slices from the Archean through the Quaternary at a scale of 1:15,000,000. For all four map sheets, a generalized geodynamics base map, derived from a more detailed map by Parfenov and others (2003), is used as an underlay for the metallogenic belt maps. This geodynamics map underlay depicts the major geologic units and structures that host metallogenic belts. Four tables are included in this report. A hierarchical ranking of mineral deposit models is listed in Table 1, and summary features of lode deposits, placer districts, and metallogenic belts are described in Tables 2, 3, and 4, respectively. The metallogenic belts for Northeast Asia are synthesized, compiled, described, and interpreted with the use of modern concepts of plate tectonics, analysis of terranes and overlap assemblages, and synthesis of mineral deposit models. The data supporting the compilation are: (1) comprehensive descriptions of mineral deposits; (2) compilation and synthesis of a regional geodynamics map of the region at 1:5,000,000 scale with detailed explanations and cited references; and (3) compilation and synthesis of metallogenic belt maps at 1:15,000,000 scale with detailed explanations and cited references. These studies are part of a major international collaborative study of the Mineral Resources, Metallogenesis, and Tectonics of Northeast Asia conducted from 1997 through 2002 by geologists from earth science agencies and universities in Russia, Mongolia, Northeastern China, South Korea, Japan, and the USA.
Companion studies and previous publications are: (1) a detailed geodynamics map of Northeast Asia (Parfenov and others, 2003); (2) a compilation of major mineral deposit models (Rodionov and Nokleberg, 2000; Rodionov and others, 2000; Obolenskiy and others, 2003); and (3) a database on significant metalliferous and selected nonmetalliferous lode deposits, and selected placer districts (Ariunbileg and others, 2003).

  5. A Tool for Modelling the Probability of Landslides Impacting Road Networks

    NASA Astrophysics Data System (ADS)

    Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.; Guzzetti, Fausto

    2014-05-01

    Triggers such as earthquakes or heavy rainfall can result in hundreds to thousands of landslides occurring across a region within a short space of time. These landslides can in turn result in blockages across the road network, impacting how people move about a region. Here, we show the development and application of a semi-stochastic model to simulate how landslides intersect with road networks during a triggered landslide event. This was performed by creating 'synthetic' triggered landslide inventory maps and overlaying these with a road network map to identify where road blockages occur. Our landslide-road model has been applied to two regions: (i) the Collazzone basin (79 km2) in Central Italy where 422 landslides were triggered by rapid snowmelt in January 1997, (ii) the Oat Mountain quadrangle (155 km2) in California, USA, where 1,350 landslides were triggered by the Northridge Earthquake (M = 6.7) in January 1994. For both regions, detailed landslide inventory maps for the triggered events were available, in addition to maps of landslide susceptibility and road networks of primary, secondary and tertiary roads. To create 'synthetic' landslide inventory maps, landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL. The number of landslide areas selected was based on the observed density of landslides (number of landslides km-2) in the triggered event inventories. Landslide shapes were approximated as ellipses, where the ratio of the major and minor axes varies with AL. Landslides were then dropped over the region semi-stochastically, conditioned by a landslide susceptibility map, resulting in a synthetic landslide inventory map. 
The originally available landslide susceptibility maps did not take into account susceptibility changes in the immediate vicinity of roads, so our landslide susceptibility map was adjusted to further reduce the susceptibility near each road based on the road level (primary, secondary, tertiary). For each model run, we superimposed the spatial location of landslide drops with the road network, and recorded the number, size and location of road blockages, along with landslides within 50 and 100 m of the different road levels. Network analysis tools available in GRASS GIS were also applied to measure the impact upon the road network in terms of connectivity. The model was run 100 times in a Monte Carlo simulation for each region. Initial results show reasonable agreement between model output and the observed landslide inventories in terms of the number of road blockages. In Collazzone (length of road network = 153 km, landslide density = 5.2 landslides km-2), the median number of modelled road blockages over 100 model runs was 5 (±2.5 standard deviation), compared to 5 road blockages observed in the mapped inventory. In Northridge (length of road network = 780 km, landslide density = 8.7 landslides km-2), the median number of modelled road blockages over 100 model runs was 108 (±17.2 standard deviation), compared to 48 road blockages observed in the mapped inventory. As we progress with model development, we believe this semi-stochastic modelling approach will potentially aid civil protection agencies in exploring scenarios of road network damage resulting from landslide-triggering events of different magnitudes.
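
    The landslide-area sampling step can be sketched directly: if X ~ Gamma(rho, 1), then s + a/X follows the three-parameter inverse-gamma distribution, whose tail decays as a power law with exponent -(rho + 1) = -2.4. The parameter values below are the commonly used inventory statistics of Malamud et al. and are assumptions here, as is the Collazzone-like density.

```python
import numpy as np

# Draw landslide areas A_L for a 'synthetic' inventory from the
# three-parameter inverse-gamma distribution, with the number of landslides
# set by an assumed regional density.
rng = np.random.default_rng(42)
rho, a, s = 1.4, 1.28e-3, -1.32e-4      # shape, scale [km^2], shift [km^2]

def sample_landslide_areas(n):
    """Inverse-gamma sampling via the reciprocal of a Gamma variate."""
    return s + a / rng.gamma(rho, 1.0, size=n)

density = 5.2                            # landslides per km^2 (assumed)
region_km2 = 79.0                        # Collazzone-sized region
n = int(density * region_km2)
areas = sample_landslide_areas(n)        # [km^2]
```

    Each sampled area would then be turned into an ellipse and dropped semi-stochastically according to the susceptibility map before intersecting with the road network.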

  6. Exploring selection and recruitment processes for newly qualified nurses: a sequential-explanatory mixed-method study.

    PubMed

    Newton, Paul; Chandler, Val; Morris-Thomson, Trish; Sayer, Jane; Burke, Linda

    2015-01-01

    To map current selection and recruitment processes for newly qualified nurses and to explore the advantages and limitations of current selection and recruitment processes. The need to improve current selection and recruitment practices for newly qualified nurses is highlighted in health policy internationally. A cross-sectional, sequential-explanatory mixed-method design with 4 components: (1) literature review of selection and recruitment of newly qualified nurses; (2) literature review of a public sector profession's selection and recruitment processes; (3) survey mapping existing selection and recruitment processes for newly qualified nurses; and (4) qualitative study of recruiters' selection and recruitment processes. Literature searches on the selection and recruitment of newly qualified candidates in teaching and nursing (2005-2013) were conducted. Cross-sectional, mixed-method data were collected using a survey instrument from thirty-one (n = 31) individuals at health providers in London who had responsibility for the selection and recruitment of newly qualified nurses. Of the providers who took part, six (n = 6) were purposively selected for qualitative interviews. Issues of supply and demand in the workforce, rather than selection and recruitment tools, predominated in the literature reviews. Examples of tools to measure values, attitudes and skills were found in the nursing literature. The mapping exercise found that providers used many selection and recruitment tools; some providers combined tools to streamline the process and assure the quality of candidates. Most providers had processes which addressed the issue of quality in the selection and recruitment of newly qualified nurses. The 'assessment centre model', which providers were adopting, allowed for multiple levels of assessment and streamlined recruitment. There is a need to validate the efficacy of the selection tools. © 2014 John Wiley & Sons Ltd.

  7. Mapping the montane cloud forest of Taiwan using 12 year MODIS-derived ground fog frequency data.

    PubMed

    Schulz, Hans Martin; Li, Ching-Feng; Thies, Boris; Chang, Shih-Chieh; Bendix, Jörg

    2017-01-01

    Up until now montane cloud forest (MCF) in Taiwan has only been mapped for selected areas of vegetation plots. This paper presents the first comprehensive map of MCF distribution for the entire island. For its creation, a Random Forest model was trained with vegetation plots from the National Vegetation Database of Taiwan that were classified as "MCF" or "non-MCF". This model predicted the distribution of MCF from a raster data set of parameters derived from a digital elevation model (DEM), Landsat channels and texture measures derived from them as well as ground fog frequency data derived from the Moderate Resolution Imaging Spectroradiometer. While the DEM parameters and Landsat data predicted much of the cloud forest's location, local deviations in the altitudinal distribution of MCF linked to the monsoonal influence as well as the Massenerhebung effect (causing MCF in atypically low altitudes) were only captured once fog frequency data was included. Therefore, our study suggests that ground fog data are most useful for accurately mapping MCF.
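
    The prediction step is a standard supervised classification; in practice one would train e.g. scikit-learn's RandomForestClassifier on the DEM, Landsat and fog-frequency rasters. To stay self-contained, the sketch below substitutes a toy bagged ensemble of decision stumps voting "MCF" vs "non-MCF" on two synthetic predictors (elevation and fog frequency); everything here is illustrative, not the study's model.

```python
import numpy as np

# Toy stand-in for the Random Forest step: bootstrapped decision stumps
# vote on class membership from elevation and ground-fog frequency.
rng = np.random.default_rng(7)
n = 400
elev = rng.uniform(0, 3500, n)                   # m a.s.l. (synthetic)
fog = rng.uniform(0, 1, n)                       # fog frequency (synthetic)
is_mcf = ((elev > 1500) & (fog > 0.4)).astype(int)

def fit_stump(X, y):
    """Best single-feature threshold split by classification accuracy."""
    best = (0.5, 0, 0.0, 1)                      # (acc, feature, thr, sign)
    for f in range(X.shape[1]):
        for thr in np.quantile(X[:, f], np.linspace(0.05, 0.95, 19)):
            for sign in (1, -1):
                pred = (sign * (X[:, f] - thr) > 0).astype(int)
                acc = (pred == y).mean()
                if acc > best[0]:
                    best = (acc, f, thr, sign)
    return best[1:]

X = np.column_stack([elev, fog])
stumps = []
for _ in range(25):                              # bagging: bootstrap + fit
    idx = rng.integers(0, n, n)
    stumps.append(fit_stump(X[idx], is_mcf[idx]))

votes = np.mean([(s * (X[:, f] - t) > 0) for f, t, s in stumps], axis=0)
pred = (votes > 0.5).astype(int)
accuracy = (pred == is_mcf).mean()
```

    A real Random Forest grows full trees, which lets it capture interactions such as an elevation band combined with a fog threshold far better than stumps can.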

  8. Hierarchical Kohonen net for anomaly detection in network security.

    PubMed

    Sarasamma, Suseela T; Zhu, Qiuming A; Huff, Julie

    2005-04-01

    A novel multilevel hierarchical Kohonen Net (K-Map) for an intrusion detection system is presented. Each level of the hierarchical map is modeled as a simple winner-take-all K-Map. One significant advantage of this multilevel hierarchical K-Map is its computational efficiency. Unlike other statistical anomaly detection methods such as nearest neighbor approach, K-means clustering or probabilistic analysis that employ distance computation in the feature space to identify the outliers, our approach does not involve costly point-to-point computation in organizing the data into clusters. Another advantage is the reduced network size. We use the classification capability of the K-Map on selected dimensions of data set in detecting anomalies. Randomly selected subsets that contain both attacks and normal records from the KDD Cup 1999 benchmark data are used to train the hierarchical net. We use a confidence measure to label the clusters. Then we use the test set from the same KDD Cup 1999 benchmark to test the hierarchical net. We show that a hierarchical K-Map in which each layer operates on a small subset of the feature space is superior to a single-layer K-Map operating on the whole feature space in detecting a variety of attacks in terms of detection rate as well as false positive rate.
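
    Each level of the hierarchy is a plain winner-take-all Kohonen map; a minimal 1-D version trained on synthetic 2-D feature vectors looks as follows. After training, records with an unusually large distance to their winning unit would be the anomaly candidates. Sizes, schedules and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def train_som(data, n_units=8, epochs=30, lr=0.5, sigma=2.0):
    """Train a 1-D Kohonen map with winner-take-all matching."""
    w = rng.random((n_units, data.shape[1]))
    grid = np.arange(n_units)
    for e in range(epochs):
        lr_e = lr * (1 - e / epochs)                 # learning-rate decay
        sig_e = max(sigma * (1 - e / epochs), 0.3)   # neighborhood shrinks
        for x in rng.permutation(data):
            win = np.argmin(((w - x) ** 2).sum(axis=1))   # winner-take-all
            h = np.exp(-((grid - win) ** 2) / (2 * sig_e ** 2))
            w += lr_e * h[:, None] * (x - w)         # neighborhood update
    return w

# two synthetic "behavior" clusters standing in for normal traffic profiles
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(1.0, 0.1, (50, 2))])
w = train_som(data)
qe = np.mean([((w - x) ** 2).sum(axis=1).min() for x in data])  # quantization error
```

    Note that, unlike distance-based outlier methods, cluster assignment after training is a single argmin per record, which is the computational advantage the paper emphasizes.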

  9. Dynamic map labeling.

    PubMed

    Been, Ken; Daiches, Eli; Yap, Chee

    2006-01-01

    We address the problem of filtering, selecting and placing labels on a dynamic map, which is characterized by continuous zooming and panning capabilities. This consists of two interrelated issues. The first is to avoid label popping and other artifacts that cause confusion and interrupt navigation, and the second is to label at interactive speed. In most formulations the static map labeling problem is NP-hard, and a fast approximation might have O(n log n) complexity. Even this is too slow during interaction, when the number of labels shown can be several orders of magnitude less than the number in the map. In this paper we introduce a set of desiderata for "consistent" dynamic map labeling, which has qualities desirable for navigation. We develop a new framework for dynamic labeling that achieves the desiderata and allows for fast interactive display by moving all of the selection and placement decisions into the preprocessing phase. This framework is general enough to accommodate a variety of selection and placement algorithms. It does not appear possible to achieve our desiderata using previous frameworks. Prior to this paper, there were no formal models of dynamic maps or of dynamic labels; our paper introduces both. We formulate a general optimization problem for dynamic map labeling and give a solution to a simple version of the problem. The simple version is based on label priorities and a versatile and intuitive class of dynamic label placements we call "invariant point placements". Despite these restrictions, our approach gives a useful and practical solution. Our implementation is incorporated into the G-Vis system, which is a full-detail dynamic map of the continental USA. This demo is available through any browser.
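
    The flavor of consistent zoom-dependent selection with invariant point placements can be sketched in preprocessing: labels keep fixed anchors and fixed screen-size boxes, so two labels conflict only below the zoom at which their anchors are far enough apart on screen. Assigning each label the smallest conflict-free zoom in priority order then yields one contiguous visibility interval per label, i.e. no popping. The data and the specific conflict rule are illustrative assumptions, not the paper's algorithm.

```python
import math

# (name, priority, x, y, screen box size); higher priority = more important
labels = [
    ("London",  3, 0.0, 0.0, 1.0),
    ("Reading", 2, 1.0, 0.0, 1.0),
    ("Slough",  1, 0.5, 0.1, 1.0),
]

def min_zoom(labels):
    """Smallest zoom from which each label may appear, in priority order."""
    placed = {}                                   # name -> (z_min, x, y, size)
    for name, pri, x, y, size in sorted(labels, key=lambda l: -l[1]):
        z = 1.0                                   # candidate: visible from base zoom
        for oz, ox, oy, osize in placed.values():
            d = math.hypot(x - ox, y - oy)        # anchor distance in map units
            if d > 0:
                # fixed-screen-size boxes separate once d * zoom exceeds the
                # sum of their half-widths
                z = max(z, (size + osize) / (2 * d))
        placed[name] = (z, x, y, size)
    return {name: v[0] for name, v in placed.items()}

zooms = min_zoom(labels)
```

    Because conflicts only disappear as the zoom grows, each label's visibility set is an interval [z_min, ∞), so it never flickers in and out while the user zooms, and all decisions happen before interaction starts.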

  10. Soil-geographical regionalization as a basis for digital soil mapping: Karelia case study

    NASA Astrophysics Data System (ADS)

    Krasilnikov, P.; Sidorova, V.; Dubrovina, I.

    2010-12-01

    Recent developments in digital soil mapping (DSM) have significantly improved the quality of soil maps. We tried to make a set of empirical models for the territory of Karelia, a republic in the north-east of the European territory of the Russian Federation. This territory was selected for the DSM pilot study for two reasons. First, the soils of the region are mainly monogenetic; thus, the effect of the paleogeographic environment on recent soils is reduced. Second, the territory was poorly mapped because of low agricultural development: only 1.8% of the total area of the republic is used for agriculture and has large-scale soil maps. The rest of the territory has only small-scale soil maps, compiled based on general geographic concepts rather than on field surveys. Thus, the only solution for soil inventory was predictive digital mapping. The absence of large-scale soil maps did not allow data mining from previous soil surveys, and only empirical models could be applied. For regionalization purposes, we accepted the division into Northern and Southern Karelia proposed in the general scheme of soil regionalization of Russia; boundaries between the regions were somewhat modified. Within each region, we specified from 15 (Northern Karelia) to 32 (Southern Karelia) individual soilscapes and proposed soil-topographic and soil-lithological relationships for every soilscape. Further field verification is needed to adjust the models.

  11. A study of changes in middle school teachers' understanding of selected ideas in science as a function of an in-service program focusing on student preconceptions

    NASA Astrophysics Data System (ADS)

    Shymansky, James A.; Woodworth, George; Norman, Obed; Dunkhase, John; Matthews, Charles; Liu, Chin-Tang

    This article examines the impact of a specially designed in-service model on teacher understanding of selected science concepts. The underlying idea of the model is to get teachers to restructure their own understanding of a selected science topic by having them study the structure and evolution of their students' ideas on the same topic. Concepts on topics from the life, earth, and physical sciences served as the content focus and middle school Grades 4-9 served as the context for this study. The in-service experience constituting the main treatment in the study occurred in three distinct phases. In the initial phase, participating teachers interviewed several of their own students to find out what kinds of preconceptions students had about a particular topic. The teachers used concept mapping strategies learned in the in-service to facilitate the interviews. Next the teachers teamed with other teachers with similar topic interests and a science expert to evaluate and explore the scientific merit of the student conceptual frameworks and to develop instructional units, including a summative assessment during a summer workshop. Finally, the student ideas were further evaluated and explored as the teachers taught the topics in their classrooms during the fall term. Concept maps were used to study changes in teacher understanding across the phases of the in-service in a repeated-measures design. Analysis of the maps showed significant growth in the number of valid propositions expressed by teachers between the initial and final mappings in all topic groups. But in half of the groups, this long-term growth was interrupted by a noticeable decline in the number of valid propositions expressed. In addition, analysis of individual teacher maps showed distinctive patterns of initial invalid conceptions being replaced by new invalid conceptions in later mappings. 
The combination of net growth of valid propositions and the patterns of evolving invalid conceptions is discussed in constructivist terms.

  12. Simulation of facial expressions using person-specific sEMG signals controlling a biomechanical face model.

    PubMed

    Eskes, Merijn; Balm, Alfons J M; van Alphen, Maarten J A; Smeele, Ludi E; Stavness, Ian; van der Heijden, Ferdinand

    2018-01-01

    Functional inoperability in advanced oral cancer is difficult to assess preoperatively. To assess functions of lips and tongue, biomechanical models are required. Apart from adjusting generic models to individual anatomy, muscle activation patterns (MAPs) driving patient-specific functional movements are necessary to predict remaining functional outcome. We aim to evaluate how volunteer-specific MAPs derived from surface electromyographic (sEMG) signals control a biomechanical face model. Muscle activity of seven facial muscles in six volunteers was measured bilaterally with sEMG. A triple camera set-up recorded 3D lip movement. The generic face model in ArtiSynth was adapted to our needs. We controlled the model using the volunteer-specific MAPs. Three activation strategies were tested: activating all muscles [Formula: see text], selecting the three muscles showing highest muscle activity bilaterally [Formula: see text]-this was calculated by taking the mean of left and right muscles and then selecting the three with highest variance-and activating the muscles considered most relevant per instruction [Formula: see text], bilaterally. The model's lip movement was compared to the actual lip movement performed by the volunteers, using 3D correlation coefficients [Formula: see text]. The correlation coefficient between simulations and measurements with [Formula: see text] resulted in a median [Formula: see text] of 0.77. [Formula: see text] had a median [Formula: see text] of 0.78, whereas with [Formula: see text] the median [Formula: see text] decreased to 0.45. We demonstrated that MAPs derived from noninvasive sEMG measurements can control movement of the lips in a generic finite element face model with a median [Formula: see text] of 0.78. Ultimately, this is important to show the patient-specific residual movement using the patient's own MAPs. 
When the required treatment tools and personalisation techniques for geometry and anatomy become available, this may enable surgeons to test the functional results of wedge excisions for lip cancer in a virtual environment and to weigh surgery versus organ-sparing radiotherapy or photodynamic therapy.

  13. Use of models to map potential capture of surface water

    USGS Publications Warehouse

    Leake, Stanley A.

    2006-01-01

    The effects of ground-water withdrawals on surface-water resources and riparian vegetation have become important considerations in water-availability studies. Ground water withdrawn by a well initially comes from storage around the well, but with time can eventually increase inflow to the aquifer and (or) decrease natural outflow from the aquifer. This increased inflow and decreased outflow is referred to as “capture.” For a given time, capture can be expressed as a fraction of withdrawal rate that is accounted for as increased rates of inflow and decreased rates of outflow. The time frames over which capture might occur at different locations commonly are not well understood by resource managers. A ground-water model, however, can be used to map potential capture for areas and times of interest. The maps can help managers visualize the possible timing of capture over large regions. The first step in the procedure to map potential capture is to run a ground-water model in steady-state mode without withdrawals to establish baseline total flow rates at all sources and sinks. The next step is to select a time frame and appropriate withdrawal rate for computing capture. For regional aquifers, time frames of decades to centuries may be appropriate. The model is then run repeatedly in transient mode, each run with one well in a different model cell in an area of interest. Differences in inflow and outflow rates from the baseline conditions for each model run are computed and saved. The differences in individual components are summed and divided by the withdrawal rate to obtain a single capture fraction for each cell. Values are contoured to depict capture fractions for the time of interest. Considerations in carrying out the analysis include use of realistic physical boundaries in the model, understanding the degree of linearity of the model, selection of an appropriate time frame and withdrawal rate, and minimizing error in the global mass balance of the model.
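    The capture-fraction bookkeeping described above can be sketched in a few lines. This is a minimal illustration of the arithmetic only (differences from baseline, summed and divided by the withdrawal rate); the budget-component names and all flow values are invented, and a real analysis would read these from MODFLOW-style budget output.

```python
# Hypothetical illustration of computing a capture fraction for one model
# cell: compare a baseline steady-state budget to a transient run with a
# well, then sum increased inflows and decreased outflows.

def capture_fraction(baseline, with_well, withdrawal_rate):
    """Capture fraction = (increased inflow + decreased outflow) / withdrawal.

    `baseline` and `with_well` map budget-component names to
    (inflow, outflow) rate pairs from the two model runs.
    """
    capture = 0.0
    for name, (in0, out0) in baseline.items():
        in1, out1 = with_well[name]
        capture += (in1 - in0)    # increased inflow to the aquifer
        capture += (out0 - out1)  # decreased natural outflow
    return capture / withdrawal_rate

# Invented budget rates (volume per time) for two components:
baseline  = {"stream": (2.0, 10.0), "ET": (0.0, 4.0)}
with_well = {"stream": (2.5, 8.5), "ET": (0.0, 3.6)}
frac = capture_fraction(baseline, with_well, withdrawal_rate=3.0)
```

    Contouring `frac` over all cells for a fixed time yields the capture map; a value near 1 means the withdrawal is almost entirely accounted for by capture rather than storage change.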

  14. Optimizing Spectral Wave Estimates with Adjoint-Based Sensitivity Maps

    DTIC Science & Technology

    2014-02-18

    J, Orzech MD, Ngodock HE (2013) Validation of a wave data assimilation system based on SWAN. Geophys Res Abst, (15), EGU2013-5951-1, EGU General ...surface wave spectra. Sensitivity maps are generally constructed for a selected system indicator (e.g., vorticity) by computing the differential of...spectral action balance Eq. 2, generally initialized at the off- shore boundary with spectral wave and other outputs from regional models such as

  15. The Cellular Automata for modelling of spreading of lava flow on the earth surface

    NASA Astrophysics Data System (ADS)

    Jarna, A.

    2012-12-01

    Volcanic risk assessment is a very important scientific, political and economic issue in densely populated areas close to active volcanoes. Development of effective tools for early prediction of a potential volcanic hazard and management of crises is paramount. However, to date, volcanic hazard maps represent the most appropriate way to illustrate the geographical area that can potentially be affected by a volcanic event. Volcanic hazard maps are usually produced by mapping out old volcanic deposits; however, dynamic lava flow simulation is gaining popularity and can give crucial information to corroborate other methodologies. The methodology used here for the generation of volcanic hazard maps is based on numerical simulation of eruptive processes by the principle of Cellular Automata (CA). The Python script is integrated into ArcToolbox in ArcMap (ESRI), and the user can select several input and output parameters that influence surface morphology, size and shape of the flow, flow thickness, flow velocity and length of lava flows. Once the input parameters are selected, the software computes and generates hazard maps on the fly. The results can be exported to Google Maps (.kml format) for visualization. Data from a real lava flow are used to validate the simulation code. A comparison of the simulation results with real lava flows mapped out from satellite images will be presented.
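    A cellular-automaton lava model of the kind described can be sketched with a toy update rule. This is not the ArcToolbox code; the rule (each cell passes half its lava to its lowest 4-neighbour) is a deliberately simple stand-in that still shows the CA mechanics of local, grid-based flow.

```python
# Minimal CA sketch of lava spreading over a height grid.  The half-volume
# transfer rule and the grids below are invented for illustration.

def spread_step(elev, lava):
    """One CA step: each cell moves half its lava to the 4-neighbour with the
    lowest total surface height (rock elevation + lava thickness)."""
    rows, cols = len(elev), len(elev[0])
    new = [row[:] for row in lava]
    for r in range(rows):
        for c in range(cols):
            if lava[r][c] == 0:
                continue
            here = elev[r][c] + lava[r][c]
            best, best_h = None, here
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    h = elev[nr][nc] + lava[nr][nc]
                    if h < best_h:
                        best, best_h = (nr, nc), h
            if best is not None:
                move = lava[r][c] / 2.0
                new[r][c] -= move
                new[best[0]][best[1]] += move
    return new

elev = [[3, 2, 1], [3, 2, 1], [3, 2, 1]]    # slope falling to the right
lava = [[1.0, 0, 0], [0, 0, 0], [0, 0, 0]]  # vent at top-left
for _ in range(3):
    lava = spread_step(elev, lava)
```

    Iterating the step function until the flow stalls gives the inundated footprint; the hazard map is the union of such footprints over many simulated vents and eruption volumes.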

  16. Quantification of mammographic masking risk with volumetric breast density maps: how to select women for supplemental screening

    NASA Astrophysics Data System (ADS)

    Holland, Katharina; van Gils, Carla H.; Wanders, Johanna OP; Mann, Ritse M.; Karssemeijer, Nico

    2016-03-01

    The sensitivity of mammograms is low for women with dense breasts, since cancers may be masked by dense tissue. In this study, we investigated methods to identify women with density patterns associated with a high masking risk. Risk measures are derived from volumetric breast density maps. We used the last negative screening mammograms of 93 women who subsequently presented with an interval cancer (IC), and, as controls, 930 randomly selected normal screening exams from women without cancer. Volumetric breast density maps were computed from the mammograms, which provide the dense tissue thickness at each location. These were used to compute absolute and percentage glandular tissue volume. We modeled the masking risk for each pixel location using the absolute and percentage dense tissue thickness and we investigated the effect of taking the cancer location probability distribution (CLPD) into account. For each method, we selected cases with the highest masking measure (by thresholding) and computed the fraction of ICs as a function of the fraction of controls selected. The latter can be interpreted as the negative supplemental screening rate (NSSR). Between the models, when incorporating CLPD, no significant differences were found. In general, the methods performed better when CLPD was included. At higher NSSRs some of the investigated masking measures had a significantly higher performance than volumetric breast density. These measures may therefore serve as an alternative to identify women with a high risk for a masked cancer.

  17. Remotely sensed soil moisture input to a hydrologic model

    NASA Technical Reports Server (NTRS)

    Engman, E. T.; Kustas, W. P.; Wang, J. R.

    1989-01-01

    The possibility of using detailed spatial soil moisture maps as input to a runoff model was investigated. The water balance of a small drainage basin was simulated using a simple storage model. Aircraft microwave measurements of soil moisture were used to construct two-dimensional maps of the spatial distribution of the soil moisture. Data from overflights on different dates provided the temporal changes resulting from soil drainage and evapotranspiration. The study site and data collection are described, and the soil measurement data are given. The model selection is discussed, and the simulation results are summarized. It is concluded that a time series of soil moisture is a valuable new type of data for verifying model performance and for updating and correcting simulated streamflow.

  18. Performance mapping of a 30 cm engineering model thruster

    NASA Technical Reports Server (NTRS)

    Poeschel, R. L.; Vahrenkamp, R. P.

    1975-01-01

    A 30 cm thruster representative of the engineering model design has been tested over a wide range of operating parameters to document performance characteristics such as electrical and propellant efficiencies, double ion and beam divergence thrust loss, component equilibrium temperatures, operational stability, etc. Data obtained show that optimum power throttling, in terms of maximum thruster efficiency, is not highly sensitive to parameter selection. Consequently, considerations of stability, discharge chamber erosion, thrust losses, etc. can be made the determining factors for parameter selection in power throttling operations. Options in parameter selection based on these considerations are discussed.

  19. Usage of Data-Encoded Web Maps with Client Side Color Rendering for Combined Data Access, Visualization and Modeling Purposes

    NASA Technical Reports Server (NTRS)

    Pliutau, Denis; Prasad, Narashimha S.

    2013-01-01

    Current approaches to satellite observation data storage and distribution implement separate visualization and data access methodologies, which often leads to time-consuming data ordering and coding for applications requiring both visual representation and data handling and modeling capabilities. We describe an approach we implemented for a data-encoded web map service based on storing numerical data within server map tiles and subsequent client-side data manipulation and map color rendering. The approach relies on storing data using the lossless Portable Network Graphics (PNG) image format, which is natively supported by web browsers, allowing on-the-fly browser rendering and modification of the map tiles. The method is easy to implement using existing software libraries and has the advantage of easy client-side map color modifications, as well as spatial subsetting with physical parameter range filtering. This method is demonstrated for the ASTER-GDEM elevation model and selected MODIS data products and represents an alternative to the currently used storage and data access methods. One additional benefit is that multiple levels of averaging are provided by the need to generate map tiles at varying resolutions for various map magnification levels. We suggest that such a merged data and mapping approach may be a viable alternative to existing static storage and data access methods for a wide array of combined simulation, data access and visualization purposes.
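    The core data-encoding idea, packing a physical value into image channel bytes so a losslessly compressed tile doubles as a data tile, can be sketched as below. The 0.01 m resolution and -11000 m offset are illustrative choices, not the service's actual encoding.

```python
# Sketch: encode an elevation into a 24-bit RGB triple (as would be written
# into a PNG tile) and decode it back, as a client-side script would do per
# pixel.  Scale and offset are invented for illustration.

SCALE, OFFSET = 100, -11000.0   # 0.01 m steps; offset covers ocean trenches

def encode_rgb(value_m):
    """Map an elevation in metres to an (R, G, B) byte triple."""
    n = round((value_m - OFFSET) * SCALE)
    return (n >> 16) & 0xFF, (n >> 8) & 0xFF, n & 0xFF

def decode_rgb(r, g, b):
    """Inverse mapping: recover the physical value from channel bytes."""
    n = (r << 16) | (g << 8) | b
    return n / SCALE + OFFSET

rgb = encode_rgb(8848.86)    # an elevation, to centimetre precision
height = decode_rgb(*rgb)
```

    Because PNG compression is lossless, the round trip is exact to the chosen quantisation; range filtering and recolouring then become simple per-pixel arithmetic in the browser.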

  20. Signatures of positive selection: from selective sweeps at individual loci to subtle allele frequency changes in polygenic adaptation.

    PubMed

    Stephan, Wolfgang

    2016-01-01

    In the past 15 years, numerous methods have been developed to detect selective sweeps underlying adaptations. These methods are based on relatively simple population genetic models, including one or two loci at which positive directional selection occurs, and one or two marker loci at which the impact of selection on linked neutral variation is quantified. Information about the phenotype under selection is not included in these models (except for fitness). In contrast, in the quantitative genetic models of adaptation, selection acts on one or more phenotypic traits, such that a genotype-phenotype map is required to bridge the gap to population genetics theory. Here I describe the range of population genetic models from selective sweeps in a panmictic population of constant size to evolutionary traffic when simultaneous sweeps at multiple loci interfere, and I also consider the case of polygenic selection characterized by subtle allele frequency shifts at many loci. Furthermore, I present an overview of the statistical tests that have been proposed based on these population genetics models to detect evidence for positive selection in the genome. © 2015 John Wiley & Sons Ltd.

  1. Spatiotemporal mapping of scalp potentials.

    PubMed

    Fender, D H; Santoro, T P

    1977-11-01

    Computerized analysis and display techniques are applied to the problem of identifying the origins of visually evoked scalp potentials (VESPs). A new stimulus for VESP work, white noise, is being incorporated in the solution of this problem. VESPs for white-noise stimulation exhibit time-domain behavior similar to the classical response for flash stimuli but with certain significant differences. Contour mapping algorithms are used to display the time behavior of equipotential surfaces on the scalp during the VESP. The electrical and geometrical parameters of the head are modeled. Electrical fields closely matching those obtained experimentally are generated on the surface of the model head by optimally selecting the location and strength parameters of one or two dipole current sources contained within the model. Computer graphics are used to display as a movie the actual and model scalp potential fields and the parameters of the dipole generators within the model head during the time course of the VESP. These techniques are currently used to study retinotopic mapping, fusion, and texture perception in the human.

  2. Modeling adsorption properties of structurally deformed metal–organic frameworks using structure–property map

    PubMed Central

    Lim, Dae-Woon; Kim, Sungjune; Harale, Aadesh; Yoon, Minyoung; Suh, Myunghyun Paik; Kim, Jihan

    2017-01-01

    Structural deformation and collapse in metal-organic frameworks (MOFs) can lead to loss of long-range order, making it a challenge to model these amorphous materials using conventional computational methods. In this work, we show that a structure–property map consisting of simulated data for crystalline MOFs can be used to indirectly obtain adsorption properties of structurally deformed MOFs. The structure–property map (with dimensions such as Henry coefficient, heat of adsorption, and pore volume) was constructed using a large data set of over 12000 crystalline MOFs from molecular simulations. By mapping the experimental data points of deformed SNU-200, MOF-5, and Ni-MOF-74 onto this structure–property map, we show that the experimentally deformed MOFs share similar adsorption properties with their nearest neighbor crystalline structures. Once the nearest neighbor crystalline MOFs for a deformed MOF are selected from a structure–property map at a specific condition, then the adsorption properties of these MOFs can be successfully transformed onto the degraded MOFs, leading to a new way to obtain properties of materials whose structural information is lost. PMID:28696307
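    The nearest-neighbour lookup on a structure-property map can be sketched as follows. The standardisation, the distance metric, and every number below are assumptions for illustration; the study's map uses simulated values of Henry coefficient, heat of adsorption, and pore volume for >12000 crystalline MOFs.

```python
# Hedged sketch: find the crystalline entry closest to a deformed MOF's
# measured property vector in a standardised structure-property space.

import math

def nearest_crystal(deformed, crystals):
    """`deformed` is a property tuple; `crystals` maps name -> tuple."""
    dims = len(deformed)
    # Standardise each dimension so no single property dominates the distance.
    means = [sum(v[d] for v in crystals.values()) / len(crystals)
             for d in range(dims)]
    spans = [max(v[d] for v in crystals.values())
             - min(v[d] for v in crystals.values()) or 1.0
             for d in range(dims)]
    def norm(v):
        return [(v[d] - means[d]) / spans[d] for d in range(dims)]
    q = norm(deformed)
    return min(crystals, key=lambda name: math.dist(q, norm(crystals[name])))

# Invented (Henry coefficient, heat of adsorption, pore volume) coordinates:
crystals = {"MOF-5": (1.2, 15.0, 1.5), "Ni-MOF-74": (9.5, 38.0, 0.5),
            "SNU-200": (3.1, 22.0, 0.9)}
match = nearest_crystal((3.0, 21.0, 1.0), crystals)
```

    The adsorption properties tabulated for the matched crystalline neighbour are then transferred to the deformed material, which is the indirect inference the abstract describes.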

  3. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones.

    PubMed

    Sohn, Bong-Soo

    2017-03-11

    This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing.
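    The base/detail blending at the heart of the pipeline can be sketched on a 1D strip of pixels. The compression target and blend weight are invented parameters; the paper's actual maps are 2D and its blending and blurring are more elaborate.

```python
# Toy version of the depth-map pipeline: compress the depth range into a
# base map, derive a detail map from image intensities, and blend the two.

def bas_relief_depth(depth, intensity, base_range=1.0, detail_weight=0.1):
    """Blend a range-compressed depth map with an intensity-based detail map."""
    lo, hi = min(depth), max(depth)
    span = (hi - lo) or 1.0
    base = [(d - lo) / span * base_range for d in depth]  # compressed depths
    detail = [i / 255.0 for i in intensity]               # normalised pixels
    return [b + detail_weight * t for b, t in zip(base, detail)]

depth = [10.0, 12.0, 30.0]   # scene depths in metres (wide range)
intensity = [0, 128, 255]    # grey values carrying surface detail
relief = bas_relief_depth(depth, intensity)
```

    Selective blurring of `relief` away from the focal depth would then mimic the depth-of-field effect before the map is converted to a surface mesh.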

  4. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones

    PubMed Central

    Sohn, Bong-Soo

    2017-01-01

    This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing. PMID:28287487

  5. Evaluation of digital soil mapping approaches with large sets of environmental covariates

    NASA Astrophysics Data System (ADS)

    Nussbaum, Madlene; Spiess, Kay; Baltensweiler, Andri; Grob, Urs; Keller, Armin; Greiner, Lucie; Schaepman, Michael E.; Papritz, Andreas

    2018-01-01

    The spatial assessment of soil functions requires maps of basic soil properties. Unfortunately, these are either missing for many regions or are not available at the desired spatial resolution or down to the required soil depth. The field-based generation of large soil datasets and conventional soil maps remains costly. Meanwhile, legacy soil data and comprehensive sets of spatial environmental data are available for many regions. Digital soil mapping (DSM) approaches relating soil data (responses) to environmental data (covariates) face the challenge of building statistical models from large sets of covariates originating, for example, from airborne imaging spectroscopy or multi-scale terrain analysis. We evaluated six approaches for DSM in three study regions in Switzerland (Berne, Greifensee, ZH forest) by mapping the effective soil depth available to plants (SD), pH, soil organic matter (SOM), effective cation exchange capacity (ECEC), clay, silt, gravel content and fine fraction bulk density for four soil depths (totalling 48 responses). Models were built from 300-500 environmental covariates by selecting linear models through (1) grouped lasso and (2) an ad hoc stepwise procedure for robust external-drift kriging (georob). For (3) geoadditive models we selected penalized smoothing spline terms by component-wise gradient boosting (geoGAM). We further used two tree-based methods: (4) boosted regression trees (BRTs) and (5) random forest (RF). Lastly, we computed (6) weighted model averages (MAs) from the predictions obtained from methods 1-5. Lasso, georob and geoGAM successfully selected strongly reduced sets of covariates (subsets of 3-6 % of all covariates). Differences in predictive performance, tested on independent validation data, were mostly small and did not reveal a single best method for 48 responses. Nevertheless, RF was often the best among methods 1-5 (28 of 48 responses), but was outcompeted by MA for 14 of these 28 responses. 
RF tended to over-fit the data. The performance of BRT was slightly worse than RF. GeoGAM performed poorly on some responses and was the best only for 7 of 48 responses. The prediction accuracy of lasso was intermediate. All models generally had small bias. Only the computationally very efficient lasso had slightly larger bias because it tended to under-fit the data. Summarizing, although differences were small, the frequencies of the best and worst performance clearly favoured RF if a single method is applied and MA if multiple prediction models can be developed.
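    The weighted model averaging used as method 6 can be sketched as below. The inverse-MSE weighting is an assumption for illustration; the paper's exact weighting scheme may differ, and all prediction and error values are invented.

```python
# Sketch of weighted model averaging (MA): combine per-model predictions
# with weights derived from each model's validation error.

def model_average(predictions, val_errors):
    """Average `predictions` (one list per model) with inverse-error weights."""
    weights = [1.0 / e for e in val_errors]
    total = sum(weights)
    weights = [w / total for w in weights]
    n = len(predictions[0])
    return [sum(w * preds[i] for w, preds in zip(weights, predictions))
            for i in range(n)]

preds_rf    = [6.1, 5.0]   # e.g. soil pH predicted by random forest
preds_lasso = [5.5, 4.6]   # soil pH predicted by lasso
avg = model_average([preds_rf, preds_lasso], val_errors=[0.04, 0.08])
```

    Averaging in this way lets a well-performing model dominate while still damping its idiosyncratic errors, which is consistent with MA outcompeting RF on a subset of the responses.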

  6. Genomic selection & association mapping in rice: effect of trait genetic architecture, training population composition, marker number & statistical model on accuracy of rice genomic selection in elite, tropical rice breeding

    USDA-ARS?s Scientific Manuscript database

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its ef...

  7. Mapping raised bogs with an iterative one-class classification approach

    NASA Astrophysics Data System (ADS)

    Mack, Benjamin; Roscher, Ribana; Stenzel, Stefanie; Feilhauer, Hannes; Schmidtlein, Sebastian; Waske, Björn

    2016-10-01

    Land use and land cover maps are one of the most commonly used remote sensing products. In many applications the user only requires a map of one particular class of interest, e.g. a specific vegetation type or an invasive species. One-class classifiers are appealing alternatives to common supervised classifiers because they can be trained with labeled training data of the class of interest only. However, training an accurate one-class classification (OCC) model is challenging, particularly when facing a large image, a small class and few training samples. To tackle these problems we propose an iterative OCC approach. The presented approach uses a biased Support Vector Machine as core classifier. In an iterative pre-classification step a large part of the pixels not belonging to the class of interest is classified. The remaining data are classified by a final classifier with a novel model and threshold selection approach. The specific objective of our study is the classification of raised bogs in a study site in southeast Germany, using multi-seasonal RapidEye data and a small number of training samples. Results demonstrate that the iterative OCC outperforms other state-of-the-art one-class classifiers and approaches for model selection. The study highlights the potential of the proposed approach for an efficient and improved mapping of small classes such as raised bogs. Overall, the proposed approach constitutes a feasible and useful modification of a regular one-class classifier.
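    The control flow of the iterative workflow can be sketched schematically. A distance-to-mean score stands in for the biased SVM here, and all thresholds and feature values are invented; the point is only the structure: pre-classify the obviously negative pixels in rounds, then apply a final classifier and threshold to what remains.

```python
# Schematic iterative one-class classification: iterative pre-classification
# discards clear negatives, a final threshold labels the rest.

def occ_score(x, positives):
    """Toy one-class score: negative distance to the positive-class mean."""
    mean = [sum(p[d] for p in positives) / len(positives)
            for d in range(len(x))]
    return -sum((a - b) ** 2 for a, b in zip(x, mean)) ** 0.5

def iterative_occ(pixels, positives, pre_thresholds, final_threshold):
    remaining = list(pixels)
    for t in pre_thresholds:              # iterative pre-classification
        remaining = [x for x in remaining if occ_score(x, positives) > t]
    return [x for x in remaining          # final classifier + threshold
            if occ_score(x, positives) > final_threshold]

bog_samples = [(0.8, 0.2), (0.9, 0.3)]    # labelled class-of-interest pixels
pixels = [(0.85, 0.25), (0.1, 0.9), (0.5, 0.5)]
detected = iterative_occ(pixels, bog_samples,
                         pre_thresholds=[-0.6], final_threshold=-0.1)
```

    In the real approach each round retrains the biased SVM on the shrinking candidate set, which is what makes the method tractable on a large image with a small class.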

  8. New Maximum Tsunami Inundation Maps for Use by Local Emergency Planners in the State of California, USA

    NASA Astrophysics Data System (ADS)

    Wilson, R. I.; Barberopoulou, A.; Miller, K. M.; Goltz, J. D.; Synolakis, C. E.

    2008-12-01

    A consortium of tsunami hydrodynamic modelers, geologic hazard mapping specialists, and emergency planning managers is producing maximum tsunami inundation maps for California, covering most residential and transient populated areas along the state's coastline. The new tsunami inundation maps will be an upgrade from the existing maps for the state, improving on the resolution, accuracy, and coverage of the maximum anticipated tsunami inundation line. Thirty-five separate map areas covering nearly one-half of California's coastline were selected for tsunami modeling using the MOST (Method of Splitting Tsunami) model. From preliminary evaluations of nearly fifty local and distant tsunami source scenarios, those with the maximum expected hazard for a particular area were input to MOST. The MOST model was run with a near-shore bathymetric grid resolution varying from three arc-seconds (90m) to one arc-second (30m), depending on availability. Maximum tsunami "flow depth" and inundation layers were created by combining all modeled scenarios for each area. A method was developed to better define the location of the maximum inland penetration line using higher resolution digital onshore topographic data from interferometric radar sources. The final inundation line for each map area was validated using a combination of digital stereo photography and fieldwork. Further verification of the final inundation line will include ongoing evaluation of tsunami sources (seismic and submarine landslide) as well as comparison to the location of recorded paleotsunami deposits. Local governmental agencies can use these new maximum tsunami inundation lines to assist in the development of their evacuation routes and emergency response plans.

  9. Using dynamic population simulations to extend resource selection analyses and prioritize habitats for conservation

    USGS Publications Warehouse

    Heinrichs, Julie; Aldridge, Cameron L.; O'Donnell, Michael; Schumaker, Nathan

    2017-01-01

    Prioritizing habitats for conservation is a challenging task, particularly for species with fluctuating populations and seasonally dynamic habitat needs. Although the use of resource selection models to identify and prioritize habitat for conservation is increasingly common, their ability to characterize important long-term habitats for dynamic populations is variable. To examine how habitats might be prioritized differently if resource selection were directly and dynamically linked with population fluctuations and movement limitations among seasonal habitats, we constructed a spatially explicit individual-based model for a dramatically fluctuating population requiring temporally varying resources. Using greater sage-grouse (Centrocercus urophasianus) in Wyoming as a case study, we used resource selection function (RSF) maps to guide seasonal movement and habitat selection, but emergent population dynamics and simulated movement limitations modified long-term habitat occupancy. We compared priority habitats in RSF maps to long-term simulated habitat use. We examined the circumstances under which the explicit consideration of movement limitations, in combination with population fluctuations and trends, is likely to alter predictions of important habitats. In doing so, we assessed the future occupancy of protected areas under alternative population and habitat conditions. Habitat prioritizations based on resource selection models alone predicted high use in isolated parcels of habitat and in areas with low connectivity among seasonal habitats. In contrast, results based on more biologically informed simulations emphasized central and connected areas near high-density populations, sometimes predicted to have low selection value. Dynamic models of habitat use can provide additional biological realism that can extend, and in some cases contradict, habitat use predictions generated from short-term or static resource selection analyses. 
The explicit inclusion of population dynamics and movement propensities via spatial simulation modeling frameworks may provide an informative means of predicting long-term habitat use, particularly for fluctuating populations with complex seasonal habitat needs. Importantly, our results indicate the possible need to consider habitat selection models as a starting point rather than the common end point for refining and prioritizing habitats for protection for cyclic and highly variable populations.

  10. Crop biometric maps: the key to prediction.

    PubMed

    Rovira-Más, Francisco; Sáiz-Rubio, Verónica

    2013-09-23

    The sustainability of agricultural production in the twenty-first century, both in industrialized and developing countries, benefits from the integration of farm management with information technology such that individual plants, rows, or subfields may be endowed with a singular "identity." This approach approximates the nature of agricultural processes to the engineering of industrial processes. In order to cope with the vast variability of nature and the uncertainties of agricultural production, the concept of crop biometrics is defined as the scientific analysis of agricultural observations confined to spaces of reduced dimensions and known position with the purpose of building prediction models. This article develops the idea of crop biometrics by setting its principles, discussing the selection and quantization of biometric traits, and analyzing the mathematical relationships among measured and predicted traits. Crop biometric maps were applied to the case of a wine-production vineyard, in which vegetation amount, relative altitude in the field, soil compaction, berry size, grape yield, juice pH, and grape sugar content were selected as biometric traits. The enological potential of grapes was assessed with a quality-index map defined as a combination of titratable acidity, sugar content, and must pH. Prediction models for yield and quality were developed for high and low resolution maps, showing the great potential of crop biometric maps as a strategic tool for vineyard growers as well as for crop managers in general, due to the wide versatility of the methodology proposed.
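    A plausible form of the quality-index map mentioned above is a weighted combination of normalised titratable acidity, sugar content, and must pH per map cell. The weights and target ranges below are invented for illustration; the article defines its own combination.

```python
# Sketch of a per-cell quality index combining three normalised biometric
# traits.  Weights and ranges are hypothetical, not the paper's values.

def quality_index(acidity, sugar, ph,
                  weights=(0.3, 0.4, 0.3),
                  ranges=((4.0, 9.0), (18.0, 26.0), (3.0, 3.8))):
    """Return a 0-1 quality score for one grid cell of the vineyard map."""
    def norm(value, lo, hi):
        return min(1.0, max(0.0, (value - lo) / (hi - lo)))
    scores = [norm(v, lo, hi)
              for v, (lo, hi) in zip((acidity, sugar, ph), ranges)]
    return sum(w * s for w, s in zip(weights, scores))

q = quality_index(acidity=6.5, sugar=24.0, ph=3.4)  # one cell's traits
```

    Evaluating the index at every cell turns point measurements into the quality map used alongside the yield map for prediction.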

  11. Crop Biometric Maps: The Key to Prediction

    PubMed Central

    Rovira-Más, Francisco; Sáiz-Rubio, Verónica

    2013-01-01

    The sustainability of agricultural production in the twenty-first century, both in industrialized and developing countries, benefits from the integration of farm management with information technology such that individual plants, rows, or subfields may be endowed with a singular “identity.” This approach approximates the nature of agricultural processes to the engineering of industrial processes. In order to cope with the vast variability of nature and the uncertainties of agricultural production, the concept of crop biometrics is defined as the scientific analysis of agricultural observations confined to spaces of reduced dimensions and known position with the purpose of building prediction models. This article develops the idea of crop biometrics by setting its principles, discussing the selection and quantization of biometric traits, and analyzing the mathematical relationships among measured and predicted traits. Crop biometric maps were applied to the case of a wine-production vineyard, in which vegetation amount, relative altitude in the field, soil compaction, berry size, grape yield, juice pH, and grape sugar content were selected as biometric traits. The enological potential of grapes was assessed with a quality-index map defined as a combination of titratable acidity, sugar content, and must pH. Prediction models for yield and quality were developed for high and low resolution maps, showing the great potential of crop biometric maps as a strategic tool for vineyard growers as well as for crop managers in general, due to the wide versatility of the methodology proposed. PMID:24064605

  12. A feature selection approach towards progressive vector transmission over the Internet

    NASA Astrophysics Data System (ADS)

    Miao, Ru; Song, Jia; Feng, Min

    2017-09-01

    WebGIS is widely used to visualize and share geospatial information over the Internet. To improve the efficiency of client applications, a web-based progressive vector transmission approach is proposed: important features should be selected and transferred first, so methods for measuring the importance of features must be considered in the progressive transmission. However, studies on progressive transmission of large-volume vector data have mostly focused on map generalization in the field of cartography and have rarely addressed the quantitative selection of geographic features. This paper applies information theory to measuring the feature importance of vector maps. A measurement model for the information content of vector features is defined to address feature selection; the model involves a geometry factor, a spatial distribution factor, and a thematic attribute factor. Moreover, a real-time transport protocol (RTP)-based progressive transmission method is presented to improve the transmission of vector data. To demonstrate the essential methodology and key techniques, a prototype for web-based progressive vector transmission is presented, and an experiment on progressive selection and transmission of vector features is conducted. The experimental results indicate that our approach clearly improves the performance and end-user experience of delivering and manipulating large vector data over the Internet.
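    The measurement model above combines geometry, spatial distribution, and thematic attribute factors, but its exact formulation is not given in the abstract. A minimal information-theoretic sketch, assuming a hypothetical score that adds a geometric term (log of vertex count) to the surprisal of the feature's thematic class; the factor definitions are illustrative, not the paper's:

    ```python
    import math
    from collections import Counter

    def feature_importance(features):
        """Hypothetical per-feature information score, loosely in the spirit
        of the paper's geometry and thematic attribute factors. Each feature
        is a dict with 'n_vertices' (geometry) and 'category' (thematic);
        rarer categories carry more information (surprisal -log2 p)."""
        counts = Counter(f["category"] for f in features)
        total = len(features)
        scores = []
        for f in features:
            geometry = math.log2(1 + f["n_vertices"])             # more vertices -> more detail
            thematic = -math.log2(counts[f["category"]] / total)  # rarer class -> higher surprisal
            scores.append(geometry + thematic)
        return scores

    roads = [{"n_vertices": 120, "category": "highway"},
             {"n_vertices": 15,  "category": "footpath"},
             {"n_vertices": 40,  "category": "highway"}]
    # Transmit the highest-scoring features first
    ranked = sorted(zip(feature_importance(roads), range(len(roads))), reverse=True)
    ```

    In a progressive transmission scheme, the server would stream features in `ranked` order so that the most informative geometry reaches the client first.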

  13. MOLA-Based Landing Site Characterization

    NASA Technical Reports Server (NTRS)

    Duxbury, T. C.; Ivanov, A. B.

    2001-01-01

    The Mars Global Surveyor (MGS) Mars Orbiter Laser Altimeter (MOLA) data provide a basis for site characterization and selection never before possible. The basic MOLA information includes absolute radii, elevation, and 1-micrometer albedo, with derived datasets including digital image models (DIMs; illuminated elevation data), slope maps and slope statistics, and small-scale surface roughness maps and statistics. These quantities are useful in downsizing potential sites from descent engineering constraints and in landing/roving hazard and mobility assessments. Slope baselines at the few-hundred-meter level and surface roughness at the 10-meter level are possible. Additionally, the MOLA-derived Mars surface offers the possibility to precisely register and map-project other instrument datasets (images, ultraviolet, infrared, radar, etc.) taken at different resolution, viewing, and lighting geometry, building multiple layers of an information cube for site characterization and selection. Examples of direct MOLA data, data derived from MOLA, and other instruments' data registered to MOLA are given for the Hematite area.
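    Slope maps of the kind derived from MOLA altimetry can be computed from a gridded elevation model by finite differences. A minimal sketch (not the MOLA team's actual processing chain) using numpy's gradient:

    ```python
    import numpy as np

    def slope_map(elevation, spacing):
        """Slope magnitude (degrees) from a gridded elevation model, as one
        might derive hillslope maps from altimetry. `spacing` is the grid
        cell size in the same units as elevation."""
        dz_dy, dz_dx = np.gradient(np.asarray(elevation, float), spacing)
        return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    # A planar ramp rising 1 unit per cell in x => a 45-degree slope everywhere
    dem = np.tile(np.arange(5.0), (5, 1))
    slopes = slope_map(dem, spacing=1.0)
    ```

    The `spacing` argument controls the baseline: resampling the elevation grid to a few-hundred-meter cell size before differencing yields slope statistics at that baseline.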

  14. The development of an adolescent smoking cessation intervention--an Intervention Mapping approach to planning.

    PubMed

    Dalum, Peter; Schaalma, Herman; Kok, Gerjo

    2012-02-01

    The objective of this project was to develop a theory- and evidence-based adolescent smoking cessation intervention using both new and existing materials. We used the Intervention Mapping framework for planning health promotion programmes. Based on a needs assessment, we identified important and changeable determinants of cessation behaviour, specified change objectives for the intervention programme, selected theoretical change methods for accomplishing intervention objectives and finally operationalized change methods into practical intervention strategies. We found that guided practice, modelling, self-monitoring, coping planning, consciousness raising, dramatic relief and decisional balance were suitable methods for adolescent smoking cessation. We selected behavioural journalism, guided practice and Motivational Interviewing as strategies in our intervention. Intervention Mapping helped us to develop a systematic adolescent smoking cessation intervention with a clear link between behavioural goals, theoretical methods, practical strategies and materials, and with a strong focus on implementation and recruitment. This paper does not present evaluation data.

  15. Assimilation of optical and radar remote sensing data in 3D mapping of soil properties over large areas.

    PubMed

    Poggio, Laura; Gimona, Alessandro

    2017-02-01

    Soil is very important for many land functions. To achieve sustainability it is important to understand how soils vary over space in the landscape. Remote sensing data can be instrumental in mapping and spatial modelling of soil properties, resources and their variability. The aims of this study were to compare satellite sensors (MODIS, Landsat, Sentinel-1 and Sentinel-2) with varying spatial, temporal and spectral resolutions for Digital Soil Mapping (DSM) of a set of soil properties in Scotland, evaluate the potential benefits of adding Sentinel-1 data to DSM models, select the most suited mix of sensors for DSM to map the considered set of soil properties and validate the results of topsoil (2D) and whole profile (3D) models. The results showed that the use of a mixture of sensors proved more effective to model and map soil properties than single sensors. The use of radar Sentinel-1 data proved useful for all soil properties, improving the prediction capability of models with only optical bands. The use of MODIS time series provided stronger relationships than the use of temporal snapshots. The results showed good validation statistics with an RMSE below 20% of the range for all considered soil properties. The RMSE improved on previous studies that included only the MODIS sensor and used a coarser prediction grid. The performance of the models was similar to previous studies at regional, national or continental scale. A mix of optical and radar data proved useful to map soil properties along the profile. The produced maps of soil properties, describing both lateral and vertical variability with associated uncertainty, are important for further modelling and management of soil resources and ecosystem services. Coupled with further data the soil properties maps could be used to assess soil functions and therefore conditions and suitability of soils for a range of purposes. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. The Necessity for Routine Pre-operative Ultrasound Mapping Before Arteriovenous Fistula Creation: A Meta-analysis.

    PubMed

    Georgiadis, G S; Charalampidis, D G; Argyriou, C; Georgakarakos, E I; Lazarides, M K

    2015-05-01

    Existing guidelines suggest routine use of pre-operative color Doppler ultrasound (DUS) vessel mapping before the creation of arteriovenous fistulae (AVF); however, there is controversy about its benefit over traditional clinical examination or selective ultrasound use. This was a systematic review and meta-analysis of randomized controlled trials (RCTs) comparing routine DUS mapping before the creation of AVF with patients for whom the decision for AVF placement was based on clinical examination and selective ultrasound use. A search of MEDLINE/PubMed, SCOPUS, and the Cochrane Library was carried out in June 2014. The analyzed outcome measures were the immediate failure rate and the early/midterm adequacy of the fistula for hemodialysis. Additionally, assessment of the methodological quality of the included studies was carried out. Five studies (574 patients) were analyzed. A random effects model was used to pool the data. The pooled odds ratio (OR) for the immediate failure rate was 0.32 (95% confidence interval [CI] 0.17-0.60; p < .01), which was significantly in favor of the DUS mapping group. The pooled OR for the early/midterm adequacy for hemodialysis was 0.66 (95% CI 0.42-1.03; p = .06), with a trend in favor of the DUS mapping group; however, subgroup analysis revealed that routine DUS mapping was more beneficial than selective DUS (p < .05). The available evidence, based mainly on moderate quality RCTs, suggests that the pre-operative clinical examination should always be supplemented with routine DUS mapping before AVF creation. This policy avoids negative surgical explorations and significantly reduces the immediate AVF failure rate. Copyright © 2015 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
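    The pooled odds ratios above come from a random-effects model. A minimal sketch of DerSimonian-Laird random-effects pooling on the log-OR scale, recovering standard errors from 95% confidence intervals; the input numbers are illustrative, not the five trials analyzed in the review:

    ```python
    import math

    def pooled_or_random_effects(odds_ratios, ci_lows, ci_highs):
        """DerSimonian-Laird random-effects pooled odds ratio. Per-study SEs
        are recovered from the 95% CI width on the log scale."""
        y = [math.log(o) for o in odds_ratios]
        se = [(math.log(hi) - math.log(lo)) / (2 * 1.96)
              for lo, hi in zip(ci_lows, ci_highs)]
        w = [1 / s**2 for s in se]                       # fixed-effect weights
        ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        q = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, y))  # Cochran's Q
        df = len(y) - 1
        c = sum(w) - sum(wi**2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)                    # between-study variance
        w_re = [1 / (s**2 + tau2) for s in se]           # random-effects weights
        pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
        return math.exp(pooled)

    # Illustrative trial-level ORs with 95% CIs (hypothetical values)
    est = pooled_or_random_effects([0.25, 0.40, 0.35],
                                   [0.10, 0.18, 0.12],
                                   [0.62, 0.89, 1.02])
    ```

    When between-study heterogeneity (`tau2`) is zero, the estimate reduces to the fixed-effect inverse-variance pooled OR.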

  17. Exploiting rice-sorghum synteny for targeted development of EST-SSRs to enrich the sorghum genetic linkage map.

    PubMed

    Ramu, P; Kassahun, B; Senthilvel, S; Ashok Kumar, C; Jayashree, B; Folkertsma, R T; Reddy, L Ananda; Kuruvinashetti, M S; Haussmann, B I G; Hash, C T

    2009-11-01

    The sequencing and detailed comparative functional analysis of genomes of a number of select botanical models open new doors into comparative genomics among the angiosperms, with potential benefits for improvement of many orphan crops that feed large populations. In this study, a set of simple sequence repeat (SSR) markers was developed by mining the expressed sequence tag (EST) database of sorghum. Among the SSR-containing sequences, only those sharing considerable homology with rice genomic sequences across the lengths of the 12 rice chromosomes were selected. Thus, 600 SSR-containing sorghum EST sequences (50 homologous sequences on each of the 12 rice chromosomes) were selected, with the intention of providing coverage for corresponding homologous regions of the sorghum genome. Primer pairs were designed and polymorphism detection ability was assessed using parental pairs of two existing sorghum mapping populations. About 28% of these new markers detected polymorphism in this 4-entry panel. A subset of 55 polymorphic EST-derived SSR markers were mapped onto the existing skeleton map of a recombinant inbred population derived from cross N13 x E 36-1, which is segregating for Striga resistance and the stay-green component of terminal drought tolerance. These new EST-derived SSR markers mapped across all 10 sorghum linkage groups, mostly to regions expected based on prior knowledge of rice-sorghum synteny. The ESTs from which these markers were derived were then mapped in silico onto the aligned sorghum genome sequence, and 88% of the best hits corresponded to linkage-based positions. This study demonstrates the utility of comparative genomic information in targeted development of markers to fill gaps in linkage maps of related crop species for which sufficient genomic tools are not available.

  18. Optimizing spectral wave estimates with adjoint-based sensitivity maps

    NASA Astrophysics Data System (ADS)

    Orzech, Mark; Veeramony, Jay; Flampouris, Stylianos

    2014-04-01

    A discrete numerical adjoint has recently been developed for the stochastic wave model SWAN. In the present study, this adjoint code is used to construct spectral sensitivity maps for two nearshore domains. The maps display the correlations of spectral energy levels throughout the domain with the observed energy levels at a selected location or region of interest (LOI/ROI), providing a full spectrum of values at all locations in the domain. We investigate the effectiveness of sensitivity maps based on significant wave height (Hs) in determining alternate offshore instrument deployment sites when a chosen nearshore location or region is inaccessible. Wave and bathymetry datasets are employed from one shallower, small-scale domain (Duck, NC) and one deeper, larger-scale domain (San Diego, CA). The effects of seasonal changes in wave climate, errors in bathymetry, and multiple assimilation points on sensitivity map shapes and model performance are investigated. Model accuracy is evaluated by comparing spectral statistics as well as with an RMS skill score, which estimates a mean model-data error across all spectral bins. Results indicate that data assimilation from identified high-sensitivity alternate locations consistently improves model performance at nearshore LOIs, while assimilation from low-sensitivity locations results in lesser or no improvement. Use of sub-sampled or alongshore-averaged bathymetry has a domain-specific effect on model performance when assimilating from a high-sensitivity alternate location. When multiple alternate assimilation locations are used from areas of lower sensitivity, model performance may be worse than with a single, high-sensitivity assimilation point.

  19. Advances and Limitations of Disease Biogeography Using Ecological Niche Modeling

    PubMed Central

    Escobar, Luis E.; Craft, Meggan E.

    2016-01-01

    Mapping disease transmission risk is crucial in public and animal health for evidence-based decision making. Ecology and epidemiology are highly related disciplines that may contribute to improvements in mapping disease, which can be used to answer health-related questions. Ecological niche modeling is increasingly used for understanding the biogeography of diseases in plants, animals, and humans. However, epidemiological applications of niche modeling approaches for disease mapping can fail to generate robust study designs, producing incomplete or incorrect inferences. This manuscript is an overview of the history and conceptual bases behind ecological niche modeling, specifically as applied to epidemiology and public health; it does not pretend to be an exhaustive and detailed description of ecological niche modeling literature and methods. Instead, this review includes selected state-of-the-science approaches and tools, providing a short guide to designing studies incorporating information on the type and quality of the input data (i.e., occurrences and environmental variables), identification and justification of the extent of the study area, and encourages users to explore and test diverse algorithms for more informed conclusions. We provide a friendly introduction to the field of disease biogeography presenting an updated guide for researchers looking to use ecological niche modeling for disease mapping. We anticipate that ecological niche modeling will soon be a critical tool for epidemiologists aiming to map disease transmission risk, forecast disease distribution under climate change scenarios, and identify landscape factors triggering outbreaks. PMID:27547199

  20. Advances and Limitations of Disease Biogeography Using Ecological Niche Modeling.

    PubMed

    Escobar, Luis E; Craft, Meggan E

    2016-01-01

    Mapping disease transmission risk is crucial in public and animal health for evidence-based decision making. Ecology and epidemiology are highly related disciplines that may contribute to improvements in mapping disease, which can be used to answer health-related questions. Ecological niche modeling is increasingly used for understanding the biogeography of diseases in plants, animals, and humans. However, epidemiological applications of niche modeling approaches for disease mapping can fail to generate robust study designs, producing incomplete or incorrect inferences. This manuscript is an overview of the history and conceptual bases behind ecological niche modeling, specifically as applied to epidemiology and public health; it does not pretend to be an exhaustive and detailed description of ecological niche modeling literature and methods. Instead, this review includes selected state-of-the-science approaches and tools, providing a short guide to designing studies incorporating information on the type and quality of the input data (i.e., occurrences and environmental variables), identification and justification of the extent of the study area, and encourages users to explore and test diverse algorithms for more informed conclusions. We provide a friendly introduction to the field of disease biogeography presenting an updated guide for researchers looking to use ecological niche modeling for disease mapping. We anticipate that ecological niche modeling will soon be a critical tool for epidemiologists aiming to map disease transmission risk, forecast disease distribution under climate change scenarios, and identify landscape factors triggering outbreaks.

  1. The Use of Uas for Rapid 3d Mapping in Geomatics Education

    NASA Astrophysics Data System (ADS)

    Teo, Tee-Ann; Tian-Yuan Shih, Peter; Yu, Sz-Cheng; Tsai, Fuan

    2016-06-01

    With the development of technology, UAS has become an advanced means of supporting rapid mapping for disaster response. The aim of this study is to develop educational modules for UAS data processing in rapid 3D mapping. The modules designed for this study focus on UAV data processing with freely available or trial software, for educational purposes. The key modules include orientation modelling, 3D point cloud generation, image georeferencing and visualization. The orientation modelling module adopts VisualSFM to determine the projection matrix for each image station; in addition, approximate ground control points are measured from OpenStreetMap for absolute orientation. The second module uses SURE and the orientation files from the previous module for 3D point cloud generation; ground point selection and digital terrain model generation can then be achieved with LAStools. The third module stitches individual rectified images into a mosaic image using Microsoft ICE (Image Composite Editor). The last module visualizes and measures the generated dense point clouds in CloudCompare. These comprehensive UAS processing modules allow students to gain the skills to process and deliver UAS photogrammetric products in rapid 3D mapping. Moreover, they can also apply the photogrammetric products for analysis in practice.

  2. Mapping model behaviour using Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Gupta, H. V.; Casper, M. C.

    2009-03-01

    Hydrological model evaluation and identification essentially involves extracting and processing information from model time series. However, the type of information extracted by statistical measures has only very limited meaning because it does not relate to the hydrological context of the data. To overcome this inadequacy we exploit the diagnostic evaluation concept of Signature Indices, in which model performance is measured using theoretically relevant characteristics of system behaviour. In our study, a Self-Organizing Map (SOM) is used to process the Signatures extracted from Monte-Carlo simulations generated by the distributed conceptual watershed model NASIM. The SOM creates a hydrologically interpretable mapping of overall model behaviour, which immediately reveals deficits and trade-offs in the ability of the model to represent the different functional behaviours of the watershed. Further, it facilitates interpretation of the hydrological functions of the model parameters and provides preliminary information regarding their sensitivities. Most notably, we use this mapping to identify the set of model realizations (among the Monte-Carlo data) that most closely approximate the observed discharge time series in terms of the hydrologically relevant characteristics, and to confine the parameter space accordingly. Our results suggest that Signature Index based SOMs could potentially serve as tools for decision makers inasmuch as model realizations with specific Signature properties can be selected according to the purpose of the model application. Moreover, given that the approach helps to represent and analyze multi-dimensional distributions, it could be used to form the basis of an optimization framework that uses SOMs to characterize the model performance response surface. As such it provides a powerful and useful way to conduct model identification and model uncertainty analyses.
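    The mapping described above is produced by a Self-Organizing Map trained on signature indices extracted from Monte-Carlo runs. A minimal numpy sketch of the generic SOM training loop (Gaussian neighborhood, decaying learning rate); this is a toy stand-in, not the authors' NASIM/signature-index setup:

    ```python
    import numpy as np

    def train_som(data, grid=(4, 4), epochs=30, seed=0):
        """Minimal Self-Organizing Map: each node's weight vector is pulled
        toward samples that win it, with a Gaussian neighborhood that
        shrinks over training."""
        rng = np.random.default_rng(seed)
        h, w = grid
        nodes = rng.random((h * w, data.shape[1]))
        coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
        for t in range(epochs):
            lr = 0.5 * (1 - t / epochs)                    # decaying learning rate
            sigma = max(0.5, (max(h, w) / 2) * (1 - t / epochs))
            for x in data:
                bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))  # best-matching unit
                d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
                g = np.exp(-d2 / (2 * sigma ** 2))         # neighborhood function
                nodes += lr * g[:, None] * (x - nodes)
        return nodes.reshape(h, w, -1)

    # Two toy clusters standing in for signature-index vectors
    data = np.vstack([np.random.default_rng(1).normal(0.2, 0.05, (50, 2)),
                      np.random.default_rng(2).normal(0.8, 0.05, (50, 2))])
    som = train_som(data)
    ```

    After training, mapping each model realization to its best-matching node yields the kind of 2D behavioural map the abstract describes, on which realizations with desired signature properties can be selected.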

  3. Mapping model behaviour using Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Gupta, H. V.; Casper, M. C.

    2008-12-01

    Hydrological model evaluation and identification essentially depends on the extraction of information from model time series and its processing. However, the type of information extracted by statistical measures has only very limited meaning because it does not relate to the hydrological context of the data. To overcome this inadequacy we exploit the diagnostic evaluation concept of Signature Indices, in which model performance is measured using theoretically relevant characteristics of system behaviour. In our study, a Self-Organizing Map (SOM) is used to process the Signatures extracted from Monte-Carlo simulations generated by a distributed conceptual watershed model. The SOM creates a hydrologically interpretable mapping of overall model behaviour, which immediately reveals deficits and trade-offs in the ability of the model to represent the different functional behaviours of the watershed. Further, it facilitates interpretation of the hydrological functions of the model parameters and provides preliminary information regarding their sensitivities. Most notably, we use this mapping to identify the set of model realizations (among the Monte-Carlo data) that most closely approximate the observed discharge time series in terms of the hydrologically relevant characteristics, and to confine the parameter space accordingly. Our results suggest that Signature Index based SOMs could potentially serve as tools for decision makers inasmuch as model realizations with specific Signature properties can be selected according to the purpose of the model application. Moreover, given that the approach helps to represent and analyze multi-dimensional distributions, it could be used to form the basis of an optimization framework that uses SOMs to characterize the model performance response surface. As such it provides a powerful and useful way to conduct model identification and model uncertainty analyses.

  4. Interactive Web Interface to the Global Strain Rate Map Project

    NASA Astrophysics Data System (ADS)

    Meertens, C. M.; Estey, L.; Kreemer, C.; Holt, W.

    2004-05-01

    An interactive web interface allows users to explore the results of a global strain rate and velocity model and to compare them to other geophysical observations. The most recent model, an updated version of Kreemer et al., 2003, has 25 independent rigid plate-like regions separated by deformable boundaries covered by about 25,000 grid areas. A least-squares fit was made to 4900 geodetic velocities from 79 different geodetic studies. In addition, Quaternary fault slip rate data are used to infer geologic strain rate estimates (currently only for central Asia). Information about the style and direction of expected strain rate is inferred from the principal axes of the seismic strain rate field. The current model, as well as source data, references and an interactive map tool, are located at the International Lithosphere Program (ILP) "A Global Strain Rate Map (ILP II-8)" project website: http://www-world-strain-map.org. The purpose of the ILP GSRM project is to provide new information from this, and other investigations, that will contribute to a better understanding of continental dynamics and to the quantification of seismic hazards. A unique aspect of the GSRM interactive Java map tool is that the user can zoom in and make custom views of the model grid and results for any area of the globe, selecting strain rate and style contour plots and principal axes, observed and model velocity fields in specified frames of reference, and geologic fault data. The results can be displayed with other datasets such as Harvard CMT earthquake focal mechanisms, stress directions from the ILP World Stress Map Project, and topography. With the GSRM Java map tool, the user views custom maps generated by a Generic Mapping Tools (GMT) server. These interactive capabilities greatly extend what is possible to present in a published paper.
A JavaScript version, using pre-constructed maps, as well as a related information site have also been created for broader education and outreach access. The GSRM map tool will be demonstrated and latest model GSRM 1.1 results, containing important new data for Asia, Iran, western Pacific, and Southern California, will be presented.

  5. An adaptable neuromorphic model of orientation selectivity based on floating gate dynamics

    PubMed Central

    Gupta, Priti; Markan, C. M.

    2014-01-01

    The biggest challenge that the neuromorphic community faces today is to build systems that can be considered truly cognitive. Adaptation and self-organization are the two basic principles that underlie any cognitive function that the brain performs. If we can replicate this behavior in hardware, we move a step closer to our goal of having cognitive neuromorphic systems. Adaptive feature selectivity is a mechanism by which nature optimizes resources so as to have greater acuity for more abundant features. Developing neuromorphic feature maps can help design generic machines that can emulate this adaptive behavior. Most neuromorphic models that have attempted to build self-organizing systems follow the approach of modeling abstract theoretical frameworks in hardware. While this is good from a modeling and analysis perspective, it may not lead to the most efficient hardware. On the other hand, exploiting hardware dynamics to build adaptive systems, rather than forcing the hardware to behave like mathematical equations, seems to be a more robust methodology when it comes to developing actual hardware for real world applications. In this paper we use a novel time-staggered Winner Take All circuit, which exploits the adaptation dynamics of floating gate transistors, to model an adaptive cortical cell that demonstrates Orientation Selectivity, a well-known biological phenomenon observed in the visual cortex. The cell performs competitive learning, refining its weights in response to input patterns resembling different oriented bars, becoming selective to a particular oriented pattern. Different analyses performed on the cell, such as orientation tuning, application of abnormal inputs, and response to spatial frequency and periodic patterns, reveal close similarity between our cell and its biological counterpart. Embedded in an RC grid, these cells interact diffusively, exhibiting cluster formation and making way for adaptively building orientation-selective maps in silicon.
PMID:24765062

  6. Delineation of recharge areas for selected wells in the St. Peter-Prairie du Chien-Jordan Aquifer, Rochester, Minnesota

    USGS Publications Warehouse

    Delin, G.N.; Almendinger, James Edward

    1991-01-01

    Hydrogeologic mapping and numerical modeling were used to delineate zones of contribution to wells, defined as all parts of a ground-water-flow system that could supply water to a well. The zones of contribution delineated by use of numerical modeling have similar orientation (parallel to regional flow directions) but significantly different areas than the zones of contribution delineated by use of hydrogeologic mapping. Differences in computed areas of recharge are attributed to the capability of the numerical model to more accurately represent (1) the three-dimensional flow system, (2) hydrologic boundaries like streams, (3) variable recharge, and (4) the influence of nearby pumped wells, compared to the analytical models.

  7. Delineation of recharge areas for selected wells in the St. Peter-Prairie du Chien-Jordan aquifer, Rochester, Minnesota

    USGS Publications Warehouse

    Delin, G.N.; Almendinger, James Edward

    1993-01-01

    Hydrogeologic mapping and numerical modeling were used to delineate zones of contribution to wells, defined as all parts of a ground-water-flow system that could supply water to a well. The zones of contribution delineated by use of numerical modeling have similar orientation (parallel to regional flow directions) but significantly different areas than the zones of contribution delineated by use of hydrogeologic mapping. Differences in computed areas of recharge are attributed to the capability of the numerical model to more accurately represent (1) the three-dimensional flow system, (2) hydrologic boundaries such as streams, (3) variable recharge, and (4) the influence of nearby pumped wells, compared to the analytical models.

  8. Hyperspherical von Mises-Fisher mixture (HvMF) modelling of high angular resolution diffusion MRI.

    PubMed

    Bhalerao, Abhir; Westin, Carl-Fredrik

    2007-01-01

    A mapping of unit vectors onto a 5D hypersphere is used to model and partition ODFs from HARDI data. This mapping has a number of useful and interesting properties and we make a link to interpretation of the second order spherical harmonic decompositions of HARDI data. The paper presents the working theory and experiments of using a von Mises-Fisher mixture model for directional samples. The MLE of the second moment of the HvMF pdf can also be related to fractional anisotropy. We perform error analysis of the estimation scheme in single and multi-fibre regions and then show how a penalised-likelihood model selection method can be employed to differentiate single and multiple fibre regions.
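    For a single von Mises-Fisher component, the maximum-likelihood mean direction is the normalized resultant vector, and the concentration parameter kappa admits a standard closed-form approximation (Banerjee et al.). A minimal sketch for 3D directional samples; the paper itself fits a full mixture on a 5D hypersphere, which this does not reproduce:

    ```python
    import numpy as np

    def vmf_mle(samples):
        """ML estimates for one von Mises-Fisher component: mean direction mu
        and the Banerjee et al. approximation for concentration kappa."""
        x = np.asarray(samples, float)
        n, p = x.shape
        resultant = x.sum(axis=0)
        rbar = np.linalg.norm(resultant) / n             # mean resultant length
        mu = resultant / np.linalg.norm(resultant)
        kappa = rbar * (p - rbar ** 2) / (1 - rbar ** 2)
        return mu, kappa

    # Tightly clustered unit vectors near +z => mu ~ [0, 0, 1], large kappa
    rng = np.random.default_rng(0)
    pts = rng.normal([0, 0, 1], 0.05, (200, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    mu, kappa = vmf_mle(pts)
    ```

    In a mixture setting these estimates become the M-step of an EM loop, with responsibilities weighting each sample's contribution to the resultant.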

  9. Causal Client Models in Selecting Effective Interventions: A Cognitive Mapping Study

    ERIC Educational Resources Information Center

    de Kwaadsteniet, Leontien; Hagmayer, York; Krol, Nicole P. C. M.; Witteman, Cilia L. M.

    2010-01-01

    An important reason to choose an intervention to treat psychological problems of clients is the expectation that the intervention will be effective in alleviating the problems. The authors investigated whether clinicians base their ratings of the effectiveness of interventions on models that they construct representing the factors causing and…

  10. Genome re-sequencing reveals the history of apple and supports a two-stage model for fruit enlargement

    USDA-ARS?s Scientific Manuscript database

    Human selection has reshaped crop genomes. Here we report an apple genome variation map generated through genome sequencing of 117 diverse accessions. A comprehensive model of apple speciation and domestication along the Silk Road was proposed based on evidence from diverse genomic analyses. Cultiva...

  11. The Influence of Endmember Selection Method in Extracting Impervious Surface from Airborne Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Wang, J.; Feng, B.

    2016-12-01

    Impervious surface area (ISA) has long been studied as an important input into moisture flux models. In general, ISA impedes groundwater recharge, increases stormflow/flood frequency, and alters in-stream and riparian habitats. Urban area is recognized as one of the richest ISA environment. Urban ISA mapping assists flood prevention and urban planning. Hyperspectral imagery (HI), for its ability to detect subtle spectral signature, becomes an ideal candidate in urban ISA mapping. To map ISA from HI involves endmember (EM) selection. The high degree of spatial and spectral heterogeneity of urban environment puts great difficulty in this task: a compromise point is needed between the automatic degree and the good representativeness of the method. The study tested one manual and two semi-automatic EM selection strategies. The manual and the first semi-automatic methods have been widely used in EM selection. The second semi-automatic EM selection method is rather new and has been only proposed for moderate spatial resolution satellite. The manual method visually selected the EM candidates from eight landcover types in the original image. The first semi-automatic method chose the EM candidates using a threshold over the pixel purity index (PPI) map. The second semi-automatic method used the triangle shape of the HI scatter plot in the n-Dimension visualizer to identify the V-I-S (vegetation-impervious surface-soil) EM candidates: the pixels locate at the triangle points. The initial EM candidates from the three methods were further refined by three indexes (EM average RMSE, minimum average spectral angle, and count based EM selection) and generated three spectral libraries, which were used to classify the test image. Spectral angle mapper was applied. The accuracy reports for the classification results were generated. The overall accuracy are 85% for the manual method, 81% for the PPI method, and 87% for the V-I-S method. 
The V-I-S EM selection method performed best in this study, demonstrating its value not only for moderate-spatial-resolution satellite imagery but also for increasingly accessible high-spatial-resolution airborne imagery. This semi-automatic EM selection method can be applied to a wide range of remote sensing images and provide ISA maps for hydrological analysis.
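
    The spectral angle mapper used for classification compares each pixel spectrum to the library endmembers by the angle between them. A minimal sketch on synthetic spectra (the band values and the two endmember labels are illustrative, not taken from the study):

```python
import numpy as np

def sam_classify(image, library):
    """Spectral angle mapper: assign each pixel the endmember class with
    the smallest spectral angle (arccos of cosine similarity)."""
    pix = image.reshape(-1, image.shape[-1]).astype(float)
    pix_n = pix / np.linalg.norm(pix, axis=1, keepdims=True)
    lib_n = library / np.linalg.norm(library, axis=1, keepdims=True)
    angles = np.arccos(np.clip(pix_n @ lib_n.T, -1.0, 1.0))
    return angles.argmin(axis=1).reshape(image.shape[:2])

# Two toy endmembers (3 bands) and a 1x2-pixel "image"
library = np.array([[0.9, 0.1, 0.1],    # e.g. impervious surface
                    [0.1, 0.8, 0.3]])   # e.g. vegetation
image = np.array([[[0.85, 0.12, 0.09],
                   [0.12, 0.75, 0.28]]])
labels = sam_classify(image, library)
```

    Because the angle ignores overall brightness, SAM is relatively insensitive to illumination differences, which is one reason it is popular for hyperspectral classification.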

  12. Limitations to mapping habitat-use areas in changing landscapes using the Mahalanobis distance statistic

    USGS Publications Warehouse

    Knick, Steven T.; Rotenberry, J.T.

    1998-01-01

    We tested the potential of a GIS mapping technique, using a resource selection model developed for black-tailed jackrabbits (Lepus californicus) and based on the Mahalanobis distance statistic, to track changes in shrubsteppe habitats in southwestern Idaho. If successful, the technique could be used to predict animal use areas, or those undergoing change, in different regions from the same selection function and variables without additional sampling. We determined the multivariate mean vector of 7 GIS variables that described habitats used by jackrabbits. We then ranked the similarity of all cells in the GIS coverage by their Mahalanobis distance to the mean habitat vector. The resulting map accurately depicted areas where we sighted jackrabbits on verification surveys. We then simulated an increase in shrublands (which are important habitats). Contrary to expectation, the new configurations were assigned lower similarity relative to the original mean habitat vector. Because the selection function is based on a unimodal mean, any deviation, even if biologically positive, creates larger Mahalanobis distances and lower similarity values. We recommend the Mahalanobis distance technique for mapping animal use areas when animals are distributed optimally, the landscape is well sampled to determine the mean habitat vector, and the distributions of the habitat variables do not change.
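
    The core of the technique is the Mahalanobis distance of each grid cell's habitat vector from the mean vector of used habitat. A minimal sketch on synthetic data (the seven variables and their values are placeholders for the actual GIS layers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "used habitat" samples: 7 habitat variables at jackrabbit sightings
used = rng.normal(size=(200, 7))
mean_vec = used.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(used, rowvar=False))

def mahalanobis_sq(x, mean, cov_inv):
    """Squared Mahalanobis distance of one habitat vector from the mean."""
    d = x - mean
    return float(d @ cov_inv @ d)

# Rank every cell of a (flattened) GIS coverage by distance to the mean:
grid = rng.normal(size=(1000, 7))
d2 = np.array([mahalanobis_sq(c, mean_vec, cov_inv) for c in grid])
similarity_rank = d2.argsort()  # smaller distance = higher similarity

# The unimodality caveat from the study: a cell deviating from the mean in
# a biologically positive direction still gets a larger distance, and hence
# a lower mapped similarity.
```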

  13. Probabilistic flood inundation mapping at ungauged streams due to roughness coefficient uncertainty in hydraulic modelling

    NASA Astrophysics Data System (ADS)

    Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.

    2017-04-01

    Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty introduced by the roughness coefficient values on hydraulic models in flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model, HEC-RAS, is selected and linked to Monte Carlo simulations of hydraulic roughness. Terrestrial Laser Scanner data have been used to produce a high-quality DEM to minimise input data uncertainty and to improve the accuracy of the stream channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated on their accuracy in representing the estimated roughness values. Finally, Latin Hypercube Sampling has been used to generate different sets of Manning roughness values, and flood inundation probability maps have been created with the use of Monte Carlo simulations. Historical flood extent data, from an extreme historical flash flood event, are used for validation of the method. The calibration process is based on a binary wet-dry reasoning with the use of the Median Absolute Percentage Error evaluation metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.
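
    The roughness-sampling step can be sketched as a one-dimensional Latin Hypercube draw from a distribution fitted to the field-estimated Manning's n values. The lognormal parameters below are assumptions for illustration, not the study's fitted values:

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube_1d(n_samples, rng):
    """Latin Hypercube sample on [0, 1): exactly one point per stratum."""
    strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
    rng.shuffle(strata)
    return strata

rng = np.random.default_rng(42)

# Hypothetical lognormal fit to field-estimated Manning's n for the channel
mu, sigma = np.log(0.035), 0.25

u = latin_hypercube_1d(100, rng)
n_values = np.exp(norm.ppf(u, loc=mu, scale=sigma))  # inverse-CDF transform

# Each n_values[i] would parameterise one hydraulic-model (e.g. HEC-RAS) run
# in the Monte Carlo loop; the resulting wet/dry maps are then aggregated
# into inundation probabilities.
```

    Compared with plain random sampling, the stratification guarantees coverage of the distribution tails with far fewer model runs.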

  14. Identification of phreatophytic groundwater dependent ecosystems using geospatial technologies

    NASA Astrophysics Data System (ADS)

    Perez Hoyos, Isabel Cristina

    The protection of groundwater dependent ecosystems (GDEs) is increasingly being recognized as an essential aspect for the sustainable management and allocation of water resources. Ecosystem services are crucial for human well-being and for a variety of flora and fauna. However, the conservation of GDEs is only possible if knowledge about their location and extent is available. Several studies have focused on the identification of GDEs at specific locations using ground-based measurements. However, recent progress in technologies such as remote sensing and their integration with geographic information systems (GIS) has provided alternative ways to map GDEs at much larger spatial extents. This study is concerned with the discovery of patterns in geospatial data sets using data mining techniques for mapping phreatophytic GDEs in the United States at 1 km spatial resolution. A methodology to estimate the probability that an ecosystem is groundwater dependent is developed. Probabilities are obtained by modeling the relationship between the known locations of GDEs and the main factors influencing groundwater dependency, namely water table depth (WTD) and aridity index (AI). A methodology is proposed to predict WTD at 1 km spatial resolution using relevant geospatial data sets calibrated with WTD observations. An ensemble learning algorithm called random forest (RF) is used in order to model the distribution of groundwater in three study areas: Nevada, California, and Washington, as well as in the entire United States. RF regression performance is compared with that of a single regression tree (RT). The comparison is based on contrasting training error, true prediction error, and variable importance estimates of both methods. Additionally, remote sensing variables are omitted from the process of fitting the RF model to the data to evaluate the deterioration in the model performance when these variables are not used as an input. 
Research results suggest that although the prediction accuracy of a single RT is reduced in comparison with RFs, single trees can still be used to understand the interactions that might be taking place between predictor variables and the response variable. Regarding RF, there is great potential in using the power of an ensemble of trees for prediction of WTD. The superior capability of RF to accurately map water table position in Nevada, California, and Washington demonstrates that this technique can be applied at scales larger than regional levels. It is also shown that the removal of remote sensing variables from the RF training process degrades the performance of the model. Using the predicted WTD, the probability that an ecosystem is groundwater dependent (GDE probability) is estimated at 1 km spatial resolution. The modeling technique is evaluated in the state of Nevada, USA, to develop a systematic approach for the identification of GDEs, and it is then applied across the United States. The modeling approach selected for the development of the GDE probability map results from a comparison of the performance of classification trees (CT) and classification forests (CF). Predictive performance evaluation for the selection of the most accurate model is achieved using a threshold-independent technique, and the prediction accuracy of both models is assessed in greater detail using threshold-dependent measures. The resulting GDE probability map can potentially be used for the definition of conservation areas since it can be translated into a binary classification map with two classes: GDE and NON-GDE. These maps are created by selecting a probability threshold. It is demonstrated that the choice of this threshold has dramatic effects on deterministic model performance measures.
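
    The forest-versus-single-tree comparison and the probability-to-binary-map step can be sketched with scikit-learn on synthetic data. The feature counts, sample sizes, and the 0.5 threshold are illustrative; the study's actual predictors were WTD, AI, and related geospatial layers:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the geospatial predictors and GDE / non-GDE labels
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

tree_acc = tree.score(X_te, y_te)      # single tree: interpretable
forest_acc = forest.score(X_te, y_te)  # ensemble: usually more accurate

# Per-cell "GDE probability", analogous to the 1 km probability map:
gde_prob = forest.predict_proba(X_te)[:, 1]
# A probability threshold turns it into a binary GDE / NON-GDE map; as the
# study notes, this threshold choice strongly affects accuracy measures.
binary_map = gde_prob >= 0.5
```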

  15. Discovery of Type II Inhibitors of TGFβ-Activated Kinase 1 (TAK1) and Mitogen-Activated Protein Kinase Kinase Kinase Kinase 2 (MAP4K2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Nomanbhoy, Tyzoon; Gurbani, Deepak

    Here, we developed a pharmacophore model for type II inhibitors that was used to guide the construction of a library of kinase inhibitors. Kinome-wide selectivity profiling of the library resulted in the identification of a series of 4-substituted 1H-pyrrolo[2,3-b]pyridines that exhibited potent inhibitory activity against two mitogen-activated protein kinases (MAPKs), TAK1 (MAP3K7) and MAP4K2, as well as pharmacologically well interrogated kinases such as p38α (MAPK14) and ABL. Further investigation of the structure–activity relationship (SAR) resulted in the identification of potent dual TAK1 and MAP4K2 inhibitors such as 1 (NG25) and 2 as well as MAP4K2 selective inhibitors such as 16 and 17. Some of these inhibitors possess good pharmacokinetic properties that will enable their use in pharmacological studies in vivo. Lastly, a 2.4 Å cocrystal structure of TAK1 in complex with 1 confirms that the activation loop of TAK1 assumes the DFG-out conformation characteristic of type II inhibitors.

  16. Discovery of Type II Inhibitors of TGFβ-Activated Kinase 1 (TAK1) and Mitogen-Activated Protein Kinase Kinase Kinase Kinase 2 (MAP4K2)

    PubMed Central

    2015-01-01

    We developed a pharmacophore model for type II inhibitors that was used to guide the construction of a library of kinase inhibitors. Kinome-wide selectivity profiling of the library resulted in the identification of a series of 4-substituted 1H-pyrrolo[2,3-b]pyridines that exhibited potent inhibitory activity against two mitogen-activated protein kinases (MAPKs), TAK1 (MAP3K7) and MAP4K2, as well as pharmacologically well interrogated kinases such as p38α (MAPK14) and ABL. Further investigation of the structure–activity relationship (SAR) resulted in the identification of potent dual TAK1 and MAP4K2 inhibitors such as 1 (NG25) and 2 as well as MAP4K2 selective inhibitors such as 16 and 17. Some of these inhibitors possess good pharmacokinetic properties that will enable their use in pharmacological studies in vivo. A 2.4 Å cocrystal structure of TAK1 in complex with 1 confirms that the activation loop of TAK1 assumes the DFG-out conformation characteristic of type II inhibitors. PMID:25075558

  17. Discovery of Type II Inhibitors of TGFβ-Activated Kinase 1 (TAK1) and Mitogen-Activated Protein Kinase Kinase Kinase Kinase 2 (MAP4K2)

    DOE PAGES

    Tan, Li; Nomanbhoy, Tyzoon; Gurbani, Deepak; ...

    2014-07-17

    Here, we developed a pharmacophore model for type II inhibitors that was used to guide the construction of a library of kinase inhibitors. Kinome-wide selectivity profiling of the library resulted in the identification of a series of 4-substituted 1H-pyrrolo[2,3-b]pyridines that exhibited potent inhibitory activity against two mitogen-activated protein kinases (MAPKs), TAK1 (MAP3K7) and MAP4K2, as well as pharmacologically well interrogated kinases such as p38α (MAPK14) and ABL. Further investigation of the structure–activity relationship (SAR) resulted in the identification of potent dual TAK1 and MAP4K2 inhibitors such as 1 (NG25) and 2 as well as MAP4K2 selective inhibitors such as 16 and 17. Some of these inhibitors possess good pharmacokinetic properties that will enable their use in pharmacological studies in vivo. Lastly, a 2.4 Å cocrystal structure of TAK1 in complex with 1 confirms that the activation loop of TAK1 assumes the DFG-out conformation characteristic of type II inhibitors.

  18. Association of MAP4K4 gene single nucleotide polymorphism with mastitis and milk traits in Chinese Holstein cattle.

    PubMed

    Bhattarai, Dinesh; Chen, Xing; Ur Rehman, Zia; Hao, Xingjie; Ullah, Farman; Dad, Rahim; Talpur, Hira Sajjad; Kadariya, Ishwari; Cui, Lu; Fan, Mingxia; Zhang, Shujun

    2017-02-01

    The objective of the studies presented in this Research Communication was to investigate the association of single nucleotide polymorphisms present in the MAP4K4 gene with different milk traits in dairy cows. Based on previous QTL fine mapping results on bovine chromosome 11, the MAP4K4 gene was selected as a candidate gene to evaluate its effect on somatic cell count and milk traits in Chinese Holstein cows. Milk production traits including milk yield, fat percentage, and protein percentage of each cow were collected using 305 d lactation records. Associations between MAP4K4 genotypes, milk traits, and somatic cell score (SCS) were tested using a general linear model in R. For two SNPs in exon 18 (c.2061T > G and c.2196T > C), the TT genotype was associated with significantly higher SCS. We found a significant effect of the exon 18 SNP (c.2061T > G) on protein percentage, milk yield, and SCS. We identified SNPs at different locations in the bovine MAP4K4 gene, several of which were significantly associated with SCS and other milk traits. Thus, MAP4K4 could be a useful candidate gene for selection of dairy cattle against mastitis, and the identified polymorphisms might potentially be strong genetic markers.
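
    The study fitted a general linear model in R; a simpler sketch of the same kind of genotype-trait association test is a one-way ANOVA across genotype groups. The group means, spreads, and sample sizes below are invented for illustration, not taken from the study:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Hypothetical SCS values grouped by c.2061T>G genotype
scs_TT = rng.normal(3.4, 0.6, 120)   # TT illustrated with higher mean SCS
scs_TG = rng.normal(3.0, 0.6, 150)
scs_GG = rng.normal(2.9, 0.6, 80)

# Test whether mean SCS differs between genotype classes
f_stat, p_value = f_oneway(scs_TT, scs_TG, scs_GG)
```

    A full analysis would add fixed effects (herd, parity, lactation stage) as in the study's linear model, which a plain ANOVA omits.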

  19. Spatial-area selective retrieval of multiple object-place associations in a hierarchical cognitive map formed by theta phase coding.

    PubMed

    Sato, Naoyuki; Yamaguchi, Yoko

    2009-06-01

    The human cognitive map is known to be hierarchically organized, consisting of sets of perceptually clustered landmarks. Patient studies have demonstrated that these cognitive maps are maintained by the hippocampus, while the neural dynamics are still poorly understood. The authors have shown that the neural dynamic "theta phase precession" observed in the rodent hippocampus may be capable of forming hierarchical cognitive maps in humans. In the model, a visual input sequence consisting of object and scene features in the central and peripheral visual fields, respectively, results in the formation of a hierarchical cognitive map for object-place associations. Surprisingly, it is possible for such a complex memory structure to be formed in a few seconds. In this paper, we evaluate the memory retrieval of object-place associations in the hierarchical network formed by theta phase precession. The results show that multiple object-place associations can be retrieved with the initial cue of a scene input. Importantly, owing to the wide-to-narrow unidirectional connections among scene units, the spatial area for object-place retrieval can be controlled by the spatial area of the initial cue input. These results indicate that hierarchical cognitive maps have computational advantages for spatial-area-selective retrieval of multiple object-place associations. Theta phase precession dynamics is suggested as a fundamental neural mechanism of the human cognitive map.

  20. Model My Watershed and BiG CZ Data Portal: Interactive geospatial analysis and hydrological modeling web applications that leverage the Amazon cloud for scientists, resource managers and students

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Tarboton, D. G.; Sazib, N. S.; Horsburgh, J. S.; Cheetham, R.

    2016-12-01

    The Model My Watershed Web app (http://wikiwatershed.org/model/) was designed to enable citizens, conservation practitioners, municipal decision-makers, educators, and students to interactively select any area of interest anywhere in the continental USA to: (1) analyze real land use and soil data for that area; (2) model stormwater runoff and water-quality outcomes; and (3) compare how different conservation or development scenarios could modify runoff and water quality. The BiG CZ Data Portal is a web application for scientists providing intuitive, high-performance map-based discovery, visualization, access, and publication of diverse earth and environmental science data via a map-based interface that simultaneously performs geospatial analysis of selected GIS and satellite raster data for a selected area of interest. The two web applications share a common codebase (https://github.com/WikiWatershed and https://github.com/big-cz), a high-performance geospatial analysis engine (http://geotrellis.io/ and https://github.com/geotrellis), and deployment on the Amazon Web Services (AWS) cloud cyberinfrastructure. Users can use "on-the-fly" rapid watershed delineation over the national elevation model to select their watershed or catchment of interest. The two web applications also share the goal of enabling scientists, resource managers, and students alike to share data, analyses, and model results. We will present these functioning web applications and their potential to substantially lower the bar for studying and understanding our water resources. We will also present work in progress, including a prototype system for enabling citizen-scientists to register open-source sensor stations (http://envirodiy.org/mayfly/) to stream data into these systems, so that the data can be re-shared using WaterOneFlow web services.

  1. SOM guided fuzzy logic prospectivity model for gold in the Häme Belt, southwestern Finland

    NASA Astrophysics Data System (ADS)

    Leväniemi, Hanna; Hulkki, Helena; Tiainen, Markku

    2017-04-01

    This study investigated gold prospectivity in the Paleoproterozoic Häme Belt, located in southwestern Finland. The Häme Belt comprises calc-alkaline and tholeiitic volcanic rocks, migmatites, granitoids, and mafic to ultramafic intrusions. Mineral exploration in the region has resulted in the discovery of several gold occurrences during recent decades; however, no prospectivity modeling for gold has yet been conducted. This study integrated till geochemical and geophysical data to examine and extract data characteristics critical for gold occurrences. Modeling was guided by self-organizing map (SOM) analysis to define essential data associations and to aid in model input data selection and generation. The final fuzzy logic prospectivity model map yielded high predictability values for most known Au or Cu-Au occurrences, but also highlighted new targets for exploration.
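
    Fuzzy logic prospectivity modeling combines rescaled evidence layers with fuzzy operators; a common choice is the fuzzy gamma overlay, a compromise between the fuzzy product (AND) and fuzzy algebraic sum (OR). A minimal sketch on toy layers (the layer names, values, and gamma = 0.9 are illustrative, not the study's configuration):

```python
import numpy as np

def fuzzy_gamma(memberships, gamma=0.9):
    """Fuzzy gamma overlay of evidence layers.

    memberships: (layers, rows, cols) array of fuzzy scores in [0, 1]
    gamma=1 reduces to the fuzzy algebraic sum, gamma=0 to the fuzzy product.
    """
    prod = np.prod(memberships, axis=0)            # fuzzy AND
    fsum = 1.0 - np.prod(1.0 - memberships, axis=0)  # fuzzy OR
    return fsum**gamma * prod**(1.0 - gamma)

# Toy evidence layers, e.g. rescaled till-geochemistry and geophysics scores
geochem = np.array([[0.9, 0.2], [0.5, 0.1]])
geophys = np.array([[0.8, 0.3], [0.4, 0.2]])
prospectivity = fuzzy_gamma(np.stack([geochem, geophys]), gamma=0.9)
```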

  2. Probability of detecting atrazine/desethyl-atrazine and elevated concentrations of nitrate in ground water in Colorado

    USGS Publications Warehouse

    Rupert, Michael G.

    2003-01-01

    Draft Federal regulations may require that each State develop a State Pesticide Management Plan for the herbicides atrazine, alachlor, metolachlor, and simazine. Maps were developed that the State of Colorado could use to predict the probability of detecting atrazine and desethyl-atrazine (a breakdown product of atrazine) in ground water in Colorado. These maps can be incorporated into the State Pesticide Management Plan and can help provide a sound hydrogeologic basis for atrazine management in Colorado. Maps showing the probability of detecting elevated nitrite plus nitrate as nitrogen (nitrate) concentrations in ground water in Colorado also were developed because nitrate is a contaminant of concern in many areas of Colorado. Maps showing the probability of detecting atrazine and(or) desethyl-atrazine (atrazine/DEA) at or greater than concentrations of 0.1 microgram per liter and nitrate concentrations in ground water greater than 5 milligrams per liter were developed as follows: (1) Ground-water quality data were overlaid with anthropogenic and hydrogeologic data using a geographic information system to produce a data set in which each well had corresponding data on atrazine use, fertilizer use, geology, hydrogeomorphic regions, land cover, precipitation, soils, and well construction. These data then were downloaded to a statistical software package for analysis by logistic regression. (2) Relations were observed between ground-water quality and the percentage of land-cover categories within circular regions (buffers) around wells. Several buffer sizes were evaluated; the buffer size that provided the strongest relation was selected for use in the logistic regression models. 
(3) Relations between concentrations of atrazine/DEA and nitrate in ground water and atrazine use, fertilizer use, geology, hydrogeomorphic regions, land cover, precipitation, soils, and well-construction data were evaluated, and several preliminary multivariate models with various combinations of independent variables were constructed. (4) The multivariate models that best predicted the presence of atrazine/DEA and elevated concentrations of nitrate in ground water were selected. (5) The accuracy of the multivariate models was confirmed by validating the models with an independent set of ground-water quality data. (6) The multivariate models were entered into a geographic information system and the probability maps were constructed.
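    Steps (1)-(6) reduce to fitting a logistic regression on well data and applying the fitted model across a gridded coverage. A minimal scikit-learn sketch on synthetic data (the predictor names and coefficients are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic wells with three predictors loosely analogous to atrazine use,
# fertilizer use, and percent of a land-cover class within a buffer.
n = 500
X = rng.random((n, 3))
true_logit = -2.0 + 1.5 * X[:, 0] + 2.0 * X[:, 1] + 1.0 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

model = LogisticRegression().fit(X, y)   # steps (3)-(4): fit the model

# Step (6): apply the fitted model to every cell of a gridded coverage
grid_cells = rng.random((1000, 3))
prob_map = model.predict_proba(grid_cells)[:, 1]  # probability of detection
```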

  3. Machine learning for predicting soil classes in three semi-arid landscapes

    USGS Publications Warehouse

    Brungard, Colby W.; Boettinger, Janis L.; Duniway, Michael C.; Wills, Skye A.; Edwards, Thomas C.

    2015-01-01

    Mapping the spatial distribution of soil taxonomic classes is important for informing soil use and management decisions. Digital soil mapping (DSM) can quantitatively predict the spatial distribution of soil taxonomic classes. Key components of DSM are the method and the set of environmental covariates used to predict soil classes. Machine learning is a general term for a broad set of statistical modeling techniques. Many different machine learning models have been applied in the literature and there are different approaches for selecting covariates for DSM. However, there is little guidance as to which, if any, machine learning model and covariate set might be optimal for predicting soil classes across different landscapes. Our objective was to compare multiple machine learning models and covariate sets for predicting soil taxonomic classes at three geographically distinct areas in the semi-arid western United States of America (southern New Mexico, southwestern Utah, and northeastern Wyoming). All three areas were the focus of digital soil mapping studies. Sampling sites at each study area were selected using conditioned Latin hypercube sampling (cLHS). We compared models that had been used in other DSM studies, including clustering algorithms, discriminant analysis, multinomial logistic regression, neural networks, tree based methods, and support vector machine classifiers. Tested machine learning models were divided into three groups based on model complexity: simple, moderate, and complex. We also compared environmental covariates derived from digital elevation models and Landsat imagery that were divided into three different sets: 1) covariates selected a priori by soil scientists familiar with each area and used as input into cLHS, 2) the covariates in set 1 plus 113 additional covariates, and 3) covariates selected using recursive feature elimination. Overall, complex models were consistently more accurate than simple or moderately complex models. 
Random forests (RF) using covariates selected via recursive feature elimination were consistently the most accurate, or among the most accurate, classifiers across study areas and across covariate sets within each study area. We recommend that for soil taxonomic class prediction, complex models and covariates selected by recursive feature elimination be used. Overall classification accuracy in each study area was largely dependent upon the number of soil taxonomic classes and the frequency distribution of pedon observations between taxonomic classes. Individual subgroup class accuracy was generally dependent upon the number of soil pedon observations in each taxonomic class. The number of soil classes is related to the inherent variability of a given area. The imbalance of soil pedon observations between classes is likely related to cLHS. Imbalanced frequency distributions of soil pedon observations between classes must be addressed to improve model accuracy. Solutions include increasing the number of soil pedon observations in classes with few observations or decreasing the number of classes. Spatial predictions using the most accurate models generally agree with expected soil–landscape relationships. Spatial prediction uncertainty was lowest in areas of relatively low relief for each study area.
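
    The covariate-selection approach the authors recommend, recursive feature elimination wrapped around a random forest, can be sketched with scikit-learn (the covariate counts and sample sizes are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# Synthetic covariate stack standing in for DEM- and Landsat-derived layers:
# 20 candidate covariates, of which only a few carry signal.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           n_redundant=5, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
# Recursive feature elimination: repeatedly fit the model and drop the
# covariates with the lowest importance until 8 remain.
selector = RFE(rf, n_features_to_select=8, step=2).fit(X, y)
selected = np.flatnonzero(selector.support_)  # indices of retained covariates
```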

  4. Predictive mapping of soil organic carbon in wet cultivated lands using classification-tree based models: the case study of Denmark.

    PubMed

    Bou Kheir, Rania; Greve, Mogens H; Bøcher, Peder K; Greve, Mette B; Larsen, René; McCloy, Keith

    2010-05-01

    Soil organic carbon (SOC) is one of the most important carbon stocks globally and has large potential to affect global climate. Distribution patterns of SOC in Denmark constitute a nation-wide baseline for studies on soil carbon changes (with respect to the Kyoto protocol). This paper predicts and maps the geographic distribution of SOC across Denmark using remote sensing (RS), geographic information systems (GISs) and decision-tree modeling (un-pruned and pruned classification trees). Seventeen parameters, i.e. parent material, soil type, landscape type, elevation, slope gradient, slope aspect, mean curvature, plan curvature, profile curvature, flow accumulation, specific catchment area, tangent slope, tangent curvature, steady-state wetness index, Normalized Difference Vegetation Index (NDVI), Normalized Difference Wetness Index (NDWI) and Soil Color Index (SCI) were generated to statistically explain SOC field measurements in the area of interest (Denmark). A large number of tree-based classification models (588) were developed using (i) all of the parameters, (ii) all Digital Elevation Model (DEM) parameters only, (iii) the primary DEM parameters only, (iv) the remote sensing (RS) indices only, (v) selected pairs of parameters, (vi) soil type, parent material and landscape type only, and (vii) the parameters having a high impact on SOC distribution in the built pruned trees. The three best classification tree models, with the lowest misclassification error (ME) and the fewest nodes (N), are: (i) the tree (T1) combining all of the parameters (ME=29.5%; N=54); (ii) the tree (T2) based on the parent material, soil type and landscape type (ME=31.5%; N=14); and (iii) the tree (T3) constructed using parent material, soil type, landscape type, elevation, tangent slope and SCI (ME=30%; N=39). 
The produced SOC maps at 1:50,000 cartographic scale using these trees show strong agreement, with coincidence values of 90.5% (Map T1/Map T2), 95% (Map T1/Map T3) and 91% (Map T2/Map T3). The overall accuracies of these maps, when compared with field observations, were estimated to be 69.54% (Map T1), 68.87% (Map T2) and 69.41% (Map T3). The proposed tree models are relatively simple and may also be applied to other areas. Copyright 2010 Elsevier Ltd. All rights reserved.
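
    The map-coincidence values reported above are simple cell-wise agreement percentages between two class maps. A minimal sketch (class codes and the ~10% disturbance are illustrative):

```python
import numpy as np

def coincidence(map_a, map_b):
    """Percentage of grid cells where two class maps agree."""
    return 100.0 * np.mean(map_a == map_b)

rng = np.random.default_rng(3)

# Toy 4-class SOC maps: start identical, then disturb ~10% of cells
map_t1 = rng.integers(0, 4, size=(100, 100))
map_t2 = map_t1.copy()
flip = rng.random(map_t2.shape) < 0.10
map_t2[flip] = (map_t2[flip] + 1) % 4   # flipped cells always disagree

agreement = coincidence(map_t1, map_t2)  # close to 90%
```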

  5. Metallogenic belt and mineral deposit maps of northeast Asia

    USGS Publications Warehouse

    Obolenskiy, Alexander A.; Rodionov, Sergey M.; Dejidmaa, Gunchin; Gerel, Ochir; Hwang, Duk-Hwan; Miller, Robert J.; Nokleberg, Warren J.; Ogasawara, Masatsugu; Smelov, Alexander P.; Yan, Hongquan; Seminskiy, Zhan V.

    2013-01-01

    This report contains explanatory material and summary tables for lode mineral deposits and placer districts (Map A, sheet 1) and metallogenic belts of Northeast Asia (Maps B, C, and D on sheets 2, 3, and 4, respectively). The map region includes eastern Siberia, southeastern Russia, Mongolia, northeast China, and Japan. A large group of geologists—members of the joint international project, Major Mineral Deposits, Metallogenesis, and Tectonics of Northeast Asia—prepared the maps, tables, and introductory text. This is a cooperative project with the Russian Academy of Sciences, Mongolian Academy of Sciences, Mongolian National University, Ulaanbaatar, Mongolian Technical University, Mineral Resources Authority of Mongolia, Geological Research Institute, Jilin University, China Geological Survey, Korea Institute of Geoscience and Mineral Resources, Geological Survey of Japan, and U.S. Geological Survey. This report is one of a series of reports on the mineral resources, geodynamics, and metallogenesis of Northeast Asia. Companion studies include (1) a detailed geodynamics map of Northeast Asia (Parfenov and others, 2003); (2) a compilation of major mineral deposit models (Rodionov and Nokleberg, 2000; Rodionov and others, 2000); (3) a series of metallogenic belt maps (Obolenskiy and others, 2004); (4) location map of lode mineral deposits and placer districts of Northeast Asia (Ariunbileg and others, 2003b); (5) descriptions of metallogenic belts (Rodionov and others, 2004); (6) a database on significant metalliferous and selected nonmetalliferous lode deposits and selected placer districts (Ariunbileg and others, 2003a); and (7) a series of summary project publications (Ariunbileg and 74 others, 2003b).

  6. Development and matching of binocular orientation preference in mouse V1.

    PubMed

    Bhaumik, Basabi; Shah, Nishal P

    2014-01-01

    Eye-specific thalamic inputs converge in the primary visual cortex (V1) and form the basis of binocular vision. For normal binocular perceptions, such as depth and stereopsis, binocularly matched orientation preference between the two eyes is required. A critical period of binocular matching of orientation preference in mice during normal development is reported in the literature. Using a reaction-diffusion model, we present the development of receptive fields (RFs) and orientation selectivity in mouse V1 and investigate the binocular orientation preference matching during the critical period. At the onset of the critical period, the preferred orientations of the modeled cells are mostly mismatched in the two eyes; the mismatch decreases and reaches levels reported in juvenile mice by the end of the critical period. At the end of the critical period, 39% of cells in the binocular zone of our model cortex are orientation selective. In the literature, around 40% of cortical cells in mouse V1 are reported to be orientation selective. The starting and closing times of the critical period determine the orientation preference alignment between the two eyes and the orientation tuning of cortical cells. The absence of near-neighbor interaction among cortical cells during the development of thalamo-cortical wiring causes a salt-and-pepper organization in the orientation preference map in mice. It also results in a much lower percentage of orientation-selective cells in mice as compared to ferrets and cats, which have organized orientation maps with pinwheels.
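
    Binocular matching is quantified by the circular mismatch between the preferred orientations in the two eyes. Because orientation has a 180-degree period, the mismatch folds into [0, 90] degrees; a minimal sketch:

```python
import numpy as np

def orientation_mismatch(theta_left, theta_right):
    """Binocular mismatch of preferred orientation, in degrees.

    Orientation is circular with period 180 deg, so the mismatch is
    the shorter arc, folded into [0, 90] deg.
    """
    d = np.abs(theta_left - theta_right) % 180.0
    return np.minimum(d, 180.0 - d)

# e.g. 170 deg in one eye and 10 deg in the other differ by only 20 deg
m = orientation_mismatch(np.array([170.0, 45.0]), np.array([10.0, 50.0]))
```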

  7. Mapping the montane cloud forest of Taiwan using 12 year MODIS-derived ground fog frequency data

    PubMed Central

    Li, Ching-Feng; Thies, Boris; Chang, Shih-Chieh; Bendix, Jörg

    2017-01-01

    Up until now montane cloud forest (MCF) in Taiwan has only been mapped for selected areas of vegetation plots. This paper presents the first comprehensive map of MCF distribution for the entire island. For its creation, a Random Forest model was trained with vegetation plots from the National Vegetation Database of Taiwan that were classified as “MCF” or “non-MCF”. This model predicted the distribution of MCF from a raster data set of parameters derived from a digital elevation model (DEM), Landsat channels and texture measures derived from them as well as ground fog frequency data derived from the Moderate Resolution Imaging Spectroradiometer. While the DEM parameters and Landsat data predicted much of the cloud forest’s location, local deviations in the altitudinal distribution of MCF linked to the monsoonal influence as well as the Massenerhebung effect (causing MCF in atypically low altitudes) were only captured once fog frequency data was included. Therefore, our study suggests that ground fog data are most useful for accurately mapping MCF. PMID:28245279

  8. Development of Maps of Simple and Complex Cells in the Primary Visual Cortex

    PubMed Central

    Antolík, Ján; Bednar, James A.

    2011-01-01

    Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because the mechanisms responsible for map development drive receptive fields (RFs) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layers 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first to explain how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity.
PMID:21559067
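
The Hebbian development step can be illustrated in miniature. The sketch below uses Oja's stabilized Hebbian rule, which is an assumption for illustration (the model's actual learning rule is the one described in the paper), to show how repeated Hebbian updates align a unit's afferent weight vector with the dominant correlation in its input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs fluctuate mostly along one direction v; Oja's rule
# dw = eta * y * (x - y * w) is Hebbian growth (eta * y * x)
# plus a decay term that keeps the weight norm bounded.
dim = 8
v = np.zeros(dim)
v[0] = 1.0                                     # dominant input direction
w = rng.normal(size=dim)
w /= np.linalg.norm(w)

eta = 0.01
for _ in range(5000):
    x = 2.0 * rng.normal() * v + 0.2 * rng.normal(size=dim)
    y = w @ x                                  # postsynaptic response
    w += eta * y * (x - y * w)                 # Oja's stabilized Hebbian rule

alignment = abs(w @ v) / np.linalg.norm(w)
```

Under this rule the weight vector converges, with unit norm, toward the input's top principal component, which is the sense in which Hebbian learning extracts structure from correlated afferent activity.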

  9. Variable selection based on clustering analysis for improvement of polyphenols prediction in green tea using synchronous fluorescence spectra.

    PubMed

    Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi

    2018-03-13

    Synchronous fluorescence spectra, combined with multivariate analysis, were used to predict flavonoid content in green tea rapidly and nondestructively. This paper presents a new and efficient spectral interval selection method called clustering-based partial least squares (CL-PLS), which selects informative wavelengths by combining the clustering concept with partial least squares (PLS) methods to improve model performance on synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, k-means and Kohonen self-organizing map clustering algorithms were used to group the full spectra into several clusters, and a sub-PLS regression model was developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the prediction performance of the PLS models. In addition, variable influence on projection PLS (VIP-PLS), selectivity ratio PLS (SR-PLS), interval PLS (iPLS), and full-spectrum PLS models were investigated and the results were compared. The results showed that CL-PLS gave the best flavonoid prediction using synchronous fluorescence spectra.
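
The CL-PLS pipeline (cluster the wavelengths, fit a sub-model per cluster, score each by R) can be sketched as follows. Synthetic data and ordinary least squares stand in for the measured tea spectra and the PLS sub-models, so all names and numbers here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for fluorescence spectra: wavelengths 0-19 follow a
# latent component t1, wavelengths 20-39 follow t2, and the measured
# content y tracks t1 only (the paper uses real tea spectra instead).
n = 60
t1, t2 = rng.normal(size=n), rng.normal(size=n)
X = np.empty((n, 40))
X[:, :20] = t1[:, None] + 0.2 * rng.normal(size=(n, 20))
X[:, 20:] = t2[:, None] + 0.2 * rng.normal(size=(n, 20))
y = t1 + 0.1 * rng.normal(size=n)

# --- step 1: k-means over wavelength profiles (one profile per column) ---
profiles = X.T
centers = profiles[[0, -1]].copy()             # simple deterministic init
for _ in range(20):
    d = ((profiles[:, None, :] - centers[None]) ** 2).sum(axis=-1)
    labels = d.argmin(axis=1)
    centers = np.array([profiles[labels == j].mean(axis=0) for j in range(2)])

# --- step 2: one regression per cluster (OLS stands in for sub-PLS),
# scored by the correlation coefficient R as in CL-PLS ---
def cluster_r(j):
    A = np.c_[X[:, labels == j], np.ones(n)]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.corrcoef(A @ coef, y)[0, 1]

r = [cluster_r(j) for j in range(2)]
best = int(np.argmax(r))                       # cluster carrying t1 wins
```

Because the informative wavelengths share a common latent signal, clustering groups them together, and the sub-model built on that cluster scores the highest R, which is the intuition behind building CL-PLS models from gradually selected clusters.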

  10. A continuous scale-space method for the automated placement of spot heights on maps

    NASA Astrophysics Data System (ADS)

    Rocca, Luigi; Jenny, Bernhard; Puppo, Enrico

    2017-12-01

    Spot heights and soundings explicitly indicate terrain elevation on cartographic maps. Cartographers have developed design principles for the manual selection, placement, labeling, and generalization of spot height locations, but these processes are work-intensive and expensive. Finding an algorithmic criterion that matches cartographers' judgment in ranking the significance of features on a terrain is a difficult endeavor. This article proposes a method for the automated selection of spot height locations representing natural features such as peaks, saddles, and depressions. The lifespan of critical points in a continuous scale-space model is employed as the main measure of the importance of features, and an algorithm and a data structure for its computation are described. We also introduce a method for comparing algorithmically computed spot height locations with manually produced reference compilations. The new method is compared with two known techniques from the literature. Results show spot height locations that are closer to the reference spot heights produced manually by swisstopo cartographers than those of previous techniques. The introduced method can be applied to elevation models for the creation of topographic and bathymetric maps. It also ranks the importance of extracted spot height locations, which allows the size of symbols and labels to vary with the significance of the represented features. The importance ranking could also be useful for adjusting spot height density in zoomable maps in real time.

  11. NASA Lunar and Planetary Mapping and Modeling

    NASA Astrophysics Data System (ADS)

    Day, B. H.; Law, E.

    2016-12-01

    NASA's Lunar and Planetary Mapping and Modeling Portals provide web-based suites of interactive visualization and analysis tools that enable mission planners, planetary scientists, students, and the general public to access mapped data products from past and current missions to the Moon, Mars, and Vesta. New portals for additional planetary bodies are being planned. This presentation will recap significant enhancements to these toolsets during the past year and look forward to the results of the exciting work currently being undertaken. Additional data products and tools continue to be added to the Lunar Mapping and Modeling Portal (LMMP). These include both generalized products and polar data products specifically targeting potential sites for the Resource Prospector mission. Current development work on LMMP also includes facilitating mission planning and data management for lunar CubeSat missions, and working with the NASA Astromaterials Acquisition and Curation Office's Lunar Apollo Sample database to help better visualize the geographic contexts from which samples were retrieved. A new user interface provides, among other improvements, significantly enhanced 3D visualizations and navigation. Mars Trek, the project's Mars portal, has now been assigned by NASA's Planetary Science Division to support site selection and analysis for the Mars 2020 Rover mission as well as for the Mars Human Landing Exploration Zone Sites. This effort concentrates on enhancing Mars Trek with data products and analysis tools specifically requested by the teams proposing the various sites. NASA Headquarters is also giving very high priority to Mars Trek's use as a means to directly involve the public in these upcoming missions, letting them explore the areas the agency is focusing upon, understand what makes these sites so fascinating, follow the selection process, and get caught up in the excitement of exploring Mars.
The portals also serve as outstanding resources for education and outreach. As such, they have been designated by NASA's Science Mission Directorate as key supporting infrastructure for the new education programs selected through the division's recent CAN.

  12. Do bioclimate variables improve performance of climate envelope models?

    USGS Publications Warehouse

    Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.

    2012-01-01

    Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of the predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the choice of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA, using two different algorithms (random forests and generalized linear models) and two model selection techniques (using uncorrelated predictors or a subset of user-defined, biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater for the random forest algorithm than for generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.

  13. The Systematic Development of an Internet-Based Smoking Cessation Intervention for Adults.

    PubMed

    Dalum, Peter; Brandt, Caroline Lyng; Skov-Ettrup, Lise; Tolstrup, Janne; Kok, Gerjo

    2016-07-01

    Objectives: The objective of this project was to determine whether intervention mapping is a suitable strategy for developing an Internet- and text message-based smoking cessation intervention. Method: We used the Intervention Mapping framework for planning health promotion programs. After a needs assessment, we identified important changeable determinants of cessation behavior, specified objectives for the intervention, selected theoretical methods for meeting our objectives, and operationalized change methods into practical intervention strategies. Results: We found that "social cognitive theory," the "transtheoretical model/stages of change," "self-regulation theory," and "appreciative inquiry" were relevant theories for smoking cessation interventions. From these theories, we selected modeling/behavioral journalism, feedback, planning coping responses/if-then statements, gain frame/positive imaging, consciousness-raising, helping relationships, stimulus control, and goal-setting as suitable methods for an Internet- and text-based adult smoking cessation program. Furthermore, we identified computer tailoring as a useful strategy for adapting the intervention to individual users. Conclusion: The Intervention Mapping method, with a clear link between behavioral goals, theoretical methods, and practical strategies and materials, proved useful for systematic development of a digital smoking cessation intervention for adults. © 2016 Society for Public Health Education.

  14. Estimating and mapping ecological processes influencing microbial community assembly

    PubMed Central

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.

    2015-01-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories: dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process, we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and apply the framework to subsurface microbial communities across two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities, even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed us to map spatial variation in the influence of each process. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here, they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth. PMID:25983725

  15. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems

    PubMed Central

    Indiveri, Giacomo

    2008-01-01

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs while suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems that have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits, ranging from single-chip vision sensors for selecting and tracking the position of salient features to multi-chip systems implementing saliency-map-based models of selective attention. PMID:27873818
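
The WTA dynamics described above can be sketched as a rate-based simulation: each node receives its external input plus self-excitation, minus inhibition proportional to the total network activity, so only the strongest input survives. The parameters below are illustrative, not taken from any particular chip:

```python
import numpy as np

def wta(inputs, steps=200, dt=0.1, tau=1.0, w_exc=1.2, w_inh=1.5):
    """Rate-based winner-take-all: self-excitation plus shared inhibition."""
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        r = np.maximum(x, 0.0)                       # rectified firing rates
        drive = inputs + w_exc * r - w_inh * r.sum()
        x += dt / tau * (-x + drive)
    return np.maximum(x, 0.0)

rates = wta(np.array([0.3, 1.0, 0.6, 0.2]))
winner = int(rates.argmax())
```

With inhibition outweighing self-excitation, the only stable fixed point has a single active node, the one with the largest input, while all other nodes are driven below threshold.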

  16. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems.

    PubMed

    Indiveri, Giacomo

    2008-09-03

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs while suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems that have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits, ranging from single-chip vision sensors for selecting and tracking the position of salient features to multi-chip systems implementing saliency-map-based models of selective attention.

  17. A spatial model to improve site selection for seagrass restoration in shallow boating environments.

    PubMed

    Hotaling-Hagan, Althea; Swett, Robert; Ellis, L Rex; Frazer, Thomas K

    2017-01-15

    Due to widespread and continuing seagrass loss, restoration attempts occur worldwide. This article presents a geospatial modeling technique that ranks the suitability of sites for restoration based on light availability and boating activity, two factors cited in global studies of seagrass loss and restoration failures. The model presented here was created for Estero Bay, Florida and is a predictive model of light availability and boating pressure to aid seagrass restoration efforts. The model is adaptive and can be parameterized for different locations and updated as additional data is collected and knowledge of how factors impact seagrass improves. Light data used for model development were collected over one year from 50 sites throughout the bay. Coupled with high resolution bathymetric data, bottom mean light availability was predicted throughout the bay. Data collection throughout the year also allowed for prediction of light variability at sites, a possible indicator of seagrass growth and survival. Additionally, survey data on boating activities were used to identify areas, outside of marked navigation channels, that receive substantial boating pressure and are likely poor candidate sites for seagrass restoration. The final map product identifies areas where the light environment was suitable for seagrasses and boating pressure was low. A composite map showing the persistence of seagrass coverage in the study area over four years, between 1999 and 2006, was used to validate the model. Eighty-nine percent of the area where seagrass persisted (had been mapped all four years) was ranked as suitable for restoration: 42% with the highest rank (7), 28% with a rank of 6, and 19% with a rank of 5. The results show that the model is a viable tool for selection of seagrass restoration sites in Florida and elsewhere. 
With knowledge of the light environment and boating patterns, managers will be better equipped to set seagrass restoration and water quality improvement targets and select sites for restoration. The modeling approach outlined here is broadly applicable and will be of value to a large and diverse suite of scientists and marine resource managers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Rift Valley fever risk map model and seroprevalence in selected wild ungulates and camels from Kenya

    USDA-ARS?s Scientific Manuscript database

    Since the first isolation of Rift Valley fever virus (RVFV) in the 1930s, there have been multiple epizootics and epidemics in animals and humans in sub-Saharan Africa. Prospective climate-based models have recently been developed that flag areas at risk of RVFV transmission in endemic regions based...

  19. Flood susceptibility mapping using novel ensembles of adaptive neuro fuzzy inference system and metaheuristic algorithms.

    PubMed

    Razavi Termeh, Seyed Vahid; Kornejady, Aiding; Pourghasemi, Hamid Reza; Keesstra, Saskia

    2018-02-15

    Flooding is one of the most destructive natural disasters, causing great financial losses and loss of life every year. Producing susceptibility maps for flood management is therefore necessary to reduce its harmful effects. The aim of the present study is to map flood hazard over the Jahrom Township in Fars Province using combinations of adaptive neuro-fuzzy inference systems (ANFIS) with different metaheuristic algorithms, namely ant colony optimization (ACO), genetic algorithm (GA), and particle swarm optimization (PSO), and to compare their accuracy. A total of 53 flood locations were identified, 35 of which were randomly selected for modeling flood susceptibility, while the remaining 16 were used to validate the models. Learning vector quantization (LVQ), a supervised neural network method, was employed to estimate factor importance. Nine flood conditioning factors, namely slope degree, plan curvature, altitude, topographic wetness index (TWI), stream power index (SPI), distance from river, land use/land cover, rainfall, and lithology, were selected and the corresponding maps were prepared in ArcGIS. The frequency ratio (FR) model was used to assign weights to each class within each controlling factor; the weights were then transferred into MATLAB for further analysis and combination with the metaheuristic models. The ANFIS-PSO was found to be the most practical model in terms of producing a highly focused flood susceptibility map, with a smaller spatial extent assigned to the highly susceptible classes. The chi-square result attests to the same: ANFIS-PSO had the highest spatial differentiation among flood susceptibility classes over the study area. The area under the curve (AUC) obtained from the ROC curve indicated accuracies of 91.4%, 91.8%, 92.6%, and 94.5% for the FR, ANFIS-ACO, ANFIS-GA, and ANFIS-PSO ensembles, respectively.
So, the ANFIS-PSO ensemble was introduced as the premier model for the study area. Furthermore, the LVQ results revealed that slope degree, rainfall, and altitude were the most effective factors. According to the premier model, 44.74% of the total area was recognized as highly susceptible to flooding. The results of this study can be used as a platform for better land-use planning, managing the zones highly susceptible to flooding and reducing the anticipated losses. Copyright © 2017 Elsevier B.V. All rights reserved.
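
The frequency ratio weighting used above is simple to compute: for each class of a conditioning factor, FR is the share of flood locations falling in that class divided by the share of the total area that the class occupies, so FR > 1 marks classes positively associated with flooding. A minimal sketch with made-up raster values, not the study's data:

```python
import numpy as np

def frequency_ratio(factor_classes, flood_mask):
    """FR per class: (% of flood pixels in class) / (% of all pixels in class)."""
    n_total = factor_classes.size
    n_flood = flood_mask.sum()
    fr = {}
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        pct_flood = (flood_mask & in_class).sum() / n_flood
        pct_area = in_class.sum() / n_total
        fr[int(c)] = pct_flood / pct_area      # FR > 1: flood-prone class
    return fr

# Toy 4x4 "slope class" raster and a flood-inventory mask
slope = np.array([[1, 1, 2, 2],
                  [1, 1, 2, 2],
                  [3, 3, 2, 2],
                  [3, 3, 3, 3]])
floods = np.array([[1, 0, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]], dtype=bool)

fr = frequency_ratio(slope, floods)
```

Here all flood pixels fall in class 1, which covers a quarter of the raster, so class 1 gets FR = 4 while the other classes get 0; these per-class weights are what get combined with the metaheuristic models downstream.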

  20. IntegratedMap: a Web interface for integrating genetic map data.

    PubMed

    Yang, Hongyu; Wang, Hongyu; Gingle, Alan R

    2005-05-01

    IntegratedMap is a Web application and database schema for storing and interactively displaying genetic map data. Its Web interface includes a menu for direct chromosome/linkage group selection, a search form for selection based on mapped object location, and linkage group displays. An overview display provides convenient access to the full range of mapped and anchored object types, with genetic locus details such as the numbers, types, and names of mapped/anchored objects displayed in a compact scrollable list box that automatically updates based on the selected map location and object type. Multi-linkage-group and localized map views are also available, along with links that can be configured for integration with other Web resources. IntegratedMap is implemented in C#/ASP.NET and the package, including a MySQL schema creation script, is available from http://cggc.agtec.uga.edu/Data/download.asp

  1. Evaluation of Integrating the Invasive Species Forecasting System to Support National Park Service Decisions on Fire Management Activities and Invasive Plant Species Control

    NASA Technical Reports Server (NTRS)

    Ma, Peter; Morisette, T.; Rodman, Ann; McClure, Craig; Pedelty, Jeff; Benson, Nate; Paintner, Kara; Most, Neal; Ullah, Asad; Cai, Weijie

    2007-01-01

    The USGS and NASA, in conjunction with Colorado State University, George Mason University, and other partners, have developed the Invasive Species Forecasting System (ISFS), a flexible tool that capitalizes on NASA's remote sensing resources to produce dynamic habitat maps of invasive terrestrial plant species across the United States. In 2006 ISFS was adopted to generate predictive invasive-habitat maps to benefit noxious plant and fire management teams in three major National Park systems: the Greater Yellowstone Area (Yellowstone / Grand Teton National Parks), Sequoia and Kings Canyon National Parks, and interior Alaska (between Denali, Gates of the Arctic, and Yukon-Charley). One of the objectives of this study is to explore how the ISFS enhances the decision support apparatus in use by National Park management teams. The first step with each park system was to work closely with park managers to select top-priority invasive species. Specific species were chosen for each study area based on management priorities, availability of observational data, and their potential for invasion after fire disturbances. Once focal species were selected, sources of presence/absence data were collected from previous surveys for each species in and around the parks. Using logistic regression to couple presence/absence points with environmental data layers, the first round of ISFS habitat suitability maps was generated for each National Park system and presented during park visits over the summer of 2006. This first engagement demonstrated what the park service can expect from ISFS and initiated an ongoing dialog on how the parks can best utilize the system to enhance their decisions related to invasive species control. During the park visits it was discovered that separate "expert opinion" maps would provide a valuable baseline against which to compare the ISFS model output.
Opinion maps are a means of spatially representing qualitative knowledge as a quantitative two-dimensional map. Furthermore, our approach combines the qualitative expert-opinion habitat maps with the quantitative ISFS habitat maps in a difference map that shows where the two maps agree and disagree. The objective of the difference map is to help focus future field sampling and improve model results. This paper presents a demonstration of the habitat, expert opinion, and difference maps for Yellowstone National Park.

  2. A Foreground Masking Strategy for [C II] Intensity Mapping Experiments Using Galaxies Selected by Stellar Mass and Redshift

    NASA Astrophysics Data System (ADS)

    Sun, G.; Moncelsi, L.; Viero, M. P.; Silva, M. B.; Bock, J.; Bradford, C. M.; Chang, T.-C.; Cheng, Y.-T.; Cooray, A. R.; Crites, A.; Hailey-Dunsheath, S.; Uzgil, B.; Hunacek, J. R.; Zemcov, M.

    2018-04-01

    Intensity mapping provides a unique means to probe the epoch of reionization (EoR), when the neutral intergalactic medium was ionized by energetic photons emitted from the first galaxies. The [C II] 158 μm fine-structure line is typically one of the brightest emission lines of star-forming galaxies and thus a promising tracer of the global EoR star formation activity. However, [C II] intensity maps at 6 ≲ z ≲ 8 are contaminated by interloping CO rotational line emission (3 ≤ J_upp ≤ 6) from lower-redshift galaxies. Here we present a strategy to remove the foreground contamination in upcoming [C II] intensity mapping experiments, guided by a model of CO emission from foreground galaxies. The model is based on empirical measurements of the mean and scatter of the total infrared luminosities of galaxies at z < 3 with stellar masses M* > 10^8 M⊙, selected in the K-band from the COSMOS/UltraVISTA survey, which can be converted to CO line strengths. For a mock field of the Tomographic Ionized-carbon Mapping Experiment (TIME), we find that masking out the "voxels" (spectral-spatial elements) containing foreground galaxies identified using an optimized CO flux threshold results in a z-dependent criterion m_K^AB ≲ 22 (or M* ≳ 10^9 M⊙) at z < 1 and makes a [C II]/CO_tot power ratio of ≳10 at k = 0.1 h/Mpc achievable, at the cost of a moderate ≲8% loss of total survey volume.
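
The masking strategy reduces to flagging every voxel whose foreground CO flux exceeds a chosen threshold and accepting the corresponding loss of survey volume. A toy sketch with synthetic fluxes follows; the grid size, flux distribution, and threshold choice are illustrative assumptions, not TIME's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy voxel grid: brightest foreground CO flux per voxel (arbitrary
# units) drawn from a heavy-tailed distribution.
co_flux = rng.lognormal(mean=0.0, sigma=1.0, size=(32, 32, 16))

threshold = np.quantile(co_flux, 0.95)        # mask the brightest 5%
mask = co_flux <= threshold                   # True = voxel survives

survey_fraction = mask.mean()                 # volume kept after masking
residual_var = co_flux[mask].var()            # residual foreground tracks this
full_var = co_flux.var()
```

Because the flux distribution is heavy-tailed, discarding a small fraction of the brightest voxels removes a disproportionate share of the foreground variance, which is why a modest volume loss can buy a large gain in the [C II]-to-CO power ratio.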

  3. UK-5 Van Allen belt radiation exposure: A special study to determine the trapped particle intensities on the UK-5 satellite with spatial mapping of the ambient flux environment

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.

    1972-01-01

    Vehicle encountered electron and proton fluxes were calculated for a set of nominal UK-5 trajectories with new computational methods and new electron environment models. Temporal variations in the electron data were considered and partially accounted for. Field strength calculations were performed with an extrapolated model on the basis of linear secular variation predictions. Tabular maps for selected electron and proton energies were constructed as functions of latitude and longitude for specified altitudes. Orbital flux integration results are presented in graphical and tabular form; they are analyzed, explained, and discussed.

  4. Framework for the mapping of the monthly average daily solar radiation using an advanced case-based reasoning and a geostatistical technique.

    PubMed

    Lee, Minhyun; Koo, Choongwan; Hong, Taehoon; Park, Hyo Seon

    2014-04-15

    For an effective photovoltaic (PV) system, it is necessary to accurately determine the monthly average daily solar radiation (MADSR) and to develop an accurate MADSR map, which can simplify the decision-making process for selecting a suitable location for PV system installation. Therefore, this study aimed to develop a framework for mapping the MADSR using advanced case-based reasoning (CBR) and a geostatistical technique. The proposed framework consists of the following procedures: (i) the geographic scope for the mapping of the MADSR is set, and the measured MADSR and meteorological data within that scope are collected; (ii) using the collected data, the advanced CBR model is developed; (iii) using the advanced CBR model, the MADSR at unmeasured locations is estimated; and (iv) by applying the measured and estimated MADSR data to a geographic information system, the MADSR map is developed. A practical validation was conducted by applying the proposed framework to South Korea. It was determined that the MADSR map developed through the proposed framework was improved in terms of accuracy. The developed MADSR map can be used for estimating the MADSR at unmeasured locations and for determining the optimal location for PV system installation.
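
Step (iii), estimating the MADSR at unmeasured locations, can be sketched with inverse-distance weighting as a simple stand-in for the paper's CBR-plus-geostatistics estimator; the station coordinates and values below are made up:

```python
import numpy as np

def idw(stations, values, grid_points, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at each grid point."""
    d = np.linalg.norm(grid_points[:, None, :] - stations[None], axis=-1)
    w = 1.0 / (d ** power + eps)              # eps keeps station points finite
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Made-up measured MADSR values (kWh/m^2/day) at four station coordinates
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
madsr = np.array([3.2, 3.8, 3.4, 4.0])

# Estimate at an unmeasured central point and at a measured station
grid = np.array([[0.5, 0.5], [0.0, 0.0]])
est = idw(stations, madsr, grid)
```

The estimator honors the measured data (an estimate at a station reproduces its value) and blends neighboring stations elsewhere; rasterizing `est` over a dense grid is what produces the map in step (iv).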

  5. Using patient data similarities to predict radiation pneumonitis via a self-organizing map

    NASA Astrophysics Data System (ADS)

    Chen, Shifeng; Zhou, Sumin; Yin, Fang-Fang; Marks, Lawrence B.; Das, Shiva K.

    2008-01-01

    This work investigates the use of the self-organizing map (SOM) technique for predicting lung radiation pneumonitis (RP) risk. SOM is an effective method for projecting and visualizing high-dimensional data in a low-dimensional space (map). By projecting patients with similar data (dose and non-dose factors) onto the same region of the map, commonalities in their outcomes can be visualized and categorized. Once built, the SOM may be used to predict pneumonitis risk by identifying the region of the map that is most similar to a patient's characteristics. Two SOM models were developed from a database of 219 lung cancer patients treated with radiation therapy (34 clinically diagnosed with Grade 2+ pneumonitis). The models were: SOMall built from all dose and non-dose factors and, for comparison, SOMdose built from dose factors alone. Both models were tested using ten-fold cross validation and Receiver Operating Characteristics (ROC) analysis. Models SOMall and SOMdose yielded ten-fold cross-validated ROC areas of 0.73 (sensitivity/specificity = 71%/68%) and 0.67 (sensitivity/specificity = 63%/66%), respectively. The significant difference between the cross-validated ROC areas of these two models (p < 0.05) implies that non-dose features add important information toward predicting RP risk. Among the input features selected by model SOMall, the two with highest impact for increasing RP risk were: (a) higher mean lung dose and (b) chemotherapy prior to radiation therapy. The SOM model developed here may not be extrapolated to treatment techniques outside that used in our database, such as several-field lung intensity modulated radiation therapy or gated radiation therapy.
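
A self-organizing map of the kind used here can be sketched in plain NumPy: train a small grid of units on patient feature vectors, then project a new patient onto its best-matching unit (BMU), whose neighborhood summarizes similar patients. The features and groups below are synthetic stand-ins, not the paper's clinical data:

```python
import numpy as np

rng = np.random.default_rng(3)

def train_som(data, rows=3, cols=3, epochs=30, lr0=0.5, sigma0=1.5):
    """Train a rows x cols self-organizing map with a Gaussian neighborhood."""
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    w = rng.normal(size=(rows * cols, data.shape[1]))
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data:
            t = step / n_steps                 # fraction of training done
            lr = lr0 * (1.0 - t)               # decaying learning rate
            sigma = sigma0 * (1.0 - t) + 0.3   # shrinking neighborhood
            bmu = np.linalg.norm(w - x, axis=1).argmin()
            h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1)
                       / (2.0 * sigma ** 2))
            w += lr * h[:, None] * (x - w)     # pull BMU and neighbors toward x
            step += 1
    return w

# Two synthetic patient groups (e.g. low vs. high dose/clinical profiles)
low = rng.normal(0.0, 0.5, size=(20, 4))
high = rng.normal(5.0, 0.5, size=(20, 4))
w = train_som(np.vstack([low, high]))

def bmu(x):
    """Index of the best-matching unit for a feature vector."""
    return int(np.linalg.norm(w - x, axis=1).argmin())
```

After training, patients with similar profiles land on the same region of the grid; attaching the observed pneumonitis rate of the training patients mapped to each region is what turns the map into a risk predictor.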

  6. Black Sea GIS developed in MHI

    NASA Astrophysics Data System (ADS)

    Zhuk, E.; Khaliulin, A.; Zodiatis, G.; Nikolaidis, A.; Isaeva, E.

    2016-08-01

    This work aims at creating a geoinformation system (GIS) for the Black Sea and complementing it with a bank of models. The software for data access and visualization was developed using a client-server architecture. A map service based on MapServer and the MySQL data management system were chosen for the Black Sea GIS. PHP modules and Python scripts provide data access, processing, and exchange between the client application and the server. A modular GIS structure was developed according to the basic data types: each data type is matched to a module that allows selection and visualization of that data. At present, the GIS is being complemented with a model bank (models built into the GIS) and users' models (programs launched on users' PCs that receive and display data via the GIS).

  7. Active Interaction Mapping as a tool to elucidate hierarchical functions of biological processes.

    PubMed

    Farré, Jean-Claude; Kramer, Michael; Ideker, Trey; Subramani, Suresh

    2017-07-03

Increasingly, various 'omics data are contributing significantly to our understanding of novel biological processes, but it has not been possible to iteratively elucidate hierarchical functions in complex phenomena. We describe a general systems biology approach called Active Interaction Mapping (AI-MAP), which elucidates the hierarchy of functions for any biological process. Existing and new 'omics data sets can be iteratively added to create and improve hierarchical models which enhance our understanding of particular biological processes. The best datatypes to further improve an AI-MAP model are predicted computationally. We applied this approach to our understanding of general and selective autophagy, which are conserved in most eukaryotes, setting the stage for the broader application to other cellular processes of interest. In the particular application to autophagy-related processes, we uncovered and validated new autophagy and autophagy-related processes, expanded known autophagy processes with new components, integrated known non-autophagic processes with autophagy and predicted other unexplored connections.

  8. Cortical connective field estimates from resting state fMRI activity.

    PubMed

    Gravel, Nicolás; Harvey, Ben; Nordhjem, Barbara; Haak, Koen V; Dumoulin, Serge O; Renken, Remco; Curčić-Blake, Branislava; Cornelissen, Frans W

    2014-01-01

One way to study connectivity in visual cortical areas is by examining spontaneous neural activity. In the absence of visual input, such activity remains shaped by the underlying neural architecture and, presumably, may still reflect visuotopic organization. Here, we applied population connective field (CF) modeling to estimate the spatial profile of functional connectivity in the early visual cortex during resting state functional magnetic resonance imaging (RS-fMRI). This model-based analysis estimates the spatial integration between blood-oxygen level dependent (BOLD) signals in distinct cortical visual field maps using fMRI. Just as population receptive field (pRF) mapping predicts the collective neural activity in a voxel as a function of response selectivity to stimulus position in visual space, CF modeling predicts the activity of voxels in one visual area as a function of the aggregate activity in voxels in another visual area. In combination with pRF mapping, CF locations on the cortical surface can be interpreted in visual space, thus enabling reconstruction of visuotopic maps from resting state data. We demonstrate that V1 ➤ V2 and V1 ➤ V3 CF maps estimated from resting state fMRI data show visuotopic organization. Therefore, we conclude that, despite some variability in CF estimates between RS scans, neural properties such as CF maps and CF size can be derived from resting state data.
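The core CF idea, predicting one target voxel's time series as a Gaussian-weighted sum of source-area voxels and keeping the centre/size that fits best, can be sketched as a simple grid search. The voxel positions, candidate sizes, and data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_cf(v1_ts, v1_pos, target_ts, sigmas):
    """Grid-search a Gaussian connective field (centre voxel + size sigma)
    on the source area that best predicts one target voxel's time series.
    v1_ts is time x voxels; v1_pos holds 2-D surface coordinates."""
    best_r, best_c, best_s = -np.inf, None, None
    for c in range(len(v1_pos)):
        d2 = ((v1_pos - v1_pos[c]) ** 2).sum(axis=1)
        for s in sigmas:
            w = np.exp(-d2 / (2.0 * s ** 2))      # Gaussian CF weights
            pred = v1_ts @ w                      # predicted time series
            r = np.corrcoef(pred, target_ts)[0, 1]
            if r > best_r:
                best_r, best_c, best_s = r, c, s
    return best_r, best_c, best_s
```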

  9. Global Maps of Temporal Streamflow Characteristics Based on Observations from Many Small Catchments

    NASA Astrophysics Data System (ADS)

    Beck, H.; van Dijk, A.; de Roo, A.

    2014-12-01

    Streamflow (Q) estimation in ungauged catchments is one of the greatest challenges facing hydrologists. We used observed Q from approximately 7500 small catchments (<10,000 km2) around the globe to train neural network ensembles to estimate temporal Q distribution characteristics from climate and physiographic characteristics of the catchments. In total 17 Q characteristics were selected, including mean annual Q, baseflow index, and a number of flow percentiles. Training coefficients of determination for the estimation of the Q characteristics ranged from 0.56 for the baseflow recession constant to 0.93 for the Q timing. Overall, climate indices dominated among the predictors. Predictors related to soils and geology were the least important, perhaps due to data quality. The trained neural network ensembles were subsequently applied spatially over the ice-free land surface including ungauged regions, resulting in global maps of the Q characteristics (0.125° spatial resolution). These maps possess several unique features: 1) they represent purely observation-driven estimates; 2) are based on an unprecedentedly large set of catchments; and 3) have associated uncertainty estimates. The maps can be used for various hydrological applications, including the diagnosis of macro-scale hydrological models. To demonstrate this, the produced maps were compared to equivalent maps derived from the simulated daily Q of five macro-scale hydrological models, highlighting various opportunities for improvement in model Q behavior. The produced dataset is available for download.
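The ensemble idea used above (train several networks that differ only in initialization, report the member mean as the estimate and the member spread as an uncertainty) can be sketched with scikit-learn's MLPRegressor. The architecture, member count, and synthetic data are illustrative assumptions and do not reproduce the study's predictors.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_ensemble(X, y, n_members=5):
    """One small MLP per member, differing only in random initialisation."""
    return [MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                         max_iter=2000, random_state=seed).fit(X, y)
            for seed in range(n_members)]

def predict_with_uncertainty(members, X):
    """Ensemble mean as the estimate; member spread as its uncertainty."""
    preds = np.stack([m.predict(X) for m in members])
    return preds.mean(axis=0), preds.std(axis=0)
```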

  10. Global maps of streamflow characteristics based on observations from several thousand catchments

    NASA Astrophysics Data System (ADS)

    Beck, Hylke; van Dijk, Albert; de Roo, Ad

    2015-04-01

    Streamflow (Q) estimation in ungauged catchments is one of the greatest challenges facing hydrologists. Observed Q from three to four thousand small-to-medium sized catchments (10-10000 km2) around the globe were used to train neural network ensembles to estimate Q characteristics based on climate and physiographic characteristics of the catchments. In total 17 Q characteristics were selected, including mean annual Q, baseflow index, and a number of flow percentiles. Testing coefficients of determination for the estimation of the Q characteristics ranged from 0.55 for the baseflow recession constant to 0.93 for the Q timing. Overall, climate indices dominated among the predictors. Predictors related to soils and geology were relatively unimportant, perhaps due to their data quality. The trained neural network ensembles were subsequently applied spatially over the entire ice-free land surface, resulting in global maps of the Q characteristics (0.125° resolution). These maps possess several unique features: they represent observation-driven estimates; are based on an unprecedentedly large set of catchments; and have associated uncertainty estimates. The maps can be used for various hydrological applications, including the diagnosis of macro-scale hydrological models. To demonstrate this, the produced maps were compared to equivalent maps derived from the simulated daily Q of four macro-scale hydrological models, highlighting various opportunities for improvement in model Q behavior. The produced dataset is available via http://water.jrc.ec.europa.eu.

  11. A linear model fails to predict orientation selectivity of cells in the cat visual cortex.

    PubMed Central

    Volgushev, M; Vidyasagar, T R; Pei, X

    1996-01-01

    1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells in the cat visual cortex were recorded using in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field and responses to elongated bars of different orientations centred on the receptive field were recorded. 2. To test whether a linear model can account for orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828
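The linear prediction tested in this study can be sketched as follows: the response to an oriented bar is predicted as the sum of receptive-field sensitivities covered by the bar, so the preferred orientation falls out of the RF map alone. The grid size, bar width, and elongated-Gaussian RF below are illustrative assumptions.

```python
import numpy as np

def bar_mask(shape, theta, width=1.5):
    """Binary image of a bar through the centre at orientation theta
    (theta = 0 gives a horizontal bar)."""
    h, w = shape
    y, x = np.mgrid[:h, :w]
    yc, xc = (h - 1) / 2.0, (w - 1) / 2.0
    # perpendicular distance from the line through the centre
    d = np.abs(-(x - xc) * np.sin(theta) + (y - yc) * np.cos(theta))
    return (d <= width).astype(float)

def linear_response(rf_map, theta):
    """Linear prediction: summed RF sensitivity covered by the bar."""
    return (rf_map * bar_mask(rf_map.shape, theta)).sum()
```

For an RF elongated along the horizontal axis, the predicted tuning peaks at the horizontal bar, which is the part of the prediction the study found to be reliable; the predicted tuning depth, by contrast, was too shallow in the recorded cells.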

  12. Rainfall induced landslide susceptibility mapping using weight-of-evidence, linear and quadratic discriminant and logistic model tree method

    NASA Astrophysics Data System (ADS)

    Hong, H.; Zhu, A. X.

    2017-12-01

Climate change is a serious worldwide phenomenon, and the intensification of rainfall extremes that accompanies it is of key importance to society because it can trigger landslides. This paper presents new GIS-based ensemble data mining techniques (weight-of-evidence, logistic model tree, and linear and quadratic discriminant analysis) for landslide spatial modelling. This research was applied in Anfu County, a landslide-prone area in Jiangxi Province, China. Based on a literature review and research on the study area, we selected the landslide influencing factors, and their maps were digitized in a GIS environment. These factors are altitude, plan curvature, profile curvature, slope degree, slope aspect, topographic wetness index (TWI), stream power index (SPI), distance to faults, distance to rivers, distance to roads, soil, lithology, normalized difference vegetation index, and land use. According to historical information on individual landslide events, interpretation of aerial photographs, and field surveys supported by the Jiangxi Meteorological Bureau of China, 367 landslides were identified in the study area. The landslide locations were divided into two subsets, namely training and validation (70/30), based on a random selection scheme. Pearson's correlation was used to evaluate the relationship between the landslides and the influencing factors. In the next step, the three data mining techniques, weight-of-evidence, logistic model tree, and linear and quadratic discriminant analysis, were used for landslide spatial modelling and zonation. Finally, the landslide susceptibility maps produced by these models were evaluated with the ROC curve. The results showed that the area under the curve (AUC) of all of the models was > 0.80. The highest AUC value was obtained for the linear and quadratic discriminant model (0.864), followed by the logistic model tree (0.832) and weight-of-evidence (0.819). In general, the landslide maps can be applied for land use planning and management in the Anfu area.
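The validation scheme used above (70/30 random split, several classifiers, ROC-AUC comparison) can be sketched with scikit-learn. This is a sketch on synthetic factor data: the weight-of-evidence and logistic-model-tree components of the study are not reproduced, and plain logistic regression stands in as a third classifier.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def susceptibility_aucs(X, y, seed=0):
    """Fit three classifiers on a 70/30 split and report validation AUCs."""
    Xtr, Xva, ytr, yva = train_test_split(X, y, test_size=0.3,
                                          random_state=seed, stratify=y)
    models = {"LDA": LinearDiscriminantAnalysis(),
              "QDA": QuadraticDiscriminantAnalysis(),
              "LR": LogisticRegression(max_iter=1000)}
    return {name: roc_auc_score(yva, m.fit(Xtr, ytr).predict_proba(Xva)[:, 1])
            for name, m in models.items()}
```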

  13. Fusing MODIS with Landsat 8 data to downscale weekly normalized difference vegetation index estimates for central Great Basin rangelands, USA

    USGS Publications Warehouse

    Boyte, Stephen; Wylie, Bruce K.; Rigge, Matthew B.; Dahal, Devendra

    2018-01-01

Data fused from distinct but complementary satellite sensors mitigate tradeoffs that researchers make when selecting between spatial and temporal resolutions of remotely sensed data. We integrated data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor aboard the Terra satellite and the Operational Land Imager sensor aboard the Landsat 8 satellite into four regression-tree models and applied those data to a mapping application. This application produced downscaled maps that utilize the 30-m spatial resolution of Landsat in conjunction with daily acquisitions of MODIS normalized difference vegetation index (NDVI) that are composited and temporally smoothed. We produced four weekly, atmospherically corrected, and nearly cloud-free, downscaled 30-m synthetic MODIS NDVI predictions (maps) built from these models. Model results were strong, with R2 values ranging from 0.74 to 0.85. The correlation coefficients (r ≥ 0.89) were strong for all predictions when compared to corresponding original MODIS NDVI data. Downscaled products incorporated into independently developed sagebrush ecosystem models yielded mixed results. The visual quality of the downscaled 30-m synthetic MODIS NDVI predictions was remarkable when compared to the original 250-m MODIS NDVI. These 30-m maps improve knowledge of dynamic rangeland seasonal processes in the central Great Basin, United States, and provide land managers with improved resource maps.
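The fusion idea, learn coarse-scale MODIS NDVI from co-located predictors and then apply the fitted model to 30-m Landsat pixels, can be sketched with a single regression tree. The study built four regression-tree models on real Landsat/MODIS inputs; the tree depth and synthetic data below are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def train_downscaler(predictors_coarse, modis_ndvi, max_depth=8):
    """Learn coarse-scale MODIS NDVI from co-located predictors; the fitted
    tree can then be applied to fine-scale (e.g. 30-m) predictor pixels."""
    tree = DecisionTreeRegressor(max_depth=max_depth, random_state=0)
    return tree.fit(predictors_coarse, modis_ndvi)
```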

  14. Flood-inundation maps and wetland restoration suitability index for the Blue River and selected tributaries, Kansas City, Missouri, and vicinity, 2012

    USGS Publications Warehouse

    Heimann, David C.; Kelly, Brian P.; Studley, Seth E.

    2015-01-01

    Additional information in this report includes maps of simulated stream velocity for an 8.2-mile, two-dimensional modeled reach of the Blue River and a Wetland Restoration Suitability Index (WRSI) generated for the study area that was based on hydrologic, topographic, and land-use digital feature layers. The calculated WRSI for the selected flood-plain area ranged from 1 (least suitable for possible wetland mitigation efforts) to 10 (most suitable for possible wetland mitigation efforts). A WRSI of 5 to 10 is most closely associated with existing riparian wetlands in the study area. The WRSI allows for the identification of lands along the Blue River and selected tributaries that are most suitable for restoration or creation of wetlands. Alternatively, the index can be used to identify and avoid disturbances to areas with the highest potential to support healthy sustainable riparian wetlands.
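A suitability index of the kind described, a weighted combination of normalized factor layers rescaled to a 1-10 scale, can be sketched as follows. The layer choice and weights are illustrative assumptions, not those of the report.

```python
import numpy as np

def suitability_index(layers, weights):
    """Min-max-normalise each factor layer, combine by weighted average,
    and rescale to an integer 1-10 suitability index."""
    score = sum(w * (l - l.min()) / (l.max() - l.min())
                for l, w in zip(layers, weights)) / sum(weights)
    return np.rint(1 + 9 * score).astype(int)
```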

  15. mapDIA: Preprocessing and statistical analysis of quantitative proteomics data from data independent acquisition mass spectrometry.

    PubMed

    Teo, Guoshou; Kim, Sinae; Tsou, Chih-Chiang; Collins, Ben; Gingras, Anne-Claude; Nesvizhskii, Alexey I; Choi, Hyungwon

    2015-11-03

Data independent acquisition (DIA) mass spectrometry is an emerging technique that offers more complete detection and quantification of peptides and proteins across multiple samples. DIA allows fragment-level quantification, which can be considered as repeated measurements of the abundance of the corresponding peptides and proteins in the downstream statistical analysis. However, few statistical approaches are available for aggregating these complex fragment-level data into peptide- or protein-level statistical summaries. In this work, we describe a software package, mapDIA, for statistical analysis of differential protein expression using DIA fragment-level intensities. The workflow consists of three major steps: intensity normalization, peptide/fragment selection, and statistical analysis. First, mapDIA offers normalization of fragment-level intensities by total intensity sums as well as a novel alternative normalization by local intensity sums in retention time space. Second, mapDIA removes outlier observations and selects peptides/fragments that preserve the major quantitative patterns across all samples for each protein. Last, using the selected fragments and peptides, mapDIA performs model-based statistical significance analysis of protein-level differential expression between specified groups of samples. Using a comprehensive set of simulation datasets, we show that mapDIA detects differentially expressed proteins with accurate control of the false discovery rates. We also describe the analysis procedure in detail using two recently published DIA datasets generated for 14-3-3β dynamic interaction network and prostate cancer glycoproteome. The software was written in C++ language and the source code is available for free through the SourceForge website http://sourceforge.net/projects/mapdia/. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
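The first workflow step, normalization of fragment intensities by total intensity sums, can be sketched in NumPy. This is a simplified sketch; mapDIA itself is a C++ program and also offers the alternative local normalization in retention-time space described above.

```python
import numpy as np

def normalize_total_intensity(intensity):
    """Scale each sample (column) of a fragments-x-samples intensity matrix
    so that every column has the same total intensity."""
    totals = intensity.sum(axis=0)
    return intensity * (totals.mean() / totals)
```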

  16. Genetic signatures of natural selection in a model invasive ascidian

    NASA Astrophysics Data System (ADS)

    Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin

    2017-03-01

Invasive species represent promising models to study species’ responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data and six crucial environmental variables, 16 putatively selected loci showed significant correlation with these environmental variables. This suggests that the local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified “plastic” genomic regions and genes that are promising regions to investigate evolutionary responses to rapid environmental change in C. robusta.

  17. Methods to Improve the Selection and Tailoring of Implementation Strategies

    PubMed Central

    Powell, Byron J.; Beidas, Rinad S.; Lewis, Cara C.; Aarons, Gregory A.; McMillen, J. Curtis; Proctor, Enola K.; Khinduka, Shanti K.; Mandell, David S.

    2015-01-01

    Implementing behavioral health interventions is a complicated process. It has been suggested that implementation strategies should be selected and tailored to address the contextual needs of a given change effort; however, there is limited guidance as to how to do this. This article proposes four methods (concept mapping, group model building, conjoint analysis, and intervention mapping) that could be used to match implementation strategies to identified barriers and facilitators for a particular evidence-based practice or process change being implemented in a given setting. Each method is reviewed, examples of their use are provided, and their strengths and weaknesses are discussed. The discussion includes suggestions for future research pertaining to implementation strategies and highlights these methods' relevance to behavioral health services and research. PMID:26289563

  18. Exploring discrepancies between quantitative validation results and the geomorphic plausibility of statistical landslide susceptibility maps

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas

    2016-06-01

Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of subsequent landslide susceptibility maps, with particular emphasis on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km2), which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation techniques. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated with different inventories, classifiers and predictors differed in appearance even though holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies in the modelling results while additionally providing evidence for slightly overfitted machine learning-based models. 
However, the highest predictive performances were obtained for maps that explicitly expressed geomorphically implausible relationships, indicating that the predictive performance of a model can be misleading in cases where a predictor systematically relates to a spatially consistent bias of the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces, while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performances cannot be equated with high plausibility and applicability of subsequent landslide susceptibility maps. We suggest that greater emphasis should be placed on identifying confounding factors and biases in landslide inventories. A joint discussion between modelers and decision makers of the spatial pattern of the final susceptibility maps in the field might increase their acceptance and applicability.

  19. Variable selection based on clustering analysis for improvement of polyphenols prediction in green tea using synchronous fluorescence spectra

    NASA Astrophysics Data System (ADS)

    Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi

    2018-04-01

Synchronous fluorescence spectra combined with multivariate analysis were used to predict flavonoids content in green tea rapidly and nondestructively. This paper presents a new and efficient spectral interval selection method called clustering-based partial least squares (CL-PLS), which selects informative wavelengths by combining the clustering concept with partial least squares (PLS) methods to improve model performance on synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, and k-means and Kohonen self-organizing map clustering algorithms were applied to group the full spectra into several clusters; a sub-PLS regression model was then developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the prediction performance of the PLS models. In addition, variable influence on projection PLS (VIP-PLS), selectivity ratio PLS (SR-PLS), interval PLS (iPLS) and full-spectrum PLS models were investigated and the results were compared. The results showed that CL-PLS gave the best flavonoids prediction using synchronous fluorescence spectra.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Greenland's Mineral Resources Administration (MRA) plans a series of licensing rounds off western Greenland. Meanwhile, the MRA has declared the Jameson Land basin of east central Greenland as open acreage. Greenland Geological Survey (GGU), Copenhagen, has prepared a report on the geographical conditions, logistics, exploration history, and geological development of Jameson Land. The article emphasizes source and reservoir rocks, conceptual play types with six seismic examples, and thermal history with basin modeling. It also includes two interpreted regional seismic lines, a geological and an aeromagnetic map, depth structure, and isopach maps of selected formations.

  1. Topsoil organic carbon content of Europe, a new map based on a generalised additive model

    NASA Astrophysics Data System (ADS)

    de Brogniez, Delphine; Ballabio, Cristiano; Stevens, Antoine; Jones, Robert J. A.; Montanarella, Luca; van Wesemael, Bas

    2014-05-01

There is an increasing demand for up-to-date spatially continuous organic carbon (OC) data for global environment and climatic modeling. Whilst the current map of topsoil organic carbon content for Europe (Jones et al., 2005) was produced by applying expert-knowledge based pedo-transfer rules on large soil mapping units, the aim of this study was to replace it by applying digital soil mapping techniques on the first European harmonised geo-referenced topsoil (0-20 cm) database, which arises from the LUCAS (land use/cover area frame statistical survey) survey. A generalized additive model (GAM) was calibrated on 85% of the dataset (ca. 17 000 soil samples) and a backward stepwise approach selected slope, land cover, temperature, net primary productivity, latitude and longitude as environmental covariates (500 m resolution). The validation of the model (applied to 15% of the dataset) gave an R2 of 0.27. We observed that most organic soils were under-predicted by the model and that soils of Scandinavia were also poorly predicted. The model showed an RMSE of 42 g kg-1 for mineral soils and of 287 g kg-1 for organic soils. The map of predicted OC content showed the lowest values in Mediterranean countries and in croplands across Europe, whereas the highest OC contents were predicted in wetlands, woodlands and in mountainous areas. The map of standard error of the OC model predictions showed high values in northern latitudes, wetlands, moors and heathlands, whereas low uncertainty was mostly found in croplands. A comparison of our results with the map of Jones et al. (2005) showed a general agreement on the prediction of mineral soils' OC content, most probably because the models use some common covariates, namely land cover and temperature. Our model however failed to predict values of OC content greater than 200 g kg-1, which we explain by the imposed unimodal distribution of our model, whose mean is tilted towards the majority of soils, which are mineral. 
Finally, average OC content predictions for each land cover class compared well between models, with our model always showing smaller standard deviations. We concluded that the chosen model and covariates are appropriate for the prediction of OC content in European mineral soils. We presented in this work the first map of topsoil OC content at European scale based on a harmonised soil dataset. The associated uncertainty map shall support the end-users in a careful use of the predictions.

  2. Development of a model of the tobacco industry's interference with tobacco control programmes

    PubMed Central

    Trochim, W; Stillman, F; Clark, P; Schmitt, C

    2003-01-01

    Objective: To construct a conceptual model of tobacco industry tactics to undermine tobacco control programmes for the purposes of: (1) developing measures to evaluate industry tactics, (2) improving tobacco control planning, and (3) supplementing current or future frameworks used to classify and analyse tobacco industry documents. Design: Web based concept mapping was conducted, including expert brainstorming, sorting, and rating of statements describing industry tactics. Statistical analyses used multidimensional scaling and cluster analysis. Interpretation of the resulting maps was accomplished by an expert panel during a face-to-face meeting. Subjects: 34 experts, selected because of their previous encounters with industry resistance or because of their research into industry tactics, took part in some or all phases of the project. Results: Maps with eight non-overlapping clusters in two dimensional space were developed, with importance ratings of the statements and clusters. Cluster and quadrant labels were agreed upon by the experts. Conclusions: The conceptual maps summarise the tactics used by the industry and their relationships to each other, and suggest a possible hierarchy for measures that can be used in statistical modelling of industry tactics and for review of industry documents. Finally, the maps enable hypothesis of a likely progression of industry reactions as public health programmes become more successful, and therefore more threatening to industry profits. PMID:12773723
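The statistical core of the concept-mapping analysis, multidimensional scaling of a sort-derived dissimilarity matrix followed by cluster analysis, can be sketched with scikit-learn. The block-structured dissimilarities and the cluster count below are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import MDS

def concept_map(dissimilarity, n_clusters=2, seed=0):
    """Embed items in 2-D from a precomputed dissimilarity matrix (e.g.
    derived from expert sorting), then cluster the embedded points."""
    xy = MDS(n_components=2, dissimilarity="precomputed",
             random_state=seed).fit_transform(dissimilarity)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(xy)
    return xy, labels
```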

  3. Downsizer - A Graphical User Interface-Based Application for Browsing, Acquiring, and Formatting Time-Series Data for Hydrologic Modeling

    USGS Publications Warehouse

    Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.

    2009-01-01

    The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.

  4. Potential assessment of genome-wide association study and genomic selection in Japanese pear Pyrus pyrifolia

    PubMed Central

    Iwata, Hiroyoshi; Hayashi, Takeshi; Terakami, Shingo; Takada, Norio; Sawamura, Yutaka; Yamamoto, Toshiya

    2013-01-01

    Although the potential of marker-assisted selection (MAS) in fruit tree breeding has been reported, bi-parental QTL mapping before MAS has hindered the introduction of MAS to fruit tree breeding programs. Genome-wide association studies (GWAS) are an alternative to bi-parental QTL mapping in long-lived perennials. Selection based on genomic predictions of breeding values (genomic selection: GS) is another alternative for MAS. This study examined the potential of GWAS and GS in pear breeding with 76 Japanese pear cultivars to detect significant associations of 162 markers with nine agronomic traits. We applied multilocus Bayesian models accounting for ordinal categorical phenotypes for GWAS and GS model training. Significant associations were detected at harvest time, black spot resistance and the number of spurs and two of the associations were closely linked to known loci. Genome-wide predictions for GS were accurate at the highest level (0.75) in harvest time, at medium levels (0.38–0.61) in resistance to black spot, firmness of flesh, fruit shape in longitudinal section, fruit size, acid content and number of spurs and at low levels (<0.2) in all soluble solid content and vigor of tree. Results suggest the potential of GWAS and GS for use in future breeding programs in Japanese pear. PMID:23641189
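Genomic selection scores candidates by summing estimated marker effects into a genomic breeding value. A ridge-regression (RR-BLUP-style) sketch is shown below; the study itself used multilocus Bayesian models for ordinal phenotypes, which are not reproduced here, and the marker data are synthetic.

```python
import numpy as np

def fit_rrblup(M, y, lam=1.0):
    """Ridge estimate of marker effects from a genotype matrix M
    (individuals x markers, coded 0/1/2) and phenotypes y."""
    p = M.shape[1]
    mcol = M.mean(axis=0)
    Mc = M - mcol                                  # centre marker columns
    beta = np.linalg.solve(Mc.T @ Mc + lam * np.eye(p), Mc.T @ (y - y.mean()))
    return y.mean(), mcol, beta

def predict_gebv(mu, mcol, beta, M):
    """Genomic estimated breeding values for new genotypes."""
    return mu + (M - mcol) @ beta
```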

  5. Soil maps as data input for soil erosion models: errors related to map scales

    NASA Astrophysics Data System (ADS)

    van Dijk, Paul; Sauter, Joëlle; Hofstetter, Elodie

    2010-05-01

    Soil erosion rates depend in many ways on soil and soil surface characteristics which vary in space and in time. To account for spatial variations of soil features, most distributed soil erosion models require data input derived from soil maps. Ideally, the level of spatial detail contained in the applied soil map should correspond to the objective of the modelling study. However, often the model user has only one soil map available which is then applied without questioning its suitability. The present study seeks to determine in how far soil map scale can be a source of error in erosion model output. The study was conducted on two different spatial scales, with for each of them a convenient soil erosion model: a) the catchment scale using the physically-based Limbourg Soil Erosion Model (LISEM), and b) the regional scale using the decision-tree expert model MESALES. The suitability of the applied soil map was evaluated with respect to an imaginary though realistic study objective for both models: the definition of erosion control measures at strategic locations at the catchment scale; the identification of target areas for the definition of control measures strategies at the regional scale. Two catchments were selected to test the sensitivity of LISEM to the spatial detail contained in soil maps: one catchment with relatively little contrast in soil texture, dominated by loess-derived soil (south of the Alsace), and one catchment with strongly contrasted soils at the limit between the Alsatian piedmont and the loess-covered hills of the Kochersberg. LISEM was run for both catchments using different soil maps ranging in scale from 1/25 000 to 1/100 000 to derive soil related input parameters. The comparison of the output differences was used to quantify the map scale impact on the quality of the model output. The sensitivity of MESALES was tested on the Haut-Rhin county for which two soil maps are available for comparison: 1/50 000 and 1/100 000. 
The order of the resulting target areas (communes) was compared to evaluate the error induced by using the coarser soil data at 1/100 000. Results show that both models are sensitive to the soil map scale used for model input. Sensitivity was low for the catchment with relatively homogeneous soil textures, where the use of 1/100 000 soil maps appears acceptable. For the catchment with strong soil texture variations, the results differed significantly with soil map scale over 75% of the catchment area. Here, the use of a 1/100 000 soil map leads to erroneous erosion diagnostics and hampers the definition of a sound erosion control strategy. The regional-scale model MESALES proved to be very sensitive to soil information. The two soil-related model parameters (crusting sensitivity and soil erodibility) often reacted in the same direction, thereby amplifying the change in the final erosion hazard class. The 1/100 000 soil map yielded different results on 40% of the sloping area compared to the 1/50 000 map, and significant differences in the order of target areas were found as well. The present study shows that the degree of sensitivity of the model output to soil map scale is rather variable and depends partly on the spatial variability of soil texture within the study area. Soil (textural) diversity needs to be accounted for to ensure a fruitful use of soil erosion models. In some situations this may imply that additional soil data need to be collected in the field to refine the available soil map.

  6. A second generation genetic linkage map of Japanese flounder (Paralichthys olivaceus)

    PubMed Central

    2010-01-01

    Background Japanese flounder (Paralichthys olivaceus) is one of the most economically important marine species in Northeast Asia. Information on genetic markers associated with quantitative trait loci (QTL) can be used in breeding programs to identify and select individuals carrying desired traits. Commercial production of Japanese flounder could be increased by developing disease-resistant fish and improving commercially important traits. Previous maps have been constructed with AFLP markers and a limited number of microsatellite markers. In this study, improved genetic linkage maps are presented. In contrast with previous studies, these maps were built mainly with a large number of codominant markers, so they can potentially be used to analyze different families and populations. Results Sex-specific genetic linkage maps were constructed for the Japanese flounder including a total of 1,375 markers [1,268 microsatellites, 105 single nucleotide polymorphisms (SNPs) and two genes]; 1,167 markers are linked to the male map and 1,067 markers are linked to the female map. The lengths of the male and female maps are 1,147.7 cM and 833.8 cM, respectively. Based on estimations of map lengths, the female and male maps covered 79 and 82% of the genome, respectively. The female-to-male recombination ratio (F:M) in the new maps was 1:0.7. All linkage groups in the maps presented large differences in the location of sex-specific recombination hot-spots. Conclusions The improved genetic linkage maps are very useful for QTL analyses and marker-assisted selection (MAS) breeding programs for economically important traits in Japanese flounder. In addition, SNP flanking sequences were blasted against Tetraodon nigroviridis (puffer fish) and Danio rerio (zebrafish), and synteny analysis was carried out. 
The ability to detect synteny among species or genera based on homology analysis of SNP flanking sequences may provide opportunities to complement initial QTL experiments with candidate gene approaches from homologous chromosomal locations identified in related model organisms. PMID:20937088

  7. Integration of Web-based and PC-based clinical research databases.

    PubMed

    Brandt, C A; Sun, K; Charpentier, P; Nadkarni, P M

    2004-01-01

    We have created a Web-based repository or data library of information about measurement instruments used in studies of multi-factorial geriatric health conditions (the Geriatrics Research Instrument Library - GRIL) based upon existing features of two separate clinical study data management systems. GRIL allows browsing, searching, and selecting measurement instruments based upon criteria such as keywords and areas of applicability. Measurement instruments selected can be printed and/or included in an automatically generated standalone microcomputer database application, which can be downloaded by investigators for use in data collection and data management. Integration of database applications requires the creation of a common semantic model, and mapping from each system to this model. Various database schema conflicts at the table and attribute level must be identified and resolved prior to integration. Using a conflict taxonomy and a mapping schema facilitates this process. Critical conflicts at the table level that required resolution included name and relationship differences. A major benefit of integration efforts is the sharing of features and cross-fertilization of applications created for similar purposes in different operating environments. Integration of applications mandates some degree of metadata model unification.

  8. Groundwater vulnerability to pollution mapping of Ranchi district using GIS

    NASA Astrophysics Data System (ADS)

    Krishna, R.; Iqbal, J.; Gorai, A. K.; Pathak, G.; Tuluri, F.; Tchounwou, P. B.

    2015-12-01

    Groundwater pollution due to anthropogenic activities is one of the major environmental problems in urban and industrial areas. The present study demonstrates an integrated approach with GIS and the DRASTIC model to derive a groundwater vulnerability to pollution map. The model considers seven hydrogeological factors [depth to water table (D), net recharge (R), aquifer media (A), soil media (S), topography or slope (T), impact of vadose zone (I) and hydraulic conductivity (C)] for generating the groundwater vulnerability to pollution map. The model was applied for assessing the groundwater vulnerability to pollution in Ranchi district, Jharkhand, India. The model was validated by comparing the model output (vulnerability indices) with the observed nitrate concentrations in groundwater in the study area. Nitrate was selected because the major sources of nitrate in groundwater are anthropogenic in nature. Groundwater samples were collected from 30 wells/tube wells distributed in the study area and analyzed in the laboratory to measure nitrate concentrations. A sensitivity analysis of the integrated model was performed to evaluate the influence of individual parameters on the groundwater vulnerability index. New weights were computed for each input parameter to understand the influence of individual hydrogeological factors on the vulnerability indices in the study area. Aquifer vulnerability maps generated in this study can be used for environmental planning and groundwater management.
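
    The DRASTIC index described above is a weighted sum of the seven factor ratings. A minimal sketch, using the standard published DRASTIC weights; the cell ratings below are illustrative placeholders, not values from this study:

```python
# Standard DRASTIC weights for the seven hydrogeological factors.
DRASTIC_WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """Vulnerability index: sum over factors of weight * rating (ratings 1-10)."""
    return sum(DRASTIC_WEIGHTS[f] * r for f, r in ratings.items())

# Hypothetical ratings for one grid cell.
cell = {"D": 7, "R": 6, "A": 8, "S": 6, "T": 9, "I": 8, "C": 4}
print(drastic_index(cell))  # 156
```

Applied per grid cell, this yields the raster of vulnerability indices that the study compares against observed nitrate concentrations.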

  9. Groundwater vulnerability to pollution mapping of Ranchi district using GIS.

    PubMed

    Krishna, R; Iqbal, J; Gorai, A K; Pathak, G; Tuluri, F; Tchounwou, P B

    2015-12-01

    Groundwater pollution due to anthropogenic activities is one of the major environmental problems in urban and industrial areas. The present study demonstrates an integrated approach with GIS and the DRASTIC model to derive a groundwater vulnerability to pollution map. The model considers seven hydrogeological factors [depth to water table (D), net recharge (R), aquifer media (A), soil media (S), topography or slope (T), impact of vadose zone (I) and hydraulic conductivity (C)] for generating the groundwater vulnerability to pollution map. The model was applied for assessing the groundwater vulnerability to pollution in Ranchi district, Jharkhand, India. The model was validated by comparing the model output (vulnerability indices) with the observed nitrate concentrations in groundwater in the study area. Nitrate was selected because the major sources of nitrate in groundwater are anthropogenic in nature. Groundwater samples were collected from 30 wells/tube wells distributed in the study area and analyzed in the laboratory to measure nitrate concentrations. A sensitivity analysis of the integrated model was performed to evaluate the influence of individual parameters on the groundwater vulnerability index. New weights were computed for each input parameter to understand the influence of individual hydrogeological factors on the vulnerability indices in the study area. Aquifer vulnerability maps generated in this study can be used for environmental planning and groundwater management.

  10. Binding site exploration of CCR5 using in silico methodologies: a 3D-QSAR approach.

    PubMed

    Gadhe, Changdev G; Kothandan, Gugan; Cho, Seung Joo

    2013-01-01

    Chemokine receptor 5 (CCR5) is an important receptor used by human immunodeficiency virus type 1 (HIV-1) to gain viral entry into the host cell. In this study, we used a combined approach of comparative modeling, molecular docking, and three-dimensional quantitative structure-activity relationship (3D-QSAR) analyses to elucidate the detailed interaction of CCR5 with its inhibitors. A docking study of the most potent inhibitor from a series of compounds was performed to derive the bioactive conformation. Parameters such as random selection, rational selection, different charges and grid spacing were examined during model development to check their effect on model predictivity. The final comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) models were chosen based on the rational selection method, Gasteiger-Hückel charges and a grid spacing of 0.5 Å. The rational models for CoMFA (q(2) = 0.722, r(2) = 0.884, Q(2) = 0.669) and CoMSIA (q(2) = 0.712, r(2) = 0.825, Q(2) = 0.522) were obtained with good statistics. Mapping the contour maps onto the CCR5 interface gave us a better understanding of the ligand-protein interaction. Docking analysis revealed that Glu283 is crucial for interaction. Two new amino acid residues, Tyr89 and Thr167, were identified as important in the ligand-protein interaction. No site-directed mutagenesis studies on these residues have been reported.

  11. Chip level modeling of LSI devices

    NASA Technical Reports Server (NTRS)

    Armstrong, J. R.

    1984-01-01

    The advent of Very Large Scale Integration (VLSI) technology has rendered the gate level model impractical for many simulation activities critical to the design automation process. As an alternative, an approach to the modeling of VLSI devices at the chip level is described, including the specification of modeling language constructs important to the modeling process. A model structure is presented in which models of the LSI devices are constructed as single entities. The modeling structure is two layered. The functional layer in this structure is used to model the input/output response of the LSI chip. A second layer, the fault mapping layer, is added, if fault simulations are required, in order to map the effects of hardware faults onto the functional layer. Modeling examples for each layer are presented. Fault modeling at the chip level is described. Approaches to realistic functional fault selection and defining fault coverage for functional faults are given. Application of the modeling techniques to single chip and bit slice microprocessors is discussed.

  12. Landslide susceptibility mapping using a neuro-fuzzy

    NASA Astrophysics Data System (ADS)

    Lee, S.; Choi, J.; Oh, H.

    2009-12-01

    This paper develops and applies an adaptive neuro-fuzzy inference system (ANFIS) in a geographic information system (GIS) environment, using landslide-related factors and locations, for landslide susceptibility mapping. A neuro-fuzzy system is based on a fuzzy system that is trained by a learning algorithm derived from neural network theory. The learning procedure operates on local information and causes only local modifications in the underlying fuzzy system. The study area, Boun, located in the central part of Korea, suffered much damage following heavy rain in 1998 and was selected as a suitable site for evaluating the frequency and distribution of landslides. Landslide-related factors such as slope, soil texture, wood type, lithology, and density of lineament were extracted from topographic, soil, forest, and lineament maps. Landslide locations were identified from interpretation of aerial photographs and field surveys. Landslide-susceptible areas were analyzed by the ANFIS method and mapped using occurrence factors. In particular, we applied various membership functions (MFs), and the analysis results were verified using the landslide location data. The predictive maps built with triangular, trapezoidal, and polynomial MFs performed best among the individual MFs (84.96% accuracy), showing that ANFIS can be very effective in modeling landslide susceptibility. After verification, the difference in accuracy across the various MFs used in this study was small, between 84.81% and 84.96%. The difference was just 0.15%, so the choice of MFs was not important in this study. Compared with the likelihood ratio model, which showed 84.94%, the accuracy was also similar. Thus, ANFIS could be applied to other study areas with different data and other study methods such as cross-validation. 
The developed ANFIS learns the if-then rules between landslide-related factors and landslide locations for generalization and prediction. It is easy to understand and interpret, and is therefore a good choice for modeling landslide susceptibility; the resulting maps are also of great help to planners and engineers in selecting highly susceptible areas for further detailed surveys and suitable locations for development. Although they may be less useful at the site-specific scale, where local geological and geographic heterogeneities may prevail, the results herein may be used as basic data to assist slope management and land use planning. For the method to be more generally applied, more landslide data are needed and more case studies should be conducted.
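
    The membership functions compared in the study (triangular, trapezoidal, polynomial) are standard fuzzy-logic primitives. A minimal sketch of the triangular MF, the simplest of the three; the breakpoints used below are illustrative, not the study's fitted values:

```python
def triangular_mf(x, a, b, c):
    """Degree of membership in [0, 1] for a triangular MF with feet at
    a and c and peak at b (a < b < c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge
```

In ANFIS, the parameters a, b, c of each input's MFs are tuned by the neural-network learning procedure rather than fixed by hand.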

  13. Topography and geology site effects from the intensity prediction model (ShakeMap) for Austria

    NASA Astrophysics Data System (ADS)

    del Puy Papí Isaba, María; Jia, Yan; Weginger, Stefan

    2017-04-01

    The seismicity in Austria can be categorized as moderate. Although the hazard seems rather low, earthquakes can cause great damage and losses, especially in densely populated and industrialized areas. It is well known that equations which predict intensity as a function of magnitude and distance, among other parameters, are a useful tool for hazard and risk assessment. This study therefore aims to determine an empirical model of the ground shaking intensities (ShakeMap) of a series of earthquakes that occurred in Austria between 1000 and 2014. The obtained empirical model will also support further interpretation of both contemporary and historical earthquakes. A total of 285 events whose epicenters were located in Austria, with 22,739 reported macroseismic data points from Austria and adjoining countries, were used. These events span the period 1000-2014 and have local magnitudes greater than 3. In the first stage of model development, the data were carefully selected; e.g., only intensities equal to or greater than III were used. In the second stage, the data were fitted to the selected empirical model. Finally, geology and topography corrections were obtained by means of the model residuals in order to derive intensity-based site amplification effects.
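
    Intensity prediction equations of the kind fitted here commonly take the form I = a + b*M + c*log10(R). A sketch with purely illustrative coefficients; the study's fitted values are not given in the abstract:

```python
import math

def predicted_intensity(magnitude, distance_km, a=1.0, b=1.5, c=-2.5):
    """Generic intensity prediction equation I = a + b*M + c*log10(R).
    The coefficients a, b, c here are illustrative placeholders, not the
    values fitted for Austria in the study."""
    return a + b * magnitude + c * math.log10(distance_km)
```

The geology and topography site corrections described in the abstract would then be estimated from the residuals between observed macroseismic intensities and this baseline prediction.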

  14. Predictive Multiple Model Switching Control with the Self-Organizing Map

    NASA Technical Reports Server (NTRS)

    Motter, Mark A.

    2000-01-01

    A predictive, multiple model control strategy is developed by extension of self-organizing map (SOM) local dynamic modeling of nonlinear autonomous systems to a control framework. Multiple SOMs collectively model the global response of a nonautonomous system to a finite set of representative prototype controls. Each SOM provides a codebook representation of the dynamics corresponding to a prototype control. Different dynamic regimes are organized into topological neighborhoods where the adjacent entries in the codebook represent the global minimization of a similarity metric. The SOM is additionally employed to identify the local dynamical regime, and consequently implements a switching scheme that selects the best available model for the applied control. SOM based linear models are used to predict the response to a larger family of control sequences which are clustered on the representative prototypes. The control sequence which corresponds to the prediction that best satisfies the requirements on the system output is applied as the external driving signal.
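
    The switching scheme described above reduces to a nearest-codebook search: the controller selects the SOM whose best-matching codebook entry lies closest to the current state. A minimal sketch; the model names and vectors below are hypothetical:

```python
import math

def select_model(state, codebooks):
    """Return the id of the SOM whose best-matching codebook entry is
    closest (Euclidean distance) to the current state vector."""
    best_id, best_d = None, math.inf
    for model_id, codebook in codebooks.items():
        d = min(math.dist(entry, state) for entry in codebook)
        if d < best_d:
            best_id, best_d = model_id, d
    return best_id

# Hypothetical codebooks for two prototype-control regimes.
codebooks = {"slow": [(0.0, 0.0), (1.0, 0.0)], "fast": [(5.0, 5.0)]}
print(select_model((4.5, 5.0), codebooks))  # fast
```

In the paper's framework, the selected SOM's local linear model would then predict the response to each candidate control sequence.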

  15. The genetic architecture of Drosophila sensory bristle number.

    PubMed Central

    Dilda, Christy L; Mackay, Trudy F C

    2002-01-01

    We have mapped quantitative trait loci (QTL) for Drosophila mechanosensory bristle number in six recombinant isogenic line (RIL) mapping populations, each of which was derived from an isogenic chromosome extracted from a line selected for high or low, sternopleural or abdominal bristle number and an isogenic wild-type chromosome. All RILs were evaluated as male and female F(1) progeny of crosses to both the selected and the wild-type parental chromosomes at three developmental temperatures (18 degrees, 25 degrees, and 28 degrees ). QTL for bristle number were mapped separately for each chromosome, trait, and environment by linkage to roo transposable element marker loci, using composite interval mapping. A total of 53 QTL were detected, of which 33 affected sternopleural bristle number, 31 affected abdominal bristle number, and 11 affected both traits. The effects of most QTL were conditional on sex (27%), temperature (14%), or both sex and temperature (30%). Epistatic interactions between QTL were also common. While many QTL mapped to the same location as candidate bristle development loci, several QTL regions did not encompass obvious candidate genes. These features are germane to evolutionary models for the maintenance of genetic variation for quantitative traits, but complicate efforts to understand the molecular genetic basis of variation for complex traits. PMID:12524340
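
    Composite interval mapping is beyond a short sketch, but the underlying QTL test statistic, a LOD score comparing a marker-dependent model against the null model, can be illustrated for a single marker. The data below are toy values, not the study's:

```python
import math
import statistics

def single_marker_lod(genotypes, phenotypes):
    """LOD score for a single-marker test: compares the residual sum of
    squares of a genotype-means model against the grand-mean (null) model.
    A toy stand-in for the composite interval mapping used in the study."""
    n = len(phenotypes)
    grand = statistics.fmean(phenotypes)
    rss0 = sum((p - grand) ** 2 for p in phenotypes)
    means = {g: statistics.fmean([p for gt, p in zip(genotypes, phenotypes)
                                  if gt == g])
             for g in set(genotypes)}
    rss1 = sum((p - means[g]) ** 2 for g, p in zip(genotypes, phenotypes))
    return (n / 2) * math.log10(rss0 / rss1)
```

A marker with no phenotypic effect gives a LOD near zero; large values indicate linkage to a QTL.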

  16. Refining the Use of Linkage Disequilibrium as a Robust Signature of Selective Sweeps

    PubMed Central

    Jacobs, Guy S.; Sluckin, Timothy J.; Kivisild, Toomas

    2016-01-01

    During a selective sweep, characteristic patterns of linkage disequilibrium can arise in the genomic region surrounding a selected locus. These have been used to infer past selective sweeps. However, the recombination rate is known to vary substantially along the genome for many species. We here investigate the effectiveness of current (Kelly’s ZnS and ωmax) and novel statistics at inferring hard selective sweeps based on linkage disequilibrium distortions under different conditions, including a human-realistic demographic model and recombination rate variation. When the recombination rate is constant, Kelly’s ZnS offers high power, but is outperformed by a novel statistic that we test, which we call Zα. We also find this statistic to be effective at detecting sweeps from standing variation. When recombination rate fluctuations are included, there is a considerable reduction in power for all linkage disequilibrium-based statistics. However, this can largely be reversed by appropriately controlling for expected linkage disequilibrium using a genetic map. To further test these different methods, we perform selection scans on well-characterized HapMap data, finding that all three statistics—ωmax, Kelly’s ZnS, and Zα—are able to replicate signals at regions previously identified as selection candidates based on population differentiation or the site frequency spectrum. While ωmax replicates most candidates when recombination map data are not available, the ZnS and Zα statistics are more successful when recombination rate variation is controlled for. Given both this and their higher power in simulations of selective sweeps, these statistics are preferred when information on local recombination rate variation is available. PMID:27516617
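
    Kelly's ZnS, one of the statistics evaluated above, is the average pairwise r² across all polymorphic sites in a window. A minimal sketch over 0/1 haplotype data:

```python
import itertools
import statistics

def r_squared(site_a, site_b):
    """r^2 between two biallelic sites, each given as a list of 0/1
    alleles, one entry per haplotype."""
    n = len(site_a)
    p_a = sum(site_a) / n
    p_b = sum(site_b) / n
    p_ab = sum(x & y for x, y in zip(site_a, site_b)) / n
    d = p_ab - p_a * p_b                       # coefficient of LD
    denom = p_a * (1 - p_a) * p_b * (1 - p_b)
    return d * d / denom if denom else 0.0

def kelly_zns(sites):
    """Kelly's ZnS: mean pairwise r^2 over all site pairs in the window."""
    return statistics.mean(r_squared(a, b)
                           for a, b in itertools.combinations(sites, 2))
```

Sites in perfect LD give ZnS = 1, independent sites give ZnS near 0; a selective sweep inflates ZnS in the surrounding region.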

  17. Refining the Use of Linkage Disequilibrium as a Robust Signature of Selective Sweeps.

    PubMed

    Jacobs, Guy S; Sluckin, Tim J; Kivisild, Toomas

    2016-08-01

    During a selective sweep, characteristic patterns of linkage disequilibrium can arise in the genomic region surrounding a selected locus. These have been used to infer past selective sweeps. However, the recombination rate is known to vary substantially along the genome for many species. We here investigate the effectiveness of current (Kelly's ZnS and ωmax) and novel statistics at inferring hard selective sweeps based on linkage disequilibrium distortions under different conditions, including a human-realistic demographic model and recombination rate variation. When the recombination rate is constant, Kelly's ZnS offers high power, but is outperformed by a novel statistic that we test, which we call Zα. We also find this statistic to be effective at detecting sweeps from standing variation. When recombination rate fluctuations are included, there is a considerable reduction in power for all linkage disequilibrium-based statistics. However, this can largely be reversed by appropriately controlling for expected linkage disequilibrium using a genetic map. To further test these different methods, we perform selection scans on well-characterized HapMap data, finding that all three statistics (ωmax, Kelly's ZnS, and Zα) are able to replicate signals at regions previously identified as selection candidates based on population differentiation or the site frequency spectrum. While ωmax replicates most candidates when recombination map data are not available, the ZnS and Zα statistics are more successful when recombination rate variation is controlled for. Given both this and their higher power in simulations of selective sweeps, these statistics are preferred when information on local recombination rate variation is available. Copyright © 2016 by the Genetics Society of America.

  18. A Short Guide to the Climatic Variables of the Last Glacial Maximum for Biogeographers.

    PubMed

    Varela, Sara; Lima-Ribeiro, Matheus S; Terribile, Levi Carina

    2015-01-01

    Ecological niche models are widely used for mapping the distribution of species during the last glacial maximum (LGM). Although the selection of the variables and General Circulation Models (GCMs) used for constructing those maps determines the model predictions, we still lack a discussion about which variables and which GCM should be included in the analysis and why. Here, we analyzed the climatic predictions for the LGM of 9 different GCMs in order to help biogeographers to select their GCMs and climatic layers for mapping the species ranges in the LGM. We 1) map the discrepancies between the climatic predictions of the nine GCMs available for the LGM, 2) analyze the similarities and differences between the GCMs and group them to help researchers choose the appropriate GCMs for calibrating and projecting their ecological niche models (ENM) during the LGM, and 3) quantify the agreement of the predictions for each bioclimatic variable to help researchers avoid the environmental variables with a poor consensus between models. Our results indicate that, in absolute values, GCMs disagree strongly in their temperature predictions for temperate areas, while the strongest disagreement in their precipitation predictions is in the tropics. In spite of the discrepancies between model predictions, temperature variables (BIO1-BIO11) are highly correlated between models. Precipitation variables (BIO12-BIO19) show no correlation between models, and specifically, BIO14 (precipitation of the driest month) and BIO15 (precipitation seasonality, coefficient of variation) show the highest level of discrepancy between GCMs. Following our results, we strongly recommend the use of different GCMs for constructing or projecting ENMs, particularly when predicting the distribution of species that inhabit the tropics and the temperate areas of the Northern and Southern Hemispheres, because climatic predictions for those areas vary greatly among GCMs. 
We also recommend the exclusion of BIO14 and BIO15 from ENMs because those variables show a high level of discrepancy between GCMs. Thus, by excluding them, we decrease the level of uncertainty of our predictions. All the climatic layers produced for this paper are freely available in http://ecoclimate.org/.
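
    The per-variable agreement check described above amounts to correlating the same bioclimatic layer across pairs of GCMs. A minimal Pearson-correlation sketch over flattened layers; the values below are toy data, not climate predictions:

```python
import math

def pearson(layer_a, layer_b):
    """Pearson correlation between two flattened climate layers, the kind
    of per-variable agreement check run across GCM pairs."""
    n = len(layer_a)
    mean_a = sum(layer_a) / n
    mean_b = sum(layer_b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(layer_a, layer_b))
    var_a = sum((x - mean_a) ** 2 for x in layer_a)
    var_b = sum((y - mean_b) ** 2 for y in layer_b)
    return cov / math.sqrt(var_a * var_b)
```

High correlations across GCM pairs (as for BIO1-BIO11) indicate consensus; near-zero correlations (as for BIO14 and BIO15) flag variables to exclude.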

  19. A Short Guide to the Climatic Variables of the Last Glacial Maximum for Biogeographers

    PubMed Central

    Varela, Sara; Lima-Ribeiro, Matheus S.; Terribile, Levi Carina

    2015-01-01

    Ecological niche models are widely used for mapping the distribution of species during the last glacial maximum (LGM). Although the selection of the variables and General Circulation Models (GCMs) used for constructing those maps determines the model predictions, we still lack a discussion about which variables and which GCM should be included in the analysis and why. Here, we analyzed the climatic predictions for the LGM of 9 different GCMs in order to help biogeographers to select their GCMs and climatic layers for mapping the species ranges in the LGM. We 1) map the discrepancies between the climatic predictions of the nine GCMs available for the LGM, 2) analyze the similarities and differences between the GCMs and group them to help researchers choose the appropriate GCMs for calibrating and projecting their ecological niche models (ENM) during the LGM, and 3) quantify the agreement of the predictions for each bioclimatic variable to help researchers avoid the environmental variables with a poor consensus between models. Our results indicate that, in absolute values, GCMs disagree strongly in their temperature predictions for temperate areas, while the strongest disagreement in their precipitation predictions is in the tropics. In spite of the discrepancies between model predictions, temperature variables (BIO1-BIO11) are highly correlated between models. Precipitation variables (BIO12-BIO19) show no correlation between models, and specifically, BIO14 (precipitation of the driest month) and BIO15 (precipitation seasonality, coefficient of variation) show the highest level of discrepancy between GCMs. Following our results, we strongly recommend the use of different GCMs for constructing or projecting ENMs, particularly when predicting the distribution of species that inhabit the tropics and the temperate areas of the Northern and Southern Hemispheres, because climatic predictions for those areas vary greatly among GCMs. 
We also recommend the exclusion of BIO14 and BIO15 from ENMs because those variables show a high level of discrepancy between GCMs. Thus, by excluding them, we decrease the level of uncertainty of our predictions. All the climatic layers produced for this paper are freely available in http://ecoclimate.org/. PMID:26068930

  20. TH-A-9A-01: Active Optical Flow Model: Predicting Voxel-Level Dose Prediction in Spine SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, J; Wu, Q.J.; Yin, F

    2014-06-15

    Purpose: To predict voxel-level dose distribution and enable effective evaluation of cord dose sparing in spine SBRT. Methods: We present an active optical flow model (AOFM) to statistically describe cord dose variations and train a predictive model to represent correlations between the AOFM and PTV contours. Thirty clinically accepted spine SBRT plans are evenly divided into training and testing datasets. The development of the predictive model consists of 1) collecting a sequence of dose maps including PTV and OAR (spinal cord) as well as a set of associated PTV contours adjacent to the OAR from the training dataset, 2) classifying data into five groups based on the PTV's location relative to the OAR: two “Top”s, “Left”, “Right”, and “Bottom”, 3) randomly selecting a dose map as the reference in each group and applying rigid registration and optical flow deformation to match all other maps to the reference, 4) building the AOFM by importing optical flow vectors and dose values into principal component analysis (PCA), 5) applying another PCA to features of PTV and OAR contours to generate an active shape model (ASM), and 6) computing a linear regression model of the correlations between the AOFM and the ASM. When predicting the dose distribution of a new case in the testing dataset, the PTV is first assigned to a group based on its contour characteristics. Contour features are then transformed into the ASM's principal coordinates of the selected group. Finally, the voxel-level dose distribution is determined by mapping from the ASM space to the AOFM space using the predictive model. Results: The DVHs predicted by the AOFM-based model and those in clinical plans are comparable in the training and testing datasets. At 2% volume, the dose difference between predicted and clinical plans is 4.2±4.4% and 3.3±3.5% in the training and testing datasets, respectively. Conclusion: The AOFM is effective in predicting voxel-level dose distribution for spine SBRT. 
Partially supported by NIH/NCI under grant #R21CA161389 and a master research grant from Varian Medical Systems.
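
    The final step, mapping ASM coordinates into the AOFM space with a linear regression model, can be sketched as an ordinary least-squares fit. The data below are random placeholders, not clinical values, and the dimensions are hypothetical:

```python
import numpy as np

# 15 training plans: 4 shape-model (ASM) coords in, 6 dose-model (AOFM)
# coords out, related here by a known linear map for demonstration.
rng = np.random.default_rng(0)
asm_train = rng.normal(size=(15, 4))
true_map = rng.normal(size=(4, 6))
aofm_train = asm_train @ true_map

# Least-squares fit of the ASM -> AOFM linear regression model.
coef, *_ = np.linalg.lstsq(asm_train, aofm_train, rcond=None)

def predict_aofm(asm_coords):
    """Map a new plan's ASM coordinates into the AOFM (dose) space."""
    return asm_coords @ coef
```

Reconstructing the voxel-level dose map from the predicted AOFM coordinates would then invert the PCA and optical-flow steps described in the abstract.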

  1. Dispersal Ecology Informs Design of Large-Scale Wildlife Corridors.

    PubMed

    Benz, Robin A; Boyce, Mark S; Thurfjell, Henrik; Paton, Dale G; Musiani, Marco; Dormann, Carsten F; Ciuti, Simone

    Landscape connectivity describes how the movement of animals relates to landscape structure. The way in which movement among populations is affected by environmental conditions is important for predicting the effects of habitat fragmentation, and for defining conservation corridors. One approach has been to map resistance surfaces to characterize how environmental variables affect animal movement, and to use these surfaces to model connectivity. However, current connectivity modelling typically uses information on species location or habitat preference rather than movement, which unfortunately may not capture dispersal limitations. Here we emphasize the importance of incorporating dispersal ecology into landscape connectivity, i.e., observing patterns of habitat selection by dispersers during different phases of the colonization of new areas to infer habitat connectivity. Dispersing animals undertake a complex sequence of movements concatenated over time and strictly dependent on species ecology. Using satellite telemetry, we investigated the movement ecology of 54 young male elk Cervus elaphus, which commonly disperse, to design a corridor network across the Northern Rocky Mountains. The winter residency period is often followed by a spring-summer movement phase, when young elk migrate with mothers' groups to summering areas, and by a further dispersal bout performed alone to a novel summer area. After another summer residency phase, dispersers usually undertake a final autumnal movement to reach novel wintering areas. We used resource selection functions to identify winter and summer habitats selected by elk during residency phases. We then extracted movements undertaken during spring to move from winter to summer areas, and during autumn to move from summer to winter areas, and modelled them using step selection functions. We built friction surfaces, merged the different movement phases, and eventually mapped least-cost corridors. 
We showed an application of this tool by creating a scenario with movement predicted as there were no roads, and mapping highways' segments impeding elk connectivity.

  2. Dispersal Ecology Informs Design of Large-Scale Wildlife Corridors

    PubMed Central

    Benz, Robin A.; Boyce, Mark S.; Thurfjell, Henrik; Paton, Dale G.; Musiani, Marco; Dormann, Carsten F.; Ciuti, Simone

    2016-01-01

Landscape connectivity describes how the movement of animals relates to landscape structure. The way in which movement among populations is affected by environmental conditions is important for predicting the effects of habitat fragmentation, and for defining conservation corridors. One approach has been to map resistance surfaces to characterize how environmental variables affect animal movement, and to use these surfaces to model connectivity. However, current connectivity modelling typically uses information on species location or habitat preference rather than movement, which unfortunately may not capture dispersal limitations. Here we emphasize the importance of incorporating dispersal ecology into landscape connectivity, i.e., observing patterns of habitat selection by dispersers during different phases of the colonization of new areas to infer habitat connectivity. Dispersing animals undertake a complex sequence of movements concatenated over time and strictly dependent on species ecology. Using satellite telemetry, we investigated the movement ecology of 54 young male elk Cervus elaphus, which commonly disperse, to design a corridor network across the Northern Rocky Mountains. The winter residency period is often followed by a spring-summer movement phase, when young elk migrate with their mothers' groups to summering areas, and by a further dispersal bout performed alone to a novel summer area. After another summer residency phase, dispersers usually undertake a final autumnal movement to reach novel wintering areas. We used resource selection functions to identify winter and summer habitats selected by elk during residency phases. We then extracted movements undertaken during spring to move from winter to summer areas, and during autumn to move from summer to winter areas, and modelled them using step selection functions. We built friction surfaces, merged the different movement phases, and eventually mapped least-cost corridors. We showed an application of this tool by creating a scenario with movement predicted as if there were no roads, and by mapping highway segments impeding elk connectivity. PMID:27657496
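The final corridor-mapping step described above (a friction surface turned into least-cost corridors) can be sketched with a plain Dijkstra search over a resistance grid. This is an illustrative toy, not the authors' GIS workflow; the grid and friction values are hypothetical.

```python
import heapq

def least_cost_path(friction, start, goal):
    """Dijkstra's algorithm over a 2-D friction (resistance) grid.

    Moving into a cell costs that cell's friction value; moves are 4-connected.
    Returns (total_cost, path) where path is a list of (row, col) cells.
    """
    rows, cols = len(friction), len(friction[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + friction[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Backtrack from goal to start.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return dist[goal], path[::-1]

# Toy friction surface: the high-cost row could represent a highway.
grid = [[1, 1, 1],
        [9, 9, 1],
        [1, 1, 1]]
cost, path = least_cost_path(grid, (0, 0), (2, 0))
```

Here the cheapest route detours around the high-friction cells rather than crossing them, which is exactly how a mapped corridor avoids barriers such as roads.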

  3. Ion Trapping with Fast-Response Ion-Selective Microelectrodes Enhances Detection of Extracellular Ion Channel Gradients

    PubMed Central

    Messerli, Mark A.; Collis, Leon P.; Smith, Peter J.S.

    2009-01-01

Previously, functional mapping of channels has been achieved by measuring the passage of net charge and of specific ions with electrophysiological and intracellular fluorescence imaging techniques. However, functional mapping of ion channels using extracellular ion-selective microelectrodes (ISMs) has distinct advantages over the former methods. We have developed this method through measurement of extracellular K+ gradients caused by efflux through Ca2+-activated K+ channels expressed in Chinese hamster ovary cells. We report that electrodes constructed with short columns of a mechanically stable K+-selective liquid membrane respond quickly and measure changes in local [K+] consistent with a diffusion model. When used in close proximity to the plasma membrane (<4 μm), the ISMs pose a barrier to simple diffusion, creating an ion trap. The ion trap amplifies the local change in [K+] without dramatically changing the rise or fall time of the [K+] profile. Measurement of extracellular K+ gradients from activated rSlo channels shows that rapid events, 10–55 ms, can be characterized. This method provides a noninvasive means for functional mapping of channel location and density as well as for characterizing the properties of ion channels in the plasma membrane. PMID:19217875
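The diffusion model the measurements are compared against can be illustrated with the textbook solution for an instantaneous point source in an infinite 3-D medium. The numerical values below are illustrative, not taken from the paper.

```python
import math

def point_source_conc(n_moles, D, r, t):
    """Concentration (mol/m^3) at distance r (m) and time t (s) after an
    instantaneous release of n_moles at a point in an infinite 3-D medium:
    C(r, t) = n / (4*pi*D*t)^(3/2) * exp(-r^2 / (4*D*t))."""
    return n_moles / (4 * math.pi * D * t) ** 1.5 * math.exp(-(r ** 2) / (4 * D * t))

# Illustrative values: ~1e-18 mol released, K+ diffusion coefficient ~2e-9 m^2/s,
# concentration profile sampled 10 ms after release at 2, 4, and 8 micrometres.
D = 2e-9
profile = [point_source_conc(1e-18, D, r_um * 1e-6, 0.01) for r_um in (2, 4, 8)]
```

The steep fall-off of the profile with distance is why positioning the electrode within a few micrometres of the membrane (and trapping ions against it) matters for detectability.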

  4. Determining the Suitability of Different Digital Elevation Models and Satellite Images for Fancy Maps. An Example of Cyprus

    NASA Astrophysics Data System (ADS)

    Drachal, J.; Kawel, A. K.

    2016-06-01

The article describes the possibility of developing an overall map of a selected area on the basis of publicly available data. Such a map would take the form designed by the author, with colors that meet his expectations and content he considers appropriate. Among the available data, we considered satellite images of the terrain in real colors and, in the form of shaded relief, digital terrain models with different resolutions of the terrain mesh. Specifically, the data considered were MODIS, Landsat 8, GTOPO-30, SRTM-30, SRTM-1, SRTM-3, and ASTER. The island of Cyprus was chosen as the test area because of its importance in tourism, its relatively small area, and its clearly defined boundary. The paper shows and discusses various options for imaging the Cyprus terrain obtained synthetically from variants of MODIS, Landsat, and digital elevation models of different resolutions.

  5. A comparative assessment of GIS-based data mining models and a novel ensemble model in groundwater well potential mapping

    NASA Astrophysics Data System (ADS)

    Naghibi, Seyed Amir; Moghaddam, Davood Davoodi; Kalantar, Bahareh; Pradhan, Biswajeet; Kisi, Ozgur

    2017-05-01

In recent years, the application of ensemble models has increased tremendously in various types of natural hazard assessment, such as landslides and floods. However, the application of this kind of robust model to groundwater potential mapping is relatively new. This study applied four data mining algorithms, including AdaBoost, Bagging, the generalized additive model (GAM), and Naive Bayes (NB), to map groundwater potential. Then, a novel frequency ratio data mining ensemble model (FREM) was introduced and evaluated. For this purpose, eleven groundwater conditioning factors (GCFs) were mapped: altitude, slope aspect, slope angle, plan curvature, stream power index (SPI), river density, distance from rivers, topographic wetness index (TWI), land use, normalized difference vegetation index (NDVI), and lithology. About 281 well locations with high potential were selected. Wells were randomly partitioned into two classes for training the models (70%, or 197) and validating them (30%, or 84). The AdaBoost, Bagging, GAM, and NB algorithms were employed to produce groundwater potential maps (GPMs). The GPMs were categorized into potential classes using the natural breaks classification scheme. In the next stage, frequency ratio (FR) values were calculated for the outputs of the four aforementioned models and summed, and finally a GPM was produced using FREM. For validating the models, the area under the receiver operating characteristic (ROC) curve was calculated. The area under the ROC curve for the prediction dataset was 94.8, 93.5, 92.6, 92.0, and 84.4% for the FREM, Bagging, AdaBoost, GAM, and NB models, respectively. The results indicated that FREM had the best performance among all the models. The better performance of FREM could be related to the reduction of overfitting and possible errors. The other models (AdaBoost, Bagging, GAM, and NB) also produced acceptable performance in groundwater modelling. The GPMs produced in the current study may facilitate groundwater exploitation by delineating high and very high groundwater potential zones.
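The frequency ratio step can be sketched as follows: for each model's classified potential map, compute an FR per class (the share of wells falling in the class divided by the class's share of map area), then sum the FR-transformed maps across models. The grids and well locations below are simulated, and only two models stand in for the study's four.

```python
import numpy as np

def frequency_ratio(potential_class, well_mask):
    """FR per class: share of wells in the class / share of map area in it."""
    fr = {}
    n_wells = well_mask.sum()
    n_cells = potential_class.size
    for cls in np.unique(potential_class):
        in_cls = potential_class == cls
        pct_wells = well_mask[in_cls].sum() / n_wells
        pct_area = in_cls.sum() / n_cells
        fr[cls] = pct_wells / pct_area
    return fr

# Hypothetical classified outputs of two models plus simulated well locations.
rng = np.random.default_rng(0)
maps = [rng.integers(0, 4, size=(20, 20)) for _ in range(2)]
wells = rng.random((20, 20)) < 0.1

frem = np.zeros((20, 20))
for m in maps:
    fr = frequency_ratio(m, wells)
    frem += np.vectorize(fr.get)(m)  # FR-transformed map, summed across models
```

A class containing proportionally more wells than area gets FR > 1, so the summed map is highest where the individual models agree that well density is concentrated.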

  6. Action-Based Dynamical Modelling For The Milky Way Disk

    NASA Astrophysics Data System (ADS)

    Trick, Wilma; Rix, Hans-Walter; Bovy, Jo

    2016-09-01

We present Road Mapping, a full-likelihood dynamical modelling machinery that aims to recover the Milky Way's (MW) gravitational potential from large samples of stars in the Galactic disk. Road Mapping models the observed positions and velocities of stars with a parameterized, action-based distribution function (DF) in a parameterized axisymmetric gravitational potential (Binney & McMillan 2011, Binney 2012, Bovy & Rix 2013). In anticipation of the Gaia data release in autumn, we have fully tested Road Mapping and demonstrated its robustness against the breakdown of its assumptions. Using large suites of mock data, we investigated in isolated test cases how the modelling would be affected if the data's true potential or DF was not included in the families of potentials and DFs assumed by Road Mapping, or if we misjudged measurement errors or the spatial selection function (SF) (Trick et al., submitted to ApJ). We found that the potential can be robustly recovered (given the limitations of the assumed potential model), even for minor misjudgments in the DF or SF, or for proper motion errors or distances known to within 10%. We were also able to demonstrate that Road Mapping is still successful if the strong assumption of axisymmetry breaks down (Trick et al., in preparation). Data drawn from a high-resolution simulation (D'Onghia et al. 2013) of a MW-like galaxy with pronounced spiral arms neither follow the assumed simple DF nor come from an axisymmetric potential. We found that as long as the survey volume is large enough, Road Mapping gives good average constraints on the galaxy's potential. We are planning to apply Road Mapping to a real data set, the Tycho-2 catalogue (Hog et al. 2000), very soon, and may be able to present some preliminary results at the conference.

  7. Issues of tsunami hazard maps revealed by the 2011 Tohoku tsunami

    NASA Astrophysics Data System (ADS)

    Sugimoto, M.

    2013-12-01

After the 2011 Tohoku tsunami in Japan, tsunami scientists bear responsibility for the selection of people's tsunami evacuation places. Many adults died outside the hazard zones shown on tsunami hazard maps, while students in Kamaishi city survived by evacuating on their own judgment. The tsunami hazard maps were based on numerical models assuming a magnitude smaller than the actual magnitude 9. How can we bridge the gap between hazard maps and future disasters? We have to discuss how to use tsunami numerical models well enough to contribute to tsunami hazard maps. How should tsunami hazard maps be improved? They should be revised to include the possibility of upthrust or downthrust after earthquakes, as well as social information: the ground sank 1.14 m below sea level in Ayukawa town, Tohoku. Research by the Ministry of Land, Infrastructure, Transport and Tourism shows that only around 10% of people in Japan know about tsunami hazard maps. However, people know their evacuation places (buildings) through drills practiced once a year, even though most did not know about the hazard map itself. We need a wider spread of tsunami hazard awareness that acknowledges the contingency of science (see the disaster handbook material's URL at the bottom). The California Emergency Management Agency (CEMA) team demonstrates one good practice and solution. I followed their field trip on Catalina Island, California, in September 2011. The team members are multidisciplinary specialists: a geologist, a GIS specialist, oceanographers from USC (tsunami numerical modelers) and a private company, a local policeman, a disaster manager, a local authority, and so on. They check the field based on their own specialties, conducting on-the-spot inspections of ambiguous locations where the tsunami numerical model and real field conditions disagree; the data always become outdated. They pay attention not only to topographical conditions but also to social conditions: vulnerable people, elementary schools, and so on. Checking such field information takes a long time, but a tsunami hazard map based on a numerical model should go through this process. Tsunami scientists should not enter into an inhumane business by relying on tsunami numerical models alone; the work includes accountability to society, so scientists need scientific ethics and humanitarian attention. Should tsunami scientists alone have responsibility for human life? A multidisciplinary approach like CEMA's is essential for mitigation. I run a hazard map training course for disaster management officers from developing countries as part of a JICA training course. In this presentation, I would like to discuss how to improve tsunami hazard maps after the experience of the 2011 Tohoku tsunami.

  8. Integrating Evolutionary Game Theory into Mechanistic Genotype-Phenotype Mapping.

    PubMed

    Zhu, Xuli; Jiang, Libo; Ye, Meixia; Sun, Lidan; Gragnoli, Claudia; Wu, Rongling

    2016-05-01

Natural selection has shaped the evolution of organisms toward optimizing their structural and functional design. However, how this universal principle can enhance genotype-phenotype mapping of quantitative traits has remained unexplored. Here we show that integrating this principle with functional mapping through evolutionary game theory yields new insight into the genetic architecture of complex traits. By viewing phenotype formation as an evolutionary system, we formulate mathematical equations to model the ecological mechanisms that drive the interaction and coordination of its constituent components toward population dynamics and stability. Functional mapping provides a procedure for estimating the genetic parameters that specify the dynamic relationship of competition and cooperation and predicting how genes mediate the evolution of this relationship during trait formation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Development and matching of binocular orientation preference in mouse V1

    PubMed Central

    Bhaumik, Basabi; Shah, Nishal P.

    2014-01-01

Eye-specific thalamic inputs converge in the primary visual cortex (V1) and form the basis of binocular vision. For normal binocular perceptions, such as depth and stereopsis, binocularly matched orientation preference between the two eyes is required. A critical period for binocular matching of orientation preference in mice during normal development has been reported in the literature. Using a reaction-diffusion model, we present the development of receptive fields (RFs) and orientation selectivity in mouse V1 and investigate binocular orientation preference matching during the critical period. At the onset of the critical period, the preferred orientations of the modeled cells are mostly mismatched between the two eyes; the mismatch decreases and reaches levels reported in juvenile mice by the end of the critical period. At the end of the critical period, 39% of the cells in the binocular zone of our model cortex are orientation selective; in the literature, around 40% of cortical cells are reported as orientation selective in mouse V1. The starting and closing times of the critical period determine the orientation preference alignment between the two eyes and the orientation tuning of cortical cells. The absence of near-neighbor interaction among cortical cells during the development of thalamo-cortical wiring causes a salt-and-pepper organization in the orientation preference map in mice. It also results in a much lower percentage of orientation-selective cells in mice compared with ferrets and cats, which have organized orientation maps with pinwheels. PMID:25104927

  10. Chinese HJ-1C SAR And Its Wind Mapping Capability

    NASA Astrophysics Data System (ADS)

    Huang, Weigen; Chen, Fengfeng; Yang, Jingsong; Fu, Bin; Chen, Peng; Zhang, Chan

    2010-04-01

The Chinese Huan Jing (HJ)-1C synthetic aperture radar (SAR) satellite is planned for launch in 2010. The HJ-1C satellite will fly in a sun-synchronous polar orbit at 500-km altitude. SAR will be the only sensor on board the satellite. It operates in S band with VV polarization. Its image mode has incidence angles of 25° and 47° at the near and far sides of the swath, respectively. There are two selectable SAR modes of operation: fine-resolution beams and standard beams. The sea surface wind mapping capability of the SAR has been examined using the M4S radar imaging model developed by Romeiser. The model is based on Bragg scattering theory in a composite surface model expansion. It accounts for contributions of the full ocean wave spectrum to the radar backscatter from the ocean surface. The model reproduces absolute normalized radar cross section (NRCS) values for wide ranges of wind speeds. The model results for HJ-1C SAR have been compared with those for Envisat ASAR, showing that HJ-1C SAR is as good as Envisat ASAR at sea surface wind mapping.

  11. Design of a High Resolution Open Access Global Snow Cover Web Map Service Using Ground and Satellite Observations

    NASA Astrophysics Data System (ADS)

    Kadlec, J.; Ames, D. P.

    2014-12-01

The aim of the presented work is to create a freely accessible, dynamic, and re-usable snow cover map of the world by combining snow extent and snow depth datasets from multiple sources. The examined data sources are: remote sensing datasets (MODIS, CryoLand), weather forecasting model outputs (OpenWeatherMap, forecast.io), ground observation networks (CUAHSI HIS, GSOD, GHCN, and selected national networks), and user-contributed snow reports on social networks (cross-country and backcountry skiing trip reports). For each type of dataset, an interface and an adapter are created. Each adapter supports queries by area, time range, or a combination of area and time range. The combined dataset is published as an online snow cover mapping service. This web service lowers the learning curve required to view, access, and analyze snow depth maps and snow time-series. All data published by this service are licensed as open data, encouraging re-use of the data in customized applications in climatology, hydrology, sports, and other disciplines. The initial version of the interactive snow map is on the website snow.hydrodata.org. This website supports view by time and view by site. In view by time, the spatial distribution of snow for a selected area and time period is shown. In view by site, time-series charts of snow depth at a selected location are displayed. All snow extent and snow depth map layers and time series are accessible and discoverable through internationally approved protocols including WMS, WFS, WCS, WaterOneFlow, and WaterML, so they can also easily be added to GIS software or third-party web map applications. The central hypothesis driving this research is that integrating user-contributed and/or social-network-derived snow data with other open access data sources will result in more accurate, higher-resolution, and hence more useful snow cover maps than satellite data or government-agency-produced data alone.
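The per-source adapter design described above (a common query interface over heterogeneous snow datasets) might look roughly like the sketch below. The class and field names are invented for illustration; they are not from the project's codebase.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Protocol

@dataclass
class SnowRecord:
    lat: float
    lon: float
    time: datetime
    depth_cm: float

class SnowAdapter(Protocol):
    """Common interface each data-source adapter implements:
    query by area, by time range, or by both."""
    def query(self, bbox=None, time_range=None) -> List[SnowRecord]: ...

class InMemoryAdapter:
    """Toy adapter over a list of records, supporting area and/or time filters."""
    def __init__(self, records):
        self.records = records

    def query(self, bbox=None, time_range=None):
        out = self.records
        if bbox is not None:        # bbox = (min_lat, min_lon, max_lat, max_lon)
            out = [r for r in out
                   if bbox[0] <= r.lat <= bbox[2] and bbox[1] <= r.lon <= bbox[3]]
        if time_range is not None:  # time_range = (start, end)
            out = [r for r in out if time_range[0] <= r.time <= time_range[1]]
        return out

recs = [SnowRecord(46.5, 10.2, datetime(2014, 1, 15), 85.0),
        SnowRecord(60.1, 24.9, datetime(2014, 2, 1), 40.0)]
adapter = InMemoryAdapter(recs)
alpine = adapter.query(bbox=(45.0, 9.0, 48.0, 12.0))
```

Real adapters would translate the same `query` call into a WMS/WFS request, a CUAHSI HIS web-service call, or a social-network API call, so the mapping service can merge sources behind one interface.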

  12. Potential and limits to unravel the genetic architecture and predict the variation of Fusarium head blight resistance in European winter wheat (Triticum aestivum L.).

    PubMed

    Jiang, Y; Zhao, Y; Rodemann, B; Plieske, J; Kollers, S; Korzun, V; Ebmeyer, E; Argillier, O; Hinze, M; Ling, J; Röder, M S; Ganal, M W; Mette, M F; Reif, J C

    2015-03-01

    Genome-wide mapping approaches in diverse populations are powerful tools to unravel the genetic architecture of complex traits. The main goals of our study were to investigate the potential and limits to unravel the genetic architecture and to identify the factors determining the accuracy of prediction of the genotypic variation of Fusarium head blight (FHB) resistance in wheat (Triticum aestivum L.) based on data collected with a diverse panel of 372 European varieties. The wheat lines were phenotyped in multi-location field trials for FHB resistance and genotyped with 782 simple sequence repeat (SSR) markers, and 9k and 90k single-nucleotide polymorphism (SNP) arrays. We applied genome-wide association mapping in combination with fivefold cross-validations and observed surprisingly high accuracies of prediction for marker-assisted selection based on the detected quantitative trait loci (QTLs). Using a random sample of markers not selected for marker-trait associations revealed only a slight decrease in prediction accuracy compared with marker-based selection exploiting the QTL information. The same picture was confirmed in a simulation study, suggesting that relatedness is a main driver of the accuracy of prediction in marker-assisted selection of FHB resistance. When the accuracy of prediction of three genomic selection models was contrasted for the three marker data sets, no significant differences in accuracies among marker platforms and genomic selection models were observed. Marker density impacted the accuracy of prediction only marginally. Consequently, genomic selection of FHB resistance can be implemented most cost-efficiently based on low- to medium-density SNP arrays.
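The genomic prediction procedure can be illustrated with a minimal ridge-regression sketch (a simple stand-in for the genomic selection models compared in the study) evaluated by fivefold cross-validation. The marker data and phenotypes below are simulated, not the wheat panel.

```python
import numpy as np

def ridge_predict(X_train, y_train, X_test, lam=1.0):
    """Ridge regression (a GBLUP-like stand-in):
    beta = (X'X + lam*I)^-1 X'y, then predict X_test @ beta."""
    p = X_train.shape[1]
    beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(p),
                           X_train.T @ y_train)
    return X_test @ beta

# Simulated markers and phenotypes (illustrative only).
rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.integers(0, 3, size=(n, p)).astype(float)  # SNP genotypes coded 0/1/2
true_beta = rng.normal(0, 0.3, p)
y = X @ true_beta + rng.normal(0, 1.0, n)

# Fivefold cross-validation; accuracy = r(predicted, observed) per fold.
idx = rng.permutation(n)
folds = np.array_split(idx, 5)
accs = []
for k in range(5):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(5) if j != k])
    y_hat = ridge_predict(X[train], y[train], X[test])
    accs.append(np.corrcoef(y_hat, y[test])[0, 1])
```

Because relatives share marker genotypes, relatedness between training and validation sets inflates such accuracies, which is the effect the study highlights.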

  13. Object detection system based on multimodel saliency maps

    NASA Astrophysics Data System (ADS)

    Guo, Ya'nan; Luo, Chongfan; Ma, Yide

    2017-03-01

Detection of visually salient image regions is extensively applied in computer vision and computer graphics, for tasks such as object detection, adaptive compression, and object recognition, but any single model has limitations across different images. In our work, we therefore establish an object detection method based on multimodel saliency maps, which intelligently absorbs the merits of various individual saliency detection models to achieve promising results. The method can be roughly divided into three steps. First, we propose a decision-making system that evaluates saliency maps obtained by seven competitive methods and selects only the three most valuable ones. Second, we introduce a heterogeneous PCNN algorithm to obtain three prime foregrounds, and a self-designed nonlinear fusion method then merges these saliency maps. Finally, an adaptive improved and simplified PCNN (SPCNN) model is used to detect the object. Our proposed method constitutes an object detection system for different occasions that requires no training and is simple and highly efficient. The proposed saliency fusion technique performs well over a broad range of images and broadens applicability by fusing different individual saliency models. Moreover, the adaptive improved SPCNN model stems from Eckhorn's neuron model, which is well suited to image segmentation because of its biological background, and all of its parameters adapt to image information. We extensively appraise our algorithm on a classical salient object detection database, and the experimental results demonstrate that the aggregation of saliency maps outperforms the best individual saliency model in all cases, yielding the highest precision of 89.90%, better recall of 98.20%, the greatest F-measure of 91.20%, and the lowest mean absolute error of 0.057; the proposed saliency evaluation measure EHA reaches 215.287. We believe our method can be applied to diverse applications in the future.

  14. Improving snow density estimation for mapping SWE with Lidar snow depth: assessment of uncertainty in modeled density and field sampling strategies in NASA SnowEx

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Smyth, E.; Small, E. E.

    2017-12-01

    The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.
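The uncertainty-mapping idea described above can be sketched as the per-pixel spread among an ensemble of snow-density models, with high-spread pixels flagged as candidates for targeted snow-pit sampling. The density fields below are synthetic stand-ins, not SUMMA output.

```python
import numpy as np

# Hypothetical snow-density fields (kg/m^3) from three models on one grid.
rng = np.random.default_rng(2)
base = 250 + 50 * rng.random((10, 10))
models = np.stack([base + rng.normal(0, s, (10, 10)) for s in (5, 15, 30)])

# Per-pixel ensemble spread as a simple uncertainty map; the pixels with the
# largest spread are where additional field sampling would be most informative.
uncertainty = models.std(axis=0)
targets = np.argwhere(uncertainty > np.percentile(uncertainty, 90))
```

In the study's framework the "ensemble" would be the suite of physically-based model configurations, but the mechanics of mapping spread and ranking sampling locations are the same.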

  15. Salient region detection by fusing bottom-up and top-down features extracted from a single image.

    PubMed

    Tian, Huawei; Fang, Yuming; Zhao, Yao; Lin, Weisi; Ni, Rongrong; Zhu, Zhenfeng

    2014-10-01

Recently, some global contrast-based salient region detection models have been proposed based on only the low-level feature of color. It is necessary to consider both color and orientation features to overcome their limitations, and thus improve the performance of salient region detection for images with low contrast in color and high contrast in orientation. In addition, the existing fusion methods for different feature maps, such as simple averaging and selective methods, are not sufficiently effective. To overcome these limitations of existing salient region detection models, we propose a novel salient region model based on bottom-up and top-down mechanisms: color contrast and orientation contrast are adopted to calculate the bottom-up feature maps, while the top-down cue of depth-from-focus from the same single image is used to guide the generation of the final salient regions, since depth-from-focus reflects the photographer's preference and knowledge of the task. A more general and effective fusion method is designed to combine the bottom-up feature maps. According to the degree-of-scattering and eccentricities of the feature maps, the proposed fusion method can assign adaptive weights to different feature maps to reflect the confidence level of each feature map. The depth-from-focus of the image, as a significant top-down feature for visual attention, is used to guide the salient regions during the fusion process; with its aid, the proposed fusion method can filter out the background and highlight salient regions in the image. Experimental results show that the proposed model outperforms the state-of-the-art models on three publicly available data sets.
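The adaptive-weight fusion idea can be sketched by weighting each feature map inversely to the spatial scatter of its saliency mass, so that compact (high-confidence) maps dominate diffuse ones. This is an illustrative stand-in for the paper's degree-of-scattering and eccentricity measures, not their exact formulation.

```python
import numpy as np

def scatter(feature_map):
    """Spatial scatter: mean distance of saliency mass from its centroid.
    Compact maps (low scatter) are treated as more trustworthy."""
    h, w = feature_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = feature_map.sum()
    cy = (ys * feature_map).sum() / total
    cx = (xs * feature_map).sum() / total
    d = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    return (d * feature_map).sum() / total

def fuse(maps):
    """Weighted average with weights inversely proportional to scatter."""
    weights = np.array([1.0 / (scatter(m) + 1e-9) for m in maps])
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, maps))

# A compact feature map should dominate a diffuse one in the fused result.
compact = np.zeros((32, 32)); compact[14:18, 14:18] = 1.0
diffuse = np.full((32, 32), 0.1)
fused = fuse([compact, diffuse])
```

Here the compact map receives most of the weight, so the fused map stays peaked at the compact region instead of being washed out by the diffuse map.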

  16. 2D and 3D visualization methods of endoscopic panoramic bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometrical distortions has often not been discussed. However, visualization of the distortion level is highly desirable for an objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of the transformation parameters that were applied to the endoscopic images during the registration process of the mosaicking algorithm. For a global first-view impression, the quality maps are laid over the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration then helps surgeons easily identify geometrically distorted structures in the panoramic image, which allows more objective medical interpretation of tumor tissue in shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method to map panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. Then the panoramic image is mapped by the Hammer-Aitoff equal-area projection onto the 3-D surface using texture mapping. Finally the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding space coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation, documentation, as well as surgical planning.

  17. Robustness of risk maps and survey networks to knowledge gaps about a new invasive pest.

    PubMed

    Yemshanov, Denys; Koch, Frank H; Ben-Haim, Yakov; Smith, William D

    2010-02-01

    In pest risk assessment it is frequently necessary to make management decisions regarding emerging threats under severe uncertainty. Although risk maps provide useful decision support for invasive alien species, they rarely address knowledge gaps associated with the underlying risk model or how they may change the risk estimates. Failure to recognize uncertainty leads to risk-ignorant decisions and miscalculation of expected impacts as well as the costs required to minimize these impacts. Here we use the information gap concept to evaluate the robustness of risk maps to uncertainties in key assumptions about an invading organism. We generate risk maps with a spatial model of invasion that simulates potential entries of an invasive pest via international marine shipments, their spread through a landscape, and establishment on a susceptible host. In particular, we focus on the question of how much uncertainty in risk model assumptions can be tolerated before the risk map loses its value. We outline this approach with an example of a forest pest recently detected in North America, Sirex noctilio Fabricius. The results provide a spatial representation of the robustness of predictions of S. noctilio invasion risk to uncertainty and show major geographic hotspots where the consideration of uncertainty in model parameters may change management decisions about a new invasive pest. We then illustrate how the dependency between the extent of uncertainties and the degree of robustness of a risk map can be used to select a surveillance network design that is most robust to knowledge gaps about the pest.
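The information-gap concept used above (find the largest uncertainty horizon under which a decision still performs acceptably) can be sketched as follows. The loss function, threshold, and numbers are toy assumptions, not the S. noctilio model.

```python
def robustness(risk_estimate, loss, threshold, alphas):
    """Info-gap style robustness: the largest uncertainty horizon alpha such
    that the worst-case loss over all parameter deviations up to alpha stays
    within the acceptable threshold. `loss(r, a)` returns the worst-case loss
    of the nominal estimate r at horizon a (assumed nondecreasing in a)."""
    best = 0.0
    for a in sorted(alphas):
        if loss(risk_estimate, a) <= threshold:
            best = a
        else:
            break
    return best

# Toy worst-case loss that grows linearly with the uncertainty horizon.
worst_loss = lambda r, a: r * (1 + a)
alpha_hat = robustness(risk_estimate=10.0, loss=worst_loss,
                       threshold=15.0, alphas=[0.1 * i for i in range(11)])
```

A risk map (or survey design) with a larger `alpha_hat` tolerates bigger knowledge gaps before its management recommendation changes, which is the robustness criterion used to compare surveillance network designs.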

  18. Global multiresolution models of surface wave propagation: comparing equivalently regularized Born and ray theoretical solutions

    NASA Astrophysics Data System (ADS)

    Boschi, Lapo

    2006-10-01

I invert a large set of teleseismic phase-anomaly observations to derive tomographic maps of fundamental-mode surface wave phase velocity, first via ray theory, then accounting for finite-frequency effects through scattering theory, in the far-field approximation and neglecting mode coupling. I make use of a multiple-resolution pixel parametrization which, under the assumption of sufficient data coverage, should be adequate to represent strongly oscillatory Fréchet kernels. The parametrization is finer over North America, a region particularly well covered by the data. For each surface-wave mode for which phase-anomaly observations are available, I derive a wide spectrum of plausible, differently damped solutions; I then conduct a trade-off analysis, and select as the optimal solution the model associated with the point of maximum curvature on the trade-off curve. I repeat this exercise in both theoretical frameworks, finding that the selected scattering and ray theoretical phase-velocity maps coincide in pattern, and differ only slightly in amplitude.
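The maximum-curvature criterion on the trade-off (L-) curve can be computed numerically with finite differences. The damping values and curve below are synthetic, purely to illustrate the selection step.

```python
import numpy as np

def max_curvature_index(x, y):
    """Index of maximum curvature along a discrete trade-off curve,
    kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2),
    with derivatives taken by finite differences along the curve."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    return int(np.argmax(kappa))

# Synthetic L-curve: data misfit grows with damping, model norm shrinks.
damping = np.logspace(-3, 1, 50)
misfit = np.log10(1 + damping)
norm = np.log10(1 + 1 / damping)
best = max_curvature_index(misfit, norm)
optimal_damping = damping[best]
```

The corner of the curve marks the damping beyond which extra regularization buys little misfit reduction, which is why the maximum-curvature point is taken as the optimal solution.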

  19. Input-output mapping reconstruction of spike trains at dorsal horn evoked by manual acupuncture

    NASA Astrophysics Data System (ADS)

    Wei, Xile; Shi, Dingtian; Yu, Haitao; Deng, Bin; Lu, Meili; Han, Chunxiao; Wang, Jiang

    2016-12-01

In this study, a generalized linear model (GLM) is used to reconstruct the mapping from acupuncture stimulation to spike trains, driven by action potential data. The electrical signals are recorded in the spinal dorsal horn after manual acupuncture (MA) manipulations at different frequencies applied at the “Zusanli” point of experimental rats. A maximum-likelihood method is adopted to estimate the parameters of the GLM and the quantified value of the assumed model input. By validating the accuracy of firings generated from the established GLM, it is found that the input-output mapping of spike trains evoked by acupuncture can be successfully reconstructed for different frequencies. Furthermore, comparing the performance of several GLMs based on distinct inputs suggests that an input in the form of a half-sine with noise can well describe the generator potential induced by the mechanical action of acupuncture. In particular, the comparison of reproducing the experimental spikes for five selected inputs is in accordance with the phenomenon found in Hodgkin-Huxley (H-H) model simulation, which indicates that the mapping from the half-sine-with-noise input to the experimental spikes reflects the real encoding scheme to some extent. These studies provide new insight into the coding processes and information transfer of acupuncture.
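The core of such an approach, a point-process GLM fitted by maximum likelihood, can be sketched with a Poisson spiking model driven by a half-sine-with-noise input. Everything here (the input shape, the two-parameter log-linear rate, the Newton fit) is a simplified stand-in for the models in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
t = np.arange(T)
# assumed generator potential: rectified half-sine plus noise
x = np.clip(np.sin(2 * np.pi * t / 200), 0, None) + 0.1 * rng.standard_normal(T)

# ground-truth Poisson GLM: spike count per bin ~ Poisson(exp(b0 + b1 * x))
b_true = np.array([-2.0, 1.5])
spikes = rng.poisson(np.exp(b_true[0] + b_true[1] * x))

# maximum-likelihood fit by Newton's method on the Poisson log-likelihood
X = np.column_stack([np.ones(T), x])
b = np.zeros(2)
for _ in range(20):
    mu = np.exp(X @ b)                # conditional intensity under current fit
    grad = X.T @ (spikes - mu)        # score (gradient of the log-likelihood)
    H = X.T @ (X * mu[:, None])       # Fisher information
    b += np.linalg.solve(H, grad)
```

Validation then amounts to simulating spike trains from the fitted `b` and comparing them with the recorded ones.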

  20. A note on the efficiencies of sampling strategies in two-stage Bayesian regional fine mapping of a quantitative trait.

    PubMed

    Chen, Zhijian; Craiu, Radu V; Bull, Shelley B

    2014-11-01

    In focused studies designed to follow up associations detected in a genome-wide association study (GWAS), investigators can proceed to fine-map a genomic region by targeted sequencing or dense genotyping of all variants in the region, aiming to identify a functional sequence variant. For the analysis of a quantitative trait, we consider a Bayesian approach to fine-mapping study design that incorporates stratification according to a promising GWAS tag SNP in the same region. Improved cost-efficiency can be achieved when the fine-mapping phase incorporates a two-stage design, with identification of a smaller set of more promising variants in a subsample taken in stage 1, followed by their evaluation in an independent stage 2 subsample. To avoid the potential negative impact of genetic model misspecification on inference we incorporate genetic model selection based on posterior probabilities for each competing model. Our simulation study shows that, compared to simple random sampling that ignores genetic information from GWAS, tag-SNP-based stratified sample allocation methods reduce the number of variants continuing to stage 2 and are more likely to promote the functional sequence variant into confirmation studies. © 2014 WILEY PERIODICALS, INC.
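The genetic-model-selection step can be illustrated with BIC-approximated posterior model probabilities over additive, dominant, and recessive codings of a variant. The simulated data, effect size, and equal-prior assumption are all hypothetical:

```python
import numpy as np

def bic(y, X):
    """BIC of an ordinary least-squares fit of a quantitative trait y on design X."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(1)
n = 500
g = rng.integers(0, 3, n)             # genotype coded as 0/1/2 copies of the allele
y = 0.5 * g + rng.standard_normal(n)  # trait generated under an additive model

# competing genetic models: additive, dominant, recessive codings
codings = {"additive": g.astype(float), "dominant": (g > 0).astype(float),
           "recessive": (g == 2).astype(float)}
bics = {m: bic(y, np.column_stack([np.ones(n), c])) for m, c in codings.items()}

# posterior model probabilities via the BIC approximation (equal priors)
w = {m: np.exp(-0.5 * (b - min(bics.values()))) for m, b in bics.items()}
post = {m: v / sum(w.values()) for m, v in w.items()}
```

Averaging inference over `post`, rather than committing to one coding, is what guards against genetic model misspecification.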

  1. Impacts of Climate Change on Native Landcover: Seeking Future Climatic Refuges.

    PubMed

    Zanin, Marina; Mangabeira Albernaz, Ana Luisa

    2016-01-01

Climate change is a driver of diverse impacts on global biodiversity. We investigated its impacts on native landcover distribution in South America, seeking to predict its effect as a new force driving habitat loss and population isolation. Moreover, we mapped potential future climatic refuges, which are likely to be key areas for biodiversity conservation under climate change scenarios. Climatically similar native landcovers were aggregated using a decision tree, generating a reclassified landcover map, from which 25% of the map's coverage was randomly selected to train distribution models. We selected the best geographical distribution models among twelve techniques, validating the predicted distribution for the current climate against the landcover map, and used the best technique to predict the future distribution. All landcover categories showed changes in area and displacement of the latitudinal/longitudinal centroid. Closed vegetation was the only landcover type predicted to expand its distributional range. The range contractions predicted for the other categories were intense, even suggesting extirpation of the sparse vegetation category. The landcover refuges under future climate change represent a small proportion of the South American area, and they are disproportionately represented and unevenly distributed, predominantly occupying five of 26 South American countries. The predicted changes, regardless of their direction and intensity, can put biodiversity at risk because they are expected to occur in the near future in terms of the temporal scales of ecological and evolutionary processes. Recognition of the threat of climate change allows more efficient conservation actions.

  2. Analysis of training sample selection strategies for regression-based quantitative landslide susceptibility mapping methods

    NASA Astrophysics Data System (ADS)

    Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem

    2017-07-01

All quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely, a landslide inventory and factors that influence landslide occurrence (landslide influencing factors, LIF). Depending on the type of landslides and the nature of the triggers and LIF, the accuracy of QLSM methods differs. Moreover, how to balance the number of 0s (non-occurrence) and 1s (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1s and 0s to include in QLSM models, play a critical role in the accuracy of the QLSM. Although the performance of various QLSM methods has been investigated extensively in the literature, the challenge of training set construction has not been adequately investigated for QLSM methods. In order to tackle this challenge, in this study three different training set selection strategies, along with the original data set, are used to test the performance of three different regression methods, namely Logistic Regression (LR), Bayesian Logistic Regression (BLR), and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which takes into account a weighted selection of landslide occurrences in the sample set. The second method, non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances to include the impact of clustering. Selective nearby sampling (SNS) is the third method, which concentrates on the group of 1s and their surrounding neighborhood. A randomly selected group of landslide sites and their neighborhood are considered in the analyses with parameters similar to NNS. It is found that the LR-PRS, FLR-PRS, and BLR-whole-data set-ups, in that order, yield the best fits among the alternatives. The results indicate that in QLSM based on regression models, avoidance of spatial correlation in the data set is critical for the model's performance.
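The balancing problem the first strategy addresses can be sketched as follows; this is a hypothetical stand-in for PRS that simply fixes the share of 1s (landslide cells) in the training set, with the grid size and occurrence rate invented for the example:

```python
import numpy as np

def proportional_random_sample(labels, n_samples, w1=0.5, rng=None):
    """Draw a training set in which landslide cells (label 1) make up a chosen
    fraction w1, instead of their (usually tiny) share of the full inventory."""
    rng = rng or np.random.default_rng()
    ones = np.flatnonzero(labels == 1)
    zeros = np.flatnonzero(labels == 0)
    n1 = min(int(round(w1 * n_samples)), ones.size)
    idx = np.concatenate([rng.choice(ones, n1, replace=False),
                          rng.choice(zeros, n_samples - n1, replace=False)])
    rng.shuffle(idx)
    return idx

labels = np.zeros(10_000, dtype=int)
labels[:300] = 1     # 3% landslide occurrence in the inventory grid
idx = proportional_random_sample(labels, 400, rng=np.random.default_rng(2))
```

The NNS and SNS variants would additionally include grid neighbours of the drawn sites, which is what introduces (or controls for) spatial clustering.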

  3. Toward a national fuels mapping strategy: Lessons from selected mapping programs

    USGS Publications Warehouse

    Loveland, Thomas R.

    2001-01-01

The establishment of a robust national fuels mapping program must be based on pertinent lessons from relevant national mapping programs. Many large-area mapping programs are under way in numerous Federal agencies. Each of these programs follows unique strategies to achieve mapping goals and objectives. Implementation approaches range from highly centralized programs that use tightly integrated standards and dedicated staff, to dispersed programs that permit considerable flexibility. One model facilitates national consistency, while the other allows accommodation of locally relevant conditions and issues. An examination of the programmatic strategies of four national vegetation and land cover mapping initiatives can identify the unique approaches, accomplishments, and lessons of each that should be considered in the design of a national fuel mapping program. The first three programs are the U.S. Geological Survey Gap Analysis Program, the U.S. Geological Survey National Land Cover Characterization Program, and the U.S. Fish and Wildlife Service National Wetlands Inventory. A fourth program, the interagency Multiresolution Land Characterization Program, offers insights in the use of partnerships to accomplish mapping goals. Collectively, the programs provide lessons, guiding principles, and other basic concepts that can be used to design a successful national fuels mapping initiative.

  4. Capturing pair-wise epistatic effects associated with three agronomic traits in barley.

    PubMed

    Xu, Yi; Wu, Yajun; Wu, Jixiang

    2018-04-01

Genetic association mapping has been widely applied to identify genetic markers favorably associated with a trait of interest and to provide information for marker-assisted selection. Many association mapping studies focus only on main effects because of the intolerable computing intensity of exhaustive interaction scans. This study aims to select several sets of DNA markers with potential epistasis to maximize the explained genetic variation of some key agronomic traits in barley. To do so, we integrated an MDR (multifactor dimensionality reduction) method with a forward variable selection approach. This integrated approach was used to determine single nucleotide polymorphism (SNP) pairs with epistatic effects associated with three agronomic traits: heading date, plant height, and grain yield in barley from the barley Coordinated Agricultural Project. Our results showed that four, seven, and five SNP pairs accounted for 51.06%, 45.66%, and 40.42% of the variation in heading date, plant height, and grain yield, respectively, when epistasis was considered, while the corresponding contributions to these three traits were 45.32%, 31.39%, and 31.31% without epistasis. The results suggest that the epistasis model was more effective than the non-epistasis model in this study and may be preferred in other applications.
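The integrated search can be sketched as forward selection over SNP pairs scored by explained trait variance, with an explicit interaction term standing in for the MDR step. The simulated genotypes, effect sizes, and panel size are assumptions for the sketch:

```python
import numpy as np
from itertools import combinations

def r2(y, X):
    """Fraction of trait variance explained by an OLS fit on design X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - (y - X @ beta).var() / y.var()

rng = np.random.default_rng(3)
n, p = 300, 6
snps = rng.integers(0, 3, (n, p)).astype(float)
# trait with one main effect (SNP 0) and one purely epistatic pair (SNPs 1 and 2)
y = 0.8 * snps[:, 0] + 0.9 * snps[:, 1] * snps[:, 2] + rng.standard_normal(n)

# forward selection over SNP pairs, scoring each candidate pair together with
# the markers already chosen
chosen, design = [], [np.ones(n)]
for _ in range(2):
    best = max((pair for pair in combinations(range(p), 2) if pair not in chosen),
               key=lambda pr: r2(y, np.column_stack(
                   design + [snps[:, pr[0]], snps[:, pr[1]],
                             snps[:, pr[0]] * snps[:, pr[1]]])))
    chosen.append(best)
    design += [snps[:, best[0]], snps[:, best[1]],
               snps[:, best[0]] * snps[:, best[1]]]
```

Because the pair score includes the product term, the truly epistatic pair (1, 2) outranks pairs that only capture main effects.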

  5. Toward accelerating landslide mapping with interactive machine learning techniques

    NASA Astrophysics Data System (ADS)

    Stumpf, André; Lachiche, Nicolas; Malet, Jean-Philippe; Kerle, Norman; Puissant, Anne

    2013-04-01

Despite important advances in the development of more automated methods for landslide mapping from optical remote sensing images, the elaboration of inventory maps after major triggering events still remains a tedious task. Image classification with expert-defined rules typically still requires significant manual labour for the elaboration and adaptation of rule sets for each particular case. Machine learning algorithms, by contrast, have the ability to learn and identify complex image patterns from labelled examples but may require relatively large amounts of training data. In order to reduce the amount of required training data, active learning has evolved as a key concept to guide the sampling for applications such as document classification, genetics, and remote sensing. The general underlying idea of most active learning approaches is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and/or the data structure to iteratively select the most valuable samples that the user should label and add to the training set. With relatively few queries and labelled samples, an active learning strategy should ideally yield at least the same accuracy as an equivalent classifier trained with many randomly selected samples. Our study was dedicated to the development of an active learning approach for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. The developed approach is a region-based query heuristic that guides the user's attention towards a few compact spatial batches rather than distributed points, resulting in time savings of 50% and more compared to standard active learning techniques. The approach was tested with multi-temporal and multi-sensor satellite images capturing recent large-scale triggering events in Brazil and China and demonstrated balanced user's and producer's accuracies between 74% and 80%.
The assessment also included an experimental evaluation of the uncertainties of manual mappings from multiple experts and demonstrated strong relationships between the uncertainty of the experts and the machine learning model.
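The basic active-learning loop described above, uncertainty sampling from a small initial training set, can be sketched on synthetic two-class data. The logistic classifier and Gaussian "pixels" are stand-ins; the paper's region-based batch heuristic is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_logistic(X, y, iters=500, lr=0.1):
    """Plain gradient-descent logistic regression (stand-in for the classifier)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

# two-class toy "pixels": stable terrain vs landslide, overlapping Gaussians
X = np.vstack([rng.normal(0.0, 1, (500, 2)), rng.normal(2.5, 1, (500, 2))])
X = np.column_stack([np.ones(1000), X])
y = np.r_[np.zeros(500), np.ones(500)]

labeled = list(rng.choice(1000, 10, replace=False))   # small initial training set
for _ in range(30):                                    # active-learning loop
    w = fit_logistic(X[labeled], y[labeled])
    p = 1 / (1 + np.exp(-X @ w))
    unlabeled = np.setdiff1d(np.arange(1000), labeled)
    # uncertainty sampling: query the point closest to the decision boundary
    labeled.append(unlabeled[np.argmin(np.abs(p[unlabeled] - 0.5))])

w = fit_logistic(X[labeled], y[labeled])
acc = np.mean((1 / (1 + np.exp(-X @ w)) > 0.5) == y)
```

A region-based variant would query compact spatial batches of such uncertain points instead of one isolated point per iteration, which is where the reported time savings come from.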

  6. Curriculum Mapping with Academic Analytics in Medical and Healthcare Education.

    PubMed

    Komenda, Martin; Víta, Martin; Vaitsis, Christos; Schwarz, Daniel; Pokorná, Andrea; Zary, Nabil; Dušek, Ladislav

    2015-01-01

No universal solution, based on an approved pedagogical approach, exists to parametrically describe, effectively manage, and clearly visualize a higher education institution's curriculum, including tools for unveiling relationships inside curricular datasets. We aim to solve the issue of medical curriculum mapping to improve understanding of the complex structure and content of medical education programs. Our effort is based on the long-term development and implementation of an original web-based platform, which supports an outcomes-based approach to medical and healthcare education and is suitable for repeated updates and adaptation to curriculum innovations. We adopted data exploration and visualization approaches in the context of medical curriculum innovations in higher education institutions. We have developed a robust platform, covering detailed formal metadata specifications down to the level of learning units, interconnections, and learning outcomes, in accordance with Bloom's taxonomy and with direct links to a particular biomedical nomenclature. Furthermore, we used selected modeling techniques and data mining methods to generate academic analytics reports from medical curriculum mapping datasets. We present a solution that allows users to effectively optimize a curriculum structure that is described with appropriate metadata, such as course attributes, learning units and outcomes, a standardized vocabulary nomenclature, and a tree structure of essential terms. We present a case study implementation that includes effective support for the curriculum reengineering efforts of academics through a comprehensive overview of the General Medicine study program. 
Moreover, we introduce deep content analysis of a dataset that was captured with the use of the curriculum mapping platform; this may assist in detecting any potentially problematic areas, and hence it may help to construct a comprehensive overview for the subsequent global in-depth medical curriculum inspection. We have proposed, developed, and implemented an original framework for medical and healthcare curriculum innovations and harmonization, including: planning model, mapping model, and selected academic analytics extracted with the use of data mining.

  7. Curriculum Mapping with Academic Analytics in Medical and Healthcare Education

    PubMed Central

    Komenda, Martin; Víta, Martin; Vaitsis, Christos; Schwarz, Daniel; Pokorná, Andrea; Zary, Nabil; Dušek, Ladislav

    2015-01-01

Background No universal solution, based on an approved pedagogical approach, exists to parametrically describe, effectively manage, and clearly visualize a higher education institution’s curriculum, including tools for unveiling relationships inside curricular datasets. Objective We aim to solve the issue of medical curriculum mapping to improve understanding of the complex structure and content of medical education programs. Our effort is based on the long-term development and implementation of an original web-based platform, which supports an outcomes-based approach to medical and healthcare education and is suitable for repeated updates and adaptation to curriculum innovations. Methods We adopted data exploration and visualization approaches in the context of medical curriculum innovations in higher education institutions. We have developed a robust platform, covering detailed formal metadata specifications down to the level of learning units, interconnections, and learning outcomes, in accordance with Bloom’s taxonomy and with direct links to a particular biomedical nomenclature. Furthermore, we used selected modeling techniques and data mining methods to generate academic analytics reports from medical curriculum mapping datasets. Results We present a solution that allows users to effectively optimize a curriculum structure that is described with appropriate metadata, such as course attributes, learning units and outcomes, a standardized vocabulary nomenclature, and a tree structure of essential terms. We present a case study implementation that includes effective support for the curriculum reengineering efforts of academics through a comprehensive overview of the General Medicine study program. 
Moreover, we introduce deep content analysis of a dataset that was captured with the use of the curriculum mapping platform; this may assist in detecting any potentially problematic areas, and hence it may help to construct a comprehensive overview for the subsequent global in-depth medical curriculum inspection. Conclusions We have proposed, developed, and implemented an original framework for medical and healthcare curriculum innovations and harmonization, including: planning model, mapping model, and selected academic analytics extracted with the use of data mining. PMID:26624281

  8. A Unifying Mechanistic Model of Selective Attention in Spiking Neurons

    PubMed Central

    Bobier, Bruce; Stewart, Terrence C.; Eliasmith, Chris

    2014-01-01

Visuospatial attention produces myriad effects on the activity and selectivity of cortical neurons. Spiking neuron models capable of reproducing a wide variety of these effects remain elusive. We present a model called the Attentional Routing Circuit (ARC) that provides a mechanistic description of selective attentional processing in cortex. The model is described mathematically and implemented at the level of individual spiking neurons, with the computations for performing selective attentional processing being mapped to specific neuron types and laminar circuitry. The model is used to simulate three studies of attention in macaque, and is shown to quantitatively match several observed forms of attentional modulation. Specifically, ARC demonstrates that with shifts of spatial attention, neurons may exhibit shifting and shrinking of receptive fields; increases in responses without changes in selectivity for non-spatial features (i.e., response gain); and that the effect on contrast-response functions is better explained as a response-gain effect than as contrast gain. Unlike past models, ARC embodies a single mechanism that unifies the above forms of attentional modulation, is consistent with a wide array of available data, and makes several specific and quantifiable predictions. PMID:24921249

  9. Running key mapping in a quantum stream cipher by the Yuen 2000 protocol

    NASA Astrophysics Data System (ADS)

    Shimizu, Tetsuya; Hirota, Osamu; Nagasako, Yuki

    2008-03-01

A quantum stream cipher based on the Yuen 2000 protocol (the so-called Y00 protocol or αη scheme), consisting of a linear feedback shift register with a short key, is very attractive for implementing secure 40 Gbit/s optical data transmission, which is expected in next-generation networks. However, a basic model of the Y00 protocol with a very short key needs a careful design against fast correlation attacks, as pointed out by Donnet. This Brief Report clarifies the effectiveness of irregular mapping between the running key and the physical signals in the driver for selection of the M-ary basis in the transmitter, and gives a design method. Consequently, a quantum stream cipher by the Y00 protocol with our mapping has immunity against the fast correlation attacks proposed against a basic model of the Y00 protocol even if the key is very short.

  10. Estimating and mapping ecological processes influencing microbial community assembly

    DOE PAGES

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; ...

    2015-05-01

Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth.
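The per-comparison classification underlying such process estimates is commonly a two-step null-model rule: a phylogenetic turnover metric (βNTI) first detects selection, and a taxonomic one (e.g. Raup-Crick based on Bray-Curtis, RCbray) splits the remainder into dispersal categories or drift. A minimal sketch, assuming the conventional ±2 and ±0.95 thresholds:

```python
def assembly_process(bnti, rc):
    """Classify one pairwise community comparison into its dominant ecological
    assembly process from null-model deviations:
    bnti -- beta nearest-taxon index (phylogenetic turnover deviation)
    rc   -- Raup-Crick deviation based on Bray-Curtis (taxonomic turnover)"""
    if bnti > 2:
        return "variable selection"        # more turnover than expected by chance
    if bnti < -2:
        return "homogeneous selection"     # less turnover than expected by chance
    if rc > 0.95:
        return "dispersal limitation"      # acting together with drift
    if rc < -0.95:
        return "homogenizing dispersal"
    return "drift"                         # no single dominant process

calls = [assembly_process(b, r) for b, r in
         [(3.0, 0.0), (-2.5, 0.0), (0.5, 0.99), (0.5, -0.99), (0.5, 0.0)]]
```

Tallying these calls over all pairwise comparisons for a community gives the per-process influence fractions that are then mapped in space.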

  11. Cadastral data model established and perfected with 4S technology

    NASA Astrophysics Data System (ADS)

    He, Beijing; He, Jiang; He, Jianpeng

    1998-08-01

Considering China's essential social system and the actual formation of cadastral information in urban and rural areas, and building on 4S technology and the theory and method of the canton-level GPS geodetic data bench developed by the authors, we thoroughly investigate several technical problems involved in establishing and perfecting multi-level microcosmic cadastral data models (called "the model" in the following). The problems addressed include: cadastral, feature, and topographic information and the forms and methods for expressing it; classifying and grading the model; selection of the coordinate system; the data basis for the model; methods for collecting and digitizing information; the database's structural model; the mathematical model and the technology for establishing models of three or more dimensions; and dynamic monitoring, development, and application of the model. Finally, the domestic and overseas application prospects are discussed, including the potential to enter markets in combination with "data bench" technology or the all-digital analytical surveying and mapping of RS image maps.

  12. Seismic hazard in the eastern United States

    USGS Publications Warehouse

    Mueller, Charles; Boyd, Oliver; Petersen, Mark D.; Moschetti, Morgan P.; Rezaeian, Sanaz; Shumway, Allison

    2015-01-01

    The U.S. Geological Survey seismic hazard maps for the central and eastern United States were updated in 2014. We analyze results and changes for the eastern part of the region. Ratio maps are presented, along with tables of ground motions and deaggregations for selected cities. The Charleston fault model was revised, and a new fault source for Charlevoix was added. Background seismicity sources utilized an updated catalog, revised completeness and recurrence models, and a new adaptive smoothing procedure. Maximum-magnitude models and ground motion models were also updated. Broad, regional hazard reductions of 5%–20% are mostly attributed to new ground motion models with stronger near-source attenuation. The revised Charleston fault geometry redistributes local hazard, and the new Charlevoix source increases hazard in northern New England. Strong increases in mid- to high-frequency hazard at some locations—for example, southern New Hampshire, central Virginia, and eastern Tennessee—are attributed to updated catalogs and/or smoothing.

  13. Learning to Map the Earth and Planets using a Google Earth - based Multi-student Game

    NASA Astrophysics Data System (ADS)

    De Paor, D. G.; Wild, S. C.; Dordevic, M.

    2011-12-01

We report on progress in developing an interactive geological and geophysical mapping game employing the Google Earth, Google Moon, and Google Mars virtual globes. Working in groups of four, students represent themselves on the Google Earth surface by selecting an avatar. One member of the group drives to each field stop in a model vehicle using game-like controls. When they arrive at a field stop and get out of their field vehicle, students control their own avatars' movements independently and can communicate with one another by text message. They are geo-fenced and receive automatic messages if they wander off target. Individual movements are logged and stored in a MySQL database for later analysis. Students collaborate on mapping decisions and submit a report to their instructor through a JavaScript interface to the Google Earth API. Unlike real mapping, students are not restricted by geographic access and can engage in comparative mapping on different planets. Using newly developed techniques, they can also explore and map the sub-surface down to the core-mantle boundary. Virtual specimens created with a 3D scanner, Gigapan images of outcrops, and COLLADA models of mantle structures such as subducted lithospheric slabs all contribute to an engaging learning experience.

  14. The performance evaluation of a new neural network based traffic management scheme for a satellite communication network

    NASA Technical Reports Server (NTRS)

    Ansari, Nirwan; Liu, Dequan

    1991-01-01

A neural-network-based traffic management scheme for a satellite communication network is described. The scheme consists of two levels of management. The front end of the scheme is a derivation of Kohonen's self-organization model that configures maps for the satellite communication network dynamically. The model consists of three stages. The first stage is a pattern recognition task, in which the exemplar map that best meets the current network requirements is selected. The second stage is the analysis of the discrepancy between the chosen exemplar map and the state of the network, and the adaptive modification of the chosen exemplar map to conform closely to the network requirement (input data pattern) by means of Kohonen's self-organization. On the basis of certain performance criteria, the third stage decides whether a new map is generated to replace the originally chosen map. A state-dependent routing algorithm, which routes each incoming call along a suitable path, is used to make the network more efficient and to lower the call block rate. Simulation results demonstrate that the scheme, which combines self-organization and the state-dependent routing mechanism, provides better performance in terms of call block rate than schemes that have only the self-organization mechanism or only the routing mechanism.
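The self-organizing front end can be sketched with a one-dimensional Kohonen map trained online. The "input patterns" here are random vectors standing in for network-state descriptors, and the unit count, learning-rate schedule, and neighbourhood schedule are all assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

# a 1-D Kohonen self-organizing map: 10 units learning to cover the input space
n_units, dim = 10, 3
weights = rng.random((n_units, dim))
data = rng.random((2000, dim))   # stand-in for observed network-state patterns

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))                  # decaying learning rate
    sigma = max(3.0 * (1 - t / len(data)), 0.5)     # shrinking neighbourhood
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
    # neighbourhood function pulls the BMU and its map neighbours toward the input
    h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)

# stage 1 (pattern recognition): the exemplar for a new network state is the
# stored map whose weight vector best matches it
state = rng.random(dim)
exemplar = int(np.argmin(np.linalg.norm(weights - state, axis=1)))
```

Stage 2 in the scheme then adapts the chosen exemplar further toward the current state, and stage 3 decides whether that adapted map should replace the stored one.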

  15. [Health vulnerability mapping in the Community of Madrid (Spain)].

    PubMed

    Ramasco-Gutiérrez, Milagros; Heras-Mosteiro, Julio; Garabato-González, Sonsoles; Aránguez-Ruiz, Emiliano; Aguirre Martín-Gil, Ramón

The Public Health General Directorate of Madrid has developed a health vulnerability mapping methodology to assist regional social health teams in health planning, prioritisation, and intervention, based on a model of social determinants of health and an equity approach. This process began with the selection of the areas with the worst social indicators of health vulnerability. Key stakeholders of the region then jointly identified priority areas for intervention and developed a consensual plan of action. We present the outcomes of this experience and its connection with theoretical models of asset-based community development, health-integrated georeferencing systems, and community health interventions. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  16. CAFÉ-Map: Context Aware Feature Mapping for mining high dimensional biomedical data.

    PubMed

    Minhas, Fayyaz Ul Amir Afsar; Asif, Amina; Arif, Muhammad

    2016-12-01

    Feature selection and ranking is of great importance in the analysis of biomedical data. In addition to reducing the number of features used in classification or other machine learning tasks, it allows us to extract meaningful biological and medical information from a machine learning model. Most existing approaches in this domain do not directly model the fact that the relative importance of features can be different in different regions of the feature space. In this work, we present a context aware feature ranking algorithm called CAFÉ-Map. CAFÉ-Map is a locally linear feature ranking framework that allows recognition of important features in any given region of the feature space or for any individual example. This allows for simultaneous classification and feature ranking in an interpretable manner. We have benchmarked CAFÉ-Map on a number of toy and real world biomedical data sets. Our comparative study with a number of published methods shows that CAFÉ-Map achieves better accuracies on these data sets. The top ranking features obtained through CAFÉ-Map in a gene profiling study correlate very well with the importance of different genes reported in the literature. Furthermore, CAFÉ-Map provides a more in-depth analysis of feature ranking at the level of individual examples. CAFÉ-Map Python code is available at: http://faculty.pieas.edu.pk/fayyaz/software.html#cafemap . The CAFÉ-Map package supports parallelization and sparse data and provides example scripts for classification. This code can be used to reconstruct the results given in this paper. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. In silico mapping of quantitative trait loci in maize.

    PubMed

    Parisseaux, B; Bernardo, R

    2004-08-01

    Quantitative trait loci (QTL) are most often detected through designed mapping experiments. An alternative approach is in silico mapping, whereby genes are detected using existing phenotypic and genomic databases. We explored the usefulness of in silico mapping via a mixed-model approach in maize (Zea mays L.). Specifically, our objective was to determine if the procedure gave results that were repeatable across populations. Multilocation data were obtained from the 1995-2002 hybrid testing program of Limagrain Genetics in Europe. Nine heterotic patterns comprised 22,774 single crosses. These single crosses were made from 1,266 inbreds that had data for 96 simple sequence repeat (SSR) markers. By a mixed-model approach, we estimated the general combining ability effects associated with marker alleles in each heterotic pattern. The numbers of marker loci with significant effects--37 for plant height, 24 for smut [Ustilago maydis (DC.) Cda.] resistance, and 44 for grain moisture--were consistent with previous results from designed mapping experiments. Each trait had many loci with small effects and few loci with large effects. For smut resistance, a marker in bin 8.05 on chromosome 8 had a significant effect in seven (out of a maximum of 18) instances. For this major QTL, the maximum effect of an allele substitution ranged from 5.4% to 41.9%, with an average of 22.0%. We conclude that in silico mapping via a mixed-model approach can detect associations that are repeatable across different populations. We speculate that in silico mapping will be more useful for gene discovery than for selection in plant breeding programs. Copyright 2004 Springer-Verlag
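The association step, estimating general combining ability (GCA) effects attached to parental marker alleles from single-cross data, can be sketched with a fixed-effects least-squares stand-in for the mixed model. The parent numbers, effect sizes, and simulated crossing design are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)
n_par, n_cross = 20, 400
p1 = rng.integers(0, n_par, n_cross)   # parent of each single cross, side 1
p2 = rng.integers(0, n_par, n_cross)   # parent of each single cross, side 2
gca_true = rng.normal(0, 1, n_par)
y = 100 + gca_true[p1] + gca_true[p2] + rng.normal(0, 1, n_cross)

# incidence matrix: each single cross gets a 1 for each of its two parents
Z = np.zeros((n_cross, n_par))
Z[np.arange(n_cross), p1] += 1
Z[np.arange(n_cross), p2] += 1
X = np.column_stack([np.ones(n_cross), Z])

# least-squares solve; only deviations from the mean GCA are identifiable,
# so the estimates are centred before comparison
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
gca_hat = beta[1:] - beta[1:].mean()

corr = np.corrcoef(gca_hat, gca_true - gca_true.mean())[0, 1]
```

In the in silico setting, `Z` would encode marker alleles rather than parent identities, so a repeatedly significant allele effect across heterotic patterns flags a candidate QTL region.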

  18. Fitting Multimeric Protein Complexes into Electron Microscopy Maps Using 3D Zernike Descriptors

    PubMed Central

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2012-01-01

    A novel computational method for fitting high-resolution structures of multiple proteins into a cryoelectron microscopy map is presented. The method, named EMLZerD, generates a pool of candidate multiple protein docking conformations of component proteins, which are later compared with a provided electron microscopy (EM) density map to select the ones that fit well into the EM map. The comparison of docking conformations and the EM map is performed using the 3D Zernike descriptor (3DZD), a mathematical series expansion of three-dimensional functions. The 3DZD provides a unified representation of the surface shape of multimeric protein complex models and EM maps, which allows a convenient, fast quantitative comparison of the three-dimensional structural data. Out of 19 multimeric complexes tested, near native complex structures with a root-mean-square deviation of less than 2.5 Å were obtained for 14 cases while medium range resolution structures with correct topology were computed for the additional 5 cases. PMID:22417139
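
The selection step can be illustrated schematically: once both the EM map and each docking conformation are reduced to rotation-invariant descriptor vectors, candidates are ranked by a simple vector distance. The descriptors below are invented stand-ins, not real 3DZDs (which come from a Zernike moment expansion of the molecular surface):

```python
import numpy as np

def zernike_distance(d1, d2):
    """Rank candidates by Euclidean distance between (rotation-invariant)
    descriptor vectors; smaller means a better shape match."""
    return float(np.linalg.norm(np.asarray(d1) - np.asarray(d2)))

em_desc = np.array([0.9, 0.1, 0.4])  # invented stand-in for the EM map's 3DZD
candidates = {
    "modelA": np.array([0.80, 0.15, 0.35]),  # similar surface shape
    "modelB": np.array([0.10, 0.90, 0.20]),  # dissimilar surface shape
}
best = min(candidates, key=lambda m: zernike_distance(candidates[m], em_desc))
# best == "modelA"
```

The same ranking idea extends to pools of hundreds of docking conformations, since the descriptor comparison is just a vector operation.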

  20. Shape selection in Landsat time series: a tool for monitoring forest dynamics.

    PubMed

    Moisen, Gretchen G; Meyer, Mary C; Schroeder, Todd A; Liao, Xiyue; Schleeweis, Karen G; Freeman, Elizabeth A; Toney, Chris

    2016-10-01

    We present a new methodology for fitting nonparametric shape-restricted regression splines to time series of Landsat imagery for the purpose of modeling, mapping, and monitoring annual forest disturbance dynamics over nearly three decades. For each pixel and spectral band or index of choice in temporal Landsat data, our method delivers a smoothed rendition of the trajectory constrained to behave in an ecologically sensible manner, reflecting one of seven possible 'shapes'. It also provides parameters summarizing the patterns of each change including year of onset, duration, magnitude, and pre- and post-change rates of growth or recovery. Through a case study featuring fire, harvest, and bark beetle outbreak, we illustrate how resultant fitted values and parameters can be fed into empirical models to map disturbance causal agent and tree canopy cover changes coincident with disturbance events through time. We provide our code in the R package ShapeSelectForest on the Comprehensive R Archive Network and describe our computational approaches for running the method over large geographic areas. We also discuss how this methodology is currently being used for forest disturbance and attribute mapping across the conterminous United States. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
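
The shape-selection idea can be sketched as model selection among a few candidate trajectory forms scored by an information criterion. This is a minimal stand-in for the seven shape-restricted spline fits; the tiny shape library, BIC-style score, and data below are all illustrative assumptions.

```python
import numpy as np

def fit_shapes(t, y):
    """Pick the best trajectory 'shape' for one pixel's time series from a
    tiny candidate library, scored by a BIC-style criterion."""
    n = len(y)
    fits = {}
    fits["flat"] = (np.full(n, y.mean()), 1)                  # no change
    fits["linear"] = (np.polyval(np.polyfit(t, y, 1), t), 2)  # steady trend
    best = None                                               # one-break disturbance
    for b in range(2, n - 2):
        left = np.polyval(np.polyfit(t[:b], y[:b], 1), t[:b])
        right = np.polyval(np.polyfit(t[b:], y[b:], 1), t[b:])
        pred = np.concatenate([left, right])
        sse = ((y - pred) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, pred)
    fits["break"] = (best[1], 4)

    def bic(pred, k):
        sse = ((y - pred) ** 2).sum()
        return n * np.log(sse / n + 1e-12) + k * np.log(n)

    return min(fits, key=lambda s: bic(*fits[s]))

t = np.arange(20.0)
calm = np.full(20, 3.0)  # undisturbed pixel
disturbed = np.concatenate([np.full(10, 10.0), np.linspace(5.0, 9.0, 10)])
```

On the calm series the penalized score selects the simplest shape, while the abrupt drop followed by recovery is assigned the break shape; the real method additionally constrains each fit to be ecologically sensible and extracts onset year, duration, and magnitude from the chosen fit.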

  1. Association Analysis of Arsenic-Induced Straighthead in Rice (Oryza sativa L.) Based on the Selected Population with a Modified Model.

    PubMed

    Li, Xiaobai; Hu, Biaolin; Pan, Xuhao; Zhang, Ning; Wu, Dianxing

    2017-01-01

    Straighthead is a physiological disorder of rice in which the mature panicle remains erect with empty grains. Straighthead causes yield losses and is a serious threat to rice production worldwide. Here, a new association mapping study was conducted to identify QTL involved in straighthead. A subset of 380 accessions was selected from the USDA rice core collection and genotyped with 72 genome-wide SSR markers. An optimal model incorporating principal components (PCs) was used in this association mapping. As a result, five markers were identified as significantly associated with straighthead. Three of them, RM263, RM169, and RM224, were consistent with a previous study. Three markers, RM475, RM263, and RM19, had a resistant allele associated with a decrease in straighthead rating (rating ≤ 4.8). In contrast, the two other marker loci, RM169 and RM224, had a few susceptible alleles associated with an increase in straighthead rating (rating ≥ 8.7). Interestingly, RM475 is close to the QTL "qSH-2" and "AsS" for straighthead resistance, which were reported in two linkage-mapping studies of straighthead. This finding adds to previous work and is useful for further genetic study of straighthead.

  2. NFκB-Associated Pathways in Progression of Chemoresistance to 5-Fluorouracil in an In Vitro Model of Colonic Carcinoma.

    PubMed

    Körber, Maria Isabel; Staribacher, Anna; Ratzenböck, Ina; Steger, Günther; Mader, Robert M

    2016-04-01

    Drug resistance to 5-fluorouracil (5-FU) is a major obstacle in colonic cancer treatment. Activation of nuclear factor-kappa B (NFκB), mitogen-activated protein kinase kinase kinase 8 (MAP3K8) and protein kinase B (AKT) is thought to protect cancer cells against therapy-induced cytotoxicity. Using cytotoxicity assays and immunoblotting, the impact of inhibitory strategies addressing NFκB, AKT and MAP3K8 in chemoresistance was evaluated in a colonic cancer model in vitro. This model consisted of the cell lines SW480 and SW620, and three subclones with increasing degrees of chemoresistance in order to mimic the development of secondary resistance. NFκB protein p65 was selectively activated in all resistant cell lines. Consequently, several inhibitors of NFκB, MAP3K8 and AKT effectively circumvented this chemoresistance. As a cellular reaction, NFκB inhibition may trigger a feedback loop resulting in activation of extracellular signal-regulated kinase. The results suggest that chemoresistance to 5-FU in this colonic carcinoma model (cell lines SW480 and SW620) is strongly dependent on NFκB activation. The efficacy of MAP3K8 inhibition in our model potentially uncovers a new mechanism to circumvent 5-FU resistance. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  3. High resolution global flood hazard map from physically-based hydrologic and hydraulic models.

    NASA Astrophysics Data System (ADS)

    Begnudelli, L.; Kaheil, Y.; McCollum, J.

    2017-12-01

    The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90 m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and a 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D finite-volume shallow-water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30 m SRTM DEM is now available worldwide, along with higher-accuracy and/or higher-resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free-surface elevation inside cells and for computing inter-cell fluxes. This approach nearly achieves the computational speed typical of coarse grids while preserving, to a significant extent, the accuracy offered by the much higher-resolution DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak corresponds to the return period of the hazard map being produced (e.g. 100 years, 500 years). Each numerical simulation models one river reach, except for the longest reaches, which are split into smaller parts. Here we show results for selected river basins worldwide.
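
The volume/free-surface-elevation relationship that UST builds from the full-resolution topography can be sketched for a single coarse cell. Unit pixel area and a toy 2×2 DEM are assumed here; this is a sketch of the idea, not the implementation described in the abstract.

```python
import numpy as np

def volume_elevation_curve(fine_elevs, stages):
    """Water volume stored in one coarse cell as a function of free-surface
    elevation, computed from the sub-grid (fine DEM) topography.
    Unit pixel area is assumed, so volume = sum of water depths."""
    fine = np.asarray(fine_elevs, dtype=float).ravel()
    return np.array([np.clip(s - fine, 0.0, None).sum() for s in stages])

dem = np.array([[1.0, 2.0],
                [3.0, 4.0]])  # 4 fine pixels aggregated into one coarse cell
stages = [1.0, 2.0, 3.0, 5.0]
vols = volume_elevation_curve(dem, stages)
# vols == [0., 1., 3., 10.]
```

Because the curve is nonlinear in exactly the way the sub-grid terrain dictates, the coarse model can conserve volume correctly even though it no longer resolves the fine pixels.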

  4. The European sea bass Dicentrarchus labrax genome puzzle: comparative BAC-mapping and low coverage shotgun sequencing

    PubMed Central

    2010-01-01

    Background Food supply from the ocean is constrained by the shortage of domesticated and selected fish. Development of genomic models of economically important fishes should assist with the removal of this bottleneck. European sea bass Dicentrarchus labrax L. (Moronidae, Perciformes, Teleostei) is one of the most important fishes in European marine aquaculture; growing genomic resources put it on its way to serve as an economic model. Results End sequencing of a sea bass genomic BAC-library enabled the comparative mapping of the sea bass genome using the three-spined stickleback Gasterosteus aculeatus genome as a reference. BAC-end sequences (102,690) were aligned to the stickleback genome. The number of mappable BACs was improved using a two-fold coverage WGS dataset of sea bass resulting in a comparative BAC-map covering 87% of stickleback chromosomes with 588 BAC-contigs. The minimum size of 83 contigs covering 50% of the reference was 1.2 Mbp; the largest BAC-contig comprised 8.86 Mbp. More than 22,000 BAC-clones aligned with both ends to the reference genome. Intra-chromosomal rearrangements between sea bass and stickleback were identified. Size distributions of mapped BACs were used to calculate that the genome of sea bass may be only 1.3 fold larger than the 460 Mbp stickleback genome. Conclusions The BAC map is used for sequencing single BACs or BAC-pools covering defined genomic entities by second generation sequencing technologies. Together with the WGS dataset it initiates a sea bass genome sequencing project. This will allow the quantification of polymorphisms through resequencing, which is important for selecting highly performing domesticated fish. PMID:20105308

  5. Profile-Based LC-MS Data Alignment—A Bayesian Approach

    PubMed Central

    Tsai, Tsung-Heng; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.

    2014-01-01

    A Bayesian alignment model (BAM) is proposed for alignment of liquid chromatography-mass spectrometry (LC-MS) data. BAM belongs to the category of profile-based approaches, which are composed of two major components: a prototype function and a set of mapping functions. Appropriate estimation of these functions is crucial for good alignment results. BAM uses Markov chain Monte Carlo (MCMC) methods to draw inference on the model parameters and improves on existing MCMC-based alignment methods through 1) the implementation of an efficient MCMC sampler and 2) an adaptive selection of knots. A block Metropolis-Hastings algorithm that mitigates the problem of the MCMC sampler getting stuck at local modes of the posterior distribution is used for the update of the mapping function coefficients. In addition, a stochastic search variable selection (SSVS) methodology is used to determine the number and positions of knots. We applied BAM to a simulated data set, an LC-MS proteomic data set, and two LC-MS metabolomic data sets, and compared its performance with the Bayesian hierarchical curve registration (BHCR) model, the dynamic time-warping (DTW) model, and the continuous profile model (CPM). The advantage of applying appropriate profile-based retention time correction prior to performing a feature-based approach is also demonstrated through the metabolomic data sets. PMID:23929872
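
A random-walk Metropolis sampler, the basic ingredient behind BAM's block Metropolis-Hastings updates, can be sketched as follows. This is a generic toy targeting a standard normal log-posterior, not the BAM model itself.

```python
import numpy as np

def metropolis_hastings(logpost, x0, n_iter=5000, step=1.0, seed=0):
    """Random-walk Metropolis sampler (a minimal stand-in for the block
    M-H updates BAM uses for mapping-function coefficients)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logpost(x0)
    chain = np.empty(n_iter)
    accepted = 0
    for i in range(n_iter):
        prop = x + step * rng.normal()        # symmetric proposal
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
            accepted += 1
        chain[i] = x
    return chain, accepted / n_iter

# Toy target: standard normal, log-density -x^2/2 (up to a constant)
chain, acc = metropolis_hastings(lambda x: -0.5 * x * x, 0.0)
```

Updating a whole block of coefficients per proposal, as BAM does, follows the same accept/reject skeleton but helps the chain escape local modes of the posterior.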

  6. 3D model assisted fully automated scanning laser Doppler vibrometer measurements

    NASA Astrophysics Data System (ADS)

    Sels, Seppe; Ribbens, Bart; Bogaerts, Boris; Peeters, Jeroen; Vanlanduit, Steve

    2017-12-01

    In this paper, a new fully automated scanning laser Doppler vibrometer (LDV) measurement technique is presented. In contrast to existing scanning LDV techniques which use a 2D camera for the manual selection of sample points, we use a 3D Time-of-Flight camera in combination with a CAD file of the test object to automatically obtain measurements at pre-defined locations. The proposed procedure allows users to test prototypes in a shorter time because physical measurement locations are determined without user interaction. Another benefit from this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. The proposed method is illustrated with vibration measurements of an unmanned aerial vehicle.

  7. Global maps of streamflow characteristics based on observations from several thousand catchments

    NASA Astrophysics Data System (ADS)

    Beck, Hylke; de Roo, Ad; van Dijk, Albert

    2016-04-01

    Streamflow (Q) estimation in ungauged catchments is one of the greatest challenges facing hydrologists. Observed Q from three to four thousand small-to-medium-sized catchments (10-10,000 km2) around the globe were used to train neural network ensembles to estimate Q characteristics based on climate and physiographic characteristics of the catchments. In total 17 Q characteristics were selected, including mean annual Q, baseflow index, and a number of flow percentiles. Testing coefficients of determination for the estimation of the Q characteristics ranged from 0.55 for the baseflow recession constant to 0.93 for the Q timing. Overall, climate indices dominated among the predictors. Predictors related to soils and geology were relatively unimportant, perhaps due to their data quality. The trained neural network ensembles were subsequently applied spatially over the entire ice-free land surface, resulting in global maps of the Q characteristics (0.125° resolution). These maps possess several unique features: they represent observation-driven estimates; are based on an unprecedentedly large set of catchments; and have associated uncertainty estimates. The maps can be used for various hydrological applications, including the diagnosis of macro-scale hydrological models. To demonstrate this, the produced maps were compared to equivalent maps derived from the simulated daily Q of four macro-scale hydrological models, highlighting various opportunities for improvement in model Q behavior. The produced dataset is available via http://water.jrc.ec.europa.eu.
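
The estimation setup, training regressors on catchment predictors and averaging an ensemble, can be sketched with scikit-learn. The paper uses neural network ensembles; the synthetic data, network size, and ensemble-of-random-seeds scheme below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))  # stand-ins for climate/physiographic predictors
y = 2 * X[:, 0] + X[:, 1] + 0.05 * rng.normal(size=200)  # a toy Q characteristic

# Ensemble: average the predictions of networks trained from different seeds
ensemble = [
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=s).fit(X, y)
    for s in range(5)
]
pred = np.mean([m.predict(X) for m in ensemble], axis=0)
```

Averaging over differently initialized networks reduces variance, and the spread across ensemble members is one natural source for the per-pixel uncertainty estimates the maps carry.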

  8. Mapping current and potential distribution of non-native Prosopis juliflora in the Afar region of Ethiopia

    USGS Publications Warehouse

    Wakie, Tewodros; Evangelista, Paul H.; Jarnevich, Catherine S.; Laituri, Melinda

    2014-01-01

    We used correlative models with species occurrence points, Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indices, and topo-climatic predictors to map the current distribution and potential habitat of invasive Prosopis juliflora in Afar, Ethiopia. Time-series of MODIS Enhanced Vegetation Indices (EVI) and Normalized Difference Vegetation Indices (NDVI) with 250 m spatial resolution were selected as remote sensing predictors for mapping distributions, while WorldClim bioclimatic products and generated topographic variables from the Shuttle Radar Topography Mission product (SRTM) were used to predict potential infestations. We ran Maxent models using non-correlated variables and the 143 species-occurrence points. Maxent-generated probability surfaces were converted into binary maps using the 10-percentile logistic threshold values. Performances of models were evaluated using area under the receiver-operating characteristic (ROC) curve (AUC). Our results indicate that the extent of P. juliflora invasion is approximately 3,605 km2 in the Afar region (AUC = 0.94), while the potential habitat for future infestations is 5,024 km2 (AUC = 0.95). Our analyses demonstrate that time-series of MODIS vegetation indices and species occurrence points can be used with Maxent modeling software to map the current distribution of P. juliflora, while topo-climatic variables are good predictors of potential habitat in Ethiopia. Our results can quantify current and future infestations, and inform management and policy decisions for containing P. juliflora. Our methods can also be replicated for managing invasive species in other East African countries.
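
The post-processing described, converting a Maxent probability surface to a binary map with the 10-percentile training-presence threshold and evaluating with AUC, can be sketched on synthetic values (not real MODIS or occurrence data, and not Maxent itself):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
prob = rng.uniform(size=1000)             # toy suitability surface (flattened)
presence = rng.uniform(size=1000) < prob  # synthetic occurrences, likelier where prob is high

# 10-percentile training-presence threshold -> binary habitat map
thresh = np.percentile(prob[presence], 10)
binary_map = prob >= thresh
auc = roc_auc_score(presence, prob)
```

By construction the threshold keeps roughly 90% of training presences inside the mapped habitat, trading a small omission error for a tighter predicted extent.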

  9. Seasonal Habitat Use by Greater Sage-Grouse (Centrocercus urophasianus) on a Landscape with Low Density Oil and Gas Development.

    PubMed

    Rice, Mindy B; Rossi, Liza G; Apa, Anthony D

    2016-01-01

    Fragmentation of the sagebrush (Artemisia spp.) ecosystem has led to concern about a variety of sagebrush obligates including the greater sage-grouse (Centrocercus urophasianus). Given the increase of energy development within greater sage-grouse habitats, mapping seasonal habitats in pre-development populations is critical. The North Park population in Colorado is one of the largest and most stable in the state and provides a unique case study for investigating resource selection at a relatively low level of energy development compared to other populations both within and outside the state. We used locations from 117 radio-marked female greater sage-grouse in North Park, Colorado to develop seasonal resource selection models. We then added energy development variables to the base models at both a landscape and local scale to determine if energy variables improved the fit of the seasonal models. The base models for breeding and winter resource selection predicted greater use in large expanses of sagebrush whereas the base summer model predicted greater use along the edge of riparian areas. Energy development variables did not improve the winter or the summer models at either scale of analysis, but distance to oil/gas roads slightly improved model fit at both scales in the breeding season, albeit in opposite ways. At the landscape scale, greater sage-grouse were closer to oil/gas roads whereas they were further from oil/gas roads at the local scale during the breeding season. Although we found limited effects from low level energy development in the breeding season, the scale of analysis can influence the interpretation of effects. The lack of strong effects from energy development may be indicative that energy development at current levels are not impacting greater sage-grouse in North Park. Our baseline seasonal resource selection maps can be used for conservation to help identify ways of minimizing the effects of energy development.

  10. Mars-GRAM 2010: Improving the Precision of Mars-GRAM

    NASA Technical Reports Server (NTRS)

    Justh, H. L.; Justus, C. G.; Ramey, H. S.

    2011-01-01

    During the Mars Science Laboratory (MSL) site selection process, it was discovered that the Mars Global Reference Atmospheric Model (Mars-GRAM), when used for sensitivity studies with Thermal Emission Spectrometer (TES) MapYear=0 and large optical depth values such as tau=3, is less than realistic. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte-Carlo mode, to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). Mars-GRAM 2005 has been validated against Radio Science data, and both nadir and limb data from TES. Traditional Mars-GRAM options for representing the mean atmosphere along entry corridors include: (1) TES mapping year 0, with user-controlled dust optical depth and Mars-GRAM data interpolated from NASA Ames Mars General Circulation Model (MGCM) results driven by selected values of globally-uniform dust optical depth, or (2) TES mapping years 1 and 2, with Mars-GRAM data coming from MGCM results driven by observed TES dust optical depth. From the surface to 80 km altitude, Mars-GRAM is based on NASA Ames MGCM. Above 80 km, Mars-GRAM is based on the University of Michigan Mars Thermospheric General Circulation Model (MTGCM). MGCM results that were used for Mars-GRAM with MapYear=0 were from a MGCM run with a fixed value of tau=3 for the entire year at all locations. This choice of data has led to discrepancies that have become apparent during recent sensitivity studies for MapYear=0 and large optical depths. Unrealistic energy absorption by time-invariant atmospheric dust leads to an unrealistic thermal energy balance on the polar caps. The outcome is an inaccurate cycle of condensation/sublimation of the polar caps and, as a consequence, an inaccurate cycle of total atmospheric mass and global-average surface pressure.
Under an assumption of unchanged temperature profile and hydrostatic equilibrium, a given percentage change in surface pressure would produce a corresponding percentage change in density at all altitudes. Consequently, the final result of a change in surface pressure is an imprecise atmospheric density at all altitudes.
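
The hydrostatic argument can be made concrete: with an unchanged temperature profile, density at every altitude is proportional to surface pressure, so a fractional surface-pressure error propagates unchanged to all altitudes. A small worked example follows; the scale height and reference pressure are rough illustrative values, not Mars-GRAM parameters.

```python
import math

H = 10.8e3    # approximate Mars atmospheric scale height (m), illustrative
p_s = 610.0   # reference surface pressure (Pa), illustrative

def density_ratio(dp_frac, z):
    """Fractional density change at altitude z for a fractional surface-pressure
    change, assuming an unchanged (isothermal) temperature profile and
    hydrostatic equilibrium: rho(z) is proportional to p_s * exp(-z/H)."""
    rho_ref = p_s * math.exp(-z / H)
    rho_new = p_s * (1.0 + dp_frac) * math.exp(-z / H)
    return rho_new / rho_ref - 1.0

# A 5% change in surface pressure gives a 5% density change at any altitude,
# because the exponential altitude factor cancels in the ratio.
assert abs(density_ratio(0.05, 40e3) - 0.05) < 1e-9
```

This is why an error in the simulated surface-pressure cycle contaminates the density profile that EDL simulations depend on at every altitude.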

  11. Predicting temperate forest stand types using only structural profiles from discrete return airborne lidar

    NASA Astrophysics Data System (ADS)

    Fedrigo, Melissa; Newnham, Glenn J.; Coops, Nicholas C.; Culvenor, Darius S.; Bolton, Douglas K.; Nitschke, Craig R.

    2018-02-01

    Light detection and ranging (lidar) data have been increasingly used for forest classification due to their ability to penetrate the forest canopy and provide detail about the structure of the lower strata. In this study we demonstrate forest classification approaches using airborne lidar data as inputs to random forest and linear unmixing classification algorithms. Our results demonstrated that both random forest and linear unmixing models identified a distribution of rainforest and eucalypt stands that was comparable to existing ecological vegetation class (EVC) maps based primarily on manual interpretation of high resolution aerial imagery. Rainforest stands were also identified in the region that have not previously been identified in the EVC maps. The transition between stand types was better characterised by the random forest modelling approach. In contrast, the linear unmixing model placed greater emphasis on field plots selected as endmembers, which may not have captured the variability in stand structure within a single stand type. The random forest model had the highest overall accuracy (84%) and Cohen's kappa coefficient (0.62). However, the classification accuracy was only marginally better than linear unmixing. The random forest model was applied to a region in the Central Highlands of south-eastern Australia to produce maps of stand type probability, including areas of transition (the 'ecotone') between rainforest and eucalypt forest. The resulting map provided a detailed delineation of forest classes, which specifically recognised the coalescing of stand types at the landscape scale. This represents a key step towards mapping the structural and spatial complexity of these ecosystems, which is important for both their management and conservation.
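
A random forest classifier producing per-pixel stand-type probabilities, the quantity mapped in the study, can be sketched with scikit-learn on synthetic structural profiles. The three-feature "profiles" and class means below are invented for illustration; real inputs would be lidar-derived vertical structure metrics.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy vertical-structure profiles: rainforest with denser lower strata,
# eucalypt forest with a more open understorey and taller upper canopy.
rain = rng.normal(loc=[0.8, 0.6, 0.3], scale=0.1, size=(100, 3))
euc = rng.normal(loc=[0.3, 0.4, 0.7], scale=0.1, size=(100, 3))
X = np.vstack([rain, euc])
y = np.array([0] * 100 + [1] * 100)  # 0 = rainforest, 1 = eucalypt

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
proba = rf.predict_proba(X)[:, 1]  # stand-type probability, mappable per pixel
```

Intermediate probabilities (rather than hard labels) are what allow the ecotone, the transition zone between stand types, to appear explicitly in the output map.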

  12. Contemplating the GANE model using an extreme case paradigm.

    PubMed

    Geva, Ronny

    2016-01-01

    Early experiences play a crucial role in programming brain function, affecting selective attention, learning, and memory. The infancy literature suggests an extension of the GANE (glutamate amplifies noradrenergic effects) model to conditions with minimal priority-map inputs, yet adds the qualification that its efficacy increases when tonic levels of arousal are maintained in an optimal range, in a manner that is age- and exposure-dependent.

  13. Mapping to Estimate Health-State Utility from Non-Preference-Based Outcome Measures: An ISPOR Good Practices for Outcomes Research Task Force Report.

    PubMed

    Wailoo, Allan J; Hernandez-Alava, Monica; Manca, Andrea; Mejia, Aurelio; Ray, Joshua; Crawford, Bruce; Botteman, Marc; Busschbach, Jan

    2017-01-01

    Economic evaluation conducted in terms of cost per quality-adjusted life-year (QALY) provides information that decision makers find useful in many parts of the world. Ideally, clinical studies designed to assess the effectiveness of health technologies would include outcome measures that are directly linked to health utility to calculate QALYs. Often this does not happen, and even when it does, clinical studies may be insufficient for a cost-utility assessment. Mapping can solve this problem. It uses an additional data set to estimate the relationship between outcomes measured in clinical studies and health utility. This bridges the gap between the available evidence on the effect of a health technology in one metric and the requirement for decision makers to express it in a different one (QALYs). In 2014, ISPOR established a Good Practices for Outcomes Research Task Force for mapping studies. This task force report provides recommendations to analysts undertaking mapping studies, those that use the results in cost-utility analysis, and those that need to critically review such studies. The recommendations cover all areas of mapping practice: the selection of data sets for the mapping estimation, model selection and performance assessment, reporting standards, and the use of results including the appropriate reflection of variability and uncertainty. This report is unique because it takes an international perspective, is comprehensive in its coverage of the aspects of mapping practice, and reflects the current state of the art. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  14. A high density genetic map and QTL for agronomic and yield traits in Foxtail millet [Setaria italica (L.) P. Beauv].

    PubMed

    Fang, Xiaomei; Dong, Kongjun; Wang, Xiaoqin; Liu, Tianpeng; He, Jihong; Ren, Ruiyu; Zhang, Lei; Liu, Rui; Liu, Xueying; Li, Man; Huang, Mengzhu; Zhang, Zhengsheng; Yang, Tianyu

    2016-05-04

    Foxtail millet [Setaria italica (L.) P. Beauv.], a crop of historical importance in China, has been adopted as a model crop for studying C4 photosynthesis, stress biology and biofuel traits. Construction of a high density genetic map and identification of stable quantitative trait loci (QTL) lay the foundation for marker-assisted selection for agronomic traits and yield improvement. A total of 10598 SSR markers were developed according to the reference genome sequence of foxtail millet cultivar 'Yugu1'. A total of 1013 SSR markers showing polymorphism between Yugu1 and Longgu7 were used to genotype 167 individuals from a Yugu1 × Longgu7 F2 population, and a high density genetic map was constructed. The genetic map contained 1035 loci and spanned 1318.8 cM with an average distance of 1.27 cM between adjacent markers. Based on agronomic and yield traits identified in 2 years, 29 QTL were identified for 11 traits with combined analysis and single environment analysis. These QTL explained from 7.0 to 14.3% of phenotypic variation. Favorable QTL alleles for peduncle length originated from Longgu7 whereas favorable alleles for the other traits originated from Yugu1 except for qLMS6.1. New SSR markers, a high density genetic map and QTL identified for agronomic and yield traits lay the groundwork for functional gene mapping, map-based cloning and marker-assisted selection in foxtail millet.

  15. A high-resolution physically-based global flood hazard map

    NASA Astrophysics Data System (ADS)

    Kaheil, Y.; Begnudelli, L.; McCollum, J.

    2016-12-01

    We present the results from a physically-based global flood hazard model. The model uses a physically-based hydrologic model to simulate river discharges, and a 2D hydrodynamic model to simulate inundation. The model is set up such that it allows large-scale flood hazard mapping through efficient use of parallel computing. For hydrology, we use the Hillslope River Routing (HRR) model. HRR accounts for surface hydrology using a Green-Ampt parameterization. The model is calibrated against observed discharge data from the Global Runoff Data Centre (GRDC) network, among other publicly-available datasets. The parallel-computing framework takes advantage of the river network structure to minimize cross-processor messages, and thus significantly increases computational efficiency. For inundation, we implemented a computationally-efficient 2D finite-volume model with wetting/drying. The approach consists of simulating flood along the river network by forcing the hydraulic model with the streamflow hydrographs simulated by HRR, and scaled up to certain return levels, e.g. 100 years. The model is distributed such that each available processor takes the next simulation. Given an approximate cost criterion, the simulations are ordered from most-demanding to least-demanding to ensure that all processors finalize almost simultaneously. Upon completing all simulations, the maximum envelope of flood depth is taken to generate the final map. The model is applied globally, with selected results shown from different continents and regions. The maps shown depict flood depth and extent at different return periods. These maps, which are currently available at 3 arc-sec resolution (~90 m), can be made available at higher resolutions where high resolution DEMs are available. 
The maps can be utilized by flood risk managers at the national, regional, and even local levels to further understand their flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs.
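
Ordering simulations from most- to least-demanding so that all processors finish almost simultaneously is essentially longest-processing-time-first (LPT) scheduling. A minimal sketch, with arbitrary cost values:

```python
def lpt_schedule(costs, n_proc):
    """Longest-processing-time-first: order jobs from most- to least-demanding
    and greedily assign each to the least-loaded processor, so all processors
    finish at nearly the same time."""
    loads = [0.0] * n_proc
    assign = {}
    for job, c in sorted(enumerate(costs), key=lambda jc: -jc[1]):
        p = loads.index(min(loads))  # least-loaded processor takes the next job
        assign[job] = p
        loads[p] += c
    return assign, loads

# Six river-reach simulations with unequal costs, two processors
assign, loads = lpt_schedule([7, 3, 2, 2, 4, 8], 2)
# loads == [13.0, 13.0]: both processors finish together
```

In the described framework the costs are only approximate, which is exactly the regime where this greedy heuristic works well: small estimation errors merely shuffle the short jobs that pad out each processor's load.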

  16. GIS applications for military operations in coastal zones

    USGS Publications Warehouse

    Fleming, S.; Jordan, T.; Madden, M.; Usery, E.L.; Welch, R.

    2009-01-01

    In order to successfully support current and future US military operations in coastal zones, geospatial information must be rapidly integrated and analyzed to meet ongoing force structure evolution and new mission directives. Coastal zones in a military-operational environment are complex regions that include sea, land and air features that demand high-volume databases of extreme detail within relatively narrow geographic corridors. Static products in the form of analog maps at varying scales traditionally have been used by military commanders and their operational planners. The rapidly changing battlefield of 21st Century warfare, however, demands dynamic mapping solutions. Commercial geographic information system (GIS) software for military-specific applications is now being developed and employed with digital databases to provide customized digital maps of variable scale, content and symbolization tailored to unique demands of military units. Research conducted by the Center for Remote Sensing and Mapping Science at the University of Georgia demonstrated the utility of GIS-based analysis and digital map creation when developing large-scale (1:10,000) products from littoral warfare databases. The methodology employed (selection of data sources, including high resolution commercial images and Lidar; establishment of analysis/modeling parameters; conduct of vehicle mobility analysis; development of models; and generation of products such as a continuous sea-land DEM and geo-visualization of changing shorelines with tidal levels) is discussed. Based on observations and identified needs from the National Geospatial-Intelligence Agency, formerly the National Imagery and Mapping Agency, and the Department of Defense, prototype GIS models for military operations in sea, land and air environments were created from multiple data sets of a study area at US Marine Corps Base Camp Lejeune, North Carolina. 
Results of these models, along with methodologies for developing large-scale littoral warfare databases, aid the National Geospatial-Intelligence Agency in meeting littoral warfare analysis, modeling and map generation requirements for US military organizations. © 2008 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).

  17. GIS applications for military operations in coastal zones

    NASA Astrophysics Data System (ADS)

    Fleming, S.; Jordan, T.; Madden, M.; Usery, E. L.; Welch, R.

In order to successfully support current and future US military operations in coastal zones, geospatial information must be rapidly integrated and analyzed to meet ongoing force structure evolution and new mission directives. Coastal zones in a military-operational environment are complex regions that include sea, land and air features that demand high-volume databases of extreme detail within relatively narrow geographic corridors. Static products in the form of analog maps at varying scales traditionally have been used by military commanders and their operational planners. The rapidly changing battlefield of 21st-century warfare, however, demands dynamic mapping solutions. Commercial geographic information system (GIS) software for military-specific applications is now being developed and employed with digital databases to provide customized digital maps of variable scale, content and symbolization tailored to the unique demands of military units. Research conducted by the Center for Remote Sensing and Mapping Science at the University of Georgia demonstrated the utility of GIS-based analysis and digital map creation when developing large-scale (1:10,000) products from littoral warfare databases. The methodology employed is discussed: selection of data sources (including high-resolution commercial images and Lidar), establishment of analysis/modeling parameters, conduct of vehicle mobility analysis, development of models, and generation of products (such as a continuous sea-land DEM and geo-visualization of changing shorelines with tidal levels). Based on observations and identified needs from the National Geospatial-Intelligence Agency, formerly the National Imagery and Mapping Agency, and the Department of Defense, prototype GIS models for military operations in sea, land and air environments were created from multiple data sets of a study area at US Marine Corps Base Camp Lejeune, North Carolina. 
Results of these models, along with methodologies for developing large-scale littoral warfare databases, aid the National Geospatial-Intelligence Agency in meeting littoral warfare analysis, modeling and map generation requirements for US military organizations.

  18. An object-based visual attention model for robotic applications.

    PubMed

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

By extending the integrated competition hypothesis, this paper presents an object-based visual attention model that selects one object of interest using low-dimensional features, so that visual perception begins with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, the proto-object-based saliency is evaluated. The most salient proto-object is selected for attention and is finally passed to the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are presented to validate the model.
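The proto-object selection step described above, pooling location-based saliency within each proto-object and attending to the winner, can be sketched in a few lines. This is a minimal illustration with hypothetical data structures; the actual model computes saliency from learned intensity, color, orientation and contour features.

```python
# Sketch of proto-object-based saliency selection (hypothetical data;
# the paper's model derives the location-based saliency map from
# learned features and top-down/bottom-up mediation).

def proto_object_saliency(saliency_map, proto_objects):
    """saliency_map: 2D list of floats; proto_objects: list of pixel-coordinate sets.
    Each proto-object's saliency is the mean location-based saliency over its pixels."""
    scores = []
    for obj in proto_objects:
        total = sum(saliency_map[r][c] for (r, c) in obj)
        scores.append(total / len(obj))
    return scores

def select_most_salient(saliency_map, proto_objects):
    scores = proto_object_saliency(saliency_map, proto_objects)
    winner = max(range(len(scores)), key=scores.__getitem__)
    return winner, scores

# Example: two proto-objects on a 3x3 saliency map
smap = [[0.1, 0.9, 0.8],
        [0.1, 0.7, 0.2],
        [0.0, 0.1, 0.1]]
obj_a = {(0, 1), (0, 2), (1, 1)}   # bright region
obj_b = {(2, 0), (2, 1), (2, 2)}   # dim region
idx, scores = select_most_salient(smap, [obj_a, obj_b])
```

Here the winning proto-object (index 0) would then be handed to the perceptual completion stage.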

  19. Ensemble classification of individual Pinus crowns from multispectral satellite imagery and airborne LiDAR

    NASA Astrophysics Data System (ADS)

    Kukunda, Collins B.; Duque-Lazo, Joaquín; González-Ferreiro, Eduardo; Thaden, Hauke; Kleinn, Christoph

    2018-03-01

Distinguishing tree species is relevant in many contexts of remote sensing assisted forest inventory. Accurate tree species maps support management and conservation planning, pest and disease control and biomass estimation. This study evaluated the performance of applying ensemble techniques with the goal of automatically distinguishing Pinus sylvestris L. and Pinus uncinata Mill. ex Mirb. within a 1.3 km² mountainous area in Barcelonnette (France). Three modelling schemes were examined, based on: (1) high-density LiDAR data (160 returns m⁻²), (2) Worldview-2 multispectral imagery, and (3) Worldview-2 and LiDAR in combination. Variables related to the crown structure and height of individual trees were extracted from the normalized LiDAR point cloud at individual-tree level, after performing individual tree crown (ITC) delineation. Vegetation indices and Haralick texture indices were derived from Worldview-2 images and served as independent spectral variables. Selection of the best predictor subset was done after a comparison of three variable selection procedures: (1) Random Forests with cross validation (AUCRFcv), (2) the Akaike Information Criterion (AIC) and (3) the Bayesian Information Criterion (BIC). To classify the species, nine regression techniques were combined using ensemble models. Predictions were evaluated using cross validation and an independent dataset. Integration of datasets and models improved individual tree species classification (True Skill Statistic, TSS, from 0.67 to 0.81) over individual techniques and maintained strong predictive power (Relative Operating Characteristic, ROC = 0.91). Assemblage of regression models and integration of the datasets provided more reliable species distribution maps and associated tree-scale mapping uncertainties. Our study highlights the potential of model and data assemblage for improving the species classifications needed in present-day forest planning and management.
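The AIC/BIC-based variable selection mentioned above can be illustrated with a minimal sketch. All data here are synthetic and the comparison is between two single-predictor linear models, far simpler than the study's multi-technique subset search, but the criterion computation is the standard one (AIC = n·ln(RSS/n) + 2k, BIC = n·ln(RSS/n) + k·ln(n)).

```python
# Sketch of AIC/BIC model comparison for predictor selection (synthetic data).
import math
import random

def ols_fit(X, y):
    # Solve the normal equations (X^T X) b = X^T y by Gaussian elimination.
    # X includes an intercept column.
    n, p = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(p)] for r in range(p)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

def aic_bic(X, y):
    n, beta = len(y), ols_fit(X, y)
    rss = sum((y[i] - sum(X[i][j] * beta[j] for j in range(len(beta)))) ** 2
              for i in range(n))
    k = len(beta)
    return n * math.log(rss / n) + 2 * k, n * math.log(rss / n) + k * math.log(n)

# y depends on x1 but not x2; both criteria should prefer the x1 model.
random.seed(0)
n = 200
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
y = [1.0 + 2.0 * x1[i] + random.gauss(0, 0.1) for i in range(n)]

aic_x1, bic_x1 = aic_bic([[1.0, x1[i]] for i in range(n)], y)
aic_x2, bic_x2 = aic_bic([[1.0, x2[i]] for i in range(n)], y)
```

Lower criterion values indicate the better-supported predictor subset, which is how the study ranked candidate subsets before ensemble classification.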

  20. Three-dimensional dominant frequency mapping using autoregressive spectral analysis of atrial electrograms of patients in persistent atrial fibrillation.

    PubMed

    Salinet, João L; Masca, Nicholas; Stafford, Peter J; Ng, G André; Schlindwein, Fernando S

    2016-03-08

Areas with high frequency activity within the atrium are thought to be 'drivers' of the rhythm in patients with atrial fibrillation (AF), and ablation of these areas seems to be an effective therapy in eliminating the DF gradient and restoring sinus rhythm. Clinical groups have applied the traditional FFT-based approach to generate three-dimensional dominant frequency (3D DF) maps during electrophysiology (EP) procedures, but the literature is limited on alternative spectral estimation techniques that can offer better frequency resolution than FFT-based spectral estimation. Autoregressive (AR) model-based spectral estimation techniques, with emphasis on selection of an appropriate sampling rate and AR model order, were implemented to generate high-density 3D DF maps of atrial electrograms (AEGs) in persistent atrial fibrillation (persAF). For each patient, 2048 simultaneous AEGs were recorded for 20.478 s-long segments in the left atrium (LA) and exported for analysis, together with their anatomical locations. After the DFs were identified using AR-based spectral estimation, they were colour coded to produce sequential 3D DF maps. These maps were systematically compared with maps found using the Fourier-based approach. 3D DF maps can be obtained using AR-based spectral estimation after AEG downsampling (DS), and the resulting maps are very similar to those obtained using FFT-based spectral estimation (mean similarity 90.23%). There were no significant differences between AR techniques (p = 0.62). The processing time for the AR-based approach was considerably shorter (from 5.44 to 5.05 s) when lower sampling frequencies and model order values were used. Higher levels of DS presented higher rates of DF agreement (sampling frequency of 37.5 Hz). 
We have demonstrated the feasibility of using AR spectral estimation methods to produce 3D DF maps and characterised their differences from the maps produced using the FFT technique, offering an alternative approach for 3D DF computation in human persAF studies.
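As a rough illustration of AR-based dominant frequency estimation, the sketch below fits an AR model via the Yule-Walker equations (Levinson-Durbin recursion) and locates the peak of the AR power spectrum. The synthetic signal, model order and 37.5 Hz sampling rate are illustrative stand-ins, not the paper's actual AEG pipeline.

```python
# Sketch: AR (Yule-Walker) dominant frequency estimation on a synthetic signal.
import math
import random

def autocorr(x, maxlag):
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) / n for k in range(maxlag + 1)]

def levinson_durbin(r, order):
    # Solve the Yule-Walker equations for AR coefficients a[1..order] (a[0] = 1).
    a = [1.0] + [0.0] * order
    e = r[0]
    for i in range(1, order + 1):
        acc = sum(a[j] * r[i - j] for j in range(i))
        k = -acc / e
        a_new = a[:]
        for j in range(1, i + 1):
            a_new[j] = a[j] + k * a[i - j]
        a = a_new
        e *= (1.0 - k * k)
    return a, e

def ar_psd(a, e, f, fs):
    # AR power spectral density at frequency f (Hz).
    w = 2.0 * math.pi * f / fs
    denom = sum(a[k] * complex(math.cos(k * w), -math.sin(k * w)) for k in range(len(a)))
    return e / abs(denom) ** 2

def dominant_frequency(x, fs, order=6, fmin=1.0, fmax=None, step=0.05):
    r = autocorr(x, order)
    a, e = levinson_durbin(r, order)
    fmax = fs / 2.0 if fmax is None else fmax
    freqs = [fmin + i * step for i in range(int((fmax - fmin) / step) + 1)]
    return max(freqs, key=lambda f: ar_psd(a, e, f, fs))

# Synthetic electrogram-like signal: 6 Hz component plus noise, sampled
# at 37.5 Hz (the downsampled rate mentioned above).
random.seed(1)
fs, f0 = 37.5, 6.0
x = [math.sin(2 * math.pi * f0 * n / fs) + 0.2 * random.gauss(0, 1) for n in range(2000)]
df = dominant_frequency(x, fs)
```

Because the AR spectrum is evaluated on an arbitrary frequency grid, its resolution is not tied to the FFT bin width, which is the motivation the abstract gives for the AR approach.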

  1. The OSIRIS-REx Mission Sample Site Selection Process

    NASA Astrophysics Data System (ADS)

    Beshore, Edward C.; Lauretta, Dante

    2014-11-01

In September of 2016, the OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, Security, REgolith eXplorer) spacecraft will depart for asteroid (101955) Bennu, and in doing so, will turn an important corner in the exploration of the solar system. After arriving at Bennu in the fall of 2018, OSIRIS-REx will undertake a program of observations designed to select a site suitable for retrieving a sample that will be returned to the Earth in 2023. The third mission in NASA's New Frontiers program, OSIRIS-REx will return over 60 grams from Bennu's surface. OSIRIS-REx is unique because the science team will have an operational role to play in preparing data products needed to select a sample site. These include products used to ensure flight system safety — topographic maps and shape models, temperature measurements, maps of hazards — as well as assessments of sampleability and science value. The timing and production of these will be presented, as will the high-level decision-making tools and processes for the interim and final site selection processes.

  2. A method for tailoring the information content of a software process model

    NASA Technical Reports Server (NTRS)

    Perkins, Sharon; Arend, Mark B.

    1990-01-01

The framework is defined for a general method for selecting a necessary and sufficient subset of a general software life cycle's information products to support a new software development process. Procedures for characterizing problem domains in general and mapping to a tailored set of life cycle processes and products are presented. An overview of the method is shown using the following steps: (1) During the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) Generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) Translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) Map the quality criteria to a set of accepted processes and products for achieving each criterion; (5) Select the information products which match or support the accepted processes and products of step 4; and (6) Select the design methodology which produces the information products selected in step 5.

  3. A method for tailoring the information content of a software process model

    NASA Technical Reports Server (NTRS)

    Perkins, Sharon; Arend, Mark B.

    1990-01-01

The framework is defined for a general method for selecting a necessary and sufficient subset of a general software life cycle's information products to support a new software development process. Procedures for characterizing problem domains in general and mapping to a tailored set of life cycle processes and products are presented. An overview of the method is shown using the following steps: (1) During the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) Generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) Translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) Map the quality criteria to a set of accepted processes and products for achieving each criterion; (5) Select the information products which match or support the accepted processes and products of step 4; and (6) Select the design methodology which produces the information products selected in step 5.

  4. Genetic signatures of natural selection in a model invasive ascidian

    PubMed Central

    Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin

    2017-01-01

Invasive species represent promising models to study species’ responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found a high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data with six crucial environmental variables, 16 putatively selected loci showed significant correlation with these environmental variables. This suggests that local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified “plastic” genomic regions and genes that are promising regions to investigate evolutionary responses to rapid environmental change in C. robusta. PMID:28266616

  5. MultiGeMS: detection of SNVs from multiple samples using model selection on high-throughput sequencing data.

    PubMed

    Murillo, Gabriel H; You, Na; Su, Xiaoquan; Cui, Wei; Reilly, Muredach P; Li, Mingyao; Ning, Kang; Cui, Xinping

    2016-05-15

Single nucleotide variant (SNV) detection procedures are being utilized as never before to analyze the recent abundance of high-throughput DNA sequencing data, on both single and multiple sample datasets. Building on previously published work with the single sample SNV caller genotype model selection (GeMS), a multiple sample version of GeMS (MultiGeMS) is introduced. Unlike other popular multiple sample SNV callers, the MultiGeMS statistical model accounts for enzymatic substitution sequencing errors. It also addresses the multiple testing problem endemic to multiple sample SNV calling and utilizes high performance computing (HPC) techniques. A simulation study demonstrates that MultiGeMS ranks highest in precision among a selection of popular multiple sample SNV callers, while showing exceptional recall in calling common SNVs. Further, both simulation studies and real data analyses indicate that MultiGeMS is robust to low-quality data. We also demonstrate that accounting for enzymatic substitution sequencing errors not only improves SNV call precision at low mapping quality regions, but also improves recall at reference allele-dominated sites with high mapping quality. The MultiGeMS package can be downloaded from https://github.com/cui-lab/multigems. Contact: xinping.cui@ucr.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved.

  6. Land suitability assessment for wind power plant site selection using ANP-DEMATEL in a GIS environment: case study of Ardabil province, Iran.

    PubMed

    Azizi, Ali; Malekmohammadi, Bahram; Jafari, Hamid Reza; Nasiri, Hossein; Amini Parsa, Vahid

    2014-10-01

Wind energy is a renewable energy resource whose usage has increased in most countries. Site selection for the establishment of large groups of wind turbines, called wind farms, like any other engineering project, requires basic information and careful planning. This study assessed the possibility of establishing wind farms in Ardabil province in northwestern Iran by using a combination of the analytic network process (ANP) and decision making trial and evaluation laboratory (DEMATEL) methods in a geographical information system (GIS) environment. DEMATEL was used to determine the relationships among the criteria. The weights of the criteria were determined using ANP, and the overlay process was done in GIS. Using 13 information layers across three main criteria (environmental, technical and economic), the land suitability map was produced and reclassified into five equally scored classes from least suitable to most suitable areas. The results showed that about 6.68% of the area of Ardabil province is most suitable for the establishment of wind turbines. Sensitivity analysis shows that significant portions of these most suitable zones coincide with suitable divisions of the input layers. The efficiency and accuracy of the hybrid model (ANP-DEMATEL) were evaluated and the results were compared to the ANP model. The sensitivity analysis, map classification, and factor weights for the two methods showed satisfactory results for the ANP-DEMATEL model in wind power plant site selection.
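The GIS overlay step described above can be sketched as a weighted sum of normalized criterion layers followed by reclassification into five suitability classes. The layer names and weights below are hypothetical; the study derived its actual weights with ANP and the criterion relationships with DEMATEL.

```python
# Sketch of a weighted-overlay suitability map (hypothetical layers/weights;
# the study computes weights via ANP-DEMATEL).

def weighted_overlay(layers, weights):
    # Each layer is a grid of scores normalized to [0, 1].
    rows, cols = len(layers[0]), len(layers[0][0])
    return [[sum(w * lyr[r][c] for lyr, w in zip(layers, weights))
             for c in range(cols)] for r in range(rows)]

def reclassify(grid, n_classes=5):
    # Map continuous suitability in [0, 1] to classes 1 (least) .. n (most).
    return [[min(n_classes, int(v * n_classes) + 1) for v in row] for row in grid]

wind_speed  = [[0.9, 0.2], [0.6, 0.1]]   # hypothetical normalized layers
slope       = [[0.8, 0.5], [0.4, 0.3]]
grid_access = [[0.7, 0.9], [0.2, 0.6]]
weights = [0.5, 0.3, 0.2]                # hypothetical ANP weights, summing to 1

suit = weighted_overlay([wind_speed, slope, grid_access], weights)
classes = reclassify(suit)
```

In a real workflow each layer would be a raster of the study area and the reclassified grid would become the published land suitability map.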

  7. An Expressed Sequence Tag (EST)-enriched genetic map of turbot (Scophthalmus maximus): a useful framework for comparative genomics across model and farmed teleosts

    PubMed Central

    2012-01-01

Background The turbot (Scophthalmus maximus) is a relevant species in European aquaculture. The small turbot genome provides a source for genomics strategies to use in order to understand the genetic basis of productive traits, particularly those related to sex, growth and pathogen resistance. Genetic maps represent essential genomic screening tools, allowing localization of quantitative trait loci (QTL) and identification of candidate genes through comparative mapping. This information is the backbone for developing marker-assisted selection (MAS) programs in aquaculture. Expressed sequence tag (EST) resources have largely increased in turbot, thus supplying numerous type I markers suitable for extending the previous linkage map, which was mostly based on anonymous loci. The aim of this study was to construct a higher-resolution turbot genetic map using EST-linked markers, which will prove useful for comparative mapping studies. Results A consensus gene-enriched genetic map of the turbot was constructed using 463 SNP and microsatellite markers in nine reference families. This map contains 438 markers, 180 of them EST-linked, clustered in 24 linkage groups. Linkage and comparative genomics evidence suggested additional linkage group fusions toward the consolidation of the turbot map according to karyotype information. The linkage map showed a total length of 1402.7 cM with a low average intermarker distance (3.7 cM; ~2 Mb). A global 1.6:1 female-to-male recombination frequency (RF) ratio was observed, although largely variable among linkage groups and chromosome regions. Comparative sequence analysis revealed large macrosyntenic patterns against model teleost genomes, with significant hits decreasing from stickleback (54%) to zebrafish (20%). Comparative mapping supported particular chromosome rearrangements within Acanthopterygii and helped to assign unallocated markers to specific turbot linkage groups. 
Conclusions The new gene-enriched high-resolution turbot map represents a useful genomic tool for QTL identification, positional cloning strategies, and future genome assembly. This map showed large synteny conservation against model teleost genomes. Comparative genomics and data mining from landmarks will provide straightforward access to candidate genes, which will be the basis for genetic breeding programs and evolutionary studies in this species. PMID:22747677

  8. Probabilistic Seismic Hazard Maps for Ecuador

    NASA Astrophysics Data System (ADS)

    Mariniere, J.; Beauval, C.; Yepes, H. A.; Laurence, A.; Nocquet, J. M.; Alvarado, A. P.; Baize, S.; Aguilar, J.; Singaucho, J. C.; Jomard, H.

    2017-12-01

A probabilistic seismic hazard study is presented for Ecuador, a country facing high seismic hazard, both from megathrust subduction earthquakes and from shallow crustal moderate to large earthquakes. Building on the knowledge produced in recent years on historical seismicity, earthquake catalogs, active tectonics, geodynamics, and geodesy, several alternative earthquake recurrence models are developed. An area source model is first proposed, based on the seismogenic crustal and inslab sources defined in Yepes et al. (2016). A slightly different segmentation is proposed for the subduction interface, with respect to Yepes et al. (2016). Three earthquake catalogs are used to account for the numerous uncertainties in the modeling of frequency-magnitude distributions. The hazard maps obtained highlight several source zones enclosing fault systems that exhibit low seismic activity, not representative of the geological and/or geodetic slip rates. Consequently, a fault model is derived, including faults with an earthquake recurrence model inferred from geological and/or geodetic slip rate estimates. The geodetic slip rates on the set of simplified faults are estimated from a GPS horizontal velocity field (Nocquet et al. 2014). Assumptions on the aseismic component of the deformation are required. Combining these alternative earthquake models in a logic tree, and using a set of selected ground-motion prediction equations adapted to Ecuador's different tectonic contexts, a mean hazard map is obtained. Hazard maps corresponding to the 16th and 84th percentiles are also derived, highlighting the zones where uncertainties on the hazard are highest.
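The logic-tree combination described above can be sketched as a weighted mean plus weighted percentiles over branch results. The branch values and weights below are hypothetical single-site numbers; the actual study combines full hazard curves from alternative recurrence models and ground-motion prediction equations.

```python
# Sketch of logic-tree branch combination at one site (hypothetical values).

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def weighted_percentile(values, weights, q):
    # q in [0, 1]: percentile of the weighted discrete branch distribution.
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum / total >= q:
            return v
    return pairs[-1][0]

# PGA (g) at one site from four hypothetical logic-tree branches
pga_branches = [0.28, 0.35, 0.41, 0.52]
weights      = [0.2, 0.3, 0.3, 0.2]

mean_pga = weighted_mean(pga_branches, weights)
p16 = weighted_percentile(pga_branches, weights, 0.16)
p84 = weighted_percentile(pga_branches, weights, 0.84)
```

Repeating this per grid node yields the mean map and the 16th/84th-percentile maps that bracket the epistemic uncertainty.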

  9. Application of support vector machines for copper potential mapping in Kerman region, Iran

    NASA Astrophysics Data System (ADS)

    Shabankareh, Mahdi; Hezarkhani, Ardeshir

    2017-04-01

The first step in systematic exploration studies is mineral potential mapping, which involves classifying the study area into favorable and unfavorable parts. Support vector machines (SVM) are designed for supervised classification based on statistical learning theory; applied to classification, the method is termed support vector classification (SVC). This paper describes an SVC model that combines regional-scale exploration data for copper potential mapping in the Kerman copper-bearing belt in the south of Iran. Data layers, or evidential maps, comprised six datasets, namely lithology, tectonics, airborne geophysics, ferric alteration, hydroxide alteration and geochemistry. The SVC modeling result selected 2220 pixels as favorable zones, approximately 25 percent of the study area. Moreover, 66 out of 86 copper indices, approximately 78.6% of all, were located in favorable zones. Another main goal of this study was to determine how each input affects the favorable output. For this purpose, the histogram of each normalized input dataset over the favorable output was drawn. The histograms of each input dataset for the favorable output showed that each information layer had a certain pattern. These patterns of the SVC results could be considered as regional copper exploration characteristics.
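The post-classification analysis described above, checking what fraction of known copper indices fall inside favorable zones and drawing histograms of each normalized input over the favorable pixels, can be sketched as follows. All pixel coordinates and values here are hypothetical; the study worked with six evidential raster layers and an SVC-derived favorability map.

```python
# Sketch of favorable-zone validation and input-layer histograms
# (hypothetical data; the study used SVC output over six evidential layers).

def hit_rate(deposit_pixels, favorable):
    # Fraction of known deposit locations that fall in favorable pixels.
    hits = sum(1 for p in deposit_pixels if p in favorable)
    return hits / len(deposit_pixels)

def favorable_histogram(layer_values, favorable, bins=5):
    # layer_values: dict pixel -> normalized value in [0, 1].
    counts = [0] * bins
    for p in favorable:
        b = min(bins - 1, int(layer_values[p] * bins))
        counts[b] += 1
    return counts

favorable = {(0, 0), (0, 1), (1, 1)}          # pixels classified favorable
deposits = [(0, 0), (1, 1), (2, 2), (0, 1)]   # known copper indices
layer = {(0, 0): 0.92, (0, 1): 0.75, (1, 1): 0.81}  # one normalized input layer

rate = hit_rate(deposits, favorable)
hist = favorable_histogram(layer, favorable)
```

A layer whose favorable-pixel histogram concentrates in the high-value bins, as here, is the kind of "certain pattern" the abstract treats as a regional exploration characteristic.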

  10. Genome-Wide Association Mapping for Yield and Other Agronomic Traits in an Elite Breeding Population of Tropical Rice (Oryza sativa)

    PubMed Central

    Lalusin, Antonio; Borromeo, Teresita; Gregorio, Glenn; Hernandez, Jose; Virk, Parminder; Collard, Bertrand; McCouch, Susan R.

    2015-01-01

Genome-wide association studies (GWAS) are frequently used to detect QTL in diverse collections of crop germplasm, based on historic recombination events and linkage disequilibrium across the genome. Generally, diversity panels genotyped with high density SNP panels are utilized in order to assay a wide range of alleles and haplotypes and to monitor recombination breakpoints across the genome. By contrast, GWAS have not generally been performed in breeding populations. In this study we performed association mapping for 19 agronomic traits, including yield and yield components, in a breeding population of elite irrigated tropical rice breeding lines so that the results would be more directly applicable to breeding than those from a diversity panel. The population was genotyped with 71,710 SNPs using genotyping-by-sequencing (GBS), and GWAS were performed with the explicit goal of expediting selection in the breeding program. Using this breeding panel we identified 52 QTL for 11 agronomic traits, including large-effect QTLs for flowering time and grain length/grain width/grain length-breadth ratio. We also identified haplotypes that can be used to select plants in our population for short stature (plant height), early flowering time, and high yield, and thus demonstrate the utility of association mapping in breeding populations for informing breeding decisions. We conclude by exploring how the newly identified significant SNPs and insights into the genetic architecture of these quantitative traits can be leveraged to build genomics-assisted selection models. PMID:25785447

  11. Examination of Parameters Affecting the House Prices by Multiple Regression Analysis and its Contributions to Earthquake-Based Urban Transformation

    NASA Astrophysics Data System (ADS)

    Denli, H. H.; Durmus, B.

    2016-12-01

The purpose of this study is to examine the factors that may affect apartment prices with multiple linear regression analysis models and to visualize the results with value maps. The study is focused on a county of Istanbul, Turkey. In total, 390 apartments around the county of Umraniye are evaluated with respect to their physical and locational conditions. The identification of factors affecting the price of apartments in the county, which has a population of approximately 600,000, is expected to provide a significant contribution to the apartment market. Physical factors are selected as the age, number of rooms, size, number of floors of the building and the floor on which the apartment is positioned. Locational factors are selected as the distances to the nearest hospital, school, park and police station. In total, ten physical and locational parameters are examined by regression analysis. After the regression analysis has been performed, value maps are composed from the parameters age, price and price per square meter. The most significant of the composed maps is the price per square meter map. Results show that the location of the apartment has the most influence on its price per square meter. A further application is developed from the composed maps by exploring the use of the price per square meter map in urban transformation practices. By marking the buildings older than 15 years on the price per square meter map, a new interpretation is made to determine the buildings to which priority should be given during an urban transformation in the county. This county is very close to the North Anatolian Fault zone and is under the threat of earthquakes. By marking the apartments older than 15 years on the price per square meter map, a list of apartments that are both older and have expensive square meter prices can be gathered. 
With the help of this list, priority could be given to the selected higher-valued old apartments, limiting the country's economic losses in an earthquake. We may call this type of urban transformation earthquake-based urban transformation.
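The prioritization rule described above, computing price per square meter and then flagging old but high-valued apartments, can be sketched with a few hypothetical records (the study's actual model uses ten regression parameters and 390 apartments):

```python
# Sketch of the earthquake-based prioritization rule (hypothetical sample data):
# flag apartments older than 15 years whose unit price exceeds the median.
from statistics import median

apartments = [
    {"id": 1, "price": 900_000, "size_m2": 100, "age": 20},
    {"id": 2, "price": 450_000, "size_m2": 90,  "age": 5},
    {"id": 3, "price": 800_000, "size_m2": 80,  "age": 30},
    {"id": 4, "price": 300_000, "size_m2": 75,  "age": 25},
]

# Price per square meter, the key value-map variable in the study
for a in apartments:
    a["unit_price"] = a["price"] / a["size_m2"]

med = median(a["unit_price"] for a in apartments)
priority = [a["id"] for a in apartments
            if a["age"] > 15 and a["unit_price"] > med]
```

On a real value map the same rule would be applied per building, so that transformation budgets target old stock whose loss would be most costly.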

  12. Regional mapping of aerosol population and surface albedo of Titan by the massive inversion of the Cassini/VIMS dataset

    NASA Astrophysics Data System (ADS)

    Rodriguez, S.; Cornet, T.; Maltagliati, L.; Appéré, T.; Le Mouelic, S.; Sotin, C.; Barnes, J. W.; Brown, R. H.

    2017-12-01

Mapping Titan's surface albedo is a necessary step to provide reliable constraints on its composition. However, even after the end of the Cassini mission, surface albedo maps of Titan, especially over large regions, are still very rare, as the surface windows are strongly affected by atmospheric contributions (absorption, scattering). A full radiative transfer model is an essential tool to remove these effects, but too time-consuming to systematically process the 50,000 hyperspectral images VIMS has acquired since the beginning of the mission. We developed a massive inversion of VIMS data based on lookup tables computed from a state-of-the-art radiative transfer model in pseudo-spherical geometry, updated with new aerosol properties coming from our analysis of observations acquired recently by VIMS (solar occultations and emission phase curves). Once the physical properties of gases, aerosols and surface are fixed, the lookup tables are built for the remaining free parameters: the incidence, emergence and azimuth angles, given by navigation; and two products (the aerosol opacity and the surface albedo at all wavelengths). The lookup table grid was carefully selected after thorough testing. The data inversion on these pre-computed spectra (suitably interpolated) is more than 1000 times faster than rerunning the full radiative transfer model at each minimization step. We present here the results from selected flybys. We invert mosaics composed of pairs of flybys observing the same area at two different times. The composite albedo maps do not show significant discontinuities in any of the surface windows, suggesting a robust correction of the effects of the geometry (and thus the aerosols) on the observations. Maps of aerosol and albedo uncertainties are also provided, along with absolute errors. We are thus able to provide reliable surface albedo maps at pixel scale for entire regions of Titan and for the whole VIMS spectral range.
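The lookup-table inversion described above can be sketched with a toy forward model standing in for the full radiative transfer. Everything below is hypothetical (a made-up two-window forward model and grid); the real tables are also indexed by the incidence, emergence and azimuth angles and span the full VIMS spectral range.

```python
# Sketch of lookup-table inversion: precompute spectra on a (albedo, opacity)
# grid once, then invert each observation by a cheap table search instead of
# a full radiative-transfer run per minimization step. Toy forward model.
import math

def toy_forward(albedo, tau, k):
    # Hypothetical stand-in: surface term attenuated by aerosols plus an
    # aerosol-scattering term; k is a per-window scattering efficiency.
    return albedo * math.exp(-k * tau) + 0.15 * (1.0 - math.exp(-k * tau))

K = [1.0, 0.4]                              # two hypothetical spectral windows
albedos = [i / 50 for i in range(51)]       # 0.00 .. 1.00
taus    = [i / 20 for i in range(41)]       # 0.0 .. 2.0

# Build the lookup table once (this is the expensive step in reality)
table = {(a, t): [toy_forward(a, t, k) for k in K]
         for a in albedos for t in taus}

def invert(observed):
    # Least-squares search over the pre-computed grid
    return min(table, key=lambda p: sum((m - o) ** 2
                                        for m, o in zip(table[p], observed)))

obs = [toy_forward(0.42, 0.8, k) for k in K]   # synthetic "observation"
a_hat, t_hat = invert(obs)
```

Using two windows makes the toy inversion well-posed; in practice interpolation between grid nodes refines the retrieved albedo and opacity beyond the grid spacing.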

  13. KIDFamMap: a database of kinase-inhibitor-disease family maps for kinase inhibitor selectivity and binding mechanisms

    PubMed Central

    Chiu, Yi-Yuan; Lin, Chih-Ta; Huang, Jhang-Wei; Hsu, Kai-Cheng; Tseng, Jen-Hu; You, Syuan-Ren; Yang, Jinn-Moon

    2013-01-01

Kinases play central roles in signaling pathways and are promising therapeutic targets for many diseases. Designing selective kinase inhibitors is an emerging and challenging task, because kinases share an evolutionarily conserved ATP-binding site. KIDFamMap (http://gemdock.life.nctu.edu.tw/KIDFamMap/) is the first database to explore kinase-inhibitor families (KIFs) and kinase-inhibitor-disease (KID) relationships for kinase inhibitor selectivity and mechanisms. This database includes 1208 KIFs, 962 KIDs, 55 603 kinase-inhibitor interactions (KIIs), 35 788 kinase inhibitors, 399 human protein kinases, 339 diseases and 638 disease allelic variants. Here, a KIF can be defined as follows: (i) the kinases in the KIF share significant sequence similarity, (ii) the inhibitors in the KIF share significant topological similarity and (iii) the KIIs in the KIF share significant interaction similarity. The KIIs within a KIF are often conserved on some consensus KIDFamMap anchors, which represent conserved interactions between the kinase subsites and consensus moieties of their inhibitors. Our experimental results reveal that the members of a KIF often possess similar inhibition profiles. The KIDFamMap anchors can reflect kinase conformation types, kinase functions and kinase inhibitor selectivity. We believe that KIDFamMap provides biological insights into kinase inhibitor selectivity and binding mechanisms. PMID:23193279

  14. NASA Lunar and Planetary Mapping and Modeling

    NASA Astrophysics Data System (ADS)

    Day, Brian; Law, Emily

    2016-10-01

    NASA's Lunar and Planetary Mapping and Modeling Portals provide web-based suites of interactive visualization and analysis tools to enable mission planners, planetary scientists, students, and the general public to access mapped data products from past and current missions to the Moon, Mars, and Vesta. New portals for additional planetary bodies are being planned. This presentation will recap some of the enhancements to these products during the past year and preview work currently being undertaken. New data products added to the Lunar Mapping and Modeling Portal (LMMP) include generalized products as well as polar data products specifically targeting potential sites for the Resource Prospector mission. New tools being developed include traverse planning and surface potential analysis. Current development work on LMMP also includes facilitating mission planning and data management for lunar CubeSat missions. Looking ahead, LMMP is working with the NASA Astromaterials Office to integrate with their Lunar Apollo Sample database to help better visualize the geographic contexts of retrieved samples. All of this will be done within the framework of a new user interface which, among other improvements, will provide significantly enhanced 3D visualizations and navigation. Mars Trek, the project's Mars portal, has now been assigned by NASA's Planetary Science Division to support site selection and analysis for the Mars 2020 Rover mission as well as for the Mars Human Landing Exploration Zone Sites, and is being enhanced with data products and analysis tools specifically requested by the proposing teams for the various sites.
NASA Headquarters is giving high priority to Mars Trek's use as a means to directly involve the public in these upcoming missions, letting them explore the areas the agency is focusing upon, understand what makes these sites so fascinating, follow the selection process, and get caught up in the excitement of exploring Mars. The portals also serve as outstanding resources for education and outreach. As such, they have been designated by NASA's Science Mission Directorate as key supporting infrastructure for the new education programs selected through the division's recent Cooperative Agreement Notice (CAN).

  15. Assessing habitat connectivity for ground-dwelling animals in an urban environment.

    PubMed

    Braaker, S; Moretti, M; Boesch, R; Ghazoul, J; Obrist, M K; Bontadina, F

    To ensure viable species populations in fragmented landscapes, individuals must be able to move between suitable habitat patches. Despite the increased interest in biodiversity assessment in urban environments, the ecological relevance of habitat connectivity in highly fragmented landscapes remains largely unknown. The first step to understanding the role of habitat connectivity in urban ecology is the challenging task of assessing connectivity in the complex patchwork of contrasting habitats that is found in cities. We developed a data-based framework to assess habitat connectivity that minimizes the use of subjective assumptions and consists of the following sequential steps: (1) identification of habitat preference based on empirical habitat-use data; (2) derivation of habitat resistance surfaces evaluating various transformation functions; (3) modeling of different connectivity maps with electrical circuit theory (Circuitscape), a method considering all possible pathways across the landscape simultaneously; and (4) identification of the best connectivity map with information-theoretic model selection. We applied this analytical framework to assess habitat connectivity for the European hedgehog Erinaceus europaeus, a model species for ground-dwelling animals, in the city of Zurich, Switzerland, using GPS track points from 40 individuals. The best model revealed spatially explicit connectivity “pinch points,” as well as multiple habitat connections. Cross-validation indicated the general validity of the selected connectivity model. The results show that both habitat connectivity and habitat quality affect the movement of urban hedgehogs (relative importance of the two variables was 19.2% and 80.8%, respectively), and are thus both relevant for predicting urban animal movements. Our study demonstrates that even in the complex habitat patchwork of cities, habitat connectivity plays a major role for ground-dwelling animal movement.
Data-based habitat connectivity maps can thus serve as an important tool for city planners to identify habitat corridors and plan appropriate management and conservation measures for urban animals. The analytical framework we describe to model such connectivity maps is generally applicable to different types of habitat-use data and can be adapted to the movement scale of the focal species. It also allows evaluation of the impact of future landscape changes or management scenarios on habitat connectivity in urban landscapes.
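
    Steps (2) and (4) of the framework above, evaluating candidate resistance transformations and selecting among the resulting models with an information criterion, can be sketched as follows. The suitability values, the two candidate transformations, and the Poisson crossing model are invented for illustration and are not the study's actual likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical habitat-suitability values per cell (0 = avoided, 1 = preferred)
suitability = rng.uniform(0, 1, 200)

# Step (2): candidate transformation functions from suitability to resistance
transforms = {
    "linear":      lambda s: 1.0 + 99.0 * (1.0 - s),
    "exponential": lambda s: 100.0 ** (1.0 - s),
}

# Simulated movement data: cells with low resistance are crossed more often
# (generated here from the exponential transform, so it should win selection)
true_res = transforms["exponential"](suitability)
crossings = rng.poisson(50.0 / true_res)

def aic(resistance, counts):
    # Poisson log-likelihood of crossing counts with rate c / resistance,
    # where the single scale parameter c is fitted by maximum likelihood
    rate = counts.sum() / (1.0 / resistance).sum() / resistance
    loglik = np.sum(counts * np.log(rate) - rate)
    return 2 * 1 - 2 * loglik  # AIC with k = 1 fitted parameter

# Step (4): information-theoretic selection of the best transformation
scores = {name: aic(f(suitability), crossings) for name, f in transforms.items()}
best = min(scores, key=scores.get)
print(best, scores)
```

    In the actual framework the compared models are full Circuitscape connectivity maps fitted against GPS data, but the selection logic, lowest AIC wins, is the same.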

  16. THE SOUTHWEST REGIONAL GAP PROJECT: A DATABASE MODEL FOR REGIONAL LANDSCAPE ASSESSMENT, RESOURCE PLANNING, AND VULNERABILITY ANALYSIS

    EPA Science Inventory

    The Gap Analysis Program (GAP) is a national interagency program that maps the distribution of plant communities and selected animal species and compares these distributions with land stewardship to identify biotic elements at potential risk of endangerment. Acquisition of primar...

  17. FIAMODEL: Users Guide Version 3.0.

    Treesearch

    Scott A. Pugh; David D. Reed; Kurt S. Pregitzer; Patrick D. Miles

    2002-01-01

    FIAMODEL is a geographic information system (GIS) program used to summarize Forest Inventory and Analysis (FIA, USDA Forest Service) data such as volume. The model runs in ArcView and allows users to select FIA plots with heads-up digitizing, overlays of digital map layers, or queries based on specific plot attributes.

  18. Spatiotemporal multivariate mixture models for Bayesian model selection in disease mapping.

    PubMed

    Lawson, A B; Carroll, R; Faes, C; Kirby, R S; Aregay, M; Watjou, K

    2017-12-01

    It is often the case that researchers wish to simultaneously explore the behavior of and estimate overall risk for multiple, related diseases with varying rarity while accounting for potential spatial and/or temporal correlation. In this paper, we propose a flexible class of multivariate spatio-temporal mixture models to fill this role. Further, these models offer flexibility with the potential for model selection as well as the ability to accommodate lifestyle, socio-economic, and physical environmental variables with spatial, temporal, or both structures. Here, we explore the capability of this approach via a large scale simulation study and examine a motivating data example involving three cancers in South Carolina. The results, which focus on four model variants, suggest that all models possess the ability to recover simulation ground truth and display improved model fit over two baseline Knorr-Held spatio-temporal interaction model variants in a real data application.

  19. Many-to-one form-to-function mapping weakens parallel morphological evolution.

    PubMed

    Thompson, Cole J; Ahmed, Newaz I; Veen, Thor; Peichel, Catherine L; Hendry, Andrew P; Bolnick, Daniel I; Stuart, Yoel E

    2017-11-01

    Evolutionary ecologists aim to explain and predict evolutionary change under different selective regimes. Theory suggests that such evolutionary prediction should be more difficult for biomechanical systems in which different trait combinations generate the same functional output: "many-to-one mapping." Many-to-one mapping of phenotype to function enables multiple morphological solutions to meet the same adaptive challenges. Therefore, many-to-one mapping should undermine parallel morphological evolution, and hence evolutionary predictability, even when selection pressures are shared among populations. Studying 16 replicate pairs of lake- and stream-adapted threespine stickleback (Gasterosteus aculeatus), we quantified three parts of the teleost feeding apparatus and used biomechanical models to calculate their expected functional outputs. The three feeding structures differed in their form-to-function relationship from one-to-one (lower jaw lever ratio) to increasingly many-to-one (buccal suction index, opercular 4-bar linkage). We tested for (1) weaker linear correlations between phenotype and calculated function, and (2) less parallel evolution across lake-stream pairs, in the many-to-one systems relative to the one-to-one system. We confirm both predictions, thus supporting the theoretical expectation that increasing many-to-one mapping undermines parallel evolution. Therefore, sole consideration of morphological variation within and among populations might not serve as a proxy for functional variation when multiple adaptive trait combinations exist. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  20. Systems thinking, the Swiss Cheese Model and accident analysis: a comparative systemic analysis of the Grayrigg train derailment using the ATSB, AcciMap and STAMP models.

    PubMed

    Underwood, Peter; Waterson, Patrick

    2014-07-01

    The Swiss Cheese Model (SCM) is the most popular accident causation model and is widely used throughout various industries. A debate exists in the research literature over whether the SCM remains a viable tool for accident analysis. Critics of the model suggest that it provides a sequential, oversimplified view of accidents. Conversely, proponents suggest that it embodies the concepts of systems theory, as per the contemporary systemic analysis techniques. The aim of this paper was to consider whether the SCM can provide a systems thinking approach and remain a viable option for accident analysis. To achieve this, the train derailment at Grayrigg was analysed with an SCM-based model (the ATSB accident investigation model) and two systemic accident analysis methods (AcciMap and STAMP). The analysis outputs and usage of the techniques were compared. The findings of the study showed that each model applied the systems thinking approach. However, the ATSB model and AcciMap graphically presented their findings in a more succinct manner, whereas STAMP more clearly embodied the concepts of systems theory. The study suggests that, whilst the selection of an analysis method is subject to trade-offs that practitioners and researchers must make, the SCM remains a viable model for accident analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Selecting PPE for the Workplace (Personal Protective Equipment for the Eyes and Face)

    MedlinePlus

    Interactive OSHA guide to selecting personal protective equipment (PPE) for the eyes and face in the workplace, covering impact, heat, chemical, dust and optical radiation hazards and the corresponding OSHA requirements.

  2. Marker-Assisted Introgression in Backcross Breeding Programs

    PubMed Central

    Visscher, P. M.; Haley, C. S.; Thompson, R.

    1996-01-01

    The efficiency of marker-assisted introgression in backcross populations derived from inbred lines was investigated by simulation. Background genotypes were simulated assuming that a genetic model of many genes of small effects in coupling phase explains the observed breed difference and variance in backcross populations. Markers were efficient in introgression backcross programs for simultaneously introgressing an allele and selecting for the desired genomic background. Using a marker spacing of 10-20 cM gave an advantage of one to two backcross generations of selection relative to random or phenotypic selection. When the position of the gene to be introgressed is uncertain, for example because its position was estimated from a trait gene mapping experiment, a chromosome segment should be introgressed that is likely to include the allele of interest. Even for relatively precisely mapped quantitative trait loci, flanking markers or marker haplotypes should cover ~10-20 cM around the estimated position of the gene, to ensure that the allele frequency does not decline in later backcross generations. PMID:8978075
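
    A toy version of such a simulation, background selection on marker genotypes across backcross generations, might look like this. The chromosome length, marker spacing, family size and simplified crossover model are all illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# One 100 cM chromosome with background markers every 10 cM (11 markers).
markers_cm = np.arange(0, 101, 10)

def backcross_gamete(het):
    """One gamete from a backcross parent: 1 = donor allele at a marker,
    0 = recurrent. Simplified Haldane model: crossovers fall as a Poisson
    process (~1 per Morgan) and flip the transmitted strand."""
    xovers = np.sort(rng.uniform(0, 100, rng.poisson(1.0)))
    phase = rng.integers(0, 2)                    # strand at the left end
    counts = np.searchsorted(xovers, markers_cm)  # crossovers before each marker
    return het * ((phase + counts) % 2)

# Marker-assisted background selection: in each generation keep, out of 20
# backcross offspring, the one with the fewest donor alleles at the markers.
het = np.ones(len(markers_cm), dtype=int)  # F1 is heterozygous everywhere
for gen in range(1, 4):
    gametes = [backcross_gamete(het) for _ in range(20)]
    het = min(gametes, key=lambda g: g.sum())
    print(f"BC{gen}: donor alleles at {het.sum()} of {len(het)} markers")
```

    The real simulations additionally track the introgressed allele and a polygenic background, which is what creates the tension between retaining the donor segment and recovering the recurrent genome.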

  3. A Minimum Spanning Forest Based Method for Noninvasive Cancer Detection with Hyperspectral Imaging

    PubMed Central

    Pike, Robert; Lu, Guolan; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei

    2016-01-01

    Goal: The purpose of this paper is to develop a classification method that combines both spectral and spatial information for distinguishing cancer from healthy tissue on hyperspectral images in an animal model. Methods: An automated algorithm based on a minimum spanning forest (MSF) and optimal band selection has been proposed to classify healthy and cancerous tissue on hyperspectral images. A support vector machine (SVM) classifier is trained to create a pixel-wise classification probability map of cancerous and healthy tissue. This map is then used to identify markers that are used to compute mutual information for a range of bands in the hyperspectral image and thus select the optimal bands. An MSF is finally grown to segment the image using spatial and spectral information. Conclusion: The MSF-based method with automatically selected bands proved to be accurate in determining the tumor boundary on hyperspectral images. Significance: Hyperspectral imaging combined with the proposed classification technique has the potential to provide a noninvasive tool for cancer detection. PMID:26285052
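
    The band-selection step, scoring each band by the mutual information between its intensities and the marker labels, can be sketched as follows, using a synthetic cube in which one band carries an injected class signal:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hyperspectral cube: 40x40 pixels, 20 bands; band 5 carries the
# class signal (a +2 shift for "tumor" pixels), the rest are pure noise.
cube = rng.normal(size=(40, 40, 20))
labels = (rng.uniform(size=(40, 40)) > 0.5).astype(int)  # marker map: 1 = tumor
cube[..., 5] += 2.0 * labels

def mutual_information(band, labels, bins=16):
    """Histogram estimate of I(band; label) in nats."""
    joint, _, _ = np.histogram2d(band.ravel(), labels.ravel(), bins=(bins, 2))
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

mi = [mutual_information(cube[..., b], labels) for b in range(cube.shape[-1])]
best_band = int(np.argmax(mi))
print(best_band)
```

    Ranking bands this way and keeping the top scorers yields the reduced band set on which the minimum spanning forest is then grown.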

  4. Updating Mars-GRAM to Increase the Accuracy of Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hiliary L.; Justus, C. G.; Badger, Andrew M.

    2010-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte-Carlo mode, to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). During the Mars Science Laboratory (MSL) site selection process, it was discovered that Mars-GRAM, when used for sensitivity studies for MapYear=0 and large optical depth values such as tau=3, is less than realistic. From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). MGCM results that were used for Mars-GRAM with MapYear set to 0 were from a MGCM run with a fixed value of tau=3 for the entire year at all locations. This has resulted in an imprecise atmospheric density at all altitudes. As a preliminary fix to this pressure-density problem, density factor values were determined for tau=0.3, 1 and 3 that will adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear 0 with Thermal Emission Spectrometer (TES) observations for MapYears 1 and 2 at comparable dust loading. Currently, these density factors are fixed values for all latitudes and Ls. Results will be presented from work being done to derive better multipliers by including variation with latitude and/or Ls by comparison of MapYear 0 output directly against TES limb data. The addition of these more precise density factors to Mars-GRAM 2005 Release 1.4 will improve the results of the sensitivity studies done for large optical depths.

  5. RCrane: semi-automated RNA model building.

    PubMed

    Keating, Kevin S; Pyle, Anna Marie

    2012-08-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems.

  6. SEIPS-based process modeling in primary care.

    PubMed

    Wooldridge, Abigail R; Carayon, Pascale; Hundt, Ann Schoofs; Hoonakker, Peter L T

    2017-04-01

    Process mapping, often used as part of the human factors and systems engineering approach to improve care delivery and outcomes, should be expanded to represent the complex, interconnected sociotechnical aspects of health care. Here, we propose a new sociotechnical process modeling method to describe and evaluate processes, using the SEIPS model as the conceptual framework. The method produces a process map and supplementary table, which identify work system barriers and facilitators. In this paper, we present a case study applying this method to three primary care processes. We used purposeful sampling to select staff (care managers, providers, nurses, administrators and patient access representatives) from two clinics to observe and interview. We show the proposed method can be used to understand and analyze healthcare processes systematically and identify specific areas of improvement. Future work is needed to assess usability and usefulness of the SEIPS-based process modeling method and further refine it. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. SEIPS-Based Process Modeling in Primary Care

    PubMed Central

    Wooldridge, Abigail R.; Carayon, Pascale; Hundt, Ann Schoofs; Hoonakker, Peter

    2016-01-01

    Process mapping, often used as part of the human factors and systems engineering approach to improve care delivery and outcomes, should be expanded to represent the complex, interconnected sociotechnical aspects of health care. Here, we propose a new sociotechnical process modeling method to describe and evaluate processes, using the SEIPS model as the conceptual framework. The method produces a process map and supplementary table, which identify work system barriers and facilitators. In this paper, we present a case study applying this method to three primary care processes. We used purposeful sampling to select staff (care managers, providers, nurses, administrators and patient access representatives) from two clinics to observe and interview. We show the proposed method can be used to understand and analyze healthcare processes systematically and identify specific areas of improvement. Future work is needed to assess usability and usefulness of the SEIPS-based process modeling method and further refine it. PMID:28166883

  8. [Development of an analyzing system for soil parameters based on NIR spectroscopy].

    PubMed

    Zheng, Li-Hua; Li, Min-Zan; Sun, Hong

    2009-10-01

    A rapid estimation system for soil parameters based on spectral analysis was developed using object-oriented (OO) technology. A SOIL class was designed; an instance of the SOIL class is a soil-sample object with a particular type, specific physical properties and spectral characteristics. By extracting the effective information from the modeling spectral data of a soil object, a mapping model is established between the soil parameters and the spectral data, and the parameters of this mapping model can be saved in the system's model database. When forecasting the content of any soil parameter, the corresponding prediction model can be selected for objects with the same soil type and similar soil physical properties. After the object of the target soil samples is passed into the prediction model and processed by the system, an accurate forecast of the content of the target soil samples is obtained. The system includes modules such as file operations, spectra pretreatment, sample analysis, calibrating and validating, and sample content forecasting. The system was designed to run independently of the measurement equipment. The parameters and spectral data files (*.xls) of known soil samples can be input into the system. Various data pretreatments can be selected according to the conditions at hand; the predicted contents appear in the terminal and the forecasting model can be stored in the model database. The system reads the prediction models and their parameters saved in the model database through the module interface, and then the data of the tested samples are fed into the selected model. Finally, the content of soil parameters can be predicted by the developed system. The system was programmed with Visual C++ 6.0 and Matlab 7.0, and Access XP was used to create and manage the model database.

  9. Next Generation Mapping of Enological Traits in an F2 Interspecific Grapevine Hybrid Family

    PubMed Central

    Sun, Qi; Manns, David C.; Sacks, Gavin L.; Mansfield, Anna Katharine; Luby, James J.; Londo, Jason P.; Reisch, Bruce I.; Cadle-Davidson, Lance E.; Fennell, Anne Y.

    2016-01-01

    In winegrapes (Vitis spp.), fruit quality traits such as berry color, total soluble solids content (SS), malic acid content (MA), and yeast assimilable nitrogen (YAN) affect fermentation or wine quality, and are important traits in selecting new hybrid winegrape cultivars. Given the high genetic diversity and heterozygosity of Vitis species and their tendency to exhibit inbreeding depression, linkage map construction and quantitative trait locus (QTL) mapping has relied on F1 families with the use of simple sequence repeat (SSR) and other markers. This study presents the construction of a genetic map by single nucleotide polymorphisms identified through genotyping-by-sequencing (GBS) technology in an F2 mapping family of 424 progeny derived from a cross between the wild species V. riparia Michx. and the interspecific hybrid winegrape cultivar, ‘Seyval’. The resulting map has 1449 markers spanning 2424 cM in genetic length across 19 linkage groups, covering 95% of the genome with an average distance between markers of 1.67 cM. Compared to an SSR map previously developed for this F2 family, these results represent an improved map covering a greater portion of the genome with higher marker density. The accuracy of the map was validated using the well-studied trait berry color. QTL affecting YAN, MA and SS related traits were detected. A joint MA and SS QTL spans a region with candidate genes involved in the malate metabolism pathway. We present an analytical pipeline for calling intercross GBS markers and a high-density linkage map for a large F2 family of the highly heterozygous Vitis genus. This study serves as a model for further genetic investigations of the molecular basis of additional unique characters of North American hybrid wine cultivars and to enhance the breeding process by marker-assisted selection. The GBS protocols for identifying intercross markers developed in this study can be adapted for other heterozygous species. PMID:26974672

  10. A road map for multi-way calibration models.

    PubMed

    Escandar, Graciela M; Olivieri, Alejandro C

    2017-08-07

    A large number of experimental applications of multi-way calibration are known, and a variety of chemometric models are available for the processing of multi-way data. While the main focus has been directed towards three-way data, due to the availability of various instrumental matrix measurements, a growing number of reports are being produced on higher-order signals of increasing complexity. The purpose of this review is to present a general scheme for selecting the appropriate data processing model, according to the properties exhibited by the multi-way data. In spite of the complexity of the multi-way instrumental measurements, simple criteria can be proposed for model selection, based on the presence and number of the so-called multi-linearity breaking modes (instrumental modes that break the low-rank multi-linearity of the multi-way arrays), and also on the existence of mutually dependent instrumental modes. Recent literature reports on multi-way calibration are reviewed, with emphasis on the models that were selected for data processing.

  11. Identifying Greater Sage-Grouse source and sink habitats for conservation planning in an energy development landscape.

    PubMed

    Kirol, Christopher P; Beck, Jeffrey L; Huzurbazar, Snehalata V; Holloran, Matthew J; Miller, Scott N

    2015-06-01

    Conserving a declining species that is facing many threats, including overlap of its habitats with energy extraction activities, depends upon identifying and prioritizing the value of the habitats that remain. In addition, habitat quality is often compromised when source habitats are lost or fragmented due to anthropogenic development. Our objective was to build an ecological model to classify and map habitat quality in terms of source or sink dynamics for Greater Sage-Grouse (Centrocercus urophasianus) in the Atlantic Rim Project Area (ARPA), a developing coalbed natural gas field in south-central Wyoming, USA. We used occurrence and survival modeling to evaluate relationships between environmental and anthropogenic variables at multiple spatial scales and for all female summer life stages, including nesting, brood-rearing, and non-brooding females. For each life stage, we created resource selection functions (RSFs). We weighted the RSFs and combined them to form a female summer occurrence map. We modeled survival also as a function of spatial variables for nest, brood, and adult female summer survival. Our survival-models were mapped as survival probability functions individually and then combined with fixed vital rates in a fitness metric model that, when mapped, predicted habitat productivity (productivity map). Our results demonstrate a suite of environmental and anthropogenic variables at multiple scales that were predictive of occurrence and survival. We created a source-sink map by overlaying our female summer occurrence map and productivity map to predict habitats contributing to population surpluses (source habitats) or deficits (sink habitat) and low-occurrence habitats on the landscape. The source-sink map predicted that of the Sage-Grouse habitat within the ARPA, 30% was primary source, 29% was secondary source, 4% was primary sink, 6% was secondary sink, and 31% was low occurrence. 
Our results provide evidence that energy development and avoidance of energy infrastructure were probably reducing the amount of source habitat within the ARPA landscape. Our source-sink map provides managers with a means of prioritizing habitats for conservation planning based on source and sink dynamics. The spatial identification of high value (i.e., primary source) as well as suboptimal (i.e., primary sink) habitats allows for informed energy development to minimize effects on local wildlife populations.
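
    The final overlay, combining an occurrence map with a productivity map into source/sink classes, reduces to a simple raster classification. The thresholds and random rasters below are placeholders, not the study's fitted cutoffs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical rasters on a common grid: occurrence probability and a
# productivity (fitness) metric, where values > 1 imply a population surplus.
occurrence = rng.uniform(0, 1, (50, 50))
productivity = rng.normal(1.0, 0.3, (50, 50))

classes = np.full(occurrence.shape, "low occurrence", dtype=object)
used = occurrence >= 0.4  # hypothetical occurrence cutoff

# Overlay: occupied cells are graded by productivity into source/sink classes
classes[used & (productivity >= 1.2)] = "primary source"
classes[used & (productivity >= 1.0) & (productivity < 1.2)] = "secondary source"
classes[used & (productivity >= 0.8) & (productivity < 1.0)] = "secondary sink"
classes[used & (productivity < 0.8)] = "primary sink"

names = ("primary source", "secondary source",
         "secondary sink", "primary sink", "low occurrence")
for name in names:
    print(name, round(float(np.mean(classes == name)), 2))
```

    The printed fractions are the toy analogue of the 30/29/4/6/31% breakdown reported for the ARPA.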

  12. Fast periodic stimulation (FPS): a highly effective approach in fMRI brain mapping.

    PubMed

    Gao, Xiaoqing; Gentile, Francesco; Rossion, Bruno

    2018-06-01

    Defining the neural basis of perceptual categorization in a rapidly changing natural environment with low-temporal resolution methods such as functional magnetic resonance imaging (fMRI) is challenging. Here, we present a novel fast periodic stimulation (FPS)-fMRI approach to define face-selective brain regions with natural images. Human observers are presented with a dynamic stream of widely variable natural object images alternating at a fast rate (6 images/s). Every 9 s, a short burst of variable face images contrasting with object images in pairs induces an objective face-selective neural response at 0.111 Hz. A model-free Fourier analysis achieves a twofold increase in signal-to-noise ratio compared to a conventional block-design approach with identical stimuli and scanning duration, allowing a comprehensive map of face-selective areas in the ventral occipito-temporal cortex, including the anterior temporal lobe (ATL), to be derived in all individual brains. Critically, periodicity of the desired category contrast and random variability among widely diverse images effectively eliminate the contribution of low-level visual cues and lead to the highest values (80-90%) of test-retest reliability in the spatial activation map yet reported in imaging higher-level visual functions. FPS-fMRI opens a new avenue for understanding brain function with low-temporal resolution methods.
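
    The core of the FPS analysis, reading out the response amplitude at the stimulation frequency from the Fourier spectrum of a voxel time course, can be sketched as follows; the TR, run length and noise level are invented for illustration.

```python
import numpy as np

# Hypothetical voxel time course: TR = 1 s, 540 volumes = 60 cycles of 9 s
tr, n = 1.0, 540
t = np.arange(n) * tr
f_stim = 1.0 / 9.0  # face bursts every 9 s -> 0.111 Hz

rng = np.random.default_rng(2)
signal = 0.5 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0.0, 1.0, n)

# Model-free Fourier analysis: amplitude spectrum of the time course
spectrum = np.abs(np.fft.rfft(signal)) / n
freqs = np.fft.rfftfreq(n, d=tr)

k = int(np.argmin(np.abs(freqs - f_stim)))  # bin at the stimulation frequency
# Response SNR: amplitude at 0.111 Hz against the mean of neighboring bins
neighbors = np.r_[spectrum[k - 12:k - 2], spectrum[k + 3:k + 13]]
snr = spectrum[k] / neighbors.mean()
print(f"bin {k} ({freqs[k]:.3f} Hz), SNR {snr:.1f}")
```

    Because the run length is an integer number of 9 s cycles, the stimulation frequency falls exactly on a Fourier bin, which is what makes the periodic response separable from broadband noise without any hemodynamic model.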

  13. Mapping and localization for extraterrestrial robotic explorations

    NASA Astrophysics Data System (ADS)

    Xu, Fengliang

    In the exploration of an extraterrestrial environment such as Mars, orbital data, such as high-resolution imagery from the Mars Orbital Camera-Narrow Angle (MOC-NA), laser ranging data from the Mars Orbital Laser Altimeter (MOLA), and multi-spectral imagery from the Thermal Emission Imaging System (THEMIS), play increasingly important roles. However, these remote sensing techniques can never replace the role of landers and rovers, which can provide a close up and inside view. Similarly, orbital mapping cannot compete with ground-level close-range mapping in resolution, precision, and speed. This dissertation addresses two tasks related to robotic extraterrestrial exploration: mapping and rover localization. Image registration is also discussed as an important aspect for both of them. Techniques from computer vision and photogrammetry are applied for automation and precision. Image registration is classified into three sub-categories: intra-stereo, inter-stereo, and cross-site, according to the relationship between stereo images. In the intra-stereo registration, which is the most fundamental sub-category, interest point-based registration and verification by parallax continuity in the principal direction are proposed. Two other techniques, inter-scanline search with constrained dynamic programming for far range matching and Markov Random Field (MRF) based registration for big terrain variation, are explored as possible improvements. Mapping using rover ground images mainly involves the generation of a Digital Terrain Model (DTM) and an ortho-rectified map (orthomap). The first task is to derive the spatial distribution statistics from the first panorama and model the DTM with a dual polynomial model. This model is used for interpolation of the DTM, using Kriging in the close range and Triangular Irregular Network (TIN) in the far range. To generate a uniformly illuminated orthomap from the DTM, a least-squares-based automatic intensity balancing method is proposed.
Finally, a seamless orthomap is constructed by a split-and-merge technique: the mapped area is split or subdivided into small regions of image overlap, and then each small map piece is processed and all of the pieces are merged together to form a seamless map. Rover localization has three stages, all of which use a least-squares adjustment procedure: (1) an initial localization which is accomplished by adjustment over features common to rover images and orbital images, (2) an adjustment of image pointing angles at a single site through inter and intra-stereo tie points, and (3) an adjustment of the rover traverse through manual cross-site tie points. The first stage is based on adjustment of observation angles of features. The second stage and third stage are based on bundle-adjustment. In the third stage, an incremental adjustment method is proposed. Automation in rover localization includes automatic intra/inter-stereo tie point selection, computer-assisted cross-site tie point selection, and automatic verification of accuracy. (Abstract shortened by UMI.)
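
    The least-squares intensity balancing mentioned above can be illustrated with a minimal offset-only version: one additive brightness correction per image piece is solved so that observed intensity differences in the overlaps cancel. The overlap measurements below are made up, and the real method would typically also fit a gain term per image.

```python
import numpy as np

# Observed mean-intensity differences in overlapping regions between map
# pieces: (image_i, image_j, mean_i - mean_j). Values are illustrative.
overlaps = [(0, 1, 12.0), (1, 2, -5.0), (0, 2, 6.5)]
n_images = 3

# One equation per overlap (offset_i - offset_j = -difference) plus a
# gauge constraint fixing image 0's offset, solved by linear least squares.
A = np.zeros((len(overlaps) + 1, n_images))
b = np.zeros(len(overlaps) + 1)
for row, (i, j, diff) in enumerate(overlaps):
    A[row, i], A[row, j], b[row] = 1.0, -1.0, -diff
A[-1, 0] = 1.0  # gauge constraint: fix offset_0 = 0

offsets, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(offsets, 2))
```

    Applying each offset to its image before mosaicking distributes any residual inconsistency among the overlaps instead of concentrating it at one seam.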

  14. Medaka: a promising model animal for comparative population genomics

    PubMed Central

    Matsumoto, Yoshifumi; Oota, Hiroki; Asaoka, Yoichi; Nishina, Hiroshi; Watanabe, Koji; Bujnicki, Janusz M; Oda, Shoji; Kawamura, Shoji; Mitani, Hiroshi

    2009-01-01

    Background Within-species genome diversity has been best studied in humans. The international HapMap project has revealed a tremendous amount of single-nucleotide polymorphisms (SNPs) among humans, many of which show signals of positive selection during human evolution. In most of the cases, however, functional differences between the alleles remain experimentally unverified due to the inherent difficulty of human genetic studies. It would therefore be highly useful to have a vertebrate model with the following characteristics: (1) high within-species genetic diversity, (2) a variety of gene-manipulation protocols already developed, and (3) a completely sequenced genome. Medaka (Oryzias latipes) and its congeneric species, tiny fresh-water teleosts distributed broadly in East and Southeast Asia, meet these criteria. Findings Using Oryzias species from 27 local populations, we conducted a simple screening of nonsynonymous SNPs for 11 genes with apparent orthology between medaka and humans. We found medaka SNPs for which the same sites in human orthologs are known to be highly differentiated among the HapMap populations. Importantly, some of these SNPs show signals of positive selection. Conclusion These results indicate that medaka is a promising model system for comparative population genomics exploring the functional and adaptive significance of allelic differentiations. PMID:19426554
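    Allele-frequency differentiation of the kind highlighted by the HapMap comparisons is commonly summarized by Wright's F_ST. A minimal two-population version for a biallelic SNP (assuming equal sample sizes; this is a textbook formula, not the authors' screening pipeline):

```python
def fst_two_pops(p1, p2):
    """Wright's F_ST for a biallelic SNP from allele frequencies p1, p2."""
    p_bar = (p1 + p2) / 2.0                        # pooled allele frequency
    h_t = 2.0 * p_bar * (1.0 - p_bar)              # expected total heterozygosity
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2.0  # mean within-pop heterozygosity
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t
```

Identical frequencies give F_ST = 0, fixed alternative alleles give F_ST = 1, and highly differentiated SNPs fall in between.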

  15. LiDAR-Derived Flood-Inundation Maps for Real-Time Flood-Mapping Applications, Tar River Basin, North Carolina

    USGS Publications Warehouse

    Bales, Jerad D.; Wagner, Chad R.; Tighe, Kirsten C.; Terziotti, Silvia

    2007-01-01

    Flood-inundation maps were created for selected streamgage sites in the North Carolina Tar River basin. Light detection and ranging (LiDAR) data with a vertical accuracy of about 20 centimeters, provided by the Floodplain Mapping Information System of the North Carolina Floodplain Mapping Program, were processed to produce topographic data for the inundation maps. Bare-earth mass point LiDAR data were reprocessed into a digital elevation model with regularly spaced 1.5-meter by 1.5-meter cells. A tool was developed as part of this project to connect flow paths, or streams, that were inappropriately disconnected in the digital elevation model by such features as a bridge or road crossing. The Hydraulic Engineering Center-River Analysis System (HEC-RAS) model, developed by the U.S. Army Corps of Engineers, was used for hydraulic modeling at each of the study sites. Eleven individual hydraulic models were developed for the Tar River basin sites. Seven models were developed for reaches with a single gage, and four models were developed for reaches of the Tar River main stem that receive flow from major gaged tributaries, or reaches in which multiple gages were near one another. Combined, the Tar River hydraulic models included 272 kilometers of streams in the basin, including about 162 kilometers on the Tar River main stem. The hydraulic models were calibrated to the most current stage-discharge relations at 11 long-term streamgages where rating curves were available. Medium- to high-flow discharge measurements were made at some of the sites without rating curves, and high-water marks from Hurricanes Fran and Floyd were available for high-stage calibration. Simulated rating curves matched measured curves over the full range of flows. Differences between measured and simulated water levels for a specified flow were no more than 0.44 meter and typically were less. 
The calibrated models were used to generate a set of water-surface profiles for each of the 11 modeled reaches at 0.305-meter increments for water levels ranging from bankfull to approximately the highest recorded water level at the downstream-most gage in each modeled reach. Inundated areas were identified by subtracting the water-surface elevation in each 1.5-meter by 1.5-meter grid cell from the land-surface elevation in the cell through an automated routine that was developed to identify all inundated cells hydraulically connected to the cell at the downstream-most gage in the model domain. Inundation maps showing transportation networks and orthoimagery were prepared for display on the Internet. These maps also are linked to the U.S. Geological Survey North Carolina Water Science Center real-time streamflow website. Hence, a user can determine the near real-time stage and water-surface elevation at a U.S. Geological Survey streamgage site in the Tar River basin and link directly to the flood-inundation maps for a depiction of the estimated inundated area at the current water level. Although the flood-inundation maps represent distinct boundaries of inundated areas, some uncertainties are associated with these maps. These are uncertainties in the topographic data for the hydraulic model computational grid and inundation maps, effective friction values (Manning's n), model-validation data, and forecast hydrographs, if used. The Tar River flood-inundation maps were developed by using a steady-flow hydraulic model. This assumption clearly has less of an effect on inundation maps produced for low flows than for high flows when it typically takes more time to inundate areas. A flood in which water levels peak and fall slowly most likely will result in more inundation than a similar flood in which water levels peak and fall quickly. Limitations associated with the steady-flow assumption for hydraulic modeling vary from site to site. 
The one-dimensional modeling approach used in this study resulted in good agreement between measurements and simulations.
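    The inundation-delineation step described above — flag cells whose land surface lies below the water surface, keeping only those hydraulically connected to the downstream-most gage cell — can be sketched as a flood fill. This is a simplified illustration with a flat water surface, not the USGS routine:

```python
from collections import deque
import numpy as np

def inundated(dem, water_level, seed):
    """Flag cells below water_level that are hydraulically connected to seed.

    dem: 2-D array of land-surface elevations; seed: (row, col) of the
    downstream-most gage cell. 4-neighbour breadth-first flood fill.
    """
    wet = dem < water_level
    mask = np.zeros_like(wet, dtype=bool)
    if not wet[seed]:
        return mask
    q = deque([seed])
    mask[seed] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < wet.shape[0] and 0 <= cc < wet.shape[1] \
                    and wet[rr, cc] and not mask[rr, cc]:
                mask[rr, cc] = True
                q.append((rr, cc))
    return mask

dem = np.array([[1., 5., 1.],
                [1., 5., 1.],
                [1., 5., 1.]])
m = inundated(dem, water_level=2.0, seed=(0, 0))
```

The high center column acts as a levee: the right column stays dry even though it lies below the water level, because it is not hydraulically connected to the gage cell.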

  16. Feature Selection Methods for Zero-Shot Learning of Neural Activity.

    PubMed

    Caceres, Carlos A; Roos, Matthew J; Rupp, Kyle M; Milsap, Griffin; Crone, Nathan E; Wolmetz, Michael E; Ratto, Christopher R

    2017-01-01

    Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
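    The correlation-based stability baseline mentioned above can be sketched as: score each feature by how well its stimulus-response profile correlates across repeated presentations, then keep the top-scoring features. A toy illustration with synthetic data, not the authors' code:

```python
import numpy as np
from itertools import combinations

def stability_scores(responses):
    """responses: (n_reps, n_stimuli, n_features). For each feature, return
    the mean Pearson correlation of its stimulus profile across repetition pairs."""
    n_reps, _, n_feat = responses.shape
    scores = np.zeros(n_feat)
    for f in range(n_feat):
        cs = [np.corrcoef(responses[a, :, f], responses[b, :, f])[0, 1]
              for a, b in combinations(range(n_reps), 2)]
        scores[f] = np.mean(cs)
    return scores

rng = np.random.default_rng(0)
profile = rng.normal(size=20)
stable = profile + 0.01 * rng.normal(size=(3, 20))   # reproducible feature
noisy = rng.normal(size=(3, 20))                     # unrepeatable feature
responses = np.stack([stable, noisy], axis=-1)       # shape (3, 20, 2)
scores = stability_scores(responses)
```

The feature whose profile repeats across presentations scores near 1, while the unrepeatable feature scores near 0 and would be discarded.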

  17. Discrimination of Active and Weakly Active Human BACE1 Inhibitors Using Self-Organizing Map and Support Vector Machine.

    PubMed

    Li, Hang; Wang, Maolin; Gong, Ya-Nan; Yan, Aixia

    2016-01-01

    β-secretase (BACE1) is an aspartyl protease that is considered a novel, vital target in Alzheimer's disease therapy. We collected a data set of 294 BACE1 inhibitors and built six classification models to discriminate active and weakly active inhibitors using Kohonen's Self-Organizing Map (SOM) method and the Support Vector Machine (SVM) method. Molecular descriptors were calculated using the program ADRIANA.Code. We adopted two different methods for the training/test set split: a random method and the Self-Organizing Map method. Descriptors were selected by F-score and stepwise linear regression analysis. The best SVM model, Model 2C, has good prediction performance on the test set, with prediction accuracy, sensitivity (SE), specificity (SP), and Matthews correlation coefficient (MCC) of 89.02%, 90%, 88%, and 0.78, respectively. Model 1A is the best SOM model; its accuracy and MCC on the test set were 94.57% and 0.98, respectively. Lone-pair electronegativity and polarizability-related descriptors contributed substantially to the bioactivity of the BACE1 inhibitors. An Extended-Connectivity Fingerprints_4 (ECFP_4) analysis identified some key substructural features, which could be helpful for further drug design research. The SOM and SVM models built in this study can be obtained from the authors by email or other contacts.
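    The F-score used above for descriptor selection ranks a descriptor by between-class separation over within-class spread. A minimal sketch (Chen and Lin's two-class formulation is a common choice; the exact variant used by the authors is an assumption):

```python
import numpy as np

def f_score(x_pos, x_neg):
    """F-score of one descriptor for a two-class (active / weakly active) split."""
    x_all = np.concatenate([x_pos, x_neg])
    num = (x_pos.mean() - x_all.mean())**2 + (x_neg.mean() - x_all.mean())**2
    den = x_pos.var(ddof=1) + x_neg.var(ddof=1)
    return num / den

# Toy descriptor values for each class (illustrative, not the study's data):
active = np.array([5.1, 4.9, 5.3, 5.0])
weak = np.array([1.0, 1.2, 0.9, 1.1])
```

A descriptor whose class distributions overlap completely scores 0, so ranking by F-score keeps only discriminative descriptors for the SVM.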

  18. Structural Bioinformatics-Based Prediction of Exceptional Selectivity of p38 MAP Kinase Inhibitor PH-797804

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Li; Shieh, Huey S.; Selness, Shaun R.

    2009-07-24

    PH-797804 is a diarylpyridinone inhibitor of p38α mitogen-activated protein (MAP) kinase, derived from a racemic mixture as the more potent atropisomer (aS), first proposed by molecular modeling and subsequently confirmed by experiments. On the basis of structural comparison with a different biaryl pyrazole template, and supported by dozens of high-resolution crystal structures of p38α inhibitor complexes, PH-797804 is predicted to possess a high level of specificity across the broad human kinase genome. We used a structural bioinformatics approach to identify two selectivity elements encoded by the TXXXG sequence motif on the p38α kinase hinge: (i) Thr106, which serves as the gatekeeper to the buried hydrophobic pocket occupied by the 2,4-difluorophenyl of PH-797804, and (ii) the bidentate hydrogen bonds formed by the pyridinone moiety with the kinase hinge, requiring an induced 180° rotation of the Met109-Gly110 peptide bond. The peptide flip occurs in p38α kinase due to the critical glycine residue, marked by its conformational flexibility. Kinome-wide sequence mining revealed rare presentation of the selectivity motif. Corroboratively, PH-797804 exhibited exceptionally high specificity against MAP kinases and related kinases. No cross-reactivity was observed in large panels of kinase screens (selectivity ratio of >500-fold). In cellular assays, PH-797804 demonstrated superior potency and selectivity consistent with the biochemical measurements. PH-797804 has met safety criteria in human phase I studies and is under clinical development for several inflammatory conditions. Understanding the rationale for selectivity at the molecular level helps elucidate the biological function and design of specific p38α kinase inhibitors.
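    The kinome-wide mining for the TXXXG hinge motif amounts to a simple sequence scan. A sketch with a regular expression (the hinge fragments below are illustrative toy strings, not curated kinome data):

```python
import re

# TXXXG: gatekeeper threonine, any three residues, then the flexible glycine.
MOTIF = re.compile(r"T...G")

# Hypothetical hinge-region fragments (toy sequences for illustration only):
hinges = {
    "p38alpha_like": "VITHLMGADLN",   # contains THLMG -> candidate for selectivity
    "other_kinase": "VIFQLMDADLN",    # no TXXXG motif
}
selective = [name for name, seq in hinges.items() if MOTIF.search(seq)]
```

Only sequences presenting the motif survive the scan, mirroring the "rare presentation" finding: most kinases would drop out at this step.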

  19. Preliminary northeast Asia geodynamics map

    USGS Publications Warehouse

    Parfenov, Leonid M.; Khanchuk, Alexander I.; Badarch, Gombosuren; Miller, Robert J.; Naumova, Vera V.; Nokleberg, Warren J.; Ogasawara, Masatsugu; Prokopiev, Andrei V.; Yan, Hongquan

    2003-01-01

    This map portrays the geodynamics of Northeast Asia at a scale of 1:5,000,000 using the concepts of plate tectonics and analysis of terranes and overlap assemblages. The map is the result of a detailed compilation and synthesis at 1:5,000,000 scale and is part of a major international collaborative study of the Mineral Resources, Metallogenesis, and Tectonics of Northeast Asia conducted from 1997 through 2002 by geologists from earth science agencies and universities in Russia, Mongolia, Northeastern China, South Korea, Japan, and the USA. The map builds on extensive geologic mapping and associated tectonic studies in Northeast Asia over the last few decades and is the first collaborative compilation of the geology of the region at a scale of 1:5,000,000 by geologists from these countries. The map was compiled by a large group of international geologists, using the concepts and definitions described below, during collaborative workshops over a six-year period. The map is a major new compilation and re-interpretation of pre-existing geologic maps of the region. It is designed to be used for several purposes, including regional tectonic analyses, mineral resource and metallogenic analysis, petroleum resource analysis, neotectonic analysis, and analysis of seismic and volcanic hazards. The map consists of two sheets. Sheet 1 displays the map at a scale of 1:5,000,000 together with the explanation. Sheet 2 displays the introduction, list of map units, and source references. Detailed descriptions of map units and stratigraphic columns are being published separately. This map is one of a series of publications on the mineral resources, metallogenesis, and geodynamics of Northeast Asia. 
Companion studies, other articles and maps, and various detailed reports are: (1) a compilation of major mineral deposit models (Rodionov and Nokleberg, 2000; Rodionov and others, 2000; Obolenskiy and others, in press a); (2) a series of metallogenic belt maps (Obolenskiy and others, 2001; in press b); (3) a lode mineral deposits and placer districts location map for Northeast Asia (Ariunbileg and others, in press b); (4) descriptions of metallogenic belts (Rodionov and others, in press); and (5) a database on significant metalliferous and selected nonmetalliferous lode deposits, and selected placer districts (Ariunbileg and others, in press a).

  20. A Bayesian-Based Novel Methodology to Generate Reliable Site Response Mapping Sensitive to Data Uncertainties

    NASA Astrophysics Data System (ADS)

    Chakraborty, A.; Goto, H.

    2017-12-01

    The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas far inland because of site amplification. The Furukawa district in Miyagi Prefecture, Japan, recorded significant spatial differences in ground motion even at sub-kilometer scales. The site responses in the damage zone far exceeded the levels in the hazard maps. One reason for this mismatch is that such maps follow only the mean value at the measurement locations, with no regard to the data uncertainties, and thus are not always reliable. Our research objective is to develop a methodology that incorporates data uncertainties into mapping and to propose a reliable map. The methodology is based on a hierarchical Bayesian model of normally distributed site responses in space, where the mean (μ), site-specific variance (σ²), and between-sites variance (s²) parameters are treated as unknowns with prior distributions. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially auto-correlated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. Inferences on the unknown parameters are drawn from the posterior distribution using Markov chain Monte Carlo methods. The goal is to find reliable estimates of μ that are sensitive to uncertainties. During initial trials, we observed that the tau (=1/s²) parameter of the CAR prior controls the μ estimation. Using the constraint s = 1/(k×σ), five spatial models with varying k-values were created. We define reliability as measured by the model likelihood and propose the maximum-likelihood model as highly reliable. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum-likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model mean at higher uncertainties (Fig. 1). 
This result is highly significant, as it successfully incorporates the effect of data uncertainties into mapping. This novel approach can be applied to any research field that uses mapping techniques. The methodology is now being applied to real records from a very dense seismic network in the Furukawa district, Miyagi Prefecture, Japan, to generate a reliable map of the site responses.
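    The reported behaviour — the estimate follows the site-specific mean when its uncertainty is small and shrinks toward the model mean when it is large — is characteristic of precision weighting in hierarchical Bayesian models. A one-line sketch of that mechanism (not the authors' full CAR model):

```python
def posterior_mean(x_site, sigma2, mu0, s2):
    """Precision-weighted estimate: follows the site mean x_site when its
    variance sigma2 is small, and shrinks to the model mean mu0 when sigma2
    is large relative to the between-sites variance s2."""
    w = (1.0 / sigma2) / (1.0 / sigma2 + 1.0 / s2)
    return w * x_site + (1.0 - w) * mu0

x_low = posterior_mean(2.0, 0.01, 0.0, 1.0)    # close to the site mean 2.0
x_high = posterior_mean(2.0, 100.0, 0.0, 1.0)  # shrunk toward mu0 = 0.0
```

The weight w is the fraction of total precision carried by the site observation, which reproduces the two limiting behaviours described in the abstract.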

  1. Calibration and Validation of Tundra Plant Functional Type Fractional Cover Mapping

    NASA Astrophysics Data System (ADS)

    Macander, M. J.; Nelson, P.; Frost, G. V., Jr.

    2017-12-01

    Fractional cover maps are being developed for selected tundra plant functional types (PFTs) across >500,000 sq. km of arctic Alaska and adjacent Canada at 30 m resolution. Training and validation data include a field-based dataset collected with the point-intercept sampling method at hundreds of plots spanning bioclimatic and geomorphic gradients. We also compiled 50 blocks of 1-5 cm resolution RGB image mosaics in Alaska (White Mountains, North Slope, and Yukon-Kuskokwim Delta) and the Yukon Territory. The mosaics and associated surface and canopy height models were developed using a consumer drone and structure-from-motion processing. We summarized both the in situ measurements and the drone imagery to determine cover of two PFTs: Low and Tall Deciduous Shrub, and Light Fruticose/Foliose Lichen. We applied these data to train 2 m (limited-extent) and 30 m (wall-to-wall) maps of PFT fractional cover for shrubs and lichen. Predictors for the 2 m models were commercial satellite imagery such as WorldView-2 and WorldView-3, analyzed on the ABoVE Science Cloud. Predictors for the 30 m models were primarily reflectance composites and spectral metrics derived from Landsat imagery, using Google Earth Engine. We compared the performance of models developed from the in situ and drone-derived training data and identify best practices to improve the performance and efficiency of arctic PFT fractional cover mapping.

  2. ICA model order selection of task co-activation networks.

    PubMed

    Ray, Kimberly L; McKay, D Reese; Fox, Peter M; Riedel, Michael C; Uecker, Angela M; Beckmann, Christian F; Smith, Stephen M; Fox, Peter T; Laird, Angela R

    2013-01-01

    Independent component analysis (ICA) has become a widely used method for extracting functional networks in the brain during rest and task. Historically, preferred ICA dimensionality has widely varied within the neuroimaging community, but typically varies between 20 and 100 components. This can be problematic when comparing results across multiple studies because of the impact ICA dimensionality has on the topology of its resultant components. Recent studies have demonstrated that ICA can be applied to peak activation coordinates archived in a large neuroimaging database (i.e., BrainMap Database) to yield whole-brain task-based co-activation networks. A strength of applying ICA to BrainMap data is that the vast amount of metadata in BrainMap can be used to quantitatively assess tasks and cognitive processes contributing to each component. In this study, we investigated the effect of model order on the distribution of functional properties across networks as a method for identifying the most informative decompositions of BrainMap-based ICA components. Our findings suggest dimensionality of 20 for low model order ICA to examine large-scale brain networks, and dimensionality of 70 to provide insight into how large-scale networks fractionate into sub-networks. We also provide a functional and organizational assessment of visual, motor, emotion, and interoceptive task co-activation networks as they fractionate from low to high model-orders.

  3. ICA model order selection of task co-activation networks

    PubMed Central

    Ray, Kimberly L.; McKay, D. Reese; Fox, Peter M.; Riedel, Michael C.; Uecker, Angela M.; Beckmann, Christian F.; Smith, Stephen M.; Fox, Peter T.; Laird, Angela R.

    2013-01-01

    Independent component analysis (ICA) has become a widely used method for extracting functional networks in the brain during rest and task. Historically, preferred ICA dimensionality has widely varied within the neuroimaging community, but typically varies between 20 and 100 components. This can be problematic when comparing results across multiple studies because of the impact ICA dimensionality has on the topology of its resultant components. Recent studies have demonstrated that ICA can be applied to peak activation coordinates archived in a large neuroimaging database (i.e., BrainMap Database) to yield whole-brain task-based co-activation networks. A strength of applying ICA to BrainMap data is that the vast amount of metadata in BrainMap can be used to quantitatively assess tasks and cognitive processes contributing to each component. In this study, we investigated the effect of model order on the distribution of functional properties across networks as a method for identifying the most informative decompositions of BrainMap-based ICA components. Our findings suggest dimensionality of 20 for low model order ICA to examine large-scale brain networks, and dimensionality of 70 to provide insight into how large-scale networks fractionate into sub-networks. We also provide a functional and organizational assessment of visual, motor, emotion, and interoceptive task co-activation networks as they fractionate from low to high model-orders. PMID:24339802

  4. CoMFA analyses of C-2 position salvinorin A analogs at the kappa-opioid receptor provides insights into epimer selectivity.

    PubMed

    McGovern, Donna L; Mosier, Philip D; Roth, Bryan L; Westkaemper, Richard B

    2010-04-01

    The highly potent and kappa-opioid (KOP) receptor-selective hallucinogen Salvinorin A and selected analogs have been analyzed using the 3D quantitative structure-affinity relationship technique Comparative Molecular Field Analysis (CoMFA) in an effort to derive a statistically significant and predictive model of salvinorin affinity at the KOP receptor and to provide additional statistical support for the validity of previously proposed structure-based interaction models. Two CoMFA models of Salvinorin A analogs substituted at the C-2 position are presented. Separate models were developed based on the radioligand used in the kappa-opioid binding assay, [³H]diprenorphine or [¹²⁵I]6β-iodo-3,14-dihydroxy-17-cyclopropylmethyl-4,5α-epoxymorphinan ([¹²⁵I]IOXY). For each dataset, three methods of alignment were employed: a receptor-docked alignment derived from the structure-based docking algorithm GOLD, another from the ligand-based alignment algorithm FlexS, and a rigid realignment of the poses from the receptor-docked alignment. The receptor-docked alignment produced statistically superior results compared to either the FlexS alignment or the realignment in both datasets. The [¹²⁵I]IOXY set (Model 1) and [³H]diprenorphine set (Model 2) gave q² values of 0.592 and 0.620, respectively, using the receptor-docked alignment, and both models produced similar CoMFA contour maps that reflected the stereoelectronic features of the receptor model from which they were derived. Each model gave significantly predictive CoMFA statistics (Model 1 PSET r² = 0.833; Model 2 PSET r² = 0.813). Based on the CoMFA contour maps, a binding mode was proposed for amine-containing Salvinorin A analogs that provides a rationale for the observation that the β-epimers (R-configuration) of protonated amines at the C-2 position have a higher affinity than the corresponding α-epimers (S-configuration).
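    The q² statistic reported above is a leave-one-out cross-validated analogue of r²: q² = 1 − PRESS/SS_total, where PRESS sums the squared errors of predictions for each left-out compound. A sketch with ordinary least squares standing in for the CoMFA PLS machinery:

```python
import numpy as np

def q2_loo(X, y):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / SS_total for a linear
    model (illustrative; CoMFA uses PLS on field descriptors, not plain OLS)."""
    n = len(y)
    press = 0.0
    for i in range(n):
        keep = np.arange(n) != i
        A = np.c_[X[keep], np.ones(keep.sum())]      # design matrix with intercept
        coef, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
        pred = np.r_[X[i], 1.0] @ coef               # predict the held-out sample
        press += (y[i] - pred)**2
    return 1.0 - press / ((y - y.mean())**2).sum()

# A noiseless linear toy set: every left-out point is predicted exactly.
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
```

Values of q² near 0.5-0.6, as in the abstract, indicate genuine predictive ability well short of this noiseless limit of 1.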

  5. Fusion of pixel and object-based features for weed mapping using unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Gao, Junfeng; Liao, Wenzhi; Nuyttens, David; Lootens, Peter; Vangeyte, Jürgen; Pižurica, Aleksandra; He, Yong; Pieters, Jan G.

    2018-05-01

    The developments in the use of unmanned aerial vehicles (UAVs) and advanced imaging sensors provide new opportunities for ultra-high resolution (e.g., less than a 10 cm ground sampling distance (GSD)) crop field monitoring and mapping in precision agriculture applications. In this study, we developed a strategy for inter- and intra-row weed detection in early season maize fields from aerial visual imagery. More specifically, the Hough transform algorithm (HT) was applied to the orthomosaicked images for inter-row weed detection. A semi-automatic Object-Based Image Analysis (OBIA) procedure was developed with Random Forests (RF) combined with feature selection techniques to classify soil, weeds, and maize. Furthermore, the two binary weed masks generated from HT and OBIA were fused to produce an accurate binary weed image. The developed RF classifier was evaluated by 5-fold cross validation, and it obtained an overall accuracy of 0.945 and a Kappa value of 0.912. Finally, the relationship between detected weeds and their ground-truth densities was quantified by a fitted linear model with a coefficient of determination of 0.895 and a root mean square error of 0.026. In addition, the importance of input features was evaluated, and it was found that the ratio of vegetation length and width was the most significant feature for the classification model. Overall, our approach can yield a satisfactory weed map, and we expect that the obtained accurate and timely weed map from UAV imagery will be applicable to realize site-specific weed management (SSWM) in early season crop fields, reducing the spraying of non-selective herbicides and costs.
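    Two of the post-processing steps above are simple to sketch: fusing the HT and OBIA binary weed masks is a per-pixel union, and the density agreement is the coefficient of determination of a least-squares line (toy arrays below, not the study's data):

```python
import numpy as np

# Union of the inter-row (Hough) and intra-row (OBIA) binary weed masks:
ht_mask = np.array([[1, 0], [0, 0]], dtype=bool)
obia_mask = np.array([[0, 0], [0, 1]], dtype=bool)
fused = ht_mask | obia_mask

def r_squared(detected, truth):
    """R^2 of the least-squares line relating detected to ground-truth density."""
    slope, intercept = np.polyfit(truth, detected, 1)
    resid = detected - (slope * truth + intercept)
    return 1.0 - (resid**2).sum() / ((detected - detected.mean())**2).sum()

truth = np.array([0.00, 0.10, 0.20, 0.30])
detected = 0.9 * truth + 0.01   # a perfectly linear toy relationship
```

On the study's real data this R² was 0.895; the toy relationship above is exactly linear, so its R² reaches the ceiling of 1.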

  6. Modeling Martian Dust Using Mars-GRAM

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, C. G.

    2010-01-01

    Mars-GRAM is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte Carlo mode, to perform high-fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). Mars-GRAM and MGCM use surface topography from the Mars Global Surveyor Mars Orbiter Laser Altimeter (MOLA), with altitudes referenced to the MOLA areoid, or constant potential surface. Traditional Mars-GRAM options for representing the mean atmosphere along entry corridors include: TES Mapping Years 1 and 2, with Mars-GRAM data coming from MGCM model results driven by observed TES dust optical depth; and TES Mapping Year 0, with user-controlled dust optical depth and Mars-GRAM data interpolated from MGCM model results driven by selected values of globally uniform dust optical depth. Mars-GRAM 2005 has been validated against Radio Science data, and against both nadir and limb data from the Thermal Emission Spectrometer (TES).

  7. TU-D-209-03: Alignment of the Patient Graphic Model Using Fluoroscopic Images for Skin Dose Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oines, A; Oines, A; Kilian-Meneghin, J

    2016-06-15

    Purpose: The Dose Tracking System (DTS) was developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. A color map on a 3D graphic of the patient represents the cumulative dose distribution on the skin. Automated image correlation algorithms are described which use the fluoroscopic procedure images to align and scale the patient graphic for more accurate dose mapping. Methods: Currently, the DTS employs manual patient graphic selection and alignment. To improve the accuracy of dose mapping and automate the software, various methods are explored to extract information about the beam location and patient morphology from the procedure images. To match patient anatomy with a reference projection image, preprocessing is first used, including edge enhancement, edge detection, and contour detection. Template matching algorithms from OpenCV are then employed to find the location of the beam. Once a match is found, the reference graphic is scaled and rotated to fit the patient, using image registration correlation functions in Matlab. The algorithm runs correlation functions for all points and maps all correlation confidences to a surface map. The highest point of correlation is used for alignment and scaling. The transformation data are saved for later model scaling. Results: Anatomic recognition is used to find matching features between model and image, and image registration correlation provides for alignment and scaling at any rotation angle with less than one-second runtime, at noise levels in excess of 150% of those found in normal procedures. Conclusion: The algorithm provides the necessary scaling and alignment tools to improve the accuracy of dose distribution mapping on the patient graphic with the DTS. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
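    The template-matching step can be sketched in plain NumPy as normalized cross-correlation: slide the template over the image and keep the location with the highest correlation confidence. This is a stand-in for OpenCV's matchTemplate (e.g., with TM_CCOEFF_NORMED), not the DTS code itself:

```python
import numpy as np

def match_template(image, tmpl):
    """Return the (row, col) of the best normalized cross-correlation match."""
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    best, pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r+th, c:c+tw] - image[r:r+th, c:c+tw].mean()
            denom = np.sqrt((w**2).sum() * (t**2).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:            # track the highest correlation confidence
                best, pos = score, (r, c)
    return pos

rng = np.random.default_rng(1)
img = rng.normal(size=(12, 12))
tmpl = img[4:7, 5:8].copy()   # the "beam" patch to locate in the image
```

Because the template is an exact sub-patch, the correlation surface peaks at its true location; in practice the surface map of confidences is what the DTS inspects.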

  8. Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms

    NASA Astrophysics Data System (ADS)

    Samanta, A.; Todd, L. A.

    A new technique is being developed that creates near real-time maps of chemical concentrations in air for environmental and occupational applications. This technique, which we call Environmental CAT Scanning, combines the real-time measuring technique of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; the measurements are then processed using a tomographic algorithm to reconstruct the concentrations. This research focused on the process of evaluating and selecting appropriate reconstruction algorithms, for use in the field, by using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested using three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies (in 1); and (3) synthetic open-path data generated using a chemical dispersion model to create time series maps. The iterative algorithms used to reconstruct the concentration data were: Algebraic Reconstruction Technique without Weights (ART1), Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM), and Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1; however, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among the algorithms. 
A comprehensive evaluation of algorithms, for the environmental application of tomography, requires the use of a battery of test concentration data before field implementation, which models reality and tests the limits of the algorithms.
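    Of the four algorithms compared, MLEM has a particularly compact update: multiply the current estimate by the back-projected ratio of measured to predicted path integrals. A minimal sketch on a two-pixel toy system (not the study's chamber geometry):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum Likelihood Expectation Maximization for y = A x, x >= 0.

    A: path-length matrix (rows = open-path measurements, cols = pixels);
    y: measured path-integrated concentrations.
    """
    x = np.ones(A.shape[1])          # nonnegative starting estimate
    sens = A.sum(axis=0)             # sensitivity A^T 1
    for _ in range(n_iter):
        proj = A @ x                 # forward-project the current estimate
        x *= (A.T @ (y / proj)) / sens
    return x

# Two pixels, two beam paths, exactly consistent measurements:
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])
y = A @ np.array([2.0, 3.0])
x = mlem(A, y)
```

With consistent data the multiplicative update converges to the true pixel concentrations; with noisy field data the iteration is typically stopped early.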

  9. Landslide susceptibility modeling in a landslide prone area in Mazandaran Province, north of Iran: a comparison between GLM, GAM, MARS, and M-AHP methods

    NASA Astrophysics Data System (ADS)

    Pourghasemi, Hamid Reza; Rossi, Mauro

    2017-10-01

    Landslides are among the most important natural hazards in many areas throughout the world. The purpose of this study is to compare general linear model (GLM), general additive model (GAM), multivariate adaptive regression spline (MARS), and modified analytical hierarchy process (M-AHP) models and to assess their performance for landslide susceptibility modeling in the west of Mazandaran Province, Iran. First, landslides were identified by interpreting aerial photographs and through extensive field work. In total, 153 landslides were identified in the study area. Among these, 105 landslides were randomly selected as training data (used for model training) and the remaining 48 (30 %) were used for validation. Afterward, based on a literature review of 220 scientific papers (published between 2005 and 2012), eleven conditioning factors were selected: lithology, land use, distance from rivers, distance from roads, distance from faults, slope angle, slope aspect, altitude, topographic wetness index (TWI), plan curvature, and profile curvature. The Certainty Factor (CF) model was used for managing uncertainty in rule-based systems and for evaluating the correlation between the dependent variable (landslides) and the independent variables. Finally, the landslide susceptibility zonation was produced using the GLM, GAM, MARS, and M-AHP models. The models were evaluated with the area under the curve (AUC) method, and both success and prediction rate curves were calculated. The AUC values for GLM, GAM, and MARS were 90.50, 88.90, and 82.10 % for the training data and 77.52, 70.49, and 78.17 % for the validation data, respectively. Furthermore, the landslide susceptibility map produced using M-AHP showed an AUC of 77.82 % for training and 82.77 % for validation. 
Based on the overall assessments, the proposed approaches showed reasonable results for landslide susceptibility mapping in the study area. Moreover, results obtained showed that the M-AHP model performed slightly better than the MARS, GLM, and GAM models in prediction. These algorithms can be very useful for landslide susceptibility and hazard mapping and land use planning in regional scale.
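The success- and prediction-rate evaluation described above reduces to computing an AUC over training and validation cells. A minimal sketch, using the rank-based (Mann-Whitney) estimator and invented susceptibility scores in place of the study's maps:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen landslide cell scores higher
    than a randomly chosen non-landslide cell (ties count as half)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

rng = np.random.default_rng(0)
# Hypothetical susceptibility scores: 105 landslide cells that tend to
# score higher than 300 non-landslide cells.
train_scores = np.concatenate([rng.normal(0.7, 0.15, 105), rng.normal(0.4, 0.15, 300)])
train_labels = np.concatenate([np.ones(105, bool), np.zeros(300, bool)])
success_rate_auc = auc(train_scores, train_labels)  # "success rate" (training data)
```

The same function applied to the held-out validation landslides would give the prediction-rate AUC.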

  10. A versatile data-visualization application for the Norwegian flood forecasting service

    NASA Astrophysics Data System (ADS)

    Kobierska, Florian; Langsholt, Elin G.; Hamududu, Byman H.; Engeland, Kolbjørn

    2017-04-01

- General motivation A graphical user interface has been developed to visualize multi-model hydrological forecasts at the flood forecasting service of the Norwegian Water and Energy Directorate. It is based on the R 'shiny' package, with which interactive web applications can quickly be prototyped. The app queries multiple data sources, building a comprehensive infographics dashboard for the decision maker. - Main features of the app The visualization application comprises several tabs, each built with a different functionality and focus. A map of forecast stations gives a rapid overview of the flood situation and simultaneously serves as a station selector (based on the 'leaflet' package). The map selection is linked to multi-panel forecast plots which can present input, state or runoff parameters. Another tab focuses on past model performance and calibration runs. - Software design choices The application was programmed with a focus on flexibility regarding data sources. The parsing of text-based model results was explicitly separated from the app (in the separate R package 'NVEDATA'), so that it only loads standardized RData binary files. We focused on allowing re-usability in other contexts by structuring the app into specific 'shiny' modules. The code was bundled into an R package, which is available on GitHub. - Documentation efforts A documentation website is under development. For easier collaboration, we chose to host it on the 'GitHub Pages' branch of the repository and build it automatically with a continuous integration service. The aim is to gather all information about the flood forecasting methodology at NVE in one location. This encompasses details on each hydrological model used as well as the documentation of the data-visualization application. - Outlook for further development The ability to select a group of stations by filtering a table (e.g. 
past performance, past major flood events, catchment parameters) and to export it to the forecast tab could be of interest for detailed model analysis. The design choices for this app were motivated by a need for extensibility and modularity, and those qualities will be tested and improved as new datasets need integrating into this tool.

  11. Mapping of ligand-binding cavities in proteins.

    PubMed

    Andersson, C David; Chen, Brian Y; Linusson, Anna

    2010-05-01

The complex interactions between proteins and small organic molecules (ligands) are intensively studied because they play key roles in biological processes and drug activities. Here, we present a novel approach to characterize and map the ligand-binding cavities of proteins without direct geometric comparison of structures, based on Principal Component Analysis of cavity properties (related mainly to size, polarity, and charge). This approach can provide valuable information on the similarities and dissimilarities of binding cavities arising from mutations, between-species differences and flexibility upon ligand binding. The presented results show that information on ligand-binding cavity variations can complement information on protein similarity obtained from sequence comparisons. The predictive aspect of the method is exemplified by successful predictions of serine proteases that were not included in the model construction. The presented strategy to compare ligand-binding cavities of related and unrelated proteins has many potential applications within protein and medicinal chemistry, for example in the characterization and mapping of "orphan structures", selection of protein structures for docking studies in structure-based design, and identification of proteins for selectivity screens in drug design programs. © 2009 Wiley-Liss, Inc.
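The core of the approach, Principal Component Analysis of a cavity-property table, can be sketched as follows; the five cavities and three properties (size, polarity, charge) below are invented for illustration:

```python
import numpy as np

# Hypothetical cavity descriptor table: rows = binding cavities,
# columns = properties (size, polarity, charge). Values are invented.
X = np.array([
    [420.0, 0.30, -1.0],
    [455.0, 0.28, -1.0],
    [610.0, 0.55,  0.0],
    [590.0, 0.60,  1.0],
    [300.0, 0.20, -2.0],
])

# Standard PCA: centre and scale each property, then take the SVD.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = U * s                      # cavity coordinates in PC space
explained = s**2 / (s**2).sum()     # fraction of variance per component
```

Cavities that lie close together in the `scores` space have similar property profiles, which is the basis for comparing cavities without geometric superposition.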

  12. Mapping of Ligand-Binding Cavities in Proteins

    PubMed Central

    Andersson, C. David; Chen, Brian Y.; Linusson, Anna

    2010-01-01

    The complex interactions between proteins and small organic molecules (ligands) are intensively studied because they play key roles in biological processes and drug activities. Here, we present a novel approach to characterise and map the ligand-binding cavities of proteins without direct geometric comparison of structures, based on Principal Component Analysis of cavity properties (related mainly to size, polarity and charge). This approach can provide valuable information on the similarities, and dissimilarities, of binding cavities due to mutations, between-species differences and flexibility upon ligand-binding. The presented results show that information on ligand-binding cavity variations can complement information on protein similarity obtained from sequence comparisons. The predictive aspect of the method is exemplified by successful predictions of serine proteases that were not included in the model construction. The presented strategy to compare ligand-binding cavities of related and unrelated proteins has many potential applications within protein and medicinal chemistry, for example in the characterisation and mapping of “orphan structures”, selection of protein structures for docking studies in structure-based design and identification of proteins for selectivity screens in drug design programs. PMID:20034113

  13. A guide for the use of digital elevation model data for making soil surveys

    USGS Publications Warehouse

    Klingebiel, A.A.; Horvath, Emil H.; Reybold, William U.; Moore, D.G.; Fosnight, E.A.; Loveland, Thomas R.

    1988-01-01

The intent of this publication is twofold: (1) to serve as a user guide for soil scientists and others interested in learning about the value and use of digital elevation model (DEM) data in making soil surveys and (2) to provide documentation of the Soil Landscape Analysis Project (SLAP). This publication provides a step-by-step guide on how digital slope-class maps are adjusted to topographic maps and orthophotoquads to obtain accurate slope-class maps, and how these derivative maps can be used as a base for soil survey pre-maps. In addition, guidance is given on the use of aspect-class maps and other resource data in making pre-maps. The value and use of tabular summaries are discussed. Examples of the use of DEM products by the authors and by selected field soil scientists are also given. Additional information on SLAP procedures may be obtained from USDA, Soil Conservation Service, Soil Survey Division, P.O. Box 2890, Washington, D.C. 20013, and from references (Horvath and others, 1987; Horvath and others, 1983; Klingebiel and others, 1987; and Young, 1987) listed in this publication. The slope and aspect products and the procedures for using these products have evolved during 5 years of cooperative research with the USDA, Soil Conservation Service and Forest Service, and the USDI, Bureau of Land Management.
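The slope-class maps at the heart of this workflow start from slope and aspect grids derived from the DEM. A sketch of that derivation by finite differences (the aspect convention used here, degrees clockwise from north, is one common choice and not necessarily the one SLAP used):

```python
import numpy as np

def slope_aspect(dem, cell_size=30.0):
    """Slope (degrees) and aspect from a DEM grid: the basic operation
    behind slope-class and aspect-class derivative maps."""
    dzdy, dzdx = np.gradient(dem, cell_size)   # axis-0 then axis-1 derivatives
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    # Aspect: downslope direction, degrees clockwise from north (one convention).
    aspect = np.degrees(np.arctan2(-dzdx, dzdy)) % 360
    return slope, aspect

# A tilted plane rising 30 m per 30 m cell eastwards: uniform 45-degree slope.
dem = np.tile(np.arange(5, dtype=float), (5, 1)) * 30.0
slope, aspect = slope_aspect(dem)
```

Binning `slope` into the survey's slope classes (e.g. 0-3 %, 3-8 %, ...) would then yield a digital slope-class map for adjustment against topographic maps.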

  14. Building phenotype networks to improve QTL detection: a comparative analysis of fatty acid and fat traits in pigs.

    PubMed

    Yang, B; Navarro, N; Noguera, J L; Muñoz, M; Guo, T F; Yang, K X; Ma, J W; Folch, J M; Huang, L S; Pérez-Enciso, M

    2011-10-01

Models in QTL mapping can be improved by considering all potential variables, i.e. the remaining traits other than the trait under study can be used as potential predictors. QTL mapping is often conducted by correcting for a few fixed effects or covariates (e.g. sex, age), although many traits with potential causal relationships between them are recorded. In this work, we evaluate by simulation several procedures to identify optimum models in QTL scans: forward selection, undirected dependency graph, and QTL-directed dependency graph (QDG). The latter, QDG, performed better in terms of power and false discovery rate and was applied to fatty acid (FA) composition and fat deposition traits in two pig F2 crosses from China and Spain. Compared with typical QTL mapping, the QDG approach revealed several new QTL. Conversely, several FA QTL on chromosome 4 (e.g. palmitic, C16:0; stearic, C18:0) detected by typical mapping vanished after adjusting for phenotypic covariates in QDG mapping, which suggests that the QTL detected in typical mapping could be indirect. When a QTL is supported by both approaches, there is increased confidence that the QTL has a primary effect on the corresponding trait. An example is a QTL for C16:1 on chromosome 8. In conclusion, mapping QTL based on causal phenotypic networks can increase power and help to formulate more biologically sound hypotheses on the genetic architecture of complex traits. © 2011 Blackwell Verlag GmbH.
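Of the three procedures compared, forward selection is the simplest to sketch: greedily add the phenotypic covariate that most reduces the residual sum of squares of the trait model. The data below are simulated, not the pig F2 data:

```python
import numpy as np

def forward_select(y, X, max_covariates=2):
    """Greedy forward selection: repeatedly add the covariate column of X
    that most reduces the residual sum of squares of an OLS fit."""
    n = len(y)
    chosen = []

    def rss(cols):
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return r @ r

    current = rss(chosen)
    for _ in range(max_covariates):
        candidates = [c for c in range(X.shape[1]) if c not in chosen]
        best = min(candidates, key=lambda c: rss(chosen + [c]))
        if rss(chosen + [best]) < current:
            current = rss(chosen + [best])
            chosen.append(best)
    return chosen

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                 # four candidate covariate traits
y = 2.0 * X[:, 1] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=200)
selected = forward_select(y, X)               # should recover columns 1 and 3
```

The QDG procedure goes further by orienting edges between traits, but the covariate-adjustment step it feeds into the QTL scan is of this flavour.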

  15. Significant Metalliferous and Selected Non-Metalliferous Lode Deposits, and Selected Placer Districts of Northeast Asia

    USGS Publications Warehouse

    Ariunbileg, Sodov; Biryul'kin, Gennandiy V.; Byamba, Jamba; Davydov, Yury V.; Dejidmaa, Gunchin; Distanov, Elimir G.; Dorjgotov, Dangindorjiin; Gamyanin, Gennadiy N.; Gerel, Ochir; Fridovskiy, Valeriy Y.; Gotovsuren, Ayurzana; Hwang, Duk-Hwan; Kochnev, Anatoliy P.; Kostin, Alexei V.; Kuzmin, Mikhail I.; Letunov, Sergey A.; Jiliang, Li; Xujun, Li; Malceva, Galina D.; Melnikov, V.D.; Nikitin, Valeriy; Obolenskiy, Alexander A.; Ogasawara, Masatsugu; Orolmaa, Demberel; Parfenov, Leonid M.; Popov, Nikolay V.; Prokopiev, Andrei V.; Ratkin, Vladimir; Rodionov, Sergey M.; Seminskiy, Zhan V.; Shpikerman, Vladimir I.; Smelov, Alexander P.; Sotnikov, Vitaly I.; Spiridonov, Alexander V.; Stogniy, Valeriy V.; Sudo, Sadahisa; Fengyue, Sun; Jiapeng, Sun; Weizhi, Sun; Supletsov, Valeriy M.; Timofeev, Vladimir F.; Tyan, Oleg A.; Vetluzhskikh, Valeriy G.; Aihua, Xi; Yakovlev, Yakov V.; Hongquan, Yan; Zhizhin, Vladimir I.; Zinchuk, Nikolay N.; Zorina, Lydia M.

    2003-01-01

Introduction This report contains a digital database on lode deposits and placer districts of Northeast Asia. This region includes Eastern Siberia, the Russian Far East, Mongolia, Northeast China, South Korea, and Japan. In folders on this site are a detailed database, a bibliography of cited references, descriptions of mineral deposit models, and a mineral deposit location map. Data are provided for 1,674 significant lode deposits and 91 significant placer districts of the region.

  16. Modeling selective attention using a neuromorphic analog VLSI device.

    PubMed

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.
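The selection behaviour of the chip, though not its analog circuits, can be caricatured in software as a winner-take-all over a saliency map with inhibition of return, so that attention shifts from one salient feature to the next:

```python
import numpy as np

def attention_scanpath(saliency, steps=3, inhibition_radius=1):
    """Winner-take-all with inhibition of return: repeatedly pick the most
    salient cell, then suppress its neighbourhood so attention moves on."""
    s = saliency.astype(float).copy()
    visited = []
    for _ in range(steps):
        i, j = np.unravel_index(np.argmax(s), s.shape)
        visited.append((i, j))
        i0, i1 = max(0, i - inhibition_radius), i + inhibition_radius + 1
        j0, j1 = max(0, j - inhibition_radius), j + inhibition_radius + 1
        s[i0:i1, j0:j1] = -np.inf   # inhibition of return
    return visited

# Invented saliency map with three salient features of decreasing strength.
saliency = np.zeros((5, 5))
saliency[1, 1], saliency[3, 4], saliency[0, 3] = 0.9, 0.7, 0.5
path = attention_scanpath(saliency, steps=3)
```

The scanpath visits the features in order of decreasing saliency, which is the system-level behaviour the chip demonstrates with spike-based inputs.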

  17. Mapping Regional Impervious Surface Distribution from Night Time Light: The Variability across Global Cities

    NASA Astrophysics Data System (ADS)

    Lin, M.; Yang, Z.; Park, H.; Qian, S.; Chen, J.; Fan, P.

    2017-12-01

Impervious surface area (ISA) has become an important indicator for studying urban environments, but mapping ISA at the regional or global scale is still challenging due to the complexity of impervious surface features. The Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) nighttime light (NTL) data and Moderate Resolution Imaging Spectroradiometer (MODIS) data are the major remote sensing data sources for regional ISA mapping. Many previous studies established a single regression relationship between fractional ISA and NTL, or between fractional ISA and various indices derived from NTL and the MODIS vegetation index (NDVI). However, due to the varying geographical, climatic, and socio-economic characteristics of different cities, the same regression relationship may vary significantly across cities in the same region in terms of both fitting performance (i.e. R2) and the rate of change (slope). In this study, we examined the regression relationship between fractional ISA and the Vegetation Adjusted Nighttime light Urban Index (VANUI) for 120 randomly selected cities around the world with a multilevel regression model. We found that there is indeed substantial variability in both the R2 (0.68±0.29) and the slopes (0.64±0.40) among individual regressions, which suggests that multilevel/hierarchical models are needed to improve the accuracy of future regional ISA mapping. Further analysis showed that this variability is affected by climate conditions, socio-economic status, and urban spatial structures. However, all these effects are nonlinear rather than linear, and thus cannot be modeled explicitly in multilevel linear regression models.
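The between-city variability reported above can be reproduced qualitatively by fitting a separate ISA-VANUI regression per city; the per-city slopes and noise levels below are simulated around the reported means, not drawn from the actual 120 cities:

```python
import numpy as np

rng = np.random.default_rng(2)
r2s, slopes = [], []

# Each simulated "city" gets its own ISA-vs-VANUI relationship, mimicking
# the reported spread in slope (0.64 +/- 0.40) across 120 cities.
for _ in range(120):
    true_slope = rng.normal(0.64, 0.40)
    noise = rng.uniform(0.05, 0.3)
    vanui = rng.uniform(0, 1, 50)
    isa = np.clip(true_slope * vanui + rng.normal(0, noise, 50), 0, 1)
    slope, intercept = np.polyfit(vanui, isa, 1)        # per-city OLS fit
    pred = slope * vanui + intercept
    r2 = 1 - ((isa - pred) ** 2).sum() / ((isa - isa.mean()) ** 2).sum()
    r2s.append(r2)
    slopes.append(slope)

slope_spread = np.std(slopes)   # substantial between-city variability
```

A multilevel model would replace these 120 independent fits with city-level slopes drawn from a shared distribution, pooling information across cities.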

  18. Synchrotron-based FTIR microspectroscopy for the mapping of photo-oxidation and additives in acrylonitrile-butadiene-styrene model samples and historical objects.

    PubMed

    Saviello, Daniela; Pouyet, Emeline; Toniolo, Lucia; Cotte, Marine; Nevin, Austin

    2014-09-16

Synchrotron-based Fourier transform infrared microspectroscopy (SR-μFTIR) was used to map photo-oxidative degradation of acrylonitrile-butadiene-styrene (ABS) and to investigate the presence and the migration of additives in historical samples from important Italian design objects. High resolution (3 × 3 μm²) molecular maps were obtained by FTIR microspectroscopy in transmission mode, using a new method for the preparation of polymer thin sections. The depth of photo-oxidation in samples was evaluated and accompanied by the formation of ketones, aldehydes, esters, and unsaturated carbonyl compounds. This study demonstrates selective surface oxidation and a probable passivation of material against further degradation. In polymer fragments from design objects made of ABS from the 1960s, UV-stabilizers were detected and mapped, and microscopic inclusions of proteinaceous material were identified and mapped for the first time. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Mapping urban geology of the city of Girona, Catalonia

    NASA Astrophysics Data System (ADS)

    Vilà, Miquel; Torrades, Pau; Pi, Roser; Monleon, Ona

    2016-04-01

A detailed and systematic geological characterization of the urban area of Girona has been conducted under the project '1:5000 scale Urban geological map of Catalonia' of the Catalan Geological Survey (Institut Cartogràfic i Geològic de Catalunya). The results of this characterization are organized into: i) a geological information system that includes all the information acquired; ii) a stratigraphic model focused on the identification, characterization and correlation of the geological materials and structures present in the area; and iii) a detailed geological map that represents a synthesis of all the collected information. The mapping project integrates, in a GIS environment, pre-existing cartographic documentation (geological and topographical), core data from compiled boreholes, descriptions of geological outcrops within the urban network and neighbouring areas, physico-chemical characterisation of representative samples of geological materials, detailed geological mapping of Quaternary sediments, subsurface bedrock and artificial deposits, and 3D modelling of the main geological surfaces. The stratigraphic model is structured in a system of geological units that, from a chronostratigraphic point of view, are grouped into Palaeozoic, Paleogene, Neogene, Quaternary and Anthropocene. The description of the geological units follows a systematic procedure. It includes the main lithological and structural features of the units that constitute the geological substratum and represents the conceptual base of the 1:5000 urban geological map of the Girona metropolitan area, which is organized into 6 map sheets. These map sheets are composed of a principal map, geological cross sections and several complementary maps, charts and tables. 
In addition to the geological map units, the principal map also represents the main artificial deposits, features related to geohistorical processes, contours of outcrop areas, information obtained in stations, borehole data, and contour lines of the top of the pre-Quaternary basement surface. The most representative complementary maps are the Quaternary map, the subsurface bedrock map and the isopach map of the superficial deposits (Quaternary and anthropogenic). The map sheets also include charts and tables of relevant physico-chemical parameters of the geological materials, harmonized downhole lithological columns from selected boreholes, stratigraphic columns, and photographs and figures illustrating the geology of the mapped area and how urbanization has changed the natural environment. The development of systematic urban geological mapping projects such as this one in Girona provides valuable resources for targeted studies related to urban planning, geoengineering works, soil pollution and other important environmental issues that society will have to deal with in the future.

  20. Integrated assessment of future land use in Brazil under increasing demand for bioenergy

    NASA Astrophysics Data System (ADS)

    Verstegen, Judith; van der Hilst, Floor; Karssenberg, Derek; Faaij, André

    2014-05-01

Environmental impacts of a future increase in demand for bioenergy depend on the magnitude, location and pattern of the direct and indirect land use change of energy cropland expansion. Here we aim at 1) projecting the spatio-temporal pattern of sugar cane expansion and the effect on other land uses in Brazil towards 2030, and 2) assessing the uncertainty herein. For the spatio-temporal projection, four model components are used: 1) an initial land use map that shows the initial amount and location of sugar cane and all other relevant land use classes in the system, 2) an economic model to project the quantity of change of all land uses, 3) a spatially explicit land use model that determines the location of change of all land uses, and 4) various analyses to determine the impacts of these changes on water, socio-economics, and biodiversity. All four model components are sources of uncertainty, which is quantified by defining error models for all components and their inputs and propagating these errors through the chain of components. No recent accurate land use map is available for Brazil, so municipal census data and the global land cover map GlobCover are combined to create the initial land use map. The census data are disaggregated stochastically using GlobCover as a probability surface, to obtain a stochastic land use raster map for 2006. Since bioenergy is a global market, the quantity of change in sugar cane in Brazil depends on dynamics both in Brazil itself and in other parts of the world. Therefore, a computable general equilibrium (CGE) model, MAGNET, is run to produce a time series of the relative change of all land uses given an increased future demand for bioenergy. A sensitivity analysis establishes its upper and lower bounds, defining this component's error model. An initial selection of drivers of location for each land use class is extracted from the literature. 
Using a Bayesian data assimilation technique and census data from 2007 to 2012 as observational data, the model is identified, meaning that the final selection and the optimal relative importance of the drivers of location are determined. The data assimilation technique takes into account uncertainty in the observational data and yields a stochastic representation of the identified model. Using all stochastic inputs, this land use change model is run to find at which locations the future land use changes occur and to quantify the associated uncertainty. The results indicate that in the initial land use map it is mainly the shape of the sugar cane and other land use patches that is uncertain, not so much their location. From the economic model we can derive that dynamics in the livestock sector play a major role in the land use development of Brazil; the effect of this uncertainty on the model output is large. If the intensity of the livestock sector is not increased, future projections show a large loss of natural vegetation. Impacts on water are not that large, except when irrigation is applied to the expanded cropland.
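The stochastic disaggregation step, spreading a municipal census total over grid cells with GlobCover as a probability surface, can be sketched like this (the 2×2 probability surface and the total of 1000 area units are invented):

```python
import numpy as np

def disaggregate(total_area, prob_surface, rng):
    """Stochastically spread a municipal census total over grid cells,
    drawing cell assignments from a land-cover-derived probability surface
    (standing in for GlobCover as a prior on sugar cane location)."""
    p = prob_surface / prob_surface.sum()
    cells = rng.choice(p.size, size=total_area, p=p.ravel())
    counts = np.bincount(cells, minlength=p.size).reshape(prob_surface.shape)
    return counts

rng = np.random.default_rng(3)
prob = np.array([[0.1, 0.4],
                 [0.4, 0.1]])           # invented probability surface
realisation = disaggregate(1000, prob, rng)
```

Repeating the draw produces the ensemble of equally probable initial land use maps whose spread quantifies the input uncertainty propagated through the model chain.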

  1. Effects of Spatial and Non-Spatial Multi-Modal Cues on Orienting of Visual-Spatial Attention in an Augmented Environment

    DTIC Science & Technology

    2007-11-01

information into awareness. Broadbent’s (1958) "Filter" model of attention (see Figure 1) maps the flow of information from the senses through a number of... benefits of an attentional cueing paradigm can be explained within these models. For example, the selective filter is augmented by the information... capacity filter', while Wickens’ model represents this with a limited amount of 'attentional resources' available to perception, decision making

  2. Mapping Resting-State Brain Networks in Conscious Animals

    PubMed Central

    Zhang, Nanyin; Rane, Pallavi; Huang, Wei; Liang, Zhifeng; Kennedy, David; Frazier, Jean A.; King, Jean

    2010-01-01

    In the present study we mapped brain functional connectivity in the conscious rat at the “resting state” based on intrinsic blood-oxygenation-level dependent (BOLD) fluctuations. The conscious condition eliminated potential confounding effects of anesthetic agents on the connectivity between brain regions. Indeed, using correlational analysis we identified multiple cortical and subcortical regions that demonstrated temporally synchronous variation with anatomically well-defined regions that are crucial to cognitive and emotional information processing including the prefrontal cortex (PFC), thalamus and retrosplenial cortex. The functional connectivity maps created were stringently validated by controlling for false positive detection of correlation, the physiologic basis of the signal source, as well as quantitatively evaluating the reproducibility of maps. Taken together, the present study has demonstrated the feasibility of assessing functional connectivity in conscious animals using fMRI and thus provided a convenient and non-invasive tool to systematically investigate the connectional architecture of selected brain networks in multiple animal models. PMID:20382183
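The correlational analysis used to build such connectivity maps is seed-based: correlate one region's BOLD time course with every other region's. A sketch on synthetic time courses, where a shared sinusoid stands in for an intrinsic low-frequency fluctuation:

```python
import numpy as np

def seed_connectivity(timeseries, seed_index):
    """Pearson correlation of a seed region's time course with every
    region's time course; high values indicate functional connectivity."""
    ts = np.asarray(timeseries, float)           # shape: (regions, timepoints)
    ts = ts - ts.mean(axis=1, keepdims=True)
    seed = ts[seed_index]
    num = ts @ seed
    den = np.sqrt((ts ** 2).sum(axis=1) * (seed ** 2).sum())
    return num / den

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 200)
shared = np.sin(t)                               # common slow fluctuation
ts = np.vstack([
    shared + 0.1 * rng.normal(size=200),         # region 0: the seed
    shared + 0.1 * rng.normal(size=200),         # region 1: connected to seed
    rng.normal(size=200),                        # region 2: unrelated
])
corr_map = seed_connectivity(ts, seed_index=0)
```

Thresholding `corr_map` (with appropriate control of false positives, as the study emphasizes) yields the binary connectivity map for that seed.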

  3. Characterization of Mitogen-Activated Protein Kinase Expression in Nucleus Accumbens and Hippocampus of Rats Subjected to Food Selection in the Cafeteria Diet Protocol.

    PubMed

    Sarro-Ramírez, Andrea; Sánchez, Daniel; Tejeda-Padrón, Alma; Buenfil-Canto, Linda Vianey; Valladares-García, Jorge; Pacheco-Pantoja, Elda; Arias-Carrión, Oscar; Murillo-Rodríguez, Eric

    2016-01-01

Obesity is a world-wide health problem that requires different experimental perspectives to understand the onset of this disease, including the neurobiological basis of food selection. From a molecular perspective, obesity has been related to the activity of several endogenous molecules, including the mitogen-activated protein kinases (MAP-K). The aim of this study was to characterize MAP-K expression after food selection in hedonic and learning- and memory-associated brain areas such as the nucleus accumbens (AcbC) and hippocampus (HIPP). We show that animals fed a cafeteria diet for 14 days displayed an increase in p38 MAP-K activity in the AcbC if they chose cheese. Conversely, a decrease was observed in the AcbC of animals that preferred chocolate. Also, a decrease of p38 MAP-K phosphorylation was found in the HIPP of rats that selected either cheese or chocolate. Our data demonstrate a putative role of MAP-K expression in food selection. These findings advance our understanding of the neuromolecular basis engaged in obesity.

  4. Learning epistatic interactions from sequence-activity data to predict enantioselectivity

    NASA Astrophysics Data System (ADS)

    Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K.; Bodén, Mikael

    2017-12-01

Enzymes with a high selectivity are desirable for improving economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity mutations are often introduced near the catalytic active site. In this compact environment epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree = 2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91 respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvements from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree = 2 models, with values increasing from r = 0.51 to r = 0.87 respectively. The study demonstrates that linear models perform well, however the representation of epistatic interactions in predictive models improves identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.
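The role of the kernel can be illustrated without the full SVM machinery. The sketch below uses kernel ridge regression (a stand-in, not the paper's method) on invented binary mutation indicators to show that a degree-2 polynomial kernel captures a pairwise "epistatic" term that a linear kernel cannot:

```python
import numpy as np

def kernel_ridge_fit_predict(Xtr, ytr, Xte, kernel, lam=1e-3):
    """Kernel ridge regression: fit alpha on training kernel, predict on
    test kernel. The kernel decides whether pairwise interaction terms
    between residues are representable."""
    K = kernel(Xtr, Xtr)
    alpha = np.linalg.solve(K + lam * np.eye(len(ytr)), ytr)
    return kernel(Xte, Xtr) @ alpha

linear = lambda A, B: A @ B.T
poly2 = lambda A, B: (A @ B.T + 1.0) ** 2     # degree-2 polynomial kernel

rng = np.random.default_rng(5)
X = rng.integers(0, 2, size=(136, 6)).astype(float)   # binary mutation indicators
# Selectivity with additive effects plus one epistatic (pairwise) interaction.
y = X[:, 0] - 0.5 * X[:, 1] + 2.0 * X[:, 2] * X[:, 3]

pred_lin = kernel_ridge_fit_predict(X[:100], y[:100], X[100:], linear)
pred_poly = kernel_ridge_fit_predict(X[:100], y[:100], X[100:], poly2)
err_lin = np.mean((pred_lin - y[100:]) ** 2)
err_poly = np.mean((pred_poly - y[100:]) ** 2)    # poly2 captures the interaction
```

The linear kernel leaves the `X[:, 2] * X[:, 3]` term unexplained, while the degree-2 kernel represents exactly the pairwise products the study attributes the improvement to.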

  5. Learning epistatic interactions from sequence-activity data to predict enantioselectivity

    NASA Astrophysics Data System (ADS)

    Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K.; Bodén, Mikael

    2017-12-01

Enzymes with a high selectivity are desirable for improving economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity mutations are often introduced near the catalytic active site. In this compact environment epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree = 2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91 respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvements from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree = 2 models, with values increasing from r = 0.51 to r = 0.87 respectively. The study demonstrates that linear models perform well, however the representation of epistatic interactions in predictive models improves identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.

  6. Learning epistatic interactions from sequence-activity data to predict enantioselectivity.

    PubMed

    Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K; Bodén, Mikael

    2017-12-01

Enzymes with a high selectivity are desirable for improving economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity mutations are often introduced near the catalytic active site. In this compact environment epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree = 2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91 respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvements from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree = 2 models, with values increasing from r = 0.51 to r = 0.87 respectively. The study demonstrates that linear models perform well, however the representation of epistatic interactions in predictive models improves identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.

  7. Prior-knowledge-based spectral mixture analysis for impervious surface mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jinshui; He, Chunyang; Zhou, Yuyu

    2014-01-03

In this study, we developed a prior-knowledge-based spectral mixture analysis (PKSMA) to map impervious surfaces by using endmembers derived separately for high- and low-density urban regions. First, an urban area was categorized into high- and low-density urban areas using a multi-step classification method. Next, in high-density urban areas, which were assumed to contain only vegetation and impervious surfaces (ISs), the Vegetation-Impervious (V-I) model was used in a spectral mixture analysis (SMA) with three endmembers: vegetation, high albedo, and low albedo. In low-density urban areas, the Vegetation-Impervious-Soil (V-I-S) model was used in an SMA with four endmembers: high albedo, low albedo, soil, and vegetation. The fractions of IS with high and low albedo in each pixel were combined to produce the final IS map. The root-mean-square error (RMSE) of the IS map produced using PKSMA was about 11.0%, compared to 14.52% using four-endmember SMA. Particularly in high-density urban areas, PKSMA (RMSE = 6.47%) performed better than four-endmember SMA (RMSE = 15.91%). The results indicate that PKSMA can improve IS mapping compared to traditional SMA by using appropriately selected endmembers, and is particularly strong in high-density urban areas.
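The V-I unmixing step for a high-density pixel can be sketched as a least-squares inversion against endmember spectra; the three 4-band spectra below are invented, not the study's endmembers:

```python
import numpy as np

# Hypothetical endmember spectra (rows) over 4 bands, for the V-I model
# used in high-density areas: vegetation plus two impervious endmembers.
E = np.array([
    [0.05, 0.08, 0.40, 0.45],   # vegetation
    [0.50, 0.55, 0.60, 0.65],   # high-albedo impervious
    [0.08, 0.09, 0.10, 0.11],   # low-albedo impervious
])

def unmix(pixel, E):
    """Linear spectral mixture analysis: solve for endmember fractions by
    least squares, clip negatives, and renormalise to sum to one."""
    f, *_ = np.linalg.lstsq(E.T, pixel, rcond=None)
    f = np.clip(f, 0, None)
    return f / f.sum()

# A pixel simulated as 30 % vegetation + 50 % high-albedo + 20 % low-albedo.
true_f = np.array([0.3, 0.5, 0.2])
pixel = E.T @ true_f
fractions = unmix(pixel, E)
impervious_fraction = fractions[1] + fractions[2]   # combined IS fraction
```

Summing the two impervious fractions per pixel is the combination step that produces the final IS map; PKSMA's contribution is choosing which endmember set to invert in each density class.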

  8. Density and Reproductive Success of California Towhees

    Treesearch

    Kathryn L. Purcell; Jared Verner

    1998-01-01

    Models of habitat selection commonly assume that higher-quality source habitats will be occupied at higher densities than sink habitats. We examined an apparent sink habitat for California Towhees (Pipilo crissalis) in which densities are greater than in nearby source habitats. We estimated territory density using spot-mapping and monitored nests of towhees in grazed...

  9. Computers in Biological Education: Simulation Approaches. Genetics and Evolution. CAL Research Group Technical Report No. 13.

    ERIC Educational Resources Information Center

    Murphy, P. J.

    Three examples of genetics and evolution simulation concerning Mendelian inheritance, genetic mapping, and natural selection are used to illustrate the use of simulations in modeling scientific/natural processes. First described is the HERED series, which illustrates such phenomena as incomplete dominance, multiple alleles, lethal alleles,…

  10. Additional Samples: Where They Should Be Located

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilger, G. G., E-mail: jfelipe@ufrgs.br; Costa, J. F. C. L.; Koppe, J. C.

    2001-09-15

    Information for mine planning needs to be more closely spaced than the grid used for exploration and resource assessment. The additional samples collected during quasi-mining are usually located in the same pattern as the original diamond drillhole net, only more closely spaced. This procedure is not mathematically optimal for selecting locations. The impact of additional information on reducing uncertainty about the parameter being modeled is not the same everywhere within the deposit: some locations are more effective than others at reducing local and global uncertainty. This study introduces a methodology to select additional sample locations based on stochastic simulation. The procedure takes into account data variability and spatial location. Multiple equally probable models representing a geological attribute are generated via geostatistical simulation. These models share essentially the same histogram and the same variogram obtained from the original data set. At each block belonging to the model, a value is obtained from each of the n simulations, and their combination allows one to assess local variability. Variability is measured using a proposed uncertainty index, which is used to map zones of high variability. A value extracted from a given simulation is then added to the original data set in a zone identified as erratic in the previous maps. The process of adding samples and simulating is repeated, and the benefit of the additional sample is evaluated in terms of uncertainty reduction, both locally and globally. The procedure proved robust and theoretically sound, mapping the zones where additional information is most beneficial. A case study in a coal mine using coal seam thickness illustrates the method.
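    A minimal sketch of the realization-based idea, assuming a simple coefficient-of-variation index per block (the paper's exact uncertainty index is not reproduced here) and synthetic realizations standing in for geostatistical simulations:

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 equally probable realizations of seam thickness on a 20x20 block grid
# (a stand-in for geostatistical simulations, which would honour the data,
# histogram, and variogram).
n_sims, ny, nx = 50, 20, 20
base = 2.0 + 0.5 * np.sin(np.linspace(0, np.pi, nx))   # smooth trend (m)
spread = np.full((ny, nx), 0.05)
spread[5:10, 5:10] = 0.6                               # an 'erratic' zone
sims = base + spread * rng.normal(size=(n_sims, ny, nx))

# Per-block uncertainty index: coefficient of variation across realizations.
mean = sims.mean(axis=0)
cv = sims.std(axis=0) / mean

# The next sample goes where the index is highest.
iy, ix = np.unravel_index(np.argmax(cv), cv.shape)
```

    The selected block (iy, ix) falls inside the deliberately erratic zone, which is where an extra sample reduces uncertainty most.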

  11. Z-Selective Metathesis Homocoupling of 1,3-Dienes by Molybdenum and Tungsten Monoaryloxide Pyrrolide (MAP) Complexes

    PubMed Central

    Townsend, Erik M.; Schrock, Richard R.; Hoveyda, Amir H.

    2012-01-01

    Molybdenum or tungsten MAP complexes that contain OHIPT as the aryloxide (hexaisopropylterphenoxide) are effective catalysts for homocoupling of simple (E)-1,3-dienes to give (E,Z,E)-trienes in high yield and with high Z selectivities. A vinylalkylidene MAP species was shown to have the expected syn structure in an X-ray study. MAP catalysts that contain OHMT (hexamethylterphenoxide) are relatively inefficient. PMID:22734508

  12. OxfordGrid: a web interface for pairwise comparative map views.

    PubMed

    Yang, Hongyu; Gingle, Alan R

    2005-12-01

    OxfordGrid is a web application and database schema for storing and interactively displaying genetic map data in a comparative dot-plot fashion. Its display is composed of a matrix of cells, each representing a pairwise comparison of mapped probe data for two linkage groups or chromosomes. These are arranged along the axes, one group forming the grid columns and the other the grid rows, with the degree and pattern of synteny/colinearity between the two linkage groups manifested in each cell's dot density and structure. A mouse click over a grid cell launches an image map-based display for the selected cell. Both individual and linear groups of mapped probes can be selected and displayed, and configurable links can be used to access other web resources for mapped probe information. OxfordGrid is implemented in C#/ASP.NET and the package, including MySQL schema creation scripts, is available at ftp://cggc.agtec.uga.edu/OxfordGrid/.

  13. Quantitative structure-activity relationships of selective antagonists of glucagon receptor using QuaSAR descriptors.

    PubMed

    Manoj Kumar, Palanivelu; Karthikeyan, Chandrabose; Hari Narayana Moorthy, Narayana Subbiah; Trivedi, Piyush

    2006-11-01

    In the present paper, the quantitative structure-activity relationship (QSAR) approach was applied to understand the affinity and selectivity of a novel series of triaryl imidazole derivatives towards the glucagon receptor. Statistically significant and highly predictive QSARs were derived for glucagon receptor inhibition by triaryl imidazoles using QuaSAR descriptors of the Molecular Operating Environment (MOE), employing a computer-assisted multiple regression procedure. The generated QSAR models revealed that factors related to hydrophobicity, molecular shape, and geometry predominantly influence the glucagon receptor binding affinity of the triaryl imidazoles, indicating the relevance of shape-specific steric interactions between the molecule and the receptor. Further, QSAR models formulated for selective inhibition of the glucagon receptor over p38 mitogen-activated protein (MAP) kinase highlight that the same structural features that influence glucagon receptor affinity also contribute to selective inhibition.

  14. Biomolecular structure manipulation using tailored electromagnetic radiation: a proof of concept on a simplified model of the active site of bacterial DNA topoisomerase.

    PubMed

    Jarukanont, Daungruthai; Coimbra, João T S; Bauerhenne, Bernd; Fernandes, Pedro A; Patel, Shekhar; Ramos, Maria J; Garcia, Martin E

    2014-10-21

    We report on the viability of breaking selected bonds in biological systems using tailored electromagnetic radiation. We first demonstrate, by performing large-scale simulations, that pulsed electric fields cannot produce selective bond breaking. We then present a theoretical framework for describing selective energy concentration on particular bonds of biomolecules upon application of tailored electromagnetic radiation. The theory is based on mapping biomolecules to a set of coupled harmonic oscillators and on optimal control schemes that optimize the temporal shape, phase, and polarization of the external radiation. We apply this theory to demonstrate the possibility of selective bond breaking in the active site of bacterial DNA topoisomerase, focusing on a simplified model built for this case study. The results are given as a proof of concept.

  15. Structure-based design, synthesis, and biological evaluation of imidazo[1,2-b]pyridazine-based p38 MAP kinase inhibitors.

    PubMed

    Kaieda, Akira; Takahashi, Masashi; Takai, Takafumi; Goto, Masayuki; Miyazaki, Takahiro; Hori, Yuri; Unno, Satoko; Kawamoto, Tomohiro; Tanaka, Toshimasa; Itono, Sachiko; Takagi, Terufumi; Hamada, Teruki; Shirasaki, Mikio; Okada, Kengo; Snell, Gyorgy; Bragstad, Ken; Sang, Bi-Ching; Uchikawa, Osamu; Miwatashi, Seiji

    2018-02-01

    We identified novel potent inhibitors of p38 MAP kinase using structure-based design strategy. X-ray crystallography showed that when p38 MAP kinase is complexed with TAK-715 (1) in a co-crystal structure, Phe169 adopts two conformations, where one interacts with 1 and the other shows no interaction with 1. Our structure-based design strategy shows that these two conformations converge into one via enhanced protein-ligand hydrophobic interactions. According to the strategy, we focused on scaffold transformation to identify imidazo[1,2-b]pyridazine derivatives as potent inhibitors of p38 MAP kinase. Among the herein described and evaluated compounds, N-oxide 16 exhibited potent inhibition of p38 MAP kinase and LPS-induced TNF-α production in human monocytic THP-1 cells, and significant in vivo efficacy in rat collagen-induced arthritis models. In this article, we report the discovery of potent, selective and orally bioavailable imidazo[1,2-b]pyridazine-based p38 MAP kinase inhibitors with pyridine N-oxide group. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Integrating spatially explicit indices of abundance and habitat quality: an applied example for greater sage-grouse management

    USGS Publications Warehouse

    Coates, Peter S.; Casazza, Michael L.; Ricca, Mark A.; Brussee, Brianne E.; Blomberg, Erik J.; Gustafson, K. Benjamin; Overton, Cory T.; Davis, Dawn M.; Niell, Lara E.; Espinosa, Shawn P.; Gardner, Scott C.; Delehanty, David J.

    2016-01-01

    Predictive species distributional models are a cornerstone of wildlife conservation planning. Constructing such models requires robust underpinning science that integrates formerly disparate data types to achieve effective species management. Greater sage-grouse (Centrocercus urophasianus; hereafter “sage-grouse”) populations are declining throughout sagebrush-steppe ecosystems in North America, particularly within the Great Basin, which heightens the need for novel management tools that maximize use of available information. Herein, we improve upon existing species distribution models by combining information about sage-grouse habitat quality, distribution, and abundance from multiple data sources. To measure habitat, we created spatially explicit maps depicting habitat selection indices (HSI) informed by > 35 500 independent telemetry locations from > 1600 sage-grouse collected over 15 years across much of the Great Basin. These indices were derived from models that accounted for selection at different spatial scales and seasons. A region-wide HSI was calculated using the HSI surfaces modelled for 12 independent subregions and then demarcated into distinct habitat quality classes. We also employed a novel index to describe landscape patterns of sage-grouse abundance and space use (AUI). The AUI is a probabilistic composite of: (i) breeding density patterns based on the spatial configuration of breeding leks and associated trends in male attendance; and (ii) year-round patterns of space use indexed by the decreasing probability of use with increasing distance to leks. The continuous AUI surface was then reclassified into two classes representing high and low/no use and abundance. Synthesis and applications. 
Using the example of sage-grouse, we demonstrate how the joint application of indices of habitat selection, abundance, and space use derived from multiple data sources yields a composite map that can guide effective allocation of management intensity across multiple spatial scales. As applied to sage-grouse, the composite map identifies spatially explicit management categories within sagebrush steppe that are most critical to sustaining sage-grouse populations as well as those areas where changes in land use would likely have minimal impact. Importantly, collaborative efforts among stakeholders guide which intersections of habitat selection indices and abundance and space use classes are used to define management categories. Because sage-grouse are an umbrella species, our joint-index modelling approach can help target effective conservation for other sagebrush obligate species, and can be readily applied to species in other ecosystems with similar life histories, such as central-placed breeding.

  17. Integrating spatially explicit indices of abundance and habitat quality: an applied example for greater sage-grouse management.

    PubMed

    Coates, Peter S; Casazza, Michael L; Ricca, Mark A; Brussee, Brianne E; Blomberg, Erik J; Gustafson, K Benjamin; Overton, Cory T; Davis, Dawn M; Niell, Lara E; Espinosa, Shawn P; Gardner, Scott C; Delehanty, David J

    2016-02-01

    Predictive species distributional models are a cornerstone of wildlife conservation planning. Constructing such models requires robust underpinning science that integrates formerly disparate data types to achieve effective species management. Greater sage-grouse (Centrocercus urophasianus; hereafter 'sage-grouse') populations are declining throughout sagebrush-steppe ecosystems in North America, particularly within the Great Basin, which heightens the need for novel management tools that maximize the use of available information. Herein, we improve upon existing species distribution models by combining information about sage-grouse habitat quality, distribution and abundance from multiple data sources. To measure habitat, we created spatially explicit maps depicting habitat selection indices (HSI) informed by >35 500 independent telemetry locations from >1600 sage-grouse collected over 15 years across much of the Great Basin. These indices were derived from models that accounted for selection at different spatial scales and seasons. A region-wide HSI was calculated using the HSI surfaces modelled for 12 independent subregions and then demarcated into distinct habitat quality classes. We also employed a novel index to describe landscape patterns of sage-grouse abundance and space use (AUI). The AUI is a probabilistic composite of the following: (i) breeding density patterns based on the spatial configuration of breeding leks and associated trends in male attendance; and (ii) year-round patterns of space use indexed by the decreasing probability of use with increasing distance to leks. The continuous AUI surface was then reclassified into two classes representing high and low/no use and abundance. Synthesis and applications. 
Using the example of sage-grouse, we demonstrate how the joint application of indices of habitat selection, abundance and space use derived from multiple data sources yields a composite map that can guide effective allocation of management intensity across multiple spatial scales. As applied to sage-grouse, the composite map identifies spatially explicit management categories within sagebrush steppe that are most critical to sustaining sage-grouse populations as well as those areas where changes in land use would likely have minimal impact. Importantly, collaborative efforts among stakeholders guide which intersections of habitat selection indices and abundance and space use classes are used to define management categories. Because sage-grouse are an umbrella species, our joint-index modelling approach can help target effective conservation for other sagebrush obligate species and can be readily applied to species in other ecosystems with similar life histories, such as central-placed breeding.

  18. Risk Estimation for Lung Cancer in Libya: Analysis Based on Standardized Morbidity Ratio, Poisson-Gamma Model, BYM Model and Mixture Model

    PubMed

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-03-01

    Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step in producing a good disease map. Libya was selected for this work, to examine its geographical variation in the incidence of lung cancer. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models were used to estimate the relative risk for lung cancer, together with population censuses of the study area for the period 2006 to 2011: the Standardized Morbidity Ratio (SMR), the most popular statistic in disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollié (BYM) model; and the Mixture model. As an initial step, this study provides a review of all proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs, and goodness-of-fit (GOF) statistics, which are commonly used in statistical modelling to compare fitted models, were used to compare and present the preliminary results. The main results show that the Poisson-gamma, BYM, and Mixture models can overcome the problem of the first model (SMR) when there are no observed lung cancer cases in certain districts. The results show that the Mixture model is the most robust and provides better relative risk estimates across the range of models.
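    The first two estimators can be sketched in a few lines (district counts and the Gamma prior below are invented for illustration, not taken from the paper): the SMR is the ratio of observed to expected counts, and the Poisson-gamma model shrinks it toward a prior, which fixes the zero-count problem noted above.

```python
import numpy as np

# Invented district data: observed lung cancer counts and expected counts
# (expected = district population share x overall rate).
observed = np.array([12, 0, 7, 30, 3])
expected = np.array([10.0, 2.5, 8.0, 25.0, 4.0])

# SMR = O / E. Its weakness: a district with zero observed cases gets
# SMR = 0 (and no uncertainty), however small its population.
smr = observed / expected

# Poisson-gamma relative risk: with O ~ Poisson(E * risk) and a
# Gamma(a, b) prior on the risk, the posterior mean is (O + a) / (E + b),
# which stays positive and stable even for zero-count districts.
a, b = 1.0, 1.0   # illustrative prior, not the paper's fitted values
rr = (observed + a) / (expected + b)
```

    Here the zero-count district keeps a small but non-zero smoothed risk, which is the behaviour that lets the Bayesian models "overcome the problem of the first model".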

  19. Risk Estimation for Lung Cancer in Libya: Analysis Based on Standardized Morbidity Ratio, Poisson-Gamma Model, BYM Model and Mixture Model

    PubMed Central

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-01-01

    Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step in producing a good disease map. Libya was selected for this work, to examine its geographical variation in the incidence of lung cancer. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models were used to estimate the relative risk for lung cancer, together with population censuses of the study area for the period 2006 to 2011: the Standardized Morbidity Ratio (SMR), the most popular statistic in disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollié (BYM) model; and the Mixture model. As an initial step, this study provides a review of all proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs, and goodness-of-fit (GOF) statistics, which are commonly used in statistical modelling to compare fitted models, were used to compare and present the preliminary results. The main results show that the Poisson-gamma, BYM, and Mixture models can overcome the problem of the first model (SMR) when there are no observed lung cancer cases in certain districts. The results show that the Mixture model is the most robust and provides better relative risk estimates across the range of models. PMID:28440974

  20. A Little Knowledge of Ground Motion: Explaining 3-D Physics-Based Modeling to Engineers

    NASA Astrophysics Data System (ADS)

    Porter, K.

    2014-12-01

    Users of earthquake planning scenarios require the ground-motion map to be credible enough to justify costly planning efforts, but not all ground-motion maps are right for all uses. There are two common ways to create a map of ground motion for a hypothetical earthquake. One approach is to map the median shaking estimated by empirical attenuation relationships. The other uses 3-D physics-based modeling, in which one analyzes a mathematical model of the earth's crust near the fault rupture and calculates the generation and propagation of seismic waves from source to ground surface by first principles. The two approaches produce different-looking maps. The more-familiar median maps smooth out variability and correlation. Using them in a planning scenario can lead to a systematic underestimation of damage and loss, and could leave a community underprepared for realistic shaking. The 3-D maps show variability, including some very high values that can disconcert non-scientists. So when the USGS Science Application for Risk Reduction's (SAFRR) Haywired scenario project selected 3-D maps, it was necessary to explain to scenario users—especially engineers who often use median maps—the differences, advantages, and disadvantages of the two approaches. We used authority, empirical evidence, and theory to support our choice. We prefaced our explanation with SAFRR's policy of using the best available earth science, and cited the credentials of the maps' developers and the reputation of the journal in which they published the maps. We cited recorded examples from past earthquakes of extreme ground motions that are like those in the scenario map. We explained the maps on theoretical grounds as well, explaining well established causes of variability: directivity, basin effects, and source parameters. 
The largest mapped motions relate to potentially unfamiliar extreme-value theory, so we used analogies to human longevity and the average age of the oldest person in samples of varying sizes to illustrate extreme values to non-scientists. We explained the importance of nonlinearity in the relationship between shaking and loss. This was the second time SAFRR encountered skeptics of 3-D maps among scenario consumers, so a short manuscript was prepared that would serve similar uses in the future.

  1. Evaluation of vector coastline features extracted from 'structure from motion'-derived elevation data

    USGS Publications Warehouse

    Kinsman, Nicole; Gibbs, Ann E.; Nolan, Matt

    2015-01-01

    For extensive and remote coastlines, the absence of high-quality elevation models—for example, those produced with lidar—leaves some coastal populations lacking one of the essential elements for mapping shoreline positions or flood extents. Here, we compare seven different elevation products in a low-lying area in western Alaska to establish their appropriateness for coastal mapping applications that require the delineation of elevation-based vectors. We further investigate the effective use of a Structure from Motion (SfM)-derived surface model (vertical RMSE < 20 cm) by generating a tidal datum-based shoreline and an inundation extent map for a 2011 flood event. Our results suggest that SfM-derived elevation products can yield elevation-based vector features with horizontal positional uncertainties comparable to those derived from other techniques. We also provide a rule-of-thumb equation to aid in the selection of minimum elevation model specifications based on terrain slope, vertical uncertainties, and desired horizontal accuracy.
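    A hedged sketch of a slope-based rule of thumb of this kind (the paper's exact equation is not reproduced here): on a planar slope, a vertical elevation uncertainty dz displaces a contour-derived feature, such as a datum-based shoreline, horizontally by roughly dz / tan(slope).

```python
import math

# Generic geometric relationship between vertical elevation-model
# uncertainty and the horizontal displacement of a contour-derived line.
def horizontal_uncertainty(dz_m, slope_deg):
    return dz_m / math.tan(math.radians(slope_deg))

# A 20 cm vertical RMSE on a gentle 1-degree slope shifts the mapped
# shoreline by on the order of ten metres.
dx = horizontal_uncertainty(0.20, 1.0)
```

    The flatter the terrain, the tighter the vertical specification must be to hit a given horizontal accuracy, which is why low-lying coasts are the demanding case.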

  2. Stochastic DT-MRI connectivity mapping on the GPU.

    PubMed

    McGraw, Tim; Nadar, Mariappan

    2007-01-01

    We present a method for stochastic fiber tract mapping from diffusion tensor MRI (DT-MRI) implemented on graphics hardware. From the simulated fibers we compute a connectivity map that gives an indication of the probability that two points in the dataset are connected by a neuronal fiber path. A Bayesian formulation of the fiber model is given and it is shown that the inversion method can be used to construct plausible connectivity. An implementation of this fiber model on the graphics processing unit (GPU) is presented. Since the fiber paths can be stochastically generated independently of one another, the algorithm is highly parallelizable. This allows us to exploit the data-parallel nature of the GPU fragment processors. We also present a framework for the connectivity computation on the GPU. Our implementation allows the user to interactively select regions of interest and observe the evolving connectivity results during computation. Results are presented from the stochastic generation of over 250,000 fiber steps per iteration at interactive frame rates on consumer-grade graphics hardware.
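    The stochastic tracking idea can be sketched on the CPU (the toy direction field, jitter model, and parameters below are invented; the paper samples fiber directions from a DT-MRI-derived Bayesian model on the GPU). Because each fiber is generated independently of the others, the outer loop is trivially parallelizable, which is the property the fragment-processor implementation exploits.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 2-D stand-in for a tensor field: the principal fiber direction points
# along +x everywhere, and stochastic tracking jitters each step's heading.
def track_fiber(seed, n_steps=100, step=0.5, angle_sd=0.15):
    pos = np.array(seed, dtype=float)
    path = [pos.copy()]
    theta = 0.0
    for _ in range(n_steps):
        theta = 0.9 * theta + rng.normal(scale=angle_sd)  # correlated jitter
        pos = pos + step * np.array([np.cos(theta), np.sin(theta)])
        path.append(pos.copy())
    return path

# Fibers are generated independently (hence parallelizable); a visit-count
# grid over many fibers is a crude connectivity map from the seed point.
grid = np.zeros((40, 80), dtype=int)
n_fibers = 500
for _ in range(n_fibers):
    for x, y in track_fiber((0.0, 20.0)):
        ix, iy = int(round(x)), int(round(y))
        if 0 <= iy < 40 and 0 <= ix < 80:
            grid[iy, ix] += 1

# The connectivity estimate to a target voxel would be the fraction of
# fibers that visit it.
```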

  3. Predicted contextual modulation varies with distance from pinwheel centers in the orientation preference map

    PubMed Central

    Okamoto, Tsuyoshi; Ikezoe, Koji; Tamura, Hiroshi; Watanabe, Masataka; Aihara, Kazuyuki; Fujita, Ichiro

    2011-01-01

    In the primary visual cortex (V1) of some mammals, columns of neurons with the full range of orientation preferences converge at the center of a pinwheel-like arrangement, the ‘pinwheel center’ (PWC). Because a neuron receives abundant inputs from nearby neurons, the neuron's position on the cortical map likely has a significant impact on its responses to the layout of orientations inside and outside its classical receptive field (CRF). To understand the positional specificity of responses, we constructed a computational model based on orientation preference maps in monkey V1 and hypothetical neuronal connections. The model simulations showed that neurons near PWCs displayed weaker, though detectable, orientation selectivity within their CRFs and much weaker contextual modulation from extra-CRF stimuli than neurons distant from PWCs. We suggest that neurons near PWCs robustly extract local orientation within their CRF embedded in visual scenes, and that contextual information is processed in regions distant from PWCs. PMID:22355631

  4. Search strategy selection in the Morris water maze indicates allocentric map formation during learning that underpins spatial memory formation.

    PubMed

    Rogers, Jake; Churilov, Leonid; Hannan, Anthony J; Renoir, Thibault

    2017-03-01

    Using a Matlab classification algorithm, we demonstrate that a highly salient distal cue array is required for significantly increased likelihoods of spatial search strategy selection during Morris water maze spatial learning. We hypothesized that increased spatial search strategy selection during spatial learning would be the key measure demonstrating the formation of an allocentric map to the escape location. Spatial memory, as indicated by quadrant preference for the area of the pool formerly containing the hidden platform, was assessed as the main measure that this allocentric map had formed during spatial learning. Our C57BL/6J wild-type (WT) mice exhibit quadrant preference in the highly salient cue paradigm but not in the low-saliency paradigm, corresponding with a 120% increase in the odds of a spatial search strategy selection during learning. In contrast, quadrant preference remains absent in serotonin 1A receptor (5-HT1AR) knockout (KO) mice, which exhibit impaired search strategy selection during spatial learning. Additionally, we also aimed to assess the impact of the quality of the distal cue array on the spatial learning curves of both latency to platform and path length using mixed-effect regression models and found no significant associations or interactions. In contrast, we demonstrated that the spatial learning curve for search strategy selection was absent during training in the low-saliency paradigm. Therefore, we propose that allocentric search strategy selection during spatial learning is the learning parameter in mice that robustly indicates the formation of a cognitive map for the escape goal location. These results also suggest that both latency to platform and path length spatial learning curves do not discriminate between allocentric and egocentric spatial learning and do not reliably predict spatial memory formation. We also show that spatial memory, as indicated by the absolute time in the quadrant formerly containing the hidden platform alone (without reference to the other areas of the pool), was not sensitive to cue saliency or impaired in 5-HT1AR KO mice. Importantly, in the absence of a search strategy analysis, this suggests that to establish that the Morris water maze has worked (i.e. control mice have formed an allocentric map to the escape goal location), a measure of quadrant preference needs to be reported to establish spatial memory formation. This has implications for studies that claim hippocampal functioning is impaired using latency to platform or path length differences within the existing Morris water maze literature. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Building regional early flood warning systems by AI techniques

    NASA Astrophysics Data System (ADS)

    Chang, F. J.; Chang, L. C.; Amin, M. Z. B. M.

    2017-12-01

    Building early flood warning systems is essential for protecting residents against flood hazards and taking timely action to mitigate losses. This study implements AI technology for forecasting multi-step-ahead regional flood inundation maps during storm events. The methodology includes three major schemes: (1) configuring the self-organizing map (SOM) to categorize a large number of regional inundation maps into a meaningful topology; (2) building dynamic neural networks to forecast multi-step-ahead average inundated depths (AID); and (3) adjusting the weights of the selected neuron in the constructed SOM based on the forecasted AID to obtain real-time regional inundation maps. The proposed models are trained and tested on a large number of inundation data sets collected in the regions of the river basin with the most frequent and serious flooding. The results show that the SOM topological relationships between individual neurons and their neighbouring neurons are visible and clearly distinguishable, and that the hybrid model can continuously provide multi-step-ahead, high-resolution regional inundation maps during storm events, with relatively small RMSE values and high R2 compared with numerical simulation data sets. The computing time is only a few seconds, enabling real-time regional flood inundation forecasting and early flood warning. We demonstrate that the proposed hybrid ANN-based model has robust and reliable predictive ability and can be used for early warning to mitigate flood disasters.
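    Scheme (1) can be illustrated with a minimal 1-D self-organizing map in plain NumPy (the inputs, network size, and training schedules below are invented; the study's SOM is trained on simulated inundation-depth rasters):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for regional inundation maps: 200 'maps' of 25 grid cells,
# drawn from three distinct flooding patterns plus noise.
patterns = rng.uniform(0, 1, size=(3, 25))
data = patterns[rng.integers(0, 3, 200)] + 0.05 * rng.normal(size=(200, 25))

# Minimal 1-D SOM: a row of neurons, each holding a codebook 'map'.
n_neurons, n_epochs = 6, 30
w = rng.uniform(0, 1, size=(n_neurons, 25))
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs)                 # decaying learning rate
    radius = max(0.5, 2.0 * (1 - epoch / n_epochs))   # shrinking neighbourhood
    for x in data:
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
        dist = np.abs(np.arange(n_neurons) - bmu)
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))  # neighbourhood kernel
        w += lr * h[:, None] * (x - w)

# Quantization error: mean distance from each input to its closest neuron.
q_error = np.mean([np.min(np.linalg.norm(w - x, axis=1)) for x in data])
```

    After training, each codebook vector summarizes a cluster of inundation patterns, and neighbouring neurons hold similar patterns, which is the "meaningful topology" the study relies on when it adjusts the selected neuron's weights in scheme (3).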

  6. Effects of urban microcellular environments on ray-tracing-based coverage predictions.

    PubMed

    Liu, Zhongyu; Guo, Lixin; Guan, Xiaowei; Sun, Jiejing

    2016-09-01

    The ray-tracing (RT) algorithm, which is based on geometrical optics and the uniform theory of diffraction, has become a typical deterministic approach for studying wave-propagation characteristics. In urban microcellular environments, the RT method depends heavily on detailed environmental information. The aim of this paper is to help select the appropriate level of accuracy required in building databases, to achieve a good tradeoff between database cost and prediction accuracy. After describing the operating procedure of the RT-based prediction model, this study focuses on the effect of errors in environmental information on prediction results. The environmental information consists of two parts: geometric and electrical parameters. The geometric information can be obtained from a digital map of a city. To study the effects of inaccuracies in geometric information (building layout) on RT-based coverage prediction, two artificial erroneous maps are generated from the original digital map, and a systematic analysis is performed by comparing predictions made with the erroneous maps against measurements and against predictions made with the original digital map. To make the conclusions more robust, the influence of random errors on RMS delay spread results is also investigated. Furthermore, given the effect of the electrical parameters on the accuracy of the RT model's predictions, the dielectric constant and conductivity of building materials are set to different values, and the path loss and RMS delay spread under the same circumstances are simulated with the RT prediction model.

  7. Uncertainty assessment of future land use in Brazil under increasing demand for bioenergy

    NASA Astrophysics Data System (ADS)

    van der Hilst, F.; Verstegen, J. A.; Karssenberg, D.; Faaij, A.

    2013-12-01

    Environmental impacts of a future increase in demand for bioenergy depend on the magnitude, location and pattern of the direct and indirect land use change of energy cropland expansion. Here we aim at 1) projecting the spatio-temporal pattern of sugar cane expansion and its effect on other land uses in Brazil towards 2030, and 2) assessing the uncertainty in this projection. For the spatio-temporal projection, three model components are used: 1) an initial land use map that shows the initial amount and location of sugar cane and all other relevant land use classes in the system, 2) a model to project the quantity of change of all land uses, and 3) a spatially explicit land use model that determines the location of change of all land uses. All three model components are sources of uncertainty, which is quantified by defining error models for all components and their inputs and propagating these errors through the chain of components. No recent accurate land use map is available for Brazil, so municipal census data and the global land cover map GlobCover are combined to create the initial land use map. The census data are disaggregated stochastically using GlobCover as a probability surface, to obtain a stochastic land use raster map for 2006. Since bioenergy is a global market, the quantity of change in sugar cane in Brazil depends on dynamics both in Brazil itself and in other parts of the world. Therefore, a computable general equilibrium (CGE) model, MAGNET, is run to produce a time series of the relative change of all land uses given an increased future demand for bioenergy. A sensitivity analysis finds the upper and lower boundaries of these changes, which define this component's error model. An initial selection of drivers of location for each land use class is extracted from the literature.
Using a Bayesian data assimilation technique and census data from 2007 to 2011 as observational data, the model is identified, meaning that the final selection and the optimal relative importance of the drivers of location are determined. The data assimilation technique takes into account uncertainty in the observational data and yields a stochastic representation of the identified model. Using all stochastic inputs, this land use change model is run to find at which locations future land use changes occur and to quantify the associated uncertainty. The results indicate that in the initial land use map the locations of pastures in particular are uncertain. Since the dynamics of the livestock sector play a major role in the land use development of Brazil, the effect of this uncertainty on the model output is large. Results of the data assimilation indicate that the drivers of location of the land uses vary over time (variations of up to 50% in the importance of the drivers), making it difficult to find a solid stationary system representation. Overall, we conclude that projection up to 2030 is only of use for quantifying impacts that act at a larger aggregation level, because at the local level the uncertainty is too large.

  8. Improving the Accuracy of Mapping Urban Vegetation Carbon Density by Combining Shadow Remove, Spectral Unmixing Analysis and Spatial Modeling

    NASA Astrophysics Data System (ADS)

    Qie, G.; Wang, G.; Wang, M.

    2016-12-01

    Mixed pixels and shadows caused by buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies, these factors are ignored, which results in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow removal analysis (LSRA) to remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and applied and compared the integrated models on the shadow-removed images to map vegetation carbon density. The methodology was examined in Shenzhen City, Southeast China. A data set of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables that contributed statistically significantly to improving the fit of the models and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added to the models as an important independent variable. The resulting estimates were evaluated using cross-validation. Our results showed that the integrated models obtained higher accuracies than traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: Urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
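The linear spectral unmixing step can be sketched as a least-squares problem. This illustrative version (not the paper's code) enforces the sum-to-one fraction constraint with a heavily weighted extra equation; a production unmixer would typically also impose non-negativity:

```python
import numpy as np

def unmix(pixel, endmembers, weight=1e3):
    """Estimate endmember fractions for one pixel spectrum.

    `endmembers` is a (bands x endmembers) matrix of pure spectra; the
    sum-to-one constraint is added as one heavily weighted extra row.
    """
    E = np.asarray(endmembers, dtype=float)
    p = np.asarray(pixel, dtype=float)
    A = np.vstack([E, weight * np.ones(E.shape[1])])  # append constraint row
    b = np.concatenate([p, [weight]])                 # its target value: 1
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f
```

For a pixel that is an exact linear mixture of the endmember spectra, the recovered fractions match the true ones and sum to one.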

  9. Predictive models and spatial variations of vital capacity in healthy people from 6 to 84 years old in China based on geographical factors.

    PubMed

    He, Jinwei; Ge, Miao; Wang, Congxia; Jiang, Naigui; Zhang, Mingxin; Yun, Pujun

    2014-07-01

    The aim of this study was to provide a scientific basis for a unified standard of the reference value of vital capacity (VC) of healthy subjects from 6 to 84 years old in China. The normal reference value of VC was correlated with seven geographical factors: altitude (X1), annual duration of sunshine (X2), annual mean air temperature (X3), annual mean relative humidity (X4), annual precipitation amount (X5), annual air temperature range (X6) and annual mean wind speed (X7). Predictive models were established by five different linear and nonlinear methods. The best models were selected by t-test. The geographical distribution map of VC in different age groups was interpolated by the kriging method using ArcGIS software. It was found that the correlation of VC with geographical factors in China was quite significant, especially for both males and females aged from 6 to 45. The best models were built for different age groups. The geographical distribution map shows the spatial variations of VC in China precisely. The VC of healthy subjects can be simulated by the best model or read from the geographical distribution map, provided the geographical factors for that city or county of China are known.
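The linear variant of such predictive models amounts to ordinary least squares on the seven geographical factors. A minimal sketch (the coefficients below are hypothetical, not those of the study):

```python
import numpy as np

def fit_vc_model(X, y):
    """Ordinary least squares for VC ~ intercept + X1..X7 (columns of X)."""
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef                                  # [intercept, b1..b7]

def predict_vc(coef, x_new):
    """Predict VC for a new location from its seven geographical factors."""
    return coef[0] + np.asarray(x_new) @ coef[1:]
```

Given a fitted model, VC at any city or county can be predicted from that location's geographical factors, as the abstract describes.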

  10. Conditional Selection of Genomic Alterations Dictates Cancer Evolution and Oncogenic Dependencies.

    PubMed

    Mina, Marco; Raynaud, Franck; Tavernari, Daniele; Battistello, Elena; Sungalee, Stephanie; Saghafinia, Sadegh; Laessle, Titouan; Sanchez-Vega, Francisco; Schultz, Nikolaus; Oricchio, Elisa; Ciriello, Giovanni

    2017-08-14

    Cancer evolves through the emergence and selection of molecular alterations. Cancer genome profiling has revealed that specific events are more or less likely to be co-selected, suggesting that the selection of one event depends on the others. However, the nature of these evolutionary dependencies and their impact remain unclear. Here, we designed SELECT, an algorithmic approach to systematically identify evolutionary dependencies from alteration patterns. By analyzing 6,456 genomes from multiple tumor types, we constructed a map of oncogenic dependencies associated with cellular pathways, transcriptional readouts, and therapeutic response. Finally, modeling of cancer evolution shows that alteration dependencies emerge only under conditional selection. These results provide a framework for the design of strategies to predict cancer progression and therapeutic response. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. An Annotated Bibliography on Tactical Map Display Symbology

    DTIC Science & Technology

    1989-08-01

    failure of attention to be focused on one element selectively in filtering tasks where only that one element was relevant to the discrimination. Failure of...The present study evaluates a class of models of human information processing made popular by Broadbent . A brief tachistoscopic display of one or two...213-219. Two experiments were performed to test Neisser’s two-stage model of recognition as applied to matching. Evidence of parallel processing was

  12. Uncovering and Managing the Impact of Methodological Choices for the Computational Construction of Socio-Technical Networks from Texts

    DTIC Science & Technology

    2012-09-01

    supported by the National Science Foundation (NSF) IGERT 9972762, the Army Research Institute (ARI) W91WAW07C0063, the Army Research Laboratory (ARL/CTA...prediction models in AutoMap .................................................. 144   Figure 13: Decision Tree for prediction model selection in...generated for nationally funded initiatives and made available through the Linguistic Data Consortium (LDC). An overview of these datasets is provided in

  13. Rainfall thresholds and susceptibility mapping for shallow landslides and debris flows in Scotland

    NASA Astrophysics Data System (ADS)

    Postance, Benjamin; Hillier, John; Dijkstra, Tom; Dixon, Neil

    2017-04-01

    Shallow translational slides and debris flows (hereafter 'landslides') pose a significant threat to life and cause substantial annual economic impacts (e.g. through damage and disruption of infrastructure). The focus of this research is the definition of objective rainfall thresholds using a weather radar system, and landslide susceptibility mapping. For the study area, Scotland, an inventory of 75 known landslides was used for the period 2003 to 2016. First, the effect of using different rain records (i.e. time series lengths) on two threshold selection techniques in receiver operating characteristic (ROC) analysis was evaluated. The results show that thresholds selected by 'Threat Score' (minimising false alarms) are sensitive to rain record length, which is not routinely considered, whereas thresholds selected using 'Optimal Point' (minimising failed alarms) are not; the latter may therefore be suited to establishing lower-limit thresholds and be of interest to those developing early warning systems. Robust thresholds are found for combinations of normalised rain duration and accumulation at 1 and 12 days' antecedence, respectively; these are normalised using the rainy-day normal and an equivalent measure for rain intensity. This research indicates that, in Scotland, rain accumulation is a better indicator than rain intensity and that landslides may be generated by threshold conditions lower than previously thought. Second, a landslide susceptibility map is constructed using a cross-validated logistic regression model. A novel element of the approach is that landslide susceptibility is calculated for individual hillslope sections. The developed thresholds and susceptibility map are combined to assess the potential hazards and impacts posed to the national highway network in Scotland.
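The two threshold selection techniques can be illustrated with a short sketch (synthetic data; the study's normalised rain variables are not reproduced here). 'Threat Score' maximises the critical success index, which penalises false alarms, while 'Optimal Point' picks the ROC point closest to the perfect classifier, which penalises failed alarms:

```python
import numpy as np

def pick_threshold(rain, slid, criterion="optimal_point"):
    """Scan candidate rain thresholds; events with rain >= threshold are alarms.

    'threat_score'  = hits / (hits + misses + false alarms)   (CSI)
    'optimal_point' = ROC point closest to (FPR=0, TPR=1)
    """
    rain = np.asarray(rain, float)
    slid = np.asarray(slid, bool)
    best_t, best_score = None, -np.inf
    for t in np.unique(rain):
        alarm = rain >= t
        hits = np.sum(alarm & slid)
        misses = np.sum(~alarm & slid)
        false_alarms = np.sum(alarm & ~slid)
        correct_neg = np.sum(~alarm & ~slid)
        if criterion == "threat_score":
            score = hits / max(hits + misses + false_alarms, 1)
        else:  # optimal point: minimise distance to the perfect classifier
            tpr = hits / max(hits + misses, 1)
            fpr = false_alarms / max(false_alarms + correct_neg, 1)
            score = -np.hypot(fpr, 1 - tpr)
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```

On real, overlapping rainfall distributions the two criteria generally select different thresholds, which is the sensitivity the study examines.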

  14. Sparse Bayesian Learning for Identifying Imaging Biomarkers in AD Prediction

    PubMed Central

    Shen, Li; Qi, Yuan; Kim, Sungeun; Nho, Kwangsik; Wan, Jing; Risacher, Shannon L.; Saykin, Andrew J.

    2010-01-01

    We apply sparse Bayesian learning methods, automatic relevance determination (ARD) and predictive ARD (PARD), to Alzheimer's disease (AD) classification to make accurate predictions and, at the same time, identify critical imaging markers relevant to AD. ARD is one of the most successful Bayesian feature selection methods. PARD is a powerful Bayesian feature selection method that provides sparse models which are easy to interpret. PARD selects the model with the best estimate of the predictive performance instead of choosing the one with the largest marginal model likelihood. A comparative study with support vector machines (SVM) shows that ARD/PARD in general outperform SVM in terms of prediction accuracy. An additional comparison with surface-based general linear model (GLM) analysis shows that the regions with the strongest signals are identified by both GLM and ARD/PARD. While the GLM P-map returns significant regions all over the cortex, ARD/PARD provide a small number of relevant and meaningful imaging markers with predictive power, including both cortical and subcortical measures. PMID:20879451

  15. Digital mapping of soil properties in Zala County, Hungary for the support of county-level spatial planning and land management

    NASA Astrophysics Data System (ADS)

    Pásztor, László; Laborczi, Annamária; Szatmári, Gábor; Fodor, Nándor; Bakacsi, Zsófia; Szabó, József; Illés, Gábor

    2014-05-01

    The main objective of the DOSoReMI.hu (Digital, Optimized, Soil Related Maps and Information in Hungary) project is to significantly extend how demands for spatial soil-related information can be satisfied in Hungary. Although a great amount of soil information is available from former mappings and surveys, discrepancies between the available and the expected data are emerging more and more frequently. The gaps are planned to be filled with optimized DSM products heavily based on legacy soil data, which still represent a valuable store of soil information. Impact assessment of the forecasted climate change and analysis of the possibilities for adaptation in agriculture and forestry can be supported by scenario-based land management modelling, whose results can be incorporated in spatial planning. This framework requires adequate, preferably timely and spatially detailed knowledge of the soil cover. To satisfy these demands in Zala County (one of the nineteen counties of Hungary), the soil conditions of the agricultural areas were digitally mapped based on the most detailed available recent and legacy soil data. The agri-environmental conditions were characterized according to the 1:10,000 scale genetic soil mapping methodology and the category system applied in Hungarian soil-agricultural chemistry practice. The factors constraining the fertility of soils were featured according to the biophysical criteria system elaborated for the delimitation of naturally handicapped areas in the EU. Production-related soil functions were regionalized incorporating agro-meteorological modelling. The appropriate derivatives of a 20 m digital elevation model were used in the analysis. Multitemporal MODIS products were selected from the period 2009-2011, representing different parts of the growing season and years with various climatic conditions.
Additionally, two climatic data layers, the 1:100,000 Geological Map of Hungary and the map of groundwater depth were used as auxiliary environmental covariables. Various soil-related information was mapped in three distinct sets: (i) basic soil properties determining agri-environmental conditions (soil type according to the Hungarian genetic classification, rootable depth, sand and clay content for the 1st and 2nd soil layers, and pH, OM and carbonate content for the plough layer); (ii) biophysical criteria of natural handicaps defined by the common European system; and (iii) agro-meteorologically modelled yield values for different crops, meteorological and management scenarios. Appropriate methods were selected for the spatial inference of each theme: regression and classification trees for categorical data; indicator kriging for probabilistic treatment of criterion information; and typically regression kriging for quantitative data. Our paper will present the mapping processes themselves, the resulting maps and some conclusions drawn from the experience. Acknowledgement: Our work was supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167) and by the European Union with the co-financing of the European Social Fund (TÁMOP-4.2.2.A-11/1/KONV-2012-0013.).

  16. Astronomical Data Integration Beyond the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Lemson, G.; Laurino, O.

    2015-09-01

    "Data integration" generally refers to the process of combining data from different source data bases into a unified view. Much work has been devoted in this area by the International Virtual Observatory Alliance (IVOA), allowing users to discover and access databases through standard protocols. However, different archives present their data through their own schemas and users must still select, filter, and combine data for each archive individually. An important reason for this is that the creation of common data models that satisfy all sub-disciplines is fraught with difficulties. Furthermore it requires a substantial amount of work for data providers to present their data according to some standard representation. We will argue that existing standards allow us to build a data integration framework that works around these problems. The particular framework requires the implementation of the IVOA Table Access Protocol (TAP) only. It uses the newly developed VO data modelling language (VO-DML) specification, which allows one to define extensible object-oriented data models using a subset of UML concepts through a simple XML serialization language. A rich mapping language allows one to describe how instances of VO-DML data models are represented by the TAP service, bridging the possible mismatch between a local archive's schema and some agreed-upon representation of the astronomical domain. In this so called local-as-view approach to data integration, “mediators" use the mapping prescriptions to translate queries phrased in terms of the common schema to the underlying TAP service. This mapping language has a graphical representation, which we expose through a web based graphical “drag-and-drop-and-connect" interface. This service allows any user to map the holdings of any TAP service to the data model(s) of choice. 
The mappings are defined and stored outside of the data sources themselves, which allows the interface to be used in a kind of crowd-sourcing effort to annotate any remote database of interest. This reduces the burden of publishing one's data and allows great flexibility in the definition of the views through which particular communities might wish to access remote archives. At the same time, the framework eases the user's effort to select, filter, and combine data from many different archives, so as to build knowledge bases for their analysis. We will present the framework and demonstrate a prototype implementation. We will discuss ideas for producing the missing elements, in particular the query language and the implementation of mediator tools to translate object queries to ADQL.

  17. Genome wide analysis of flowering time trait in multiple environments via high-throughput genotyping technique in Brassica napus L.

    PubMed

    Li, Lun; Long, Yan; Zhang, Libin; Dalton-Morgan, Jessica; Batley, Jacqueline; Yu, Longjiang; Meng, Jinling; Li, Maoteng

    2015-01-01

    The prediction of the flowering time (FT) trait in Brassica napus based on genome-wide markers, and the detection of the underlying genetic factors, is important not only for oilseed producers around the world but also for the other crops grown in rotation with B. napus in China. In previous studies, the low density and heterogeneity of the markers used obstructed genomic selection in B. napus and comprehensive mapping of FT-related loci. In this study, a high-density genome-wide SNP set was genotyped from a doubled-haploid population of B. napus. We first performed genomic prediction of FT traits in B. napus using SNPs across the genome under ten environments in three geographic regions via eight existing genomic prediction models. The results showed that all the models achieved comparably high accuracies, verifying the feasibility of genomic prediction in B. napus. Next, we performed large-scale mapping of FT-related loci among the three regions and found 437 associated SNPs, some of which represented known FT genes, such as AP1 and PHYE. The genes tagged by the associated SNPs were enriched in biological processes involved in the formation of flowers. Epistasis analysis showed significant interactions between detected loci, even among some known FT-related genes. All the results show that our large-scale, high-density genotype data are of great practical and scientific value for B. napus. To the best of our knowledge, this is the first evaluation of genomic selection models in B. napus based on a high-density SNP dataset and large-scale mapping of FT loci.
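A common baseline among genomic prediction models of the kind compared here is ridge regression on the SNP matrix (rrBLUP-style shrinkage of marker effects). This sketch is illustrative and is not claimed to be one of the paper's eight models:

```python
import numpy as np

def ridge_gp(X, y, lam=1.0):
    """Ridge estimates of marker effects: (X'X + lam*I)^-1 X'y.

    X holds genotype codes (e.g. 0/1/2 allele counts) with one column per
    SNP; y holds phenotypes (e.g. flowering time). `lam` controls shrinkage.
    """
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

Predicted phenotypes for new lines are then `X_new @ beta`; in practice `lam` is chosen by cross-validation or from variance components.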

  18. Validating a two-high-threshold measurement model for confidence rating data in recognition.

    PubMed

    Bröder, Arndt; Kellen, David; Schütz, Julia; Rohrmeier, Constanze

    2013-01-01

    Signal detection models as well as the two-high-threshold model (2HTM) have been used successfully as measurement models in recognition tasks to disentangle memory performance and response biases. A popular method in recognition memory is to elicit confidence judgements about the presumed old/new status of an item, allowing for the easy construction of ROCs. Since the 2HTM assumes fewer latent memory states than there are response options in confidence ratings, the 2HTM has to be extended by a mapping function which models individual rating scale usage. Unpublished data from two experiments in Bröder and Schütz (2009) validate the core memory parameters of the model, and three new experiments show that the response mapping parameters are selectively affected by manipulations intended to affect rating scale use, independently of overall old/new bias. Comparisons with SDT show that both models behave similarly, which highlights that both modelling approaches can be valuable (and complementary) elements in a researcher's toolbox.

  19. Drawing out the Resistance Narrative via Mapping in "The Selected Works of T. S. Spivet"

    ERIC Educational Resources Information Center

    Hameed, Alya

    2017-01-01

    Though many children's texts include maps that visually demarcate their journeys, modern texts rarely involve active mapping by child characters themselves, suggesting that children cannot (or should not) conceptualise the world for themselves, but require an adult's guidance to traverse it. Reif Larsen's "The Selected Works of T. S.…

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, D; Aryal, M; Samuels, S

    Purpose: A previous study showed that large sub-volumes of tumor with low blood volume (BV) (poorly perfused) in head-and-neck (HN) cancers are significantly associated with local-regional failure (LRF) after chemoradiation therapy, and could be targeted with intensified radiation doses. This study aimed to develop an automated and scalable model to extract voxel-wise contrast-enhanced temporal features of dynamic contrast-enhanced (DCE) MRI in HN cancers for predicting LRF. Methods: Our model development consists of training and testing stages. The training stage includes preprocessing of individual-voxel DCE curves from tumors for intensity normalization and temporal alignment, temporal feature extraction from the curves, feature selection, and training classifiers. For feature extraction, a multiresolution Haar discrete wavelet transformation is applied to each DCE curve to capture temporal contrast-enhanced features. The wavelet coefficients are selected as feature vectors. Support vector machine classifiers are trained to classify tumor voxels as having either low or high BV, for which a previously established BV threshold of 7.6% is used as ground truth. The model is tested on a new dataset. The voxel-wise DCE curves for training and testing were from 14 and 8 patients, respectively. A posterior probability map of the low-BV class was created to examine the tumor sub-volume classification. Voxel-wise classification accuracy was computed to evaluate the performance of the model. Results: Average classification accuracies were 87.2% for training (10-fold cross-validation) and 82.5% for testing. The lowest and highest accuracies (patient-wise) were 68.7% and 96.4%, respectively. Posterior probability maps of the low-BV class showed that the sub-volumes extracted by our model were similar to ones defined by the BV maps, with most misclassifications occurring near the sub-volume boundaries.
Conclusion: With further validation, this model could be valuable for supporting adaptive clinical trials. The framework is extendable and scalable to extract temporal contrast-enhanced features of DCE-MRI in other tumors. We would like to acknowledge NIH for funding support: UO1 CA183848.
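The Haar feature-extraction step can be sketched as follows (an orthonormal multiresolution Haar transform; the paper's preprocessing, SVM training, and BV thresholding are omitted):

```python
import numpy as np

def haar_dwt(signal, levels=3):
    """Multiresolution orthonormal Haar DWT of a (DCE) curve.

    Returns the coefficient vector [coarsest approximation,
    details from coarsest to finest]; its length equals the input length.
    """
    x = np.asarray(signal, float)
    assert len(x) % (2 ** levels) == 0, "length must be divisible by 2**levels"
    details = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # pairwise averages
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # pairwise differences
        details.append(detail)
        x = approx
    return np.concatenate([x] + details[::-1])
```

Because the transform is orthonormal, the coefficients preserve the curve's energy, so they can serve directly as feature vectors for a classifier.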

  1. Hydrologic record extension of water-level data in the Everglades Depth Estimation Network (EDEN), 1991-99

    USGS Publications Warehouse

    Conrads, Paul; Petkewich, Matthew D.; O'Reilly, Andrew M.; Telis, Pamela A.

    2015-01-01

    To hindcast and fill data records, 214 empirical models were developed: 189 are linear regression models and 25 are artificial neural network models. The coefficient of determination (R2) for 163 of the models is greater than 0.80, and the median percent model error (root mean square error divided by the range of the measured data) is 5 percent. To evaluate the performance of the hindcast models as a group, contour maps of modeled water-level surfaces at 2-centimeter (cm) intervals were generated using the hindcasted data. The 2-cm contour maps were examined for selected days to verify that water surfaces from the EDEN model are consistent with the input data. The biweekly 2-cm contour maps showed more issues on days in 1990 than on days after 1990. May 1990 had the lowest water levels in the Everglades of the 21-year dataset used for the hindcasting study. To hindcast these record low conditions in 1990, many of the hindcast models would require large extrapolations beyond the range of the predictive quality of the models. For these reasons, it was decided to limit the hindcasted data to the period January 1, 1991, to December 31, 1999. Overall, the hindcasted and gap-filled data are assumed to provide reasonable estimates of station-specific water-level data for an extended historical period to inform research and natural resource management in the Everglades.
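The reported error metric, RMSE divided by the range of the measured data, is straightforward to compute; a minimal sketch:

```python
import math

def percent_model_error(observed, predicted):
    """Percent model error: RMSE divided by the range of the measured data."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100 * rmse / (max(observed) - min(observed))
```

Normalising by the observed range makes model errors comparable across stations with very different water-level fluctuations.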

  2. Using Temporal Modulation Sensitivity to Select Stimulation Sites for Processor MAPs in Cochlear Implant Listeners

    PubMed Central

    Garadat, Soha N.; Zwolan, Teresa A.; Pfingst, Bryan E.

    2013-01-01

    Previous studies in our laboratory showed that temporal acuity, as assessed by modulation detection thresholds (MDTs), varied across activation sites and that this site-to-site variability was subject specific. Using two 10-channel MAPs, the previous experiments showed that processor MAPs with better across-site mean (ASM) MDTs yielded better speech recognition than MAPs with poorer ASM MDTs tested in the same subject. The current study extends our earlier work on developing improved fitting strategies to test the feasibility of using a site-selection approach in the clinical domain. This study examined the hypothesis that revising the clinical speech processor MAP for cochlear implant (CI) recipients, by turning off selected sites that have poorer temporal acuity and reallocating frequencies to the remaining electrodes, would lead to improved speech recognition. Twelve CI recipients participated in the experiments. We found that the site-selection procedure based on MDTs in the presence of a masker resulted in improved performance on consonant recognition and recognition of sentences in noise. In contrast, vowel recognition was poorer with the experimental MAP than with the clinical MAP, possibly due to reduced spectral resolution when sites were removed from the experimental MAP. Overall, these results suggest a promising path for improving recipient outcomes using personalized processor-fitting strategies based on a psychophysical measure of temporal acuity. PMID:23881208

  3. Environmental limitation mapping of potential biomass resources across the conterminous United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Christopher; Halbleib, Michael D.; Hannaway, David B.

    Several crops have recently been identified as potential dedicated bioenergy feedstocks for the production of power, fuels, and bioproducts. Despite being identified as early as the 1980s, no systematic work has been undertaken to characterize the spatial distribution of their long-term production potentials in the United States. Such information is a starting point for planners and economic modelers, and there is a need for this spatial information to be developed in a consistent manner for a variety of crops, so that their production potentials can be intercompared to support crop selection decisions. As part of the Sun Grant Regional Feedstock Partnership (RFP), an approach to mapping these potential biomass resources was developed to take advantage of the informational synergy realized when bringing together coordinated field trials, close interaction with expert agronomists, and spatial modeling into a single, collaborative effort. A modeling and mapping system called PRISM-ELM was designed to answer a basic question: How do climate and soil characteristics affect the spatial distribution and long-term production patterns of a given crop? This empirical/mechanistic/biogeographical hybrid model employs a limiting-factor approach, where productivity is determined by the most limiting of the factors addressed in submodels that simulate water balance, winter low-temperature response, summer high-temperature response, and soil pH, salinity, and drainage. Yield maps are developed through linear regressions relating soil and climate attributes to reported yield data. The model was parameterized and validated using grain yield data for winter wheat and maize, which served as benchmarks for parameterizing the model for upland and lowland switchgrass, CRP grasses, Miscanthus, biomass sorghum, energycane, willow, and poplar.
The resulting maps served as potential production inputs to analyses comparing the viability of biomass crops under various economic scenarios. The modeling and parameterization framework can be expanded to include other biomass crops.

  4. Environmental limitation mapping of potential biomass resources across the conterminous United States

    DOE PAGES

    Daly, Christopher; Halbleib, Michael D.; Hannaway, David B.; ...

    2017-12-22

    Several crops have recently been identified as potential dedicated bioenergy feedstocks for the production of power, fuels, and bioproducts. Despite being identified as early as the 1980s, no systematic work has been undertaken to characterize the spatial distribution of their long-term production potentials in the United States. Such information is a starting point for planners and economic modelers, and there is a need for this spatial information to be developed in a consistent manner for a variety of crops, so that their production potentials can be intercompared to support crop selection decisions. As part of the Sun Grant Regional Feedstock Partnership (RFP), an approach to mapping these potential biomass resources was developed to take advantage of the informational synergy realized when bringing together coordinated field trials, close interaction with expert agronomists, and spatial modeling into a single, collaborative effort. A modeling and mapping system called PRISM-ELM was designed to answer a basic question: How do climate and soil characteristics affect the spatial distribution and long-term production patterns of a given crop? This empirical/mechanistic/biogeographical hybrid model employs a limiting factor approach, where productivity is determined by the most limiting of the factors addressed in submodels that simulate water balance, winter low-temperature response, summer high-temperature response, and soil pH, salinity, and drainage. Yield maps are developed through linear regressions relating soil and climate attributes to reported yield data. The model was parameterized and validated using grain yield data for winter wheat and maize, which served as benchmarks for parameterizing the model for upland and lowland switchgrass, CRP grasses, Miscanthus, biomass sorghum, energycane, willow, and poplar. 
The resulting maps served as potential production inputs to analyses comparing the viability of biomass crops under various economic scenarios. The modeling and parameterization framework can be expanded to include other biomass crops.
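The limiting-factor idea in the abstract above can be sketched in a few lines: per-cell productivity is set by the minimum of several 0-1 suitability scores. This is a minimal illustration, not PRISM-ELM itself; the factor names, values, and the `limiting_factor_yield` helper are invented for the sketch.

```python
import numpy as np

def limiting_factor_yield(factors, max_yield):
    """factors: dict of name -> array of 0-1 suitability scores per grid cell."""
    stacked = np.stack(list(factors.values()))      # shape (n_factors, n_cells)
    suitability = stacked.min(axis=0)               # most limiting factor wins
    limiting = np.array(list(factors))[stacked.argmin(axis=0)]
    return max_yield * suitability, limiting

# Three hypothetical grid cells scored by three of the submodel factors.
factors = {
    "water_balance":   np.array([0.9, 0.4, 0.8]),
    "winter_low_temp": np.array([0.7, 0.9, 0.2]),
    "soil_ph":         np.array([1.0, 0.8, 0.6]),
}
yields, limiting = limiting_factor_yield(factors, max_yield=10.0)
```

Besides the yield surface, the argmin identifies *which* factor limits each cell, which is the kind of diagnostic an environmental limitation map conveys.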

  5. Hybrid incompatibility arises in a sequence-based bioenergetic model of transcription factor binding.

    PubMed

    Tulchinsky, Alexander Y; Johnson, Norman A; Watt, Ward B; Porter, Adam H

    2014-11-01

    Postzygotic isolation between incipient species results from the accumulation of incompatibilities that arise as a consequence of genetic divergence. When phenotypes are determined by regulatory interactions, hybrid incompatibility can evolve even as a consequence of parallel adaptation in parental populations because interacting genes can produce the same phenotype through incompatible allelic combinations. We explore the evolutionary conditions that promote and constrain hybrid incompatibility in regulatory networks using a bioenergetic model (combining thermodynamics and kinetics) of transcriptional regulation, considering the bioenergetic basis of molecular interactions between transcription factors (TFs) and their binding sites. The bioenergetic parameters consider the free energy of formation of the bond between the TF and its binding site and the availability of TFs in the intracellular environment. Together these determine fractional occupancy of the TF on the promoter site, the degree of subsequent gene expression and, in diploids, the degree of dominance among allelic interactions. This results in a sigmoid genotype-phenotype map and fitness landscape, with the details of the shape determining the degree of bioenergetic evolutionary constraint on hybrid incompatibility. Using individual-based simulations, we subjected two allopatric populations to parallel directional or stabilizing selection. Misregulation of hybrid gene expression occurred under either type of selection, although it evolved faster under directional selection. Under directional selection, the extent of hybrid incompatibility increased with the slope of the genotype-phenotype map near the derived parental expression level. Under stabilizing selection, hybrid incompatibility arose from compensatory mutations and was greater when the bioenergetic properties of the interaction caused the space of nearly neutral genotypes around the stable expression level to be wide. 
F2's showed higher hybrid incompatibility than F1's to the extent that the bioenergetic properties favored dominant regulatory interactions. The present model is a mechanistically explicit case of the Bateson-Dobzhansky-Muller model, connecting environmental selective pressure to hybrid incompatibility through the molecular mechanism of regulatory divergence. The bioenergetic parameters that determine expression represent measurable properties of transcriptional regulation, providing a predictive framework for empirical studies of how phenotypic evolution results in epistatic incompatibility at the molecular level in hybrids. Copyright © 2014 by the Genetics Society of America.
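The sigmoid genotype-phenotype map described above follows from standard two-state binding thermodynamics: occupancy depends on the binding free energy and TF availability. This sketch uses the textbook form theta = [TF] / ([TF] + Kd) with Kd = exp(dG/RT); the specific parameter values and function names are illustrative, not the paper's.

```python
import math

RT = 0.593  # kcal/mol at ~298 K

def fractional_occupancy(delta_g, tf_concentration):
    """Two-state binding: theta = [TF] / ([TF] + Kd), with Kd = exp(dG/RT)."""
    kd = math.exp(delta_g / RT)
    return tf_concentration / (tf_concentration + kd)

def expression(delta_g, tf_concentration, max_rate=1.0):
    # Expression is taken as proportional to promoter occupancy, which is
    # what makes the map from binding energy to phenotype sigmoid.
    return max_rate * fractional_occupancy(delta_g, tf_concentration)
```

Stronger binding (more negative dG) or higher TF availability pushes occupancy toward 1; the flat shoulders of this sigmoid are where the abstract's "nearly neutral" genotype space lives.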

  6. Using perceptual rules in interactive visualization

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Treinish, Lloyd A.

    1994-05-01

    In visualization, data are represented as variations in grayscale, hue, shape, and texture. They can be mapped to lines, surfaces, and glyphs, and can be represented statically or in animation. In modern visualization systems, the choices for representing data seem unlimited. This is both a blessing and a curse, however, since the visual impression created by the visualization depends critically on which dimensions are selected for representing the data (Bertin, 1967; Tufte, 1983; Cleveland, 1991). In modern visualization systems, the user can interactively select many different mapping and representation operations, and can interactively select processing operations (e.g., applying a color map), realization operations (e.g., generating geometric structures such as contours or streamlines), and rendering operations (e.g., shading or ray-tracing). The user can, for example, map data to a color map, then apply contour lines, then shift the viewing angle, then change the color map again, etc. In many systems, the user can vary the choices for each operation, selecting, for example, particular color maps, contour characteristics, and shading techniques. The hope is that this process will eventually converge on a visual representation which expresses the structure of the data and effectively communicates its message in a way that meets the user's goals. Sometimes, however, it results in visual representations which are confusing, misleading, and garish.

  7. Multilocus approaches for the measurement of selection on correlated genetic loci.

    PubMed

    Gompert, Zachariah; Egan, Scott P; Barrett, Rowan D H; Feder, Jeffrey L; Nosil, Patrik

    2017-01-01

    The study of ecological speciation is inherently linked to the study of selection. Methods for estimating phenotypic selection within a generation based on associations between trait values and fitness (e.g. survival) of individuals are established. These methods attempt to disentangle selection acting directly on a trait from indirect selection caused by correlations with other traits via multivariate statistical approaches (i.e. inference of selection gradients). The estimation of selection on genotypic or genomic variation could also benefit from disentangling direct and indirect selection on genetic loci. However, achieving this goal is difficult with genomic data because the number of potentially correlated genetic loci (p) is very large relative to the number of individuals sampled (n). In other words, the number of model parameters exceeds the number of observations (p ≫ n). We present simulations examining the utility of whole-genome regression approaches (i.e. Bayesian sparse linear mixed models) for quantifying direct selection in cases where p ≫ n. Such models have been used for genome-wide association mapping and are common in artificial breeding. Our results show they hold promise for studies of natural selection in the wild and thus of ecological speciation. But we also demonstrate important limitations to the approach and discuss study designs required for more robust inferences. © 2016 John Wiley & Sons Ltd.
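The p >> n setting described above can be illustrated with any sparsity-inducing regression; as a stand-in for the abstract's Bayesian sparse linear mixed models, this sketch fits a plain L1-penalized (lasso) regression by coordinate descent, which likewise shrinks most locus effects to zero so a few directly selected loci can stand out. The data, penalty, and `lasso_cd` helper are invented for the illustration.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]          # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
n, p = 40, 200                       # far more "loci" than individuals
X = rng.standard_normal((n, p))      # stand-in genotype matrix
true_beta = np.zeros(p)
true_beta[[3, 7]] = [2.0, -1.5]      # only two loci under direct selection
y = X @ true_beta + 0.1 * rng.standard_normal(n)
beta_hat = lasso_cd(X, y, lam=5.0)
```

Despite 200 correlated predictors and only 40 observations, the penalty zeroes out nearly all coefficients, which is the property that makes whole-genome regression feasible when p >> n.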

  8. Universality in the Evolution of Orientation Columns in the Visual Cortex

    PubMed Central

    Kaschube, Matthias; Schnabel, Michael; Löwel, Siegrid; Coppola, David M.; White, Leonard E.; Wolf, Fred

    2011-01-01

    The brain’s visual cortex processes information concerning form, pattern, and motion within functional maps that reflect the layout of neuronal circuits. We analyzed functional maps of orientation preference in the ferret, tree shrew, and galago—three species separated since the basal radiation of placental mammals more than 65 million years ago—and found a common organizing principle. A symmetry-based class of models for the self-organization of cortical networks predicts all essential features of the layout of these neuronal circuits, but only if suppressive long-range interactions dominate development. We show mathematically that orientation-selective long-range connectivity can mediate the required interactions. Our results suggest that self-organization has canalized the evolution of the neuronal circuitry underlying orientation preference maps into a single common design. PMID:21051599

  9. ADME-Space: a new tool for medicinal chemists to explore ADME properties.

    PubMed

    Bocci, Giovanni; Carosati, Emanuele; Vayer, Philippe; Arrault, Alban; Lozano, Sylvain; Cruciani, Gabriele

    2017-07-25

    We introduce a new chemical space for drugs and drug-like molecules, exclusively based on their in silico ADME behaviour. This ADME-Space is based on a self-organizing map (SOM) applied to 26,000 molecules. Twenty accurate QSPR models, describing important ADME properties, were developed and, successively, used as new molecular descriptors not related to molecular structure. Applications include permeability, active transport, metabolism and bioavailability studies, but the method can even be used to discuss drug-drug interactions (DDIs) or be extended to additional ADME properties. Thus, the ADME-Space opens a new framework for multi-parametric data analysis in drug discovery where all ADME behaviours of molecules are condensed in one map: it allows medicinal chemists to simultaneously monitor several ADME properties, to rapidly select optimal ADME profiles, retrieve warnings on potential ADME problems and DDIs, or select proper in vitro experiments.
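A minimal SOM sketch shows the mechanism the ADME-Space relies on: molecules described by property vectors are placed on a 2-D grid so that similar profiles land on nearby units. This is an illustrative toy (random stand-ins for the 20 QSPR outputs, a small grid, simple decay schedules), not the paper's 26,000-molecule map.

```python
import numpy as np

def train_som(data, grid=(6, 6), n_iter=500, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        d = ((weights - x) ** 2).sum(axis=2)
        bmu = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
        lr = lr0 * (1 - t / n_iter)                   # decaying learning rate
        sigma = sigma0 * (1 - t / n_iter) + 0.5       # shrinking neighborhood
        dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
        nbh = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * nbh * (x - weights)           # pull units toward x
    return weights

def map_position(weights, x):
    d = ((weights - x) ** 2).sum(axis=2)
    return np.unravel_index(d.argmin(), d.shape)

rng = np.random.default_rng(1)
props = rng.random((100, 20))   # stand-in for 20 predicted ADME properties
som = train_som(props)
```

After training, `map_position` assigns any molecule a grid cell, so several ADME behaviours can be read off a single map location, which is the monitoring use case the abstract describes.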

  10. Photometric Modeling of Simulated Surface-Resolved Bennu Images

    NASA Astrophysics Data System (ADS)

    Golish, D.; DellaGiustina, D. N.; Clark, B.; Li, J. Y.; Zou, X. D.; Bennett, C. A.; Lauretta, D. S.

    2017-12-01

    The Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) is a NASA mission to study and return a sample of asteroid (101955) Bennu. Imaging data from the mission will be used to develop empirical surface-resolved photometric models of Bennu at a series of wavelengths. These models will be used to photometrically correct panchromatic and color base maps of Bennu, compensating for variations due to shadows and photometric angle differences, thereby minimizing seams in mosaicked images. Well-corrected mosaics are critical to the generation of a global hazard map and a global 1064-nm reflectance map which predicts LIDAR response. These data products directly feed into the selection of a site from which to safely acquire a sample. We also require photometric correction for the creation of color ratio maps of Bennu. Color ratio maps provide insight into the composition and geological history of the surface and allow for comparison to other Solar System small bodies. In advance of OSIRIS-REx's arrival at Bennu, we use simulated images to judge the efficacy of both the photometric modeling software and the mission observation plan. Our simulation software is based on USGS's Integrated Software for Imagers and Spectrometers (ISIS) and uses a synthetic shape model, a camera model, and an empirical photometric model to generate simulated images. This approach gives us the flexibility to create simulated images of Bennu based on analog surfaces from other small Solar System bodies and to test our modeling software under those conditions. Our photometric modeling software fits image data to several conventional empirical photometric models and produces the best fit model parameters. The process is largely automated, which is crucial to the efficient production of data products during proximity operations. 
The software also produces several metrics on the quality of the observations themselves, such as surface coverage and the completeness of the data set for evaluating the phase and disk functions of the surface. Application of this software to simulated mission data has revealed limitations in the initial mission design, which has fed back into the planning process. The entire photometric pipeline further serves as an exercise of planned activities for proximity operations.
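One of the conventional empirical models such a pipeline can fit is the Minnaert model, I/F = A * mu0**k * mu**(k-1), which becomes linear in log space and can be fit by ordinary least squares. The sketch below recovers known parameters from synthetic radiance samples; the sample values and noise level are invented and this is not the mission's software.

```python
import numpy as np

rng = np.random.default_rng(2)
mu0 = rng.uniform(0.2, 1.0, 300)         # cos(incidence angle) per sample
mu = rng.uniform(0.2, 1.0, 300)          # cos(emission angle) per sample
A_true, k_true = 0.04, 0.7
i_over_f = A_true * mu0**k_true * mu**(k_true - 1)
i_over_f *= rng.normal(1.0, 0.01, 300)   # ~1% radiometric noise

# Minnaert is linear in log space: log(I/F * mu) = log A + k * log(mu0 * mu)
y = np.log(i_over_f * mu)
X = np.column_stack([np.ones_like(mu0), np.log(mu0 * mu)])
(logA_fit, k_fit), *_ = np.linalg.lstsq(X, y, rcond=None)
A_fit = np.exp(logA_fit)
```

Fitted parameters like these are what enable the photometric correction step: dividing each pixel by the model evaluated at its photometric angles flattens illumination differences across a mosaic.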

  11. Spatial mapping and prediction of Plasmodium falciparum infection risk among school-aged children in Côte d'Ivoire.

    PubMed

    Houngbedji, Clarisse A; Chammartin, Frédérique; Yapi, Richard B; Hürlimann, Eveline; N'Dri, Prisca B; Silué, Kigbafori D; Soro, Gotianwa; Koudou, Benjamin G; Assi, Serge-Brice; N'Goran, Eliézer K; Fantodji, Agathe; Utzinger, Jürg; Vounatsou, Penelope; Raso, Giovanna

    2016-09-07

    In Côte d'Ivoire, malaria remains a major public health issue, and thus a priority to be tackled. The aim of this study was to identify spatially explicit indicators of Plasmodium falciparum infection among school-aged children and to undertake a model-based spatial prediction of P. falciparum infection risk using environmental predictors. A cross-sectional survey was conducted, including parasitological examinations and interviews with more than 5,000 children from 93 schools across Côte d'Ivoire. A finger-prick blood sample was obtained from each child to determine Plasmodium species-specific infection and parasitaemia using Giemsa-stained thick and thin blood films. Household socioeconomic status was assessed through asset ownership and household characteristics. Children were interviewed for preventive measures against malaria. Environmental data were gathered from satellite images and digitized maps. A Bayesian geostatistical stochastic search variable selection procedure was employed to identify factors related to P. falciparum infection risk. Bayesian geostatistical logistic regression models were used to map the spatial distribution of P. falciparum infection and to predict the infection prevalence at non-sampled locations via Bayesian kriging. Complete data sets were available from 5,322 children aged 5-16 years across Côte d'Ivoire. P. falciparum was the predominant species (94.5 %). The Bayesian geostatistical variable selection procedure identified land cover and socioeconomic status as important predictors for infection risk with P. falciparum. Model-based prediction identified high P. falciparum infection risk in the north, central-east, south-east, west and south-west of Côte d'Ivoire. Low-risk areas were found in the south-eastern area close to Abidjan and the south-central and west-central part of the country. The P. 
falciparum infection risk and related uncertainty estimates for school-aged children in Côte d'Ivoire represent the most up-to-date malaria risk maps. These tools can be used for spatial targeting of malaria control interventions.
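The prediction-at-unsampled-locations step above can be illustrated with classical ordinary kriging (the study itself uses Bayesian geostatistical logistic models and Bayesian kriging; this simplified analogue, with a made-up exponential covariance and four invented survey points, only shows the interpolation mechanics).

```python
import numpy as np

def exp_cov(d, sill=1.0, rng_par=2.0):
    """Exponential covariance model (illustrative parameter values)."""
    return sill * np.exp(-d / rng_par)

def ordinary_krige(xy, z, xy_new):
    # Ordinary kriging system: covariances among data points, augmented
    # with a Lagrange row/column enforcing weights that sum to one.
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    n = len(xy)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = exp_cov(d)
    K[n, n] = 0.0
    d0 = np.linalg.norm(xy - xy_new, axis=1)
    k = np.append(exp_cov(d0), 1.0)
    w = np.linalg.solve(K, k)
    return w[:n] @ z

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
prev = np.array([0.6, 0.4, 0.5, 0.3])       # observed prevalence at 4 schools
pred = ordinary_krige(xy, prev, np.array([0.5, 0.5]))
```

Predicted prevalence at the unsampled center is a weighted average of the surrounding observations; a full Bayesian treatment would also propagate parameter uncertainty into the map, as the study's uncertainty estimates do.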

  12. Using a detailed uncertainty analysis to adjust mapped rates of forest disturbance derived from Landsat time series data (Invited)

    NASA Astrophysics Data System (ADS)

    Cohen, W. B.; Yang, Z.; Stehman, S.; Huang, C.; Healey, S. P.

    2013-12-01

    Forest ecosystem process models require spatially and temporally detailed disturbance data to accurately predict fluxes of carbon or changes in biodiversity over time. A variety of new mapping algorithms using dense Landsat time series show great promise for providing disturbance characterizations at an annual time step. These algorithms provide unprecedented detail with respect to timing, magnitude, and duration of individual disturbance events, and causal agent. But all maps have error and disturbance maps in particular can have significant omission error because many disturbances are relatively subtle. Because disturbance, although ubiquitous, can be a relatively rare event spatially in any given year, omission errors can have a great impact on mapped rates. Using a high quality reference disturbance dataset, it is possible to not only characterize map errors but also to adjust mapped disturbance rates to provide unbiased rate estimates with confidence intervals. We present results from a national-level disturbance mapping project (the North American Forest Dynamics project) based on the Vegetation Change Tracker (VCT) with annual Landsat time series and uncertainty analyses that consist of three basic components: response design, statistical design, and analyses. The response design describes the reference data collection, in terms of the tool used (TimeSync), a formal description of interpretations, and the approach for data collection. The statistical design defines the selection of plot samples to be interpreted, whether stratification is used, and the sample size. Analyses involve derivation of standard agreement matrices between the map and the reference data, and use of inclusion probabilities and post-stratification to adjust mapped disturbance rates. Because for NAFD we use annual time series, both mapped and adjusted rates are provided at an annual time step from ~1985-present. 
Preliminary evaluations indicate that VCT captures most of the higher intensity disturbances, but that many of the lower intensity disturbances (thinnings, stress related to insects and disease, etc.) are missed. Because lower intensity disturbances are a large proportion of the total set of disturbances, adjusting mapped disturbance rates to include these can be important for inclusion in ecosystem process models. The described statistical disturbance rate adjustments are aspatial in nature, such that the basic underlying map is unchanged. For spatially explicit ecosystem modeling, such adjustments, although important, can be difficult to directly incorporate. One approach for improving the basic underlying map is an ensemble modeling approach that uses several different complementary maps, each derived from a different algorithm and having their own strengths and weaknesses relative to disturbance magnitude and causal agent of disturbance. We will present results from a pilot study associated with the Landscape Change Monitoring System (LCMS), an emerging national-level program that builds upon NAFD and the well-established Monitoring Trends in Burn Severity (MTBS) program.
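The aspatial rate adjustment described above reduces to a stratified estimator: reference interpretations on a probability sample within each map class correct the mapped rate for omission and commission. The numbers below are invented to show the arithmetic (note how omission in the large "undisturbed" stratum dominates the correction).

```python
# map class: (area weight W_h, plots sampled n_h, plots reference-disturbed k_h)
strata = {
    "mapped_disturbed":   (0.05, 100, 80),   # 20% commission (false alarms)
    "mapped_undisturbed": (0.95, 300, 12),   # subtle disturbances missed
}

mapped_rate = strata["mapped_disturbed"][0]          # naive map-based rate
# Post-stratified estimate: sum over strata of W_h * (k_h / n_h)
adjusted_rate = sum(w * (k / n) for w, n, k in strata.values())
```

Here the adjusted rate exceeds the mapped rate because omission error in the dominant stratum outweighs commission error, exactly the pattern the NAFD evaluations report for low-intensity disturbances.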

  13. Segmentation of singularity maps in the context of soil porosity

    NASA Astrophysics Data System (ADS)

    Martin-Sotoca, Juan J.; Saa-Requejo, Antonio; Grau, Juan; Tarquis, Ana M.

    2016-04-01

    Geochemical exploration has found increasing interest in, and benefit from, using fractal (power-law) models to characterize geochemical distributions, including the concentration-area (C-A) model (Cheng et al., 1994; Cheng, 2012) and the concentration-volume (C-V) model (Afzal et al., 2011), to name just a few examples. These methods are based on the singularity maps of a measure that at each point define areas with self-similar properties, shown as power-law relationships in concentration-area plots (C-A method). The C-A method together with the singularity map (the "Singularity-CA" method) defines thresholds that can be applied to segment the map. Recently, the "Singularity-CA" method has been applied to binarize 2D grayscale Computed Tomography (CT) soil images (Martin-Sotoca et al., 2015). Unlike image segmentation based on global thresholding methods, the "Singularity-CA" method quantifies the local scaling property of the grayscale value map in the space domain and determines the intensity of local singularities. It can be used as a high-pass-filter technique to enhance high-frequency patterns usually regarded as anomalies when applied to maps. In this work we will pay special attention to how to select the singularity thresholds in the C-A plot to segment the image. We will compare two methods: 1) cross point of linear regressions and 2) Wavelet Transform Modulus Maxima (WTMM) singularity function detection. REFERENCES Cheng, Q., Agterberg, F. P. and Ballantyne, S. B. (1994). The separation of geochemical anomalies from background by fractal methods. Journal of Geochemical Exploration, 51, 109-130. Cheng, Q. (2012). Singularity theory and methods for mapping geochemical anomalies caused by buried sources and for predicting undiscovered mineral deposits in covered areas. Journal of Geochemical Exploration, 122, 55-70. Afzal, P., Fadakar Alghalandis, Y., Khakzad, A., Moarefvand, P. and Rashidnejad Omran, N. 
(2011) Delineation of mineralization zones in porphyry Cu deposits by fractal concentration-volume modeling. Journal of Geochemical Exploration, 108, 220-232. Martín-Sotoca, J. J., Tarquis, A. M., Saa-Requejo, A. and Grau, J. B. (2015). Pore detection in Computed Tomography (CT) soil images through singularity map analysis. Oral Presentation in PedoFract VIII Congress (June, La Coruña - Spain).
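Method 1 above (cross point of linear regressions) can be sketched as follows: fit two lines to the log-log C-A curve, choosing the break point that minimizes total squared error, and take the lines' intersection as the segmentation threshold. The data here are synthetic with a known break at 1.2.

```python
import numpy as np

rng = np.random.default_rng(3)
log_c = np.linspace(0, 2, 50)                       # log singularity value
# Piecewise-linear C-A curve: slope -0.5 below the break, -1.5 above it.
log_a = np.where(log_c < 1.2, -0.5 * log_c, 1.2 - 1.5 * log_c)
log_a += rng.normal(0, 0.01, 50)

best = None
for i in range(5, 45):                              # candidate break indices
    p1 = np.polyfit(log_c[:i], log_a[:i], 1)
    p2 = np.polyfit(log_c[i:], log_a[i:], 1)
    sse = (np.sum((np.polyval(p1, log_c[:i]) - log_a[:i]) ** 2)
           + np.sum((np.polyval(p2, log_c[i:]) - log_a[i:]) ** 2))
    if best is None or sse < best[0]:
        best = (sse, p1, p2)

_, p1, p2 = best
# Lines m1*x + b1 and m2*x + b2 cross at x = (b2 - b1) / (m1 - m2).
threshold = (p2[1] - p1[1]) / (p1[0] - p2[0])
```

Pixels whose singularity value exceeds the threshold would then be labeled as the anomalous (e.g., pore) class when segmenting the CT image.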

  14. Effect of Fault Parameter Uncertainties on PSHA explored by Monte Carlo Simulations: A case study for southern Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Akinci, A.; Pace, B.

    2017-12-01

    In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of the several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is represented by a truncated normal distribution, specified by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of the logic tree, is used in order to capture the uncertainty in seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to variability in every logic tree branch. These branches of the logic tree, analyzed through the Monte Carlo approach, are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. 
However, in this study we do not investigate the sensitivity of mean hazard results to the consideration of different GMPEs. Distribution of possible seismic hazard results is illustrated by 95% confidence factor map, which indicates the dispersion about mean value, and coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters to probabilistic seismic hazard maps.
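The Monte Carlo perturbation of a single fault parameter can be sketched as drawing from a truncated normal and propagating each draw through the recurrence model. The rejection sampler, parameter values, and the simple slip-rate-to-recurrence relation (T = characteristic slip / slip rate) below are all illustrative, not the study's implementation.

```python
import numpy as np

def truncated_normal(rng, mean, sd, lo, hi, size):
    """Truncated normal draws by simple rejection sampling."""
    out = np.empty(size)
    filled = 0
    while filled < size:
        draw = rng.normal(mean, sd, size)
        keep = draw[(draw >= lo) & (draw <= hi)][: size - filled]
        out[filled:filled + len(keep)] = keep
        filled += len(keep)
    return out

rng = np.random.default_rng(4)
# 200 simulations of one branch: slip rate in mm/yr, truncated about its mean.
slip_rates = truncated_normal(rng, mean=1.0, sd=0.3, lo=0.4, hi=1.6, size=200)

# Characteristic-event recurrence scales inversely with slip rate:
# T = D_char / slip_rate, with D_char the characteristic slip per event (mm).
recurrence_yrs = 1500.0 / slip_rates
```

Repeating the hazard calculation once per draw yields the distribution of PGA from which the confidence-factor and coefficient-of-variation maps are compiled.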

  15. MIP-MAP: High-Throughput Mapping of Caenorhabditis elegans Temperature-Sensitive Mutants via Molecular Inversion Probes.

    PubMed

    Mok, Calvin A; Au, Vinci; Thompson, Owen A; Edgley, Mark L; Gevirtzman, Louis; Yochem, John; Lowry, Joshua; Memar, Nadin; Wallenfang, Matthew R; Rasoloson, Dominique; Bowerman, Bruce; Schnabel, Ralf; Seydoux, Geraldine; Moerman, Donald G; Waterston, Robert H

    2017-10-01

    Mutants remain a powerful means for dissecting gene function in model organisms such as Caenorhabditis elegans. Massively parallel sequencing has simplified the detection of variants after mutagenesis but determining precisely which change is responsible for phenotypic perturbation remains a key step. Genetic mapping paradigms in C. elegans rely on bulk segregant populations produced by crosses with the problematic Hawaiian wild isolate and an excess of redundant information from whole-genome sequencing (WGS). To increase the repertoire of available mutants and to simplify identification of the causal change, we performed WGS on 173 temperature-sensitive (TS) lethal mutants and devised a novel mapping method. The mapping method uses molecular inversion probes (MIP-MAP) in a targeted sequencing approach to genetic mapping, and replaces the Hawaiian strain with a Million Mutation Project strain with high genomic and phenotypic similarity to the laboratory wild-type strain N2. We validated MIP-MAP on a subset of the TS mutants using a competitive selection approach to produce TS candidate mapping intervals with a mean size < 3 Mb. MIP-MAP successfully uses a non-Hawaiian mapping strain and multiplexed libraries are sequenced at a fraction of the cost of WGS mapping approaches. Our mapping results suggest that the collection of TS mutants contains a diverse library of TS alleles for genes essential to development and reproduction. MIP-MAP is a robust method to genetically map mutations in both viable and essential genes and should be adaptable to other organisms. It may also simplify tracking of individual genotypes within population mixtures. Copyright © 2017 by the Genetics Society of America.

  16. MIP-MAP: High-Throughput Mapping of Caenorhabditis elegans Temperature-Sensitive Mutants via Molecular Inversion Probes

    PubMed Central

    Mok, Calvin A.; Au, Vinci; Thompson, Owen A.; Edgley, Mark L.; Gevirtzman, Louis; Yochem, John; Lowry, Joshua; Memar, Nadin; Wallenfang, Matthew R.; Rasoloson, Dominique; Bowerman, Bruce; Schnabel, Ralf; Seydoux, Geraldine; Moerman, Donald G.; Waterston, Robert H.

    2017-01-01

    Mutants remain a powerful means for dissecting gene function in model organisms such as Caenorhabditis elegans. Massively parallel sequencing has simplified the detection of variants after mutagenesis but determining precisely which change is responsible for phenotypic perturbation remains a key step. Genetic mapping paradigms in C. elegans rely on bulk segregant populations produced by crosses with the problematic Hawaiian wild isolate and an excess of redundant information from whole-genome sequencing (WGS). To increase the repertoire of available mutants and to simplify identification of the causal change, we performed WGS on 173 temperature-sensitive (TS) lethal mutants and devised a novel mapping method. The mapping method uses molecular inversion probes (MIP-MAP) in a targeted sequencing approach to genetic mapping, and replaces the Hawaiian strain with a Million Mutation Project strain with high genomic and phenotypic similarity to the laboratory wild-type strain N2. We validated MIP-MAP on a subset of the TS mutants using a competitive selection approach to produce TS candidate mapping intervals with a mean size < 3 Mb. MIP-MAP successfully uses a non-Hawaiian mapping strain and multiplexed libraries are sequenced at a fraction of the cost of WGS mapping approaches. Our mapping results suggest that the collection of TS mutants contains a diverse library of TS alleles for genes essential to development and reproduction. MIP-MAP is a robust method to genetically map mutations in both viable and essential genes and should be adaptable to other organisms. It may also simplify tracking of individual genotypes within population mixtures. PMID:28827289
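The mapping logic behind this kind of competitive-selection approach can be reduced to a toy: after selection against the TS phenotype, marker positions linked to the causal mutation are depleted of mapping-strain alleles, and the candidate interval is where the mapping-strain allele frequency drops toward zero. The marker positions, frequencies, and the 0.15 cutoff below are invented for the sketch.

```python
# Hypothetical marker positions (Mb) and post-selection mapping-strain
# allele frequencies; unlinked markers sit near the expected 0.5.
positions_mb = [1, 3, 5, 7, 9, 11, 13, 15]
map_strain_freq = [0.48, 0.45, 0.30, 0.08, 0.02, 0.10, 0.33, 0.47]

# Candidate interval: the span of markers where the frequency is depleted.
interval = [p for p, f in zip(positions_mb, map_strain_freq) if f < 0.15]
candidate_interval = (min(interval), max(interval))   # Mb bounds
```

The real method reads these frequencies from targeted MIP sequencing counts rather than a fixed cutoff, but the depletion signal it localizes is the same.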

  17. Feature Selection Methods for Zero-Shot Learning of Neural Activity

    PubMed Central

    Caceres, Carlos A.; Roos, Matthew J.; Rupp, Kyle M.; Milsap, Griffin; Crone, Nathan E.; Wolmetz, Michael E.; Ratto, Christopher R.

    2017-01-01

    Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy. PMID:28690513
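Correlation-based stability selection, the baseline technique named above, keeps features whose responses correlate across repeated presentations of the same stimuli. This sketch uses synthetic data in which 10 of 100 "voxels" carry a stable stimulus-driven response; everything here (sizes, signal strength, the `stability_scores` helper) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_stim, n_feat = 30, 100
signal = rng.standard_normal((n_stim, 10))
rep1 = rng.standard_normal((n_stim, n_feat))   # two repetitions of the
rep2 = rng.standard_normal((n_stim, n_feat))   # same stimulus set
rep1[:, :10] += 3 * signal                     # first 10 features are
rep2[:, :10] += 3 * signal                     # stimulus-driven, hence stable

def stability_scores(a, b):
    """Per-feature Pearson correlation across stimuli between repetitions."""
    a0 = (a - a.mean(0)) / a.std(0)
    b0 = (b - b.mean(0)) / b.std(0)
    return (a0 * b0).mean(0)

scores = stability_scores(rep1, rep2)
selected = np.argsort(scores)[-10:]   # keep the 10 most stable features
```

Stable features score near their signal-to-total-variance ratio (here ~0.9) while noise features hover near zero, so ranking by this score recovers the informative set without using any class labels.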

  18. A large sample of shear-selected clusters from the Hyper Suprime-Cam Subaru Strategic Program S16A Wide field mass maps

    NASA Astrophysics Data System (ADS)

    Miyazaki, Satoshi; Oguri, Masamune; Hamana, Takashi; Shirasaki, Masato; Koike, Michitaro; Komiyama, Yutaka; Umetsu, Keiichi; Utsumi, Yousuke; Okabe, Nobuhiro; More, Surhud; Medezinski, Elinor; Lin, Yen-Ting; Miyatake, Hironao; Murayama, Hitoshi; Ota, Naomi; Mitsuishi, Ikuyuki

    2018-01-01

    We present the result of searching for clusters of galaxies based on weak gravitational lensing analysis of the ˜160 deg2 area surveyed by Hyper Suprime-Cam (HSC) as a Subaru Strategic Program. HSC is a new prime focus optical imager with a 1.5°-diameter field of view on the 8.2 m Subaru telescope. The superb median seeing on the HSC i-band images of 0.56" allows the reconstruction of high angular resolution mass maps via weak lensing, which is crucial for the weak lensing cluster search. We identify 65 mass map peaks with a signal-to-noise (S/N) ratio larger than 4.7, and carefully examine their properties by cross-matching the clusters with optical and X-ray cluster catalogs. We find that all the 39 peaks with S/N > 5.1 have counterparts in the optical cluster catalogs, and only 2 out of the 65 peaks are probably false positives. The upper limits of X-ray luminosities from the ROSAT All Sky Survey (RASS) imply the existence of an X-ray underluminous cluster population. We show that the X-rays from the shear-selected clusters can be statistically detected by stacking the RASS images. The inferred average X-ray luminosity is about half that of the X-ray-selected clusters of the same mass. The radial profile of the dark matter distribution derived from the stacking analysis is well modeled by the Navarro-Frenk-White profile with a small concentration parameter value of c500 ˜ 2.5, which suggests that the selection bias on the orientation or the internal structure for our shear-selected cluster sample is not strong.
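The peak-selection step of such a weak-lensing cluster search amounts to finding local maxima above an S/N threshold in the smoothed mass map. The sketch below injects two "clusters" into a synthetic Gaussian-noise S/N map and recovers them; the map, injected values, and the 4.7 threshold (borrowed from the abstract) are illustrative.

```python
import numpy as np

def find_peaks(sn_map, threshold=4.7):
    """Return interior local maxima of sn_map at or above threshold."""
    peaks = []
    h, w = sn_map.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = sn_map[i - 1:i + 2, j - 1:j + 2]
            if sn_map[i, j] >= threshold and sn_map[i, j] == patch.max():
                peaks.append((i, j, float(sn_map[i, j])))
    return peaks

rng = np.random.default_rng(6)
sn = rng.normal(0, 1, (50, 50))   # noise-only background S/N map
sn[10, 20] = 6.0                  # injected "cluster" peaks
sn[30, 35] = 5.2
peaks = find_peaks(sn)
```

In practice each surviving peak is then cross-matched against optical and X-ray catalogs, as in the abstract, to weed out the remaining false positives.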

  19. Mapping Disturbance Dynamics in Wet Sclerophyll Forests Using Time Series Landsat

    NASA Astrophysics Data System (ADS)

    Haywood, A.; Verbesselt, J.; Baker, P. J.

    2016-06-01

    In this study, we characterised the temporal-spectral patterns associated with identifying acute-severity disturbances and low-severity disturbances between 1985 and 2011 with the objective to test whether different disturbance agents within these categories can be identified with annual Landsat time series data. We analysed a representative State forest within the Central Highlands which has been exposed to a range of disturbances over the last 30 years, including timber harvesting (clearfell, selective and thinning) and fire (wildfire and prescribed burning). We fitted spectral time series models to annual normal burn ratio (NBR) and Tasseled Cap Indices (TCI), from which we extracted a range of disturbance and recovery metrics. With these metrics, three hierarchical random forest models were trained to 1) distinguish acute-severity disturbances from low-severity disturbances; 2a) attribute the disturbance agents most likely within the acute-severity class; 2b) and attribute the disturbance agents most likely within the low-severity class. Disturbance types (acute severity and low-severity) were successfully mapped with an overall accuracy of 72.9 %, and the individual disturbance types were successfully attributed with overall accuracies ranging from 53.2 % to 64.3 %. Low-severity disturbance agents were successfully mapped with an overall accuracy of 80.2 %, and individual agents were successfully attributed with overall accuracies ranging from 25.5 % to 95.1. Acute-severity disturbance agents were successfully mapped with an overall accuracy of 95.4 %, and individual agents were successfully attributed with overall accuracies ranging from 94.2 % to 95.2 %. Spectral metrics describing the disturbance magnitude were more important for distinguishing the disturbance agents than the post-disturbance response slope. Spectral changes associated with planned burning disturbances had generally lower magnitudes than selective harvesting. 
This study demonstrates the potential of Landsat time series for mapping fire and timber-harvesting disturbances at the agent level and highlights the need to distinguish between agents to fully capture their impacts on ecosystem processes.
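    The three-model hierarchy can be sketched as a simple routing scheme; the classifiers are pluggable callables standing in for the trained random forests, and the metric names and labels below are illustrative, not taken from the study:

```python
def hierarchical_attribution(pixel_metrics, severity_clf, acute_clf, low_clf):
    """Stage 1 separates acute- from low-severity disturbance; a
    class-specific stage-2 model then attributes the agent."""
    labels = []
    for m in pixel_metrics:
        if severity_clf(m) == "acute":
            labels.append(acute_clf(m))   # e.g. wildfire vs. clearfell harvest
        else:
            labels.append(low_clf(m))     # e.g. prescribed burn vs. selective harvest
    return labels

# Hypothetical stand-ins that threshold an NBR disturbance-magnitude metric.
severity = lambda m: "acute" if m["magnitude"] > 0.5 else "low"
acute = lambda m: "wildfire"
low = lambda m: "selective harvest"
```

    The routing structure is the point: each stage-2 model only ever sees pixels its parent model assigned to that severity class, which matches the hierarchical training described in the abstract.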

  20. How well are malaria maps used to design and finance malaria control in Africa?

    PubMed

    Omumbo, Judy A; Noor, Abdisalan M; Fall, Ibrahima S; Snow, Robert W

    2013-01-01

    Rational decision making on malaria control depends on an understanding of the epidemiological risks and control measures. National Malaria Control Programmes (NMCPs) across Africa have access to a range of state-of-the-art malaria risk mapping products that might serve their decision-making needs. The use of cartography in planning malaria control has, however, never been methodically reviewed. An audit of the risk maps used by NMCPs in 47 malaria-endemic countries in Africa was undertaken by examining the most recent national malaria strategies, monitoring and evaluation plans, malaria programme reviews and applications submitted to the Global Fund. The types of maps presented, and how they have been used to define priorities for investment and control, were investigated. Overall, 91% of endemic countries in Africa have defined malaria risk at sub-national levels using at least one risk map. The maps range from those based on climate suitability for transmission, predicted malaria seasons and temperature/altitude limitations, to representations of clinical data and modelled parasite prevalence. The choice of maps is influenced by the source of the information. Maps developed using national data through in-country research partnerships have greater utility than more readily accessible web-based options developed without input from national control programmes. Although almost all countries have stratification maps, only a few use them to guide decisions on the selection of interventions or the allocation of resources for malaria control. The way information on the epidemiology of malaria is presented and used needs to be addressed to ensure evidence-based added value in planning control. The science on the modelled impact of interventions must be integrated into new mapping products to allow a translation of risk into rational decision making for malaria control. 
As overseas and domestic funding diminishes, strategic planning will be necessary to guide appropriate financing for malaria control.

  1. Improving ontology matching with propagation strategy and user feedback

    NASA Astrophysics Data System (ADS)

    Li, Chunhua; Cui, Zhiming; Zhao, Pengpeng; Wu, Jian; Xin, Jie; He, Tianxu

    2015-07-01

    Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. Existing approaches require a threshold to produce matching candidates and use a small set of constraints, acting as a filter, to select the final alignments. We introduce a novel match propagation strategy to model the influence between potential entity mappings across ontologies, which helps identify correct correspondences and recover missed ones. Because estimating an appropriate threshold is difficult, we propose an interactive method for threshold selection through which we obtain an additional measurable improvement. Experiments on a public dataset demonstrate the effectiveness of the proposed approach in terms of the quality of the resulting alignment.

  2. Simple summation rule for optimal fixation selection in visual search.

    PubMed

    Najemnik, Jiri; Geisler, Wilson S

    2009-06-01

    When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W.S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3), 1-14. 4]; however, it seems unlikely that a biological nervous system would implement the computations for Bayesian ideal fixation selection, because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with a human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher and produces fixation statistics similar to those of humans.
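    In one dimension the heuristic reduces to a few lines: filter the posterior with the squared detectability map and fixate the maximum. A toy sketch (arrays and names are illustrative, not the authors' code):

```python
def elm_fixation(posterior, detectability):
    """Return the index maximizing the posterior filtered by (convolved
    with) the squared detectability map, centered on each candidate."""
    n = len(posterior)
    d2 = [d * d for d in detectability]   # squared detectability profile
    c = len(d2) // 2                      # center tap of the filter
    scores = []
    for i in range(n):                    # score each candidate fixation
        s = 0.0
        for j, w in enumerate(d2):
            k = i + j - c                 # posterior sample under tap j
            if 0 <= k < n:
                s += posterior[k] * w
        scores.append(s)
    return max(range(n), key=scores.__getitem__)
```

    With a posterior peaked at one location and a symmetric detectability profile, the rule fixates the peak; the appeal of the heuristic is that this single convolve-and-argmax step replaces the ideal searcher's full expected-information computation.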

  3. IsoMAP (Isoscape Modeling, Analysis, and Prediction)

    NASA Astrophysics Data System (ADS)

    Miller, C. C.; Bowen, G. J.; Zhang, T.; Zhao, L.; West, J. B.; Liu, Z.; Rapolu, N.

    2009-12-01

    IsoMAP is a TeraGrid-based web portal aimed at building the infrastructure that brings together distributed multi-scale and multi-format geospatial datasets to enable statistical analysis and modeling of environmental isotopes. A typical workflow enabled by the portal includes (1) data source exploration and selection; (2) statistical analysis and model development; (3) predictive simulation of isotope distributions using models developed in (1) and (2); and (4) analysis and interpretation of simulated spatial isotope distributions (e.g., comparison with independent observations, pattern analysis). The gridded models and data products created by one user can be shared and reused among users within the portal, enabling collaboration and knowledge transfer. This infrastructure and the research it fosters can lead to fundamental changes in our knowledge of the water cycle and ecological and biogeochemical processes through analysis of network-based isotope data, but it will be important A) that those with whom the data and models are shared can be sure of the origin, quality, inputs, and processing history of these products, and B) that the system be agile and intuitive enough to facilitate this sharing (rather than just ‘allow’ it). IsoMAP researchers are therefore building several components into the portal’s architecture to increase the amount of metadata about users’ products and to repurpose those metadata to make sharing and discovery more intuitive and robust, both for expected professional users and for unforeseen user populations from other sectors.

  4. Shape selection in Landsat time series: A tool for monitoring forest dynamics

    Treesearch

    Gretchen G. Moisen; Mary C. Meyer; Todd A. Schroeder; Xiyue Liao; Karen G. Schleeweis; Elizabeth A. Freeman; Chris Toney

    2016-01-01

    We present a new methodology for fitting nonparametric shape-restricted regression splines to time series of Landsat imagery for the purpose of modeling, mapping, and monitoring annual forest disturbance dynamics over nearly three decades. For each pixel and spectral band or index of choice in temporal Landsat data, our method delivers a smoothed rendition of...

  5. Mapping a Domain Model and Architecture to a Generic Design

    DTIC Science & Technology

    1994-05-01

    …this step is a record (in no specific form) of the selected features. This record (list, highlighted features diagram, or other media) is used…

  6. PresenceAbsence: An R package for presence absence analysis

    Treesearch

    Elizabeth A. Freeman; Gretchen Moisen

    2008-01-01

    The PresenceAbsence package for R provides a set of functions useful when evaluating the results of presence-absence analysis, for example, models of species distribution or the analysis of diagnostic tests. The package provides a toolkit for selecting the optimal threshold for translating a probability surface into presence-absence maps specifically tailored to their...
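    The core threshold-selection idea can be sketched as a scan over candidate cutoffs. This uses Youden's J (sensitivity + specificity − 1), one of several optimization criteria such toolkits offer; it is an illustrative Python sketch, not the package's actual R code:

```python
def best_threshold(observed, predicted, candidates):
    """Return the cutoff in `candidates` that maximizes Youden's J when
    the `predicted` probabilities are binarized against it."""
    def youden(t):
        tp = sum(o == 1 and p >= t for o, p in zip(observed, predicted))
        fn = sum(o == 1 and p < t for o, p in zip(observed, predicted))
        tn = sum(o == 0 and p < t for o, p in zip(observed, predicted))
        fp = sum(o == 0 and p >= t for o, p in zip(observed, predicted))
        sens = tp / (tp + fn) if tp + fn else 0.0   # true-positive rate
        spec = tn / (tn + fp) if tn + fp else 0.0   # true-negative rate
        return sens + spec - 1.0
    return max(candidates, key=youden)
```

    Once chosen, the cutoff turns a continuous probability surface into the presence-absence map the abstract describes.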

  7. SU-G-IeP1-12: Size Selective Arterial Cerebral Blood Volume Mapping Using Multiple Inversion Time Arterial Spin Labeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Y; Johnston, M; Whitlow, C

    Purpose: To demonstrate the feasibility of a novel method for size-specific arterial cerebral blood volume (aCBV) mapping using pseudo-continuous arterial spin labeling (PCASL) with multiple TIs. Methods: Multiple PCASL images were obtained from a subject with TIs of [300, 400, 500, 600, 700, 800, 900, 1000, 1500, 2000, 2500, 3000, 3500, 4000] ms. Each TI pair was averaged six times. Two scans were performed: one without a flow crusher gradient and the other with a crusher gradient (10 cm/s in three directions) to remove signals from large arteries. Scan times were 5 min without a crusher gradient and 5.5 min with a crusher gradient. A non-linear fitting algorithm finds the minimum mean-squared solution of per-voxel aCBV, cerebral blood flow (CBF), and arterial transit time (ATT), and fits the data to a hemodynamic model that represents the superposition of blood volume and flow components within a single voxel. Results: aCBV maps with a crusher gradient represent signals from medium and small arteries, while those without a crusher gradient represent signals from arteries of all sizes, indicating that flow crusher gradients can be effectively employed to achieve size-specific aCBV mapping. Regardless of the flow crusher, the CBF and ATT maps are very similar in appearance. Conclusion: Quantitative size-selective blood volume mapping controlled by a flow crusher is feasible without additional information because the ASL quantification process doesn’t require an arterial input function measured from a large artery. Signals from large arteries do not interfere with size-specific aCBV mapping in applications where only medium or small arteries are of interest.

  8. ANFIS modeling for the assessment of landslide susceptibility for the Cameron Highland (Malaysia)

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet; Sezer, Ebru; Gokceoglu, Candan; Buchroithner, Manfred F.

    2010-05-01

    Landslides are one of the recurrent natural hazard problems throughout most of Malaysia. The landslide literature includes several approaches, such as probabilistic, bivariate and multivariate statistical models, and fuzzy and artificial neural network models. However, a neuro-fuzzy application to landslide susceptibility assessment has not been encountered in the literature. For this reason, this study presents the results of an adaptive neuro-fuzzy inference system (ANFIS) using remote sensing data and GIS for landslide susceptibility analysis in a part of the Cameron Highland area of Malaysia. Landslide locations in the study area were identified by interpreting aerial photographs and satellite images, supported by extensive field surveys. Landsat TM satellite imagery was used to map the vegetation index. Maps of topography, lineaments, NDVI and land cover were constructed from the spatial datasets. Seven landslide conditioning factors (altitude, slope angle, curvature, distance from drainage, lithology, distance from faults and NDVI) were extracted from the spatial database. These factors were analyzed using an ANFIS to produce the landslide susceptibility maps. During model development, a total of 5 landslide susceptibility models were constructed. For verification, the results of the analyses were compared with the field-verified landslide locations. Additionally, ROC curves for all landslide susceptibility models were drawn and the area-under-curve values were calculated. Landslide locations were used to validate the results of the landslide susceptibility map, and the verification showed 97% accuracy for model 5, which employed all of the parameters produced in the present study as landslide conditioning factors. The validation results showed sufficient agreement between the obtained susceptibility map and the existing data on landslide areas. 
Qualitatively, the model yields reasonable results which can be used for preliminary land-use planning purposes. As a final conclusion, the results revealed that ANFIS modeling is a very useful and powerful tool for regional landslide susceptibility assessments. However, results obtained from ANFIS modeling should be assessed carefully, because overlearning may cause misleading results. To prevent overlearning, the number of membership functions of the inputs and the number of training epochs should be selected optimally and carefully.

  9. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation

    PubMed Central

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K.

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating “unusual” populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php. PMID:28275348

  10. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation.

    PubMed

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating "unusual" populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php.

  11. Mars-GRAM: Increasing the Precision of Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, C. G.; Badger, Andrew M.

    2010-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte-Carlo mode, to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). It was discovered during the Mars Science Laboratory (MSL) site selection process that Mars-GRAM, when used for sensitivity studies with MapYear=0 and large optical depth values such as tau=3, produces unrealistic results. A comparison study between Mars atmospheric density estimates from Mars-GRAM and measurements by Mars Global Surveyor (MGS) has been undertaken for locations of varying latitude, Ls, and LTST on Mars. The preliminary results from this study have validated the Thermal Emission Spectrometer (TES) limb data. From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). The MGCM results used for Mars-GRAM with MapYear=0 were from a run with a fixed value of tau=3 for the entire year at all locations, which resulted in imprecise atmospheric density at all altitudes. To solve this pressure-density problem, density factor values were determined for tau = 0.3, 1, and 3 that adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match between Mars-GRAM MapYear 0 and TES observations for MapYears 1 and 2 at comparable dust loading. The addition of these density factors to Mars-GRAM will improve the results of sensitivity studies performed at large optical depths.

  12. Spatial prediction of Plasmodium falciparum prevalence in Somalia

    PubMed Central

    Noor, Abdisalan M; Clements, Archie CA; Gething, Peter W; Moloney, Grainne; Borle, Mohammed; Shewchuk, Tanya; Hay, Simon I; Snow, Robert W

    2008-01-01

    Background Maps of malaria distribution are vital for optimal allocation of resources for anti-malarial activities. There is a lack of reliable contemporary malaria maps in endemic countries in sub-Saharan Africa. This problem is particularly acute in low malaria transmission countries such as those located in the horn of Africa. Methods Data from a national malaria cluster sample survey in 2005 and routine cluster surveys in 2007 were assembled for Somalia. Rapid diagnostic tests were used to examine the presence of Plasmodium falciparum parasites in finger-prick blood samples obtained from individuals across all age-groups. Bayesian geostatistical models, with environmental and survey covariates, were used to predict continuous maps of malaria prevalence across Somalia and to define the uncertainty associated with the predictions. Results For analyses the country was divided into north and south. In the north, the month of survey, distance to water, precipitation and temperature had no significant association with P. falciparum prevalence when spatial correlation was taken into account. In contrast, all the covariates, except distance to water, were significantly associated with parasite prevalence in the south. The inclusion of covariates improved model fit for the south but not for the north. Model precision was highest in the south. The majority of the country had a predicted prevalence of < 5%; areas with ≥ 5% prevalence were predominantly in the south. Conclusion The maps showed that malaria transmission in Somalia varied from hypo- to meso-endemic. However, even after including the selected covariates in the model, there still remained a considerable amount of unexplained spatial variation in parasite prevalence, indicating effects of other factors not captured in the study. Nonetheless the maps presented here provide the best contemporary information on malaria prevalence in Somalia. PMID:18717998

  13. Spatial prediction of Plasmodium falciparum prevalence in Somalia.

    PubMed

    Noor, Abdisalan M; Clements, Archie C A; Gething, Peter W; Moloney, Grainne; Borle, Mohammed; Shewchuk, Tanya; Hay, Simon I; Snow, Robert W

    2008-08-21

    Maps of malaria distribution are vital for optimal allocation of resources for anti-malarial activities. There is a lack of reliable contemporary malaria maps in endemic countries in sub-Saharan Africa. This problem is particularly acute in low malaria transmission countries such as those located in the horn of Africa. Data from a national malaria cluster sample survey in 2005 and routine cluster surveys in 2007 were assembled for Somalia. Rapid diagnostic tests were used to examine the presence of Plasmodium falciparum parasites in finger-prick blood samples obtained from individuals across all age-groups. Bayesian geostatistical models, with environmental and survey covariates, were used to predict continuous maps of malaria prevalence across Somalia and to define the uncertainty associated with the predictions. For analyses the country was divided into north and south. In the north, the month of survey, distance to water, precipitation and temperature had no significant association with P. falciparum prevalence when spatial correlation was taken into account. In contrast, all the covariates, except distance to water, were significantly associated with parasite prevalence in the south. The inclusion of covariates improved model fit for the south but not for the north. Model precision was highest in the south. The majority of the country had a predicted prevalence of < 5%; areas with ≥ 5% prevalence were predominantly in the south. The maps showed that malaria transmission in Somalia varied from hypo- to meso-endemic. However, even after including the selected covariates in the model, there still remained a considerable amount of unexplained spatial variation in parasite prevalence, indicating effects of other factors not captured in the study. Nonetheless the maps presented here provide the best contemporary information on malaria prevalence in Somalia.

  14. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from the magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps that reverse the T2*MRI procedure: fieldmap calculation from the MR phase image, and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inversion, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver offers noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI in two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372
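    The crux is the ill-posed deconvolution from fieldmap to susceptibility map. A minimal 1-D analogue of the Tikhonov-regularized option can be written as plain gradient descent on the penalized least-squares objective; the kernel, sizes, and parameters here are illustrative, and the paper's actual problem is 3-D and also considers truncated inverse filtering and split Bregman TV iteration:

```python
def convolve(x, h):
    """Full zero-padded linear convolution y = h * x."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def tikhonov_deconv(y, h, n, lam=1e-4, steps=10000, lr=0.1):
    """Recover x from y = h * x by gradient descent on the Tikhonov
    objective ||h*x - y||^2 + lam*||x||^2."""
    x = [0.0] * n
    for _ in range(steps):
        r = [a - b for a, b in zip(convolve(x, h), y)]   # residual h*x - y
        g = [0.0] * n                                    # g = H^T r
        for i in range(n):
            for j, hj in enumerate(h):
                g[i] += hj * r[i + j]
        x = [xi - lr * 2.0 * (gi + lam * xi) for xi, gi in zip(x, g)]
    return x
```

    Blurring a unit spike with a smoothing kernel and deconvolving recovers the spike; the small `lam` trades a little bias for stability against the near-zero frequencies of the kernel, which is exactly the ill-posedness the abstract refers to.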

  15. New leads for selective GSK-3 inhibition: pharmacophore mapping and virtual screening studies.

    PubMed

    Patel, Dhilon S; Bharatam, Prasad V

    2006-01-01

    Glycogen Synthase Kinase-3 is a regulatory serine/threonine kinase, which is being targeted for the treatment of a number of human diseases including type-2 diabetes mellitus, neurodegenerative diseases, cancer and chronic inflammation. Selective GSK-3 inhibition is an important requirement owing to the possibility of side effects arising from other kinases. A pharmacophore mapping strategy is employed in this work to identify new leads for selective GSK-3 inhibition. Ligands known to show selective GSK-3 inhibition were employed in generating a pharmacophore map using the distance comparison method (DISCO). The derived pharmacophore map was validated using (i) important interactions involved in selective GSK-3 inhibition, and (ii) an in-house database containing different classes of GSK-3 selective, non-selective and inactive molecules. New lead identification was carried out by performing virtual screening using the validated pharmacophoric query and three chemical databases, namely NCI, Maybridge and LeadQuest. Further data reduction was carried out by employing virtual filters based on (i) Lipinski's rule of 5, (ii) van der Waals bumps and (iii) restricting the number of rotatable bonds to seven. Final screening was carried out using a FlexX-based molecular docking study.

  16. New leads for selective GSK-3 inhibition: pharmacophore mapping and virtual screening studies

    NASA Astrophysics Data System (ADS)

    Patel, Dhilon S.; Bharatam, Prasad V.

    2006-01-01

    Glycogen Synthase Kinase-3 is a regulatory serine/threonine kinase, which is being targeted for the treatment of a number of human diseases including type-2 diabetes mellitus, neurodegenerative diseases, cancer and chronic inflammation. Selective GSK-3 inhibition is an important requirement owing to the possibility of side effects arising from other kinases. A pharmacophore mapping strategy is employed in this work to identify new leads for selective GSK-3 inhibition. Ligands known to show selective GSK-3 inhibition were employed in generating a pharmacophore map using the distance comparison method (DISCO). The derived pharmacophore map was validated using (i) important interactions involved in selective GSK-3 inhibition, and (ii) an in-house database containing different classes of GSK-3 selective, non-selective and inactive molecules. New lead identification was carried out by performing virtual screening using the validated pharmacophoric query and three chemical databases, namely NCI, Maybridge and LeadQuest. Further data reduction was carried out by employing virtual filters based on (i) Lipinski's rule of 5, (ii) van der Waals bumps and (iii) restricting the number of rotatable bonds to seven. Final screening was carried out using a FlexX-based molecular docking study.

  17. Chaotic Dynamics of Linguistic-Like Processes at the Syntactical and Semantic Levels: in the Pursuit of a Multifractal Attractor

    NASA Astrophysics Data System (ADS)

    Nicolis, John S.; Katsikas, Anastassis A.

    Collective parameters such as the Zipf's law-like statistics, the Transinformation, the Block Entropy and the Markovian character are compared for natural, genetic, musical and artificially generated long texts from generating partitions (alphabets) on homogeneous as well as on multifractal chaotic maps. It appears that minimal requirements for a language at the syntactical level, such as memory, selectivity of a few keywords and broken symmetry in one dimension (polarity), are more or less met by dynamically iterating simple maps or flows, e.g., very simple chaotic hardware. The same selectivity is observed at the semantic level, where the aim is to partition a set of environmental impinging stimuli onto coexisting attractors-categories. Under the regime of pattern recognition and classification, a few key features of a pattern or a few categories claim the lion's share of the information stored in this pattern, and practically only these key features are persistently scanned by the cognitive processor. A multifractal attractor model can in principle explain this high selectivity, at both the syntactical and semantic levels.

  18. Pixel-based flood mapping from SAR imagery: a comparison of approaches

    NASA Astrophysics Data System (ADS)

    Landuyt, Lisa; Van Wesemael, Alexandra; Van Coillie, Frieke M. B.; Verhoest, Niko E. C.

    2017-04-01

    Due to their all-weather, day-and-night capabilities, SAR sensors have been shown to be particularly suitable for flood mapping applications. They can thus provide spatially distributed flood extent data which are valuable for calibrating, validating and updating flood inundation models. These models are an invaluable tool for water managers seeking to take appropriate measures in times of high water levels. Image analysis approaches to delineate flood extent on SAR imagery are numerous. They can be classified into two categories, i.e. pixel-based and object-based approaches. Pixel-based approaches, e.g. thresholding, are abundant and in general computationally inexpensive. However, large discrepancies between these techniques exist, and subjective user intervention is often needed. Object-based approaches require more processing but allow for the integration of additional object characteristics, like contextual information and object geometry, and thus have significant potential to provide an improved classification result. As a benchmark, a selection of pixel-based techniques is applied to an ERS-2 SAR image of the 2006 flood event of the River Dee, United Kingdom. This selection comprises Otsu thresholding, Kittler & Illingworth thresholding, the Fine To Coarse segmentation algorithm and active contour modelling. The different classification results are evaluated and compared by means of several accuracy measures, including binary performance measures.
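    Among the benchmarked pixel-based techniques, Otsu thresholding is the simplest to sketch: choose the histogram cut that maximizes the between-class variance of the two resulting classes. A generic implementation (not the authors' code; in the study it would be applied to SAR backscatter values):

```python
def otsu_threshold(values, bins=256, lo=0.0, hi=1.0):
    """Return the cut over [lo, hi) that maximizes between-class
    variance of the histogram of `values`."""
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:                      # build the histogram
        k = min(bins - 1, max(0, int((v - lo) / width)))
        hist[k] += 1
    total = len(values)
    sum_all = sum(i * c for i, c in enumerate(hist))
    best_bin, best_var = 0, -1.0
    w0, sum0 = 0.0, 0.0
    for t in range(bins):                 # try every cut position
        w0 += hist[t]                     # class-0 weight
        if w0 == 0:
            continue
        w1 = total - w0                   # class-1 weight
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_bin = var_between, t
    return lo + (best_bin + 1) * width    # upper edge of the best bin
```

    On a bimodal histogram, such as low-backscatter open water against higher-backscatter land, the cut falls between the two modes, which is why the method is a standard baseline for SAR flood delineation.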

  19. Development of flood profiles and flood-inundation maps for the Village of Killbuck, Ohio

    USGS Publications Warehouse

    Ostheimer, Chad J.

    2013-01-01

    Digital flood-inundation maps for a reach of Killbuck Creek near the Village of Killbuck, Ohio, were created by the U.S. Geological Survey (USGS), in cooperation with Holmes County, Ohio. The inundation maps depict estimates of the areal extent of flooding corresponding to water levels (stages) at the USGS streamgage Killbuck Creek near Killbuck (03139000) and were completed as part of an update to a Federal Emergency Management Agency Flood-Insurance Study. The maps were provided to the National Weather Service (NWS) for incorporation into a Web-based flood-warning system that can be used in conjunction with NWS flood-forecast data to show areas of predicted flood inundation associated with forecasted flood-peak stages. The digital maps also have been submitted for inclusion in the data libraries of the USGS interactive Flood Inundation Mapper. Data from the streamgage can be used by emergency-management personnel, in conjunction with the flood-inundation maps, to help determine a course of action when flooding is imminent. Flood profiles for selected reaches were prepared by calibrating a steady-state step-backwater model to an established streamgage rating curve. The step-backwater model was then used to determine water-surface-elevation profiles for 10 flood stages at the streamgage, with corresponding streamflows ranging from approximately the 50- to the 0.2-percent annual exceedance probability. The computed flood profiles were used in combination with digital elevation data to delineate flood-inundation areas.

  20. Large-Scale, High-Resolution Neurophysiological Maps Underlying fMRI of Macaque Temporal Lobe

    PubMed Central

    Papanastassiou, Alex M.; DiCarlo, James J.

    2013-01-01

    Maps obtained by functional magnetic resonance imaging (fMRI) are thought to reflect the underlying spatial layout of neural activity. However, previous studies have not been able to directly compare fMRI maps to high-resolution neurophysiological maps, particularly in higher level visual areas. Here, we used a novel stereo microfocal x-ray system to localize thousands of neural recordings across monkey inferior temporal cortex (IT), construct large-scale maps of neuronal object selectivity at subvoxel resolution, and compare those neurophysiology maps with fMRI maps from the same subjects. While neurophysiology maps contained reliable structure at the sub-millimeter scale, fMRI maps of object selectivity contained information at larger scales (>2.5 mm) and were only partly correlated with raw neurophysiology maps collected in the same subjects. However, spatial smoothing of neurophysiology maps more than doubled that correlation, while a variety of alternative transforms led to no significant improvement. Furthermore, raw spiking signals, once spatially smoothed, were as predictive of fMRI maps as local field potential signals. Thus, fMRI of the inferior temporal lobe reflects a spatially low-passed version of neurophysiology signals. These findings strongly validate the widespread use of fMRI for detecting large (>2.5 mm) neuronal domains of object selectivity but show that a complete understanding of even the most pure domains (e.g., faces vs nonface objects) requires investigation at fine scales that can currently only be obtained with invasive neurophysiological methods. PMID:24048850
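    The low-pass relationship reported above can be illustrated numerically: a fine-scale "neurophysiology" map correlates only modestly with a blurred, noisy observation of itself, and far better once the fine map is spatially smoothed. This is a synthetic sketch (field size, smoothing widths and noise level are arbitrary assumptions), using SciPy's gaussian_filter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic selectivity map with fine-scale structure
rng = np.random.default_rng(0)
fine = gaussian_filter(rng.normal(size=(64, 64)), 1.0)

# fMRI-like observation: spatially low-passed version plus measurement noise
fmri = gaussian_filter(fine, 5.0) + 0.02 * rng.normal(size=(64, 64))

def corr(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

raw_r = corr(fine, fmri)
smoothed_r = corr(gaussian_filter(fine, 5.0), fmri)  # smoothing raises the correlation
```

As in the study, smoothing the fine map to the observation's scale recovers most of the shared structure.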

  1. Symmetry-plane model of 3D Euler flows: Mapping to regular systems and numerical solutions of blowup

    NASA Astrophysics Data System (ADS)

    Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.

    2014-11-01

    We introduce a family of 2D models describing the dynamics on the so-called symmetry plane of the full 3D Euler fluid equations. These models depend on a free real parameter and can be solved analytically. For selected representative values of the free parameter, we apply the method introduced in [M.D. Bustamante, Physica D: Nonlinear Phenom. 240, 1092 (2011)] to map the fluid equations bijectively to globally regular systems. By comparing the analytical solutions with the results of numerical simulations, we establish that the numerical simulations of the mapped regular systems are far more accurate than the numerical simulations of the original systems, at the same spatial resolution and CPU time. In particular, the numerical integrations of the mapped regular systems produce robust estimates for the growth exponent and singularity time of the main blowup quantity (vorticity stretching rate), converging well to the analytically-predicted values even beyond the time at which the flow becomes under-resolved (i.e. the reliability time). In contrast, direct numerical integrations of the original systems develop unstable oscillations near the reliability time. We discuss the reasons for this improvement in accuracy, and explain how to extend the analysis to the full 3D case. Supported under the programme for Research in Third Level Institutions (PRTLI) Cycle 5 and co-funded by the European Regional Development Fund.

  2. Weights of Evidence Method for Landslide Susceptibility Mapping in Takengon, Central Aceh, Indonesia

    NASA Astrophysics Data System (ADS)

    Pamela; Sadisun, Imam A.; Arifianti, Yukni

    2018-02-01

    Takengon is an area prone to earthquakes and landslides. On July 2, 2013, the Central Aceh earthquake induced large numbers of landslides in the Takengon area, resulting in 39 casualties. This location was chosen to assess landslide susceptibility using a statistical method referred to as weights of evidence (WoE). The WoE model was applied to identify the main factors influencing landslide-susceptible areas and to derive a landslide susceptibility map of Takengon. The 251 landslides were randomly divided into two groups: a modeling/training data set (70%) and a validation/test data set (30%). Twelve thematic evidence maps (slope degree, slope aspect, lithology, land cover, elevation, rainfall, lineament, peak ground acceleration, curvature, flow direction, distance to rivers, and distance to roads) were used as landslide causative factors. According to the AUC, the most significant factors controlling landslides are, in order, slope degree, slope aspect, peak ground acceleration, elevation, lithology, flow direction, lineament, and rainfall. Verification using the landslide test data shows an AUC prediction rate of 0.819 and an AUC success rate, with all landslide data included, of 0.879. These results show that the selected factors and the WoE method provide a good model for assessing landslide susceptibility. The landslide susceptibility map of Takengon shows probabilities that represent relative degrees of susceptibility to landslides in the Takengon area.
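    The WoE weights behind such a map can be sketched for a single evidence layer: the positive and negative weights compare the conditional probabilities of a factor class inside and outside landslide cells, and their difference is the contrast. The toy raster below is invented for illustration:

```python
import numpy as np

def weights_of_evidence(factor_class, landslide):
    """Per-class W+, W- and contrast C = W+ - W- for one evidence layer.

    factor_class: integer class id per cell; landslide: 0/1 per cell.
    """
    D = landslide == 1
    out = {}
    for c in np.unique(factor_class):
        B = factor_class == c
        # Conditional probabilities of lying in class c given slide / no slide
        p_b_d = (B & D).sum() / D.sum()
        p_b_nd = (B & ~D).sum() / (~D).sum()
        w_plus = np.log(p_b_d / p_b_nd)
        w_minus = np.log((1 - p_b_d) / (1 - p_b_nd))
        out[int(c)] = (w_plus, w_minus, w_plus - w_minus)
    return out

# Toy raster: class 2 (steep slopes) concentrates most of the landslides
factor = np.array([1] * 50 + [2] * 50)
slides = np.array([0] * 45 + [1] * 5 + [1] * 20 + [0] * 30)
w = weights_of_evidence(factor, slides)
```

A positive contrast for class 2 and a negative one for class 1 indicate that class 2 favors landsliding, which is exactly how WoE ranks causative factor classes.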

  3. Function-specific and Enhanced Brain Structural Connectivity Mapping via Joint Modeling of Diffusion and Functional MRI.

    PubMed

    Chu, Shu-Hsien; Parhi, Keshab K; Lenglet, Christophe

    2018-03-16

    A joint structural-functional brain network model is presented, which enables the discovery of function-specific brain circuits, and recovers structural connections that are under-estimated by diffusion MRI (dMRI). Incorporating information from functional MRI (fMRI) into diffusion MRI to estimate brain circuits is a challenging task. Usually, seed regions for tractography are selected from fMRI activation maps to extract the white matter pathways of interest. The proposed method jointly analyzes whole brain dMRI and fMRI data, allowing the estimation of complete function-specific structural networks instead of interactively investigating the connectivity of individual cortical/sub-cortical areas. Additionally, tractography techniques are prone to limitations, which can result in erroneous pathways. The proposed framework explicitly models the interactions between structural and functional connectivity measures thereby improving anatomical circuit estimation. Results on Human Connectome Project (HCP) data demonstrate the benefits of the approach by successfully identifying function-specific anatomical circuits, such as the language and resting-state networks. In contrast to correlation-based or independent component analysis (ICA) functional connectivity mapping, detailed anatomical connectivity patterns are revealed for each functional module. Results on a phantom (Fibercup) also indicate improvements in structural connectivity mapping by rejecting false-positive connections with insufficient support from fMRI, and enhancing under-estimated connectivity with strong functional correlation.

  4. Locating Sequence on FPC Maps and Selecting a Minimal Tiling Path

    PubMed Central

    Engler, Friedrich W.; Hatfield, James; Nelson, William; Soderlund, Carol A.

    2003-01-01

    This study discusses three software tools, the first two aid in integrating sequence with an FPC physical map and the third automatically selects a minimal tiling path given genomic draft sequence and BAC end sequences. The first tool, FSD (FPC Simulated Digest), takes a sequenced clone and adds it back to the map based on a fingerprint generated by an in silico digest of the clone. This allows verification of sequenced clone positions and the integration of sequenced clones that were not originally part of the FPC map. The second tool, BSS (Blast Some Sequence), takes a query sequence and positions it on the map based on sequence associated with the clones in the map. BSS has multiple uses as follows: (1) When the query is a file of marker sequences, they can be added as electronic markers. (2) When the query is draft sequence, the results of BSS can be used to close gaps in a sequenced clone or the physical map. (3) When the query is a sequenced clone and the target is BAC end sequences, one may select the next clone for sequencing using both sequence comparison results and map location. (4) When the query is whole-genome draft sequence and the target is BAC end sequences, the results can be used to select many clones for a minimal tiling path at once. The third tool, pickMTP, automates the majority of this last usage of BSS. Results are presented using the rice FPC map, BAC end sequences, and whole-genome shotgun from Syngenta. PMID:12915486
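    The minimal-tiling-path selection automated by pickMTP can be illustrated with a simple greedy interval-cover sketch: repeatedly take, among clones overlapping the current covered end, the one reaching farthest. Clone names and coordinates are hypothetical, and this is not pickMTP's actual implementation:

```python
# Greedy minimal tiling path over clone intervals on a physical map.
def minimal_tiling_path(clones, start, end):
    """clones: list of (name, begin, end) intervals on the map."""
    path, pos = [], start
    while pos < end:
        # Candidates overlap the current position; pick the farthest-reaching one
        best = max((c for c in clones if c[1] <= pos < c[2]),
                   key=lambda c: c[2], default=None)
        if best is None:
            raise ValueError(f"coverage gap at position {pos}")
        path.append(best[0])
        pos = best[2]
    return path

# Hypothetical BAC clones positioned on a 0-200 map interval
clones = [("B1", 0, 40), ("B2", 10, 70), ("B3", 30, 90), ("B4", 60, 120),
          ("B5", 85, 150), ("B6", 140, 200)]
mtp = minimal_tiling_path(clones, 0, 200)  # -> ["B1", "B3", "B5", "B6"]
```

Four of the six clones suffice to tile the interval; the skipped clones are redundant overlaps, which is the point of a minimal tiling path.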

  5. Laharz_py: GIS tools for automated mapping of lahar inundation hazard zones

    USGS Publications Warehouse

    Schilling, Steve P.

    2014-01-01

    Laharz_py is written in the Python programming language as a suite of tools for use in the ArcMap Geographic Information System (GIS). Primarily, Laharz_py is a computational model that uses statistical descriptions of areas inundated by past mass-flow events to forecast areas likely to be inundated by hypothetical future events. The forecasts use physically motivated and statistically calibrated power-law equations, each of the form A = cV^(2/3), relating mass-flow volume (V) to the planimetric or cross-sectional areas (A) inundated by an average flow as it descends a given drainage. Calibration of the equations utilizes logarithmic transformation and linear regression to determine the best-fit values of c. The software uses values of V, an algorithm for identifying mass-flow source locations, and digital elevation models of topography to portray forecast hazard zones for lahars, debris flows, or rock avalanches on maps. Laharz_py offers two methods to construct areas of potential inundation for lahars: (1) selection of a range of plausible V values results in a set of nested hazard zones showing areas likely to be inundated by a range of hypothetical flows; and (2) the user selects a single volume and a confidence interval for the prediction. In either case, Laharz_py calculates the mean expected planimetric and cross-sectional areas from each user-selected value of V. For the second case, a single value of V yields two additional results representing the upper and lower values of the confidence interval of prediction. Calculation of these two bounding predictions requires the statistically calibrated prediction equations, a user-specified level of confidence, and t-distribution statistics to calculate the standard error of regression, standard error of the mean, and standard error of prediction. The portrayal of results from these two methods on maps compares the range of inundation areas due to prediction uncertainties with uncertainties in the selection of V values.
The Open-File Report documentation explains how to install and use the software, and the Laharz_py distribution includes an example data set for Mount Rainier, Washington; the second part of the documentation walks through all of the Laharz_py tools using that example dataset.
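    The calibration step described above (logarithmic transformation and regression with the exponent fixed at 2/3) can be sketched as follows; the volume/area pairs are invented for illustration and are not Laharz_py's calibration data:

```python
import numpy as np

# Hypothetical calibration pairs: lahar volumes (m^3) and inundated
# cross-sectional areas (m^2) from past events (illustrative values only)
V = np.array([1e5, 1e6, 1e7, 1e8, 1e9])
A = np.array([110.0, 480.0, 2300.0, 10500.0, 46000.0])

# With the exponent fixed at 2/3 by the physical scaling A = c * V**(2/3),
# the best-fit log10(c) is the mean residual log10(A) - (2/3)*log10(V)
log_c = np.mean(np.log10(A) - (2.0 / 3.0) * np.log10(V))
c = 10 ** log_c

def inundated_area(volume, coeff=c):
    """Forecast inundated area for a hypothetical future volume."""
    return coeff * volume ** (2.0 / 3.0)

pred = inundated_area(1e7)
```

Fixing the slope and fitting only the intercept is what makes the calibration a one-parameter regression in log space.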

  6. Existence of Lipschitz selections of the Steiner map

    NASA Astrophysics Data System (ADS)

    Bednov, B. B.; Borodin, P. A.; Chesnokova, K. V.

    2018-02-01

    This paper is concerned with the problem of the existence of Lipschitz selections of the Steiner map St_n, which associates with n points of a Banach space X the set of their Steiner points. The answer to this problem depends on the geometric properties of the unit sphere S(X) of X, its dimension, and the number n. For n ≥ 4, general conditions are obtained on the space X under which St_n admits no Lipschitz selection. When X is finite dimensional it is shown that, if n ≥ 4 is even, the map St_n has a Lipschitz selection if and only if S(X) is a finite polytope; this is not true if n ≥ 3 is odd. For n = 3 the (single-valued) map St_3 is shown to be Lipschitz continuous in any smooth strictly-convex two-dimensional space; this ceases to be true in three-dimensional spaces. Bibliography: 21 titles.

  7. Velocity selection in coupled-map lattices

    NASA Astrophysics Data System (ADS)

    Parekh, Nita; Puri, Sanjay

    1993-02-01

    We investigate the phenomenon of velocity selection for traveling wave fronts in a class of coupled-map lattices, derived by discretizations of the Fisher equation [Ann. Eugenics 7, 355 (1937)]. We find that the velocity selection can be understood in terms of a discrete analog of the marginal-stability hypothesis. A perturbative approach also enables us to estimate the selected velocity accurately for small values of the discretization mesh sizes.
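    A minimal coupled-map-lattice discretization of the Fisher equation shows the selected front velocity directly (the continuum marginal-stability value is 2*sqrt(D)); the mesh sizes and initial condition here are arbitrary choices, not those of the paper:

```python
import numpy as np

# Explicit discretization of the Fisher equation u_t = D*u_xx + u*(1 - u);
# a pulled front invades the unstable state u = 0 at a selected velocity.
D, dt, dx, n, steps = 1.0, 0.05, 0.5, 400, 1200
u = np.zeros(n)
u[:20] = 1.0  # invaded region on the left

def front_position(u, dx, level=0.5):
    return np.argmax(u < level) * dx  # first lattice site below the level

p_mid = None
for step in range(steps):
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
    lap[0] = lap[-1] = 0.0  # crude fixed boundaries instead of wrap-around
    u = u + dt * (D * lap / dx**2 + u * (1 - u))
    if step == steps // 2:
        p_mid = front_position(u, dx)

# Average front speed over the second half of the run; should approach
# the marginal-stability value 2*sqrt(D) = 2 for this mesh
velocity = (front_position(u, dx) - p_mid) / (dt * (steps - steps // 2))
```

The measured speed sits close to 2, with small corrections from the discretization and the slow transient typical of pulled fronts.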

  8. Automated method to differentiate between native and mirror protein models obtained from contact maps.

    PubMed

    Kurczynska, Monika; Kotulska, Malgorzata

    2018-01-01

    Mirror protein structures are often considered artifacts in protein structure modeling. However, they may soon become a new branch of biochemistry. Moreover, methods of protein structure reconstruction based on residue-residue contact maps need a methodology to differentiate between models of native and mirror orientation, especially regarding the reconstructed backbones. We analyzed 130,500 structural protein models obtained from contact maps of 1,305 SCOP domains belonging to all 7 structural classes. On average, equal numbers of native and mirror models were obtained among the 100 models generated for each domain. Since their structural features are often not sufficient for differentiating between the two types of model orientation, we proposed applying various energy terms (ETs) from PyRosetta to separate native and mirror models. To automate the procedure for differentiating these models, the k-means clustering algorithm was applied. Using total energy alone did not yield appropriate clusters: the accuracy of the clustering for class A (all helices) was no more than 0.52. Therefore, we tested a series of k-means clusterings based on various combinations of ETs. Applying the two most differentiating ETs for each class gave satisfactory results. To unify the method for differentiating between native and mirror models, independent of their structural class, the two best ETs for each class were considered. Finally, the k-means clustering algorithm used three common ETs: the probability of an amino acid assuming certain values of the dihedral angles Φ and Ψ, Ramachandran preferences, and Coulomb interactions. The accuracies of clustering with these ETs were between 0.68 and 0.76, with sensitivity and selectivity between 0.68 and 0.87, depending on the structural class.
The method can be applied to all fully-automated tools for protein structure reconstruction based on contact maps, especially those analyzing big sets of models.
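    The clustering step can be sketched with a minimal two-cluster k-means on a hypothetical two-dimensional energy-term space; the point clouds are invented and stand in for native and mirror model populations:

```python
import numpy as np

def kmeans2(X, iters=50, seed=0):
    """Minimal 2-cluster k-means (Lloyd's algorithm) on feature matrix X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), 2, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids, keeping the old one if a cluster empties
        centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in (0, 1)])
    return labels, centers

# Hypothetical models in a 2D energy-term space (e.g. Ramachandran
# preference vs dihedral probability): two clouds that total energy
# alone would not separate
rng = np.random.default_rng(1)
native = rng.normal([-2.0, -1.5], 0.4, size=(50, 2))
mirror = rng.normal([0.5, 1.0], 0.4, size=(50, 2))
X = np.vstack([native, mirror])
labels, centers = kmeans2(X)

# Clustering accuracy against the known split (label-permutation aware)
truth = np.array([0] * 50 + [1] * 50)
acc = max((labels == truth).mean(), (labels != truth).mean())
```

With well-separated energy terms the unsupervised split recovers the two populations, mirroring the paper's use of the two most differentiating ETs per class.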

  10. Utilizing Mars Global Reference Atmospheric Model (Mars-GRAM 2005) to Evaluate Entry Probe Mission Sites

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, C. G.

    2008-01-01

    Mars-GRAM is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte Carlo mode, to perform high-fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). Traditional Mars-GRAM options for representing the mean atmosphere along entry corridors include: a) TES Mapping Years 1 and 2, with Mars-GRAM data coming from MGCM model results driven by observed TES dust optical depth; and b) TES Mapping Year 0, with user-controlled dust optical depth and Mars-GRAM data interpolated from MGCM model results driven by selected values of globally-uniform dust optical depth. From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). Mars-GRAM and MGCM use surface topography from the Mars Global Surveyor Mars Orbiter Laser Altimeter (MOLA), with altitudes referenced to the MOLA areoid, or constant-potential surface. Mars-GRAM 2005 has been validated against Radio Science data, and against both nadir and limb data from the Thermal Emission Spectrometer (TES).

  11. Gravity and magnetic anomaly modeling and correlation using the SPHERE program and Magsat data

    NASA Technical Reports Server (NTRS)

    Braile, L. W.; Hinze, W. J. (Principal Investigator); Vonfrese, R. R. B.

    1980-01-01

    The spherical Earth inversion, modeling, and contouring software were tested and modified for processing data in the Southern Hemisphere. Preliminary geologic/tectonic maps and selected cross sections for South and Central America and the Caribbean region are being compiled, as well as gravity and magnetic models for the major geological features of the area. A preliminary gravity model of the Andean Benioff zone was constructed so that the density columns east and west of the subducted plate are in approximate isostatic equilibrium. The magnetic anomaly for the corresponding magnetic model of the zone is being computed with the SPHERE program. A test tape containing global magnetic measurements was converted to a tape compatible with Purdue's CDC system. NOO data were screened for periods of high diurnal activity and reduced to anomaly form using the IGS-75 model. Magnetic intensity anomaly profiles were plotted on the conterminous U.S. map using the track lines as the anomaly base level. The transcontinental magnetic high seen in POGO and MAGSAT data is also represented in the NOO data.

  12. A Bayesian network model for predicting pregnancy after in vitro fertilization.

    PubMed

    Corani, G; Magli, C; Giusti, A; Gianaroli, L; Gambardella, L M

    2013-11-01

    We present a Bayesian network model for predicting the outcome of in vitro fertilization (IVF). The problem is characterized by a particular missingness process; we propose a simple but effective averaging approach which improves parameter estimates compared to the traditional MAP estimation. We present results with generated data and the analysis of a real data set. Moreover, we assess by means of a simulation study the effectiveness of the model in supporting the selection of the embryos to be transferred. © 2013 Elsevier Ltd. All rights reserved.
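    The difference between a MAP point estimate and an averaging (posterior-mean) estimate can be seen in the simplest conjugate setting; this Beta-Binomial example illustrates the general idea only, not the paper's missing-data averaging scheme:

```python
# For a Bernoulli parameter with a Beta(a, b) prior and k successes in n
# trials, the posterior is Beta(a + k, b + n - k). MAP takes its mode;
# averaging takes its mean, which stays hedged in small samples.
def beta_map(k, n, a=1.0, b=1.0):
    # Posterior mode: (a + k - 1) / (a + b + n - 2)
    return (a + k - 1) / (a + b + n - 2)

def beta_mean(k, n, a=1.0, b=1.0):
    # Posterior mean averages over the full posterior: (a + k) / (a + b + n)
    return (a + k) / (a + b + n)

# Two successes in two observed cycles: MAP claims certainty,
# the averaging estimate does not
m_map = beta_map(2, 2)    # -> 1.0
m_mean = beta_mean(2, 2)  # -> 0.75
```

With sparse, partially missing clinical data, point estimates like the MAP overfit in exactly this way, which is the motivation for the paper's averaging approach.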

  13. Computational Design of Biomimetic Gels With Properties of Human Tissues

    DTIC Science & Technology

    2008-12-01

    Poly(styrene-block-isoprene-block-styrene) (SIS) copolymer in an isoprene-selective solvent has been chosen as the model triblock copolymer for this study. (Fig. 2, not reproduced here: schematic representation of an A1B3A1 triblock copolymer mapped onto the DPD model.) The shear modulus G (in Pa) becomes 2.25 times higher as c changes from 0.16 to 0.33; the densities of the styrene and isoprene blocks are taken to be 1.04 and 0.913 g/cm3.

  14. Sensitivity analysis of infectious disease models: methods, advances and their application

    PubMed Central

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
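    The Latin hypercube sampling-PRCC approach named above can be sketched end to end: stratified parameter samples, a toy model output, and partial rank correlations computed by regressing on ranks. The toy output (R0 = beta/gamma, plus an inert dummy parameter) stands in for a full transmission model:

```python
import numpy as np

def lhs(n, bounds, seed=0):
    """Latin hypercube sample: one stratified draw per interval per parameter."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    u = (rng.random((n, d)) + np.argsort(rng.random((n, d)), axis=0)) / n
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)

def prcc(X, y):
    """Partial rank correlation of each column of X with y (assumes no ties)."""
    R = np.argsort(np.argsort(np.column_stack([X, y]), axis=0), axis=0).astype(float)
    out = []
    for j in range(X.shape[1]):
        others = np.delete(R[:, :-1], j, axis=1)
        A = np.column_stack([np.ones(len(R)), others])
        # Residuals after removing the (rank-linear) effect of the other parameters
        rx = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        ry = R[:, -1] - A @ np.linalg.lstsq(A, R[:, -1], rcond=None)[0]
        out.append(np.corrcoef(rx, ry)[0, 1])
    return np.array(out)

# Toy epidemic summary: output rises with beta, falls with gamma;
# the third parameter is a dummy with no effect
X = lhs(200, [(0.1, 0.5), (0.05, 0.25), (0.0, 1.0)])
beta, gamma, dummy = X.T
y = beta / gamma  # basic reproduction number R0 as the model output
s = prcc(X, y)
```

The PRCC values rank beta and gamma as strongly influential (with opposite signs) and the dummy as negligible, which is the kind of screening these methods provide for full transmission models.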

  15. A high-density SNP genetic linkage map for the silver-lipped pearl oyster, Pinctada maxima: a valuable resource for gene localisation and marker-assisted selection.

    PubMed

    Jones, David B; Jerry, Dean R; Khatkar, Mehar S; Raadsma, Herman W; Zenger, Kyall R

    2013-11-20

    The silver-lipped pearl oyster, Pinctada maxima, is an important tropical aquaculture species extensively farmed for the highly sought "South Sea" pearls. Traditional breeding programs have been initiated for this species in order to select for improved pearl quality, but many economic traits under selection are complex, polygenic and confounded with environmental factors, limiting the accuracy of selection. The incorporation of a marker-assisted selection (MAS) breeding approach would greatly benefit pearl breeding programs by allowing the direct selection of genes responsible for pearl quality. However, before MAS can be incorporated, substantial genomic resources such as genetic linkage maps need to be generated. The construction of a high-density genetic linkage map for P. maxima is not only essential for unravelling the genomic architecture of complex pearl quality traits, but also provides indispensable information on the genome structure of pearl oysters. A total of 1,189 informative genome-wide single nucleotide polymorphisms (SNPs) were incorporated into linkage map construction. The final linkage map consists of 887 SNPs in 14 linkage groups, spans a total genetic distance of 831.7 centimorgans (cM), and covers an estimated 96% of the P. maxima genome. Assessment of sex-specific recombination across all linkage groups revealed limited overall heterochiasmy between the sexes (i.e. 1.15:1 F/M map length ratio). However, there were pronounced localised differences throughout the linkage groups, whereby male recombination was suppressed near the centromeres compared to female recombination, but inflated towards telomeric regions. Mean values of LD for adjacent SNP pairs suggest that a higher density of markers will be required for powerful genome-wide association studies. Finally, numerous nacre biomineralization genes were localised providing novel positional information for these genes.
This high-density SNP genetic map is the first comprehensive linkage map for any pearl oyster species. It provides an essential genomic tool facilitating studies investigating the genomic architecture of complex trait variation and identifying quantitative trait loci for economically important traits useful in genetic selection programs within the P. maxima pearling industry. Furthermore, this map provides a foundation for further research aiming to improve our understanding of the dynamic process of biomineralization, and pearl oyster evolution and synteny.
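    Map distances like the 831.7 cM total above come from map functions that convert observed recombination fractions between markers into additive distances; a minimal sketch of the two standard functions (the paper's actual mapping procedure is not detailed in the abstract):

```python
import math

# Haldane's map function assumes no crossover interference;
# Kosambi's allows partial interference. Both return centimorgans (cM).
def haldane_cM(r):
    return -50.0 * math.log(1.0 - 2.0 * r)

def kosambi_cM(r):
    return 25.0 * math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r))

d_h = haldane_cM(0.2)  # ≈ 25.5 cM
d_k = kosambi_cM(0.2)  # ≈ 21.2 cM
```

For small recombination fractions both functions reduce to 100*r, so tightly linked SNP pairs contribute nearly their raw recombination fraction to the map length.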

  16. National Maps - Pacific - NOAA's National Weather Service

    Science.gov Websites

  17. High resolution regional soil carbon mapping in Madagascar : towards easy to update maps

    NASA Astrophysics Data System (ADS)

    Grinand, Clovis; Dessay, Nadine; Razafimbelo, Tantely; Razakamanarivo, Herintsitoaina; Albrecht, Alain; Vaudry, Romuald; Tiberghien, Matthieu; Rasamoelina, Maminiaina; Bernoux, Martial

    2013-04-01

    Soil organic carbon plays an important role in climate change regulation through carbon emissions and sequestration due to land use changes, notably tropical deforestation. Monitoring soil carbon emissions from shifting cultivation requires evaluating the amount of carbon stored at plot scale with a sufficient level of accuracy to be able to detect changes. The objective of this work was to map soil carbon stocks (to 30 cm and 100 cm depths) for different land uses at regional scale using a high resolution satellite dataset. The Andohahela National Park and its surroundings (southeastern Madagascar), a region with the largest deforestation rate in the country, was selected as a pilot area for the development of the methodology. A three-step approach was set up: (i) carbon inventory using mid-infrared spectroscopy and stock calculation, (ii) spatial data processing and (iii) modeling and mapping. Soil spectroscopy was successfully used for measuring organic carbon in this region. The results show that Random Forest was the inference model that produced the best estimates on the calibration and validation datasets. By using a simple and robust method, we estimated uncertainty levels of 35% and 43% for the 30-cm and 100-cm carbon maps, respectively. The approach developed in this study was based on open data and open source software and can be easily replicated in other regions and for other time periods using updated satellite images.
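    Relative uncertainty figures like the 35% and 43% above are typically computed as validation RMSE divided by the mean observed stock; a minimal sketch with invented stock values (not the Madagascar data):

```python
import numpy as np

# Hypothetical held-out validation of a carbon-stock model
obs = np.array([42.0, 55.0, 30.0, 61.0, 48.0, 35.0])   # t C/ha, measured
pred = np.array([50.0, 47.0, 38.0, 52.0, 55.0, 30.0])  # t C/ha, model output

rmse = np.sqrt(np.mean((pred - obs) ** 2))
rel_uncertainty = rmse / obs.mean()  # fraction of the mean observed stock
```

Reporting RMSE as a fraction of the mean stock makes map uncertainty comparable across depths and regions with different absolute carbon levels.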

  18. The GIS map coloring support decision-making system based on case-based reasoning and simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Deng, Shuang; Xiang, Wenting; Tian, Yangge

    2009-10-01

    Map coloring is a hard task even for experienced map experts. In GIS projects, maps usually need to be colored according to the customer's requirements, which makes the work more complex. With the development of GIS, more and more programmers who lack cartographic training join project teams, and their colored maps often fail to meet customer requirements. Experience shows that customers with similar backgrounds usually have similar tastes in map coloring. We therefore developed a GIS color-scheme decision-making system that can select the color schemes of similar customers from a case base for customers to review and adjust. The system is a mixed B/S and C/S system; the client side uses JSP, which allows the system developers to remotely retrieve color-scheme cases from the database server and communicate with customers. Unlike general case-based reasoning, even very similar customers may make different selections, so it is hard to provide a single "best" option. We therefore use a simulated annealing algorithm (SAA) to arrange the order in which different color schemes are presented. Customers can also dynamically adjust the colors of particular features based on an existing case. The results show that the system facilitates communication between designers and customers and improves the quality and efficiency of map coloring.
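    As a sketch of the simulated annealing idea used above (applied here to the classic conflict-minimization form of map coloring rather than the paper's scheme-ordering task, and with invented adjacency data):

```python
import math
import random

# Simulated annealing: assign k colors to map regions so that adjacent
# regions differ, accepting uphill moves with Boltzmann probability.
def anneal_coloring(n, edges, k=4, steps=20000, t0=2.0, seed=0):
    rng = random.Random(seed)
    colors = [rng.randrange(k) for _ in range(n)]

    def conflicts(col):
        return sum(col[a] == col[b] for a, b in edges)

    cost = conflicts(colors)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6  # linear cooling schedule
        i, c = rng.randrange(n), rng.randrange(k)
        old = colors[i]
        colors[i] = c
        new_cost = conflicts(colors)
        # Always accept improvements; accept worsening moves with prob e^(-dE/t)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            colors[i] = old
    return colors, cost

# Wheel-like adjacency: central region 0 touches a ring of regions 1-5
edges = [(0, i) for i in range(1, 6)] + [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
colors, cost = anneal_coloring(6, edges, k=4)
```

Annealing the acceptance temperature lets the search escape poor local colorings early while settling into a near-conflict-free assignment late, the same trade-off the paper exploits when ordering candidate schemes.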

  19. Flood-inundation map and water-surface profiles for floods of selected recurrence intervals, Cosumnes River and Deer Creek, Sacramento County, California

    USGS Publications Warehouse

    Guay, Joel R.; Harmon, Jerry G.; McPherson, Kelly R.

    1998-01-01

    The damage caused by the January 1997 floods along the Cosumnes River and Deer Creek generated new interest in planning and managing land use in the study area. The 1997 floodflow peak, the highest on record and considered to be a 150-year flood, caused levee failures at 24 locations. In order to provide a technical basis for floodplain management practices, the U.S. Geological Survey, in cooperation with the Federal Emergency Management Agency, completed a flood-inundation map of the Cosumnes River and Deer Creek drainage from Dillard Road bridge to State Highway 99. Flood frequency was estimated from streamflow records for the Cosumnes River at Michigan Bar and Deer Creek near Sloughhouse. Cross sections along a study reach, where the two rivers generally flow parallel to one another, were used with a step-backwater model (WSPRO) to estimate the water-surface profile for floods of selected recurrence intervals. A flood-inundation map was developed to show flood boundaries for the 100-year flood. Water-surface profiles were developed for the 5-, 10-, 50-, 100-, and 500-year floods.

  20. Retrieval and Mapping of Soil Texture Based on Land Surface Diurnal Temperature Range Data from MODIS

    PubMed Central

    Wang, De-Cai; Zhang, Gan-Lin; Zhao, Ming-Song; Pan, Xian-Zhang; Zhao, Yu-Guo; Li, De-Cheng; Macmillan, Bob

    2015-01-01

    Numerous studies have investigated the direct retrieval of soil properties, including soil texture, using remotely sensed images. However, few have considered how soil properties influence dynamic changes in remote images or how soil processes affect the characteristics of the spectrum. This study investigated a new method for mapping regional soil texture based on the hypothesis that the rate of change of land surface temperature is related to soil texture, given the assumption of similar starting soil moisture conditions. The study area was a typical flat area in the Yangtze-Huai River Plain, East China. We used the widely available land surface temperature product of MODIS as the main data source. We analyzed the relationships between the content of different particle soil size fractions at the soil surface and land surface day temperature, night temperature and diurnal temperature range (DTR) during three selected time periods. These periods occurred after rainfalls and between the previous harvest and the subsequent autumn sowing in 2004, 2007 and 2008. Then, linear regression models were developed between the land surface DTR and sand (> 0.05 mm), clay (< 0.001 mm) and physical clay (< 0.01 mm) contents. The models for each day were used to estimate soil texture. The spatial distribution of soil texture from the studied area was mapped based on the model with the minimum RMSE. A validation dataset produced error estimates for the predicted maps of sand, clay and physical clay, expressed as RMSE of 10.69%, 4.57%, and 12.99%, respectively. The absolute error of the predictions is largely influenced by variations in land cover. Additionally, the maps produced by the models illustrate the natural spatial continuity of soil texture. This study demonstrates the potential for digitally mapping regional soil texture variations in flat areas using readily available MODIS data. PMID:26090852
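    The per-day linear modeling and minimum-RMSE model selection described above can be sketched as follows; the DTR and sand values are invented for illustration:

```python
import numpy as np

# Hypothetical calibration: sand content vs MODIS diurnal temperature
# range (DTR). Sandier soils drain and dry faster, heating more by day,
# so DTR tends to rise with sand content. Fit one linear model per day
# after rainfall and keep the lowest-RMSE model for mapping.
dtr_days = {
    "day1": np.array([8.0, 9.5, 11.0, 12.5, 14.0]),   # deg C
    "day2": np.array([7.0, 9.0, 10.5, 13.0, 15.5]),
}
sand = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # percent, lab-measured

best = None
for day, dtr in dtr_days.items():
    slope, intercept = np.polyfit(dtr, sand, 1)
    rmse = np.sqrt(np.mean((slope * dtr + intercept - sand) ** 2))
    if best is None or rmse < best[1]:
        best = (day, rmse, slope, intercept)

day, rmse, slope, intercept = best  # the model used to map the region
```

The selected model is then applied pixel-by-pixel to the DTR raster, which is how a single well-conditioned post-rainfall day can drive the regional texture map.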

  1. Flood-inundation maps for the White River at Spencer, Indiana

    USGS Publications Warehouse

    Nystrom, Elizabeth A.

    2013-01-01

Digital flood-inundation maps for a 5.3-mile reach of the White River at Spencer, Indiana, were created by the U.S. Geological Survey (USGS) in cooperation with the Indiana Office of Community and Rural Affairs. The inundation maps, which can be accessed through the USGS Flood Inundation Mapping Science Web site at http://water.usgs.gov/osw/flood_inundation/, depict estimates of the areal extent and depth of flooding corresponding to selected water levels (stages) at the USGS streamgage White River at Spencer, Indiana (sta. no. 03357000). Current conditions for estimating near-real-time areas of inundation using USGS streamgage information may be obtained on the Internet at http://waterdata.usgs.gov/. National Weather Service (NWS)-forecasted peak-stage information may be used in conjunction with the maps developed in this study to show predicted areas of flood inundation. In this study, flood profiles were computed for the stream reach by means of a one-dimensional step-backwater model. The model was calibrated by using the most current stage-discharge relation at the White River at Spencer, Indiana, streamgage and documented high-water marks from the flood of June 8, 2008. The hydraulic model was then used to compute 20 water-surface profiles for flood stages at 1-foot intervals referenced to the streamgage datum and ranging from the NWS action stage (9 feet) to the highest rated stage (28 feet) at the streamgage. The simulated water-surface profiles were then combined with a geographic information system digital elevation model (derived from Light Detection and Ranging (LiDAR) data) in order to delineate the area flooded at each water level. 
The availability of these maps along with Internet information regarding the current stage from the Spencer USGS streamgage and forecasted stream stages from the NWS will provide emergency management personnel and residents with information that is critical for flood response activities, such as evacuations and road closures, as well as for post-flood recovery efforts.

  2. Retrieval and Mapping of Soil Texture Based on Land Surface Diurnal Temperature Range Data from MODIS.

    PubMed

    Wang, De-Cai; Zhang, Gan-Lin; Zhao, Ming-Song; Pan, Xian-Zhang; Zhao, Yu-Guo; Li, De-Cheng; Macmillan, Bob

    2015-01-01

Numerous studies have investigated the direct retrieval of soil properties, including soil texture, using remotely sensed images. However, few have considered how soil properties influence dynamic changes in remotely sensed images or how soil processes affect the characteristics of the spectrum. This study investigated a new method for mapping regional soil texture based on the hypothesis that the rate of change of land surface temperature is related to soil texture, given the assumption of similar starting soil moisture conditions. The study area was a typical flat area in the Yangtze-Huai River Plain, East China. We used the widely available land surface temperature product of MODIS as the main data source. We analyzed the relationships between the content of different soil particle size fractions at the soil surface and land surface day temperature, night temperature and diurnal temperature range (DTR) during three selected time periods. These periods occurred after rainfalls and between the previous harvest and the subsequent autumn sowing in 2004, 2007 and 2008. Then, linear regression models were developed between the land surface DTR and sand (> 0.05 mm), clay (< 0.001 mm) and physical clay (< 0.01 mm) contents. The models for each day were used to estimate soil texture. The spatial distribution of soil texture from the studied area was mapped based on the model with the minimum RMSE. A validation dataset produced error estimates for the predicted maps of sand, clay and physical clay, expressed as RMSE of 10.69%, 4.57%, and 12.99%, respectively. The absolute error of the predictions is largely influenced by variations in land cover. Additionally, the maps produced by the models illustrate the natural spatial continuity of soil texture. This study demonstrates the potential for digitally mapping regional soil texture variations in flat areas using readily available MODIS data.
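The per-date regression and minimum-RMSE model selection described in this abstract can be sketched as follows. All arrays, date labels, and the synthetic sand-DTR relationship below are illustrative assumptions, not data or code from the study:

```python
import numpy as np

# Sketch (with synthetic data) of the workflow: for each candidate
# acquisition date, regress a soil fraction on the land-surface diurnal
# temperature range (DTR), then keep the date whose model has the lowest RMSE.
rng = np.random.default_rng(0)

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# One flattened DTR sample per candidate date at 50 soil sampling sites.
dates = {
    "day-1": rng.normal(14.0, 2.0, 50),
    "day-2": rng.normal(12.0, 2.5, 50),
}

best = None
for date, dtr in dates.items():
    # Synthetic "observed" sand content: linearly related to DTR plus noise.
    sand = 40.0 - 1.5 * dtr + rng.normal(0.0, 2.0, dtr.size)
    slope, intercept = np.polyfit(dtr, sand, deg=1)   # linear regression
    err = rmse(slope * dtr + intercept, sand)
    if best is None or err < best[1]:
        best = (date, err, slope, intercept)

print("selected model:", best[0], "RMSE %.2f" % best[1])
```

In the study, separate models of this kind were fitted for sand, clay and physical clay, and the minimum-RMSE model was then applied to the whole DTR image to map soil texture.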

  3. Subsurface Investigation of the Neogene Mygdonian Basin, Greece Using Magnetic Data

    NASA Astrophysics Data System (ADS)

    Ibraheem, Ismael M.; Gurk, Marcus; Tougiannidis, Nikolaos; Tezkan, Bülent

    2018-02-01

A high-resolution ground and marine magnetic survey was executed to determine the structure of the subsurface and the thickness of the sedimentary cover in the Mygdonian Basin. A spacing of approximately 250 m or 500 m between measurement stations was selected to cover an area of 15 km × 22 km. Edge detectors such as the total horizontal derivative (THDR), analytic signal (AS), tilt derivative (TDR), and enhanced total horizontal gradient of tilt derivative (ETHDR) were applied to map the subsurface structure. Depth was estimated by power spectrum analysis, tilt derivative, source parameter imaging (SPI), and 2D-forward modeling techniques. Spectral analysis and SPI suggest a depth to the basement ranging from near surface to 600 m. For some selected locations, depth was also calculated using the TDR technique, suggesting depths from 160 to 400 m. 2D forward magnetic modeling using existing boreholes as constraints was carried out along four selected profiles and confirmed the presence of alternating horsts and grabens formed by parallel normal faults. The dominant structural trends inferred from THDR, AS, TDR, and ETHDR are N-S, NW-SE, NE-SW and E-W. These correspond with the known structural trends in the area. Finally, a detailed structural map showing the magnetic blocks and the structural architecture of the Mygdonian Basin was drawn up by collating all of the results.
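Two of the edge detectors named in this abstract, THDR and TDR, can be sketched for a gridded anomaly as follows. The synthetic field is invented, and the FFT-based vertical derivative is a standard textbook approximation, not necessarily the authors' implementation:

```python
import numpy as np

# Illustrative sketch of two edge detectors for a gridded magnetic anomaly T:
#   THDR = sqrt((dT/dx)^2 + (dT/dy)^2)      (total horizontal derivative)
#   TDR  = arctan( (dT/dz) / THDR )          (tilt derivative)
# The vertical derivative dT/dz is computed in the wavenumber domain.

def edge_detectors(T, dx=250.0, dy=250.0):
    gy, gx = np.gradient(T, dy, dx)                  # horizontal derivatives
    thdr = np.hypot(gx, gy)
    ky = np.fft.fftfreq(T.shape[0], dy) * 2 * np.pi
    kx = np.fft.fftfreq(T.shape[1], dx) * 2 * np.pi
    k = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
    dz = np.real(np.fft.ifft2(np.fft.fft2(T) * k))   # vertical derivative
    tdr = np.arctan2(dz, thdr)
    return thdr, tdr

# Synthetic anomaly: a single smooth source; the TDR highlights its edges.
y, x = np.mgrid[0:64, 0:64]
T = np.exp(-(((x - 32) / 6.0) ** 2 + ((y - 32) / 6.0) ** 2))
thdr, tdr = edge_detectors(T)
```

Because THDR is non-negative, the tilt derivative is bounded between -pi/2 and pi/2, which is what makes it a convenient, amplitude-balanced edge detector.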

  4. Regional mapping of hydrothermally altered igneous rocks along the Urumieh-Dokhtar, Chagai, and Alborz Belts of western Asia using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data and Interactive Data Language (IDL) logical operators: a tool for porphyry copper exploration and assessment: Chapter O in Global mineral resource assessment

    USGS Publications Warehouse

    Mars, John L.; Zientek, M.L.; Hammarstrom, J.M.; Johnson, K.M.; Pierce, F.W.

    2014-01-01

    The ASTER alteration map and corresponding geologic maps were used to select circular to elliptical patterns of argillic- and phyllic-altered volcanic and intrusive rocks as potential porphyry copper sites. One hundred and seventy eight potential porphyry copper sites were mapped along the UDVB, and 23 sites were mapped along the CVB. The potential sites were selected to assist in further exploration and assessments of undiscovered porphyry copper deposits.

  5. Impacts of Climate Change on Native Landcover: Seeking Future Climatic Refuges

    PubMed Central

    Mangabeira Albernaz, Ana Luisa

    2016-01-01

    Climate change is a driver for diverse impacts on global biodiversity. We investigated its impacts on native landcover distribution in South America, seeking to predict its effect as a new force driving habitat loss and population isolation. Moreover, we mapped potential future climatic refuges, which are likely to be key areas for biodiversity conservation under climate change scenarios. Climatically similar native landcovers were aggregated using a decision tree, generating a reclassified landcover map, from which 25% of the map’s coverage was randomly selected to fuel distribution models. We selected the best geographical distribution models among twelve techniques, validating the predicted distribution for current climate with the landcover map and used the best technique to predict the future distribution. All landcover categories showed changes in area and displacement of the latitudinal/longitudinal centroid. Closed vegetation was the only landcover type predicted to expand its distributional range. The range contractions predicted for other categories were intense, even suggesting extirpation of the sparse vegetation category. The landcover refuges under future climate change represent a small proportion of the South American area and they are disproportionately represented and unevenly distributed, predominantly occupying five of 26 South American countries. The predicted changes, regardless of their direction and intensity, can put biodiversity at risk because they are expected to occur in the near future in terms of the temporal scales of ecological and evolutionary processes. Recognition of the threat of climate change allows more efficient conservation actions. PMID:27618445

  6. Landslide susceptibility mapping in three selected target zones in Afghanistan

    NASA Astrophysics Data System (ADS)

    Schwanghart, Wolfgang; Seegers, Joe; Zeilinger, Gerold

    2015-04-01

In May 2014, a large and mobile landslide destroyed Ab Barek, a village in Badakhshan Province, Afghanistan. The landslide caused several hundred fatalities and once again demonstrated the vulnerability of Afghanistan's population to extreme natural events following more than 30 years of civil war and violent conflict. Increasing the capacity of Afghanistan's population by strengthening the disaster preparedness and management of responsible government authorities and institutions is thus a major component of international cooperation and development strategies. Afghanistan is characterized by high relief and widely varying rock types that largely determine the spatial distribution as well as emplacement modes of mass movements. The major aim of our study is to characterize this variability by conducting a landslide susceptibility analysis in three selected target zones: the Greater Kabul Area, Badakhshan Province and Takhar Province. We expand on an existing landslide database by mapping landforms diagnostic for landslides (e.g. head scarps, normal faults and tension cracks), as well as historical landslide scars and landslide deposits, by visual interpretation of high-resolution satellite imagery. We conduct magnitude-frequency analysis within subregional physiogeographic classes based on geological maps, climatological and topographic data to identify regional parameters influencing landslide magnitude and frequency. In addition, we prepare a landslide susceptibility map for each area using the Weight-of-Evidence model. Preliminary results show that the three selected target zones vastly differ in modes of landsliding. Low-magnitude but frequent rockfall events are a major hazard in the Greater Kabul Area, threatening buildings and infrastructure encroaching on steep terrain in the city's outskirts. Mass movements in loess-covered areas of Badakhshan are characterized by medium to large magnitudes. 
This spatial variability of characteristic landslide magnitudes and modes of emplacement necessitates different strategies to assess, mitigate, and prepare for landslides in the three different target zones.
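The Weight-of-Evidence model mentioned in this record combines binary evidence layers with observed landslide locations. A minimal sketch of the weight calculation (the toy arrays are invented, not the Afghan data):

```python
import numpy as np

# Minimal Weight-of-Evidence sketch for landslide susceptibility mapping.
# For a binary evidence layer B (e.g. "slope > 30 degrees") and a set of
# observed landslide cells D:
#   W+ = ln( P(B | D)     / P(B | not D) )
#   W- = ln( P(not B | D) / P(not B | not D) )
# Positive W+ means the evidence layer is positively associated with slides.

def weights_of_evidence(evidence, landslides):
    b, d = evidence.astype(bool), landslides.astype(bool)
    p = lambda mask, cond: (mask & cond).sum() / cond.sum()
    w_plus = np.log(p(b, d) / p(b, ~d))
    w_minus = np.log(p(~b, d) / p(~b, ~d))
    return w_plus, w_minus

# Toy grid (flattened): landslides mostly co-occur with the evidence layer.
evidence = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0], dtype=bool)
slides   = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0], dtype=bool)
w_plus, w_minus = weights_of_evidence(evidence, slides)
```

Summing the appropriate weight of each evidence layer at every grid cell then gives a relative susceptibility score that can be classed into a map.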

  7. Using stochastic gradient boosting to infer stopover habitat selection and distribution of Hooded Cranes Grus monacha during spring migration in Lindian, Northeast China.

    PubMed

    Cai, Tianlong; Huettmann, Falk; Guo, Yumin

    2014-01-01

    The Hooded Crane (Grus monacha) is a globally vulnerable species, and habitat loss is the primary cause of its decline. To date, little is known regarding the specific habitat needs, and stopover habitat selection in particular, of the Hooded Crane. In this study we used stochastic gradient boosting (TreeNet) to develop three specific habitat selection models for roosting, daytime resting, and feeding site selection. In addition, we used a geographic information system (GIS) combined with TreeNet to develop a species distribution model. We also generated a digital map of the relative occurrence index (ROI) of this species at daytime resting sites in the study area. Our study indicated that the water depth, distance to village, coverage of deciduous leaves, open water area, and density of plants were the major predictors of roosting site selection. For daytime resting site selection, the distance to wetland, distance to farmland, and distance to road were the primary predictors. For feeding site selection, the distance to road, quantity of food, plant coverage, distance to village, plant density, distance to wetland, and distance to river were contributing factors, and the distance to road and quantity of food were the most important predictors. The predictive map showed that there were two consistent multi-year daytime resting sites in our study area. Our field work in 2013 using systematic ground-truthing confirmed that this prediction was accurate. Based on this study, we suggest that Lindian plays an important role for migratory birds and that cultivation practices should be adjusted locally. Furthermore, public education programs to promote the concept of the harmonious coexistence of humans and cranes can help successfully protect this species in the long term and eventually lead to its delisting by the IUCN.
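TreeNet implements stochastic gradient boosting; a toy, from-scratch version of the underlying idea (regression stumps fitted to residuals on random subsamples) might look like the sketch below. The data are synthetic, not the crane dataset, and the study itself used the commercial TreeNet package rather than code like this:

```python
import numpy as np

# Toy stochastic gradient boosting with depth-1 regression trees (stumps).
rng = np.random.default_rng(1)

def fit_stump(X, r):
    """Best single-feature threshold split minimising squared error on r."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            pl, pr = r[left].mean(), r[~left].mean()
            sse = ((r[left] - pl) ** 2).sum() + ((r[~left] - pr) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, t, pl, pr)
    _, j, t, pl, pr = best
    return lambda Z: np.where(Z[:, j] <= t, pl, pr)

def boost(X, y, n_rounds=50, lr=0.1, subsample=0.6):
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_rounds):
        # "Stochastic" part: each stump sees only a random subsample of rows.
        idx = rng.choice(len(y), int(subsample * len(y)), replace=False)
        resid = y[idx] - pred[idx]          # gradient of the squared loss
        stump = fit_stump(X[idx], resid)
        stumps.append(stump)
        pred += lr * stump(X)
    base = y.mean()
    return lambda Z: base + lr * sum(s(Z) for s in stumps)

# Synthetic "habitat" data: presence driven by one distance-like feature.
X = rng.uniform(0, 1, (200, 3))
y = (X[:, 0] > 0.5).astype(float)
model = boost(X, y)
roi = model(X)   # a crude analogue of a relative occurrence index
```

Scoring such a model over a grid of environmental predictors is what produces the kind of ROI map described in the abstract.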

  8. Using Stochastic Gradient Boosting to Infer Stopover Habitat Selection and Distribution of Hooded Cranes Grus monacha during Spring Migration in Lindian, Northeast China

    PubMed Central

    Cai, Tianlong; Huettmann, Falk; Guo, Yumin

    2014-01-01

    The Hooded Crane (Grus monacha) is a globally vulnerable species, and habitat loss is the primary cause of its decline. To date, little is known regarding the specific habitat needs, and stopover habitat selection in particular, of the Hooded Crane. In this study we used stochastic gradient boosting (TreeNet) to develop three specific habitat selection models for roosting, daytime resting, and feeding site selection. In addition, we used a geographic information system (GIS) combined with TreeNet to develop a species distribution model. We also generated a digital map of the relative occurrence index (ROI) of this species at daytime resting sites in the study area. Our study indicated that the water depth, distance to village, coverage of deciduous leaves, open water area, and density of plants were the major predictors of roosting site selection. For daytime resting site selection, the distance to wetland, distance to farmland, and distance to road were the primary predictors. For feeding site selection, the distance to road, quantity of food, plant coverage, distance to village, plant density, distance to wetland, and distance to river were contributing factors, and the distance to road and quantity of food were the most important predictors. The predictive map showed that there were two consistent multi-year daytime resting sites in our study area. Our field work in 2013 using systematic ground-truthing confirmed that this prediction was accurate. Based on this study, we suggest that Lindian plays an important role for migratory birds and that cultivation practices should be adjusted locally. Furthermore, public education programs to promote the concept of the harmonious coexistence of humans and cranes can help successfully protect this species in the long term and eventually lead to its delisting by the IUCN. PMID:24587118

  9. Calibration of HEC-Ras hydrodynamic model using gauged discharge data and flood inundation maps

    NASA Astrophysics Data System (ADS)

    Tong, Rui; Komma, Jürgen

    2017-04-01

Flood estimation is essential for disaster alleviation. Hydrodynamic models are implemented to predict the occurrence and variance of floods at different scales. In practice, the calibration of hydrodynamic models aims to find the best possible parameters for representing natural flow resistance. Recent years have seen the calibration of hydrodynamic models become faster and more accurate, following advances in earth observation products and computer-based optimization techniques. In this study, the Hydrologic Engineering River Analysis System (HEC-Ras) model was set up with a high-resolution digital elevation model from laser scanning for the river Inn in Tyrol, Austria. The 10 largest flood events from 19 hourly discharge gauges and flood inundation maps were selected to calibrate the HEC-Ras model. Manning roughness values and lateral inflow factors were automatically optimized as parameters with the Shuffled complex with Principal component analysis (SP-UCI) algorithm developed from the Shuffled Complex Evolution (SCE-UA). Different objective functions (Nash-Sutcliffe model efficiency coefficient, the timing of peak, peak value and root-mean-square deviation) were used singly or in combination. It was found that the lateral inflow factor was the most sensitive parameter. The SP-UCI algorithm could avoid local optima and achieve efficient and effective parameters in the calibration of the HEC-Ras model using flood extent images. The results showed that calibration by means of gauged discharge data and flood inundation maps, together with the Nash-Sutcliffe model efficiency coefficient as the objective function, was very robust, yielding more reliable flood simulations that also captured the peak value and the timing of the peak.
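Among the objective functions listed in this abstract, the Nash-Sutcliffe model efficiency coefficient has a simple closed form. A sketch with synthetic discharge values (not the Inn data):

```python
import numpy as np

# Nash-Sutcliffe efficiency (NSE) between observed and simulated discharge:
#   NSE = 1 - sum((Qobs - Qsim)^2) / sum((Qobs - mean(Qobs))^2)
# NSE = 1 is a perfect fit; NSE <= 0 means the model performs no better
# than simply predicting the observed mean.

def nash_sutcliffe(q_obs, q_sim):
    q_obs = np.asarray(q_obs, dtype=float)
    q_sim = np.asarray(q_sim, dtype=float)
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

q_obs = np.array([10.0, 35.0, 80.0, 55.0, 20.0])   # synthetic hydrograph
print(nash_sutcliffe(q_obs, q_obs))                # 1.0 for a perfect match
```

Because NSE weights squared errors, it is dominated by the high-flow part of the hydrograph, which is consistent with the abstract's finding that it helps capture peak value and timing.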

  10. An Agent-based Model for Groundwater Allocation and Management at the Bakken Shale in Western North Dakota

    NASA Astrophysics Data System (ADS)

    Lin, T.; Lin, Z.; Lim, S.

    2017-12-01

We present an integrated modeling framework to simulate groundwater level change under the dramatic increase of hydraulic fracturing water use in the Bakken Shale oil production area. The framework combines an agent-based model (ABM) with the Fox Hills-Hell Creek (FH-HC) groundwater model. In developing the ABM, institution theory is used to model the regulation policies of the North Dakota State Water Commission, while evolutionary programming and cognitive maps are used to model the social structure that emerges from the behavior of competing individual water businesses. Evolutionary programming allows individuals to select an appropriate strategy when annually applying for potential water use permits, whereas cognitive maps endow agents with the ability and willingness to compete for more water sales. All agents have their own influence boundaries that inhibit their competitive behavior toward their neighbors but not toward non-neighbors. The decision-making process is constructed and parameterized with both quantitative and qualitative information, i.e., empirical water use data and knowledge gained from surveys with stakeholders. By linking institution theory, evolutionary programming, and cognitive maps, our approach addresses a higher complexity of the real decision-making process. Furthermore, this approach is a new exploration for modeling the dynamics of coupled human and natural systems. After integrating the ABM with the FH-HC model, drought and limited water accessibility scenarios are simulated to predict future FH-HC groundwater level changes. The integrated modeling framework of the ABM and the FH-HC model can be used to support scientifically sound policy-making in water allocation and management.

  11. Local search to improve coordinate-based task mapping

    DOE PAGES

    Balzuweit, Evan; Bunde, David P.; Leung, Vitus J.; ...

    2015-10-31

We present a local search strategy to improve the coordinate-based mapping of a parallel job’s tasks to the MPI ranks of its parallel allocation in order to reduce network congestion and the job’s communication time. The goal is to reduce the number of network hops between communicating pairs of ranks. Our target is applications with a nearest-neighbor stencil communication pattern running on mesh systems with non-contiguous processor allocation, such as Cray XE and XK systems. Utilizing the miniGhost mini-app, which models the shock physics application CTH, we demonstrate that our strategy reduces application running time while also reducing the runtime variability. Furthermore, we show that mapping quality can vary based on the selected allocation algorithm, even between allocation algorithms of similar apparent quality.
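The hop-reduction idea can be illustrated as a pairwise-swap local search. The coordinates, communication pattern, and greedy acceptance rule below are illustrative assumptions, not the paper's exact algorithm:

```python
import itertools

# Sketch: given a placement of MPI ranks onto mesh coordinates, greedily
# swap pairs of ranks whenever the swap lowers the total number of network
# hops summed over all communicating pairs.

def hops(a, b):
    """Manhattan distance between two mesh coordinates."""
    return sum(abs(x - y) for x, y in zip(a, b))

def total_hops(placement, comm_pairs):
    return sum(hops(placement[u], placement[v]) for u, v in comm_pairs)

def local_search(placement, comm_pairs):
    placement = dict(placement)
    best = total_hops(placement, comm_pairs)
    improved = True
    while improved:
        improved = False
        for u, v in itertools.combinations(list(placement), 2):
            placement[u], placement[v] = placement[v], placement[u]
            cost = total_hops(placement, comm_pairs)
            if cost < best:
                best, improved = cost, True       # keep the swap
            else:
                placement[u], placement[v] = placement[v], placement[u]
    return placement, best

# Four ranks in a 1-D nearest-neighbour chain 0-1-2-3, initially scattered
# over a non-contiguous allocation of mesh nodes.
pairs = [(0, 1), (1, 2), (2, 3)]
start = {0: (0, 0), 1: (3, 0), 2: (1, 0), 3: (2, 0)}
final, cost = local_search(start, pairs)
```

The starting placement costs 6 hops; the swap search settles at 3, the minimum for this chain on these nodes.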

  12. THERMAL-INERTIA MAPPING IN VEGETATED TERRAIN FROM HEAT CAPACITY MAPPING MISSION SATELLITE DATA.

    USGS Publications Warehouse

    Watson, Ken; Hummer-Miller, Susanne

    1984-01-01

Thermal-inertia data, derived from the Heat Capacity Mapping Mission (HCMM) satellite, were analyzed in areas of varying amounts of vegetation cover. Thermal differences which appear to correlate with lithologic differences have been observed previously in areas of substantial vegetation cover. However, the energy exchange occurring within the canopy is much more complex than that used to develop the methods employed to produce thermal-inertia images. Because adequate models are lacking at present, the interpretation is largely dependent on comparison, correlation, and inference. Two study areas were selected in the western United States: the Richfield, Utah, and the Silver City, Arizona-New Mexico, 1° × 2° quadrangles. Many thermal-inertia highs were found to be associated with geologic-unit boundaries, faults, and ridges. Lows occur in valleys with residual soil cover.

  13. The IRAS radiation environment

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.

    1978-01-01

    Orbital flux integration for three selected mission altitudes and geographic instantaneous flux-mapping for nominal flight-path altitude were used to determine the external charged particle radiation predicted for the Infrared Astronomy Satellite. A current field model was used for magnetic field definitions for three nominal circular trajectories and for the geographic mapping positions. Innovative analysis features introduced include (1) positional fluxes as a function of time and energy for the most severe pass through the South Atlantic Anomaly; (2) total positional doses as a function of time and shield thickness; (3) comparison mapping fluxes for ratios of positional intensities to orbit integrated averages; and (4) statistical exposure-time history of a trajectory as a function of energy indicating, in percent of total mission duration, the time intervals over which the instantaneous fluxes would exceed the orbit integrated averages. Results are presented in tables and graphs.

  14. Genome-wide distribution of genetic diversity and linkage disequilibrium in a mass-selected population of maritime pine

    PubMed Central

    2014-01-01

Background The accessibility of high-throughput genotyping technologies has contributed greatly to the development of genomic resources in non-model organisms. High-density genotyping arrays have only recently been developed for some economically important species such as conifers. The potential for using genomic technologies in association mapping and breeding depends largely on the genome-wide patterns of diversity and linkage disequilibrium in current breeding populations. This study aims to deepen our knowledge regarding these issues in maritime pine, the first species used for reforestation in south western Europe. Results Using a new map merging algorithm, we first established a 1,712 cM composite linkage map (comprising 1,838 SNP markers in 12 linkage groups) by bringing together three already available genetic maps. Using rigorous statistical testing based on kernel density estimation and resampling, we identified cold and hot spots of recombination. In parallel, 186 unrelated trees of a mass-selected population were genotyped using a 12k-SNP array. A total of 2,600 informative SNPs allowed us to describe historical recombination, genetic diversity and genetic structure of this recently domesticated breeding pool that forms the basis of much of the current and future breeding of this species. We observe very low levels of population genetic structure and find no evidence that artificial selection has caused a reduction in genetic diversity. By combining these two pieces of information, we provided the map position of 1,671 SNPs corresponding to 1,192 different loci. This made it possible to analyze the spatial pattern of genetic diversity (H_e) and long-distance linkage disequilibrium (LD) along the chromosomes. We found no particular pattern in the empirical variogram of H_e across the 12 linkage groups and, as expected for an outcrossing species with large effective population size, we observed an almost complete lack of long-distance LD. 
Conclusions These results are a stepping stone for the development of strategies for studies in population genomics, association mapping and genomic prediction in this economically and ecologically important forest tree species. PMID:24581176

  15. Candidate gene association mapping of Sclerotinia stalk rot resistance in sunflower (Helianthus annuus L.) uncovers the importance of COI1 homologs.

    PubMed

    Talukder, Zahirul I; Hulke, Brent S; Qi, Lili; Scheffler, Brian E; Pegadaraju, Venkatramana; McPhee, Kevin; Gulya, Thomas J

    2014-01-01

Functional markers for Sclerotinia basal stalk rot resistance in sunflower were obtained using gene-level information from the model species Arabidopsis thaliana. Sclerotinia stalk rot, caused by Sclerotinia sclerotiorum, is one of the most destructive diseases of sunflower (Helianthus annuus L.) worldwide. Markers for genes controlling resistance to S. sclerotiorum will enable efficient marker-assisted selection (MAS). We sequenced eight candidate genes homologous to Arabidopsis thaliana defense genes known to be associated with Sclerotinia disease resistance in a sunflower association mapping population evaluated for Sclerotinia stalk rot resistance. The total candidate gene sequence regions covered a concatenated length of 3,791 bp per individual. A total of 187 polymorphic sites were detected for all candidate gene sequences, 149 of which were single nucleotide polymorphisms (SNPs) and 38 were insertions/deletions. Eight SNPs in the coding regions led to changes in amino acid codons. Linkage disequilibrium decay throughout the candidate gene regions declined on average to r^2 = 0.2 for genetic intervals of 120 bp, but extended up to 350 bp with r^2 = 0.1. A general linear model with modification to account for population structure was found to be the best-fitting model for this population and was used for association mapping. Both HaCOI1-1 and HaCOI1-2 were found to be strongly associated with Sclerotinia stalk rot resistance and explained 7.4% of phenotypic variation in this population. These SNP markers associated with Sclerotinia stalk rot resistance can potentially be applied to the selection of favorable genotypes, which will significantly improve the efficiency of MAS during the development of stalk rot resistant cultivars.
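The r^2 statistic behind LD decay estimates like those reported here can be computed directly from haplotype data. The toy haplotypes below are invented for illustration and are not from the sunflower panel:

```python
import numpy as np

# For biallelic sites coded 0/1 per haplotype, the LD statistic r^2 is the
# squared Pearson correlation between the allele indicators at two sites.

def ld_r2(site_a, site_b):
    return float(np.corrcoef(site_a, site_b)[0, 1] ** 2)

# Two tightly linked sites (one recombinant haplotype between them) versus
# a more distant, weakly associated site, across eight toy haplotypes.
a = np.array([1, 1, 0, 0, 1, 0, 1, 0])
b = np.array([1, 1, 0, 0, 1, 0, 0, 0])   # one recombinant haplotype
c = np.array([1, 0, 1, 0, 1, 0, 1, 0])
print(ld_r2(a, b), ld_r2(a, c))
```

Binning such pairwise r^2 values by the physical or genetic distance between sites and fitting a decay curve is the usual way to obtain statements like "r^2 declines to 0.2 within 120 bp".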

  16. An alternative validation strategy for the Planck cluster catalogue and y-distortion maps

    NASA Astrophysics Data System (ADS)

    Khatri, Rishi

    2016-07-01

    We present an all-sky map of the y-type distortion calculated from the full mission Planck High Frequency Instrument (HFI) data using the recently proposed approach to component separation, which is based on parametric model fitting and model selection. This simple model-selection approach enables us to distinguish between carbon monoxide (CO) line emission and y-type distortion, something that is not possible using the internal linear combination based methods. We create a mask to cover the regions of significant CO emission relying on the information in the χ2 map that was obtained when fitting for the y-distortion and CO emission to the lowest four HFI channels. We revisit the second Planck cluster catalogue and try to quantify the quality of the cluster candidates in an approach that is similar in spirit to Aghanim et al. (2015, A&A, 580, A138). We find that at least 93% of the clusters in the cosmology sample are free of CO contamination. We also find that 59% of unconfirmed candidates may have significant contamination from molecular clouds. We agree with Planck Collaboration XXVII (2016, A&A, in press) on the worst offenders. We suggest an alternative validation strategy of measuring and subtracting the CO emission from the Planck cluster candidates using radio telescopes, thus improving the reliability of the catalogue. Our CO mask and annotations to the Planck cluster catalogue, identifying cluster candidates with possible CO contamination, are made publicly available. The full Tables 1-3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/592/A48
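The parametric model-selection step described here, distinguishing a y-distortion from CO line emission, can be illustrated with a toy chi-square comparison. The channel set is HFI-like, but the template values, amplitudes, and noise level below are invented for illustration, not the Planck spectral responses:

```python
import numpy as np

# Toy per-pixel model selection: fit the lowest four channels with and
# without a CO component and compare the chi-square of the two fits.
rng = np.random.default_rng(3)

freqs = np.array([100.0, 143.0, 217.0, 353.0])        # GHz, HFI-like bands
y_template = np.array([-1.51, -1.04, 0.0, 2.25])      # illustrative y shape
co_template = np.array([1.0, 0.0, 0.6, 0.3])          # illustrative CO lines
noise = 0.05                                          # per-channel sigma

def chi2(d, templates):
    A = np.column_stack(templates)
    coef, *_ = np.linalg.lstsq(A, d, rcond=None)      # linear least squares
    return float(np.sum((d - A @ coef) ** 2) / noise ** 2), coef

# Simulated pixel containing CO emission but no y-distortion.
d = 0.8 * co_template + rng.normal(0.0, noise, 4)
chi2_y, _ = chi2(d, [y_template])
chi2_yco, _ = chi2(d, [y_template, co_template])
# A large chi-square drop when the CO template is added flags the pixel as
# CO-contaminated rather than a genuine y-distortion.
```

Thresholding the chi-square improvement pixel by pixel is one simple way to build the kind of CO mask described in the abstract.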

  17. Large-scale modeling of the primary visual cortex: influence of cortical architecture upon neuronal response.

    PubMed

    McLaughlin, David; Shapley, Robert; Shelley, Michael

    2003-01-01

A large-scale computational model of a local patch of input layer 4 [Formula: see text] of the primary visual cortex (V1) of the macaque monkey, together with a coarse-grained reduction of the model, are used to understand potential effects of cortical architecture upon neuronal performance. Both the large-scale point neuron model and its asymptotic reduction are described. The work focuses upon orientation preference and selectivity, and upon the spatial distribution of neuronal responses across the cortical layer. Emphasis is given to the role of cortical architecture (the geometry of synaptic connectivity, of the ordered and disordered structure of input feature maps, and of their interplay) as mechanisms underlying cortical responses within the model. Specifically: (i) distinct characteristics of model neuronal responses (firing rates and orientation selectivity) as they depend upon the neuron's location within the cortical layer relative to the pinwheel centers of the map of orientation preference; (ii) a time-independent (DC) elevation in cortico-cortical conductances within the model, in contrast to a "push-pull" antagonism between excitation and inhibition; (iii) the use of asymptotic analysis to unveil mechanisms which underlie these performances of the model; (iv) a discussion of emerging experimental data. The work illustrates that large-scale scientific computation, coupled with analytical reduction, mathematical analysis, and experimental data, can provide significant understanding and intuition about the possible mechanisms of cortical response. It also illustrates that the idealization which is a necessary part of theoretical modeling can outline in sharp relief the consequences of differing alternative interpretations and mechanisms, with the final arbiter being a body of experimental evidence whose measurements address the consequences of these analyses.

  18. Evaluation of various modelling approaches in flood routing simulation and flood area mapping

    NASA Astrophysics Data System (ADS)

    Papaioannou, George; Loukas, Athanasios; Vasiliades, Lampros; Aronica, Giuseppe

    2016-04-01

An essential process of flood hazard analysis and mapping is floodplain modelling. The selection of the modelling approach, especially in complex riverine topographies such as urban and suburban areas, and in ungauged watersheds, may affect the accuracy of the outcomes in terms of flood depths and flood inundation area. In this study, a sensitivity analysis was implemented using several hydraulic-hydrodynamic modelling approaches (1D, 2D, 1D/2D), and the effect of the modelling approach on flood modelling and flood mapping was investigated. The digital terrain model (DTM) used in this study was generated from Terrestrial Laser Scanning (TLS) point cloud data. The modelling approaches included 1-dimensional hydraulic-hydrodynamic models (1D), 2-dimensional hydraulic-hydrodynamic models (2D) and coupled 1D/2D models. The 1D hydraulic-hydrodynamic models used were: HECRAS, MIKE11, LISFLOOD and XPSTORM. The 2D hydraulic-hydrodynamic models used were: MIKE21, MIKE21FM, HECRAS (2D), XPSTORM, LISFLOOD and FLO2d. The coupled 1D/2D models employed were: HECRAS (1D/2D), MIKE11/MIKE21 (MIKE FLOOD platform), MIKE11/MIKE21 FM (MIKE FLOOD platform) and XPSTORM (1D/2D). The validation of the simulated flood extent was achieved with the use of 2x2 contingency tables comparing simulated and observed flooded areas for an extreme historical flash flood event. The Critical Success Index skill score was used in the validation process. The modelling approaches have also been evaluated for simulation time and required computing power. The methodology has been implemented in a suburban ungauged watershed of the Xerias river at Volos, Greece. The results of the analysis indicate the necessity of sensitivity analysis with the use of different hydraulic-hydrodynamic modelling approaches, especially for areas with complex terrain.
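The Critical Success Index used for validation comes straight from the 2x2 contingency table of simulated versus observed flooded cells. A minimal sketch (the cell counts are invented):

```python
# Critical Success Index (CSI) from a 2x2 flood-extent contingency table:
#   hits         = cells flooded in both model and observation
#   misses       = cells flooded in observation only
#   false_alarms = cells flooded in model only
#   CSI = hits / (hits + misses + false_alarms)
# CSI = 1 is a perfect match; correct negatives do not enter the score.

def critical_success_index(hits, misses, false_alarms):
    return hits / (hits + misses + false_alarms)

# Example: 800 cells flooded in both, 150 observed-only, 50 model-only.
csi = critical_success_index(800, 150, 50)
print(round(csi, 3))  # 0.8
```

Because dry-dry cells are excluded, CSI does not reward a model for the large unflooded area around the floodplain, which makes it a common choice for comparing inundation models.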

  19. Self organising maps for visualising and modelling

    PubMed Central

    2012-01-01

    The paper describes the motivation of SOMs (Self Organising Maps) and how they have become more accessible thanks to the wide availability of modern, more powerful, cost-effective computers. Their advantages compared to Principal Components Analysis and Partial Least Squares are discussed: SOMs can be applied to non-linear data, are less dependent on least squares solutions and on normality of errors, and are less influenced by outliers. In addition there is a wide variety of intuitive methods for visualisation that allow full use of the map space. Modern problems in analytical chemistry, including applications to cultural heritage studies and environmental, metabolomic and biological problems, result in complex datasets. Methods for visualising maps are described, including best matching units, hit histograms, unified distance matrices and component planes. Supervised SOMs for classification, including multifactor data and variable selection, are discussed, as is their use in Quality Control. The paper is illustrated using four case studies: the near infrared spectra of food, the thermal analysis of polymers, metabolomic analysis of saliva using NMR, and on-line HPLC for pharmaceutical process monitoring. PMID:22594434
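    The core of a SOM is a grid of weight vectors, each updated toward every training sample's best matching unit and its map neighbours. A minimal sketch of that update loop, assuming toy random data rather than any of the paper's case-study datasets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 3 variables (placeholder for real measurements)
X = rng.normal(size=(200, 3))

# A 5x5 map of weight vectors (the codebook), randomly initialised
rows, cols, dim = 5, 5, X.shape[1]
weights = rng.normal(size=(rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                            indexing="ij"), axis=-1)

def train(X, weights, epochs=10, lr0=0.5, sigma0=2.0):
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighbourhood
        for x in X:
            # Best matching unit: node whose weight vector is closest to x
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood around the BMU on the map grid
            d2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
    return weights

weights = train(X, weights)
```

    After training, visualisations such as hit histograms count how many samples map to each node, and U-matrices show distances between neighbouring weight vectors.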

  20. Introgression of a Block of Genome Under Infinitesimal Selection.

    PubMed

    Sachdeva, Himani; Barton, Nicholas H

    2018-06-12

    Adaptive introgression is common in nature and can be driven by selection acting on multiple, linked genes. We explore the effects of polygenic selection on introgression under the infinitesimal model with linkage. This model assumes that the introgressing block has an effectively infinite number of loci, each with an infinitesimal effect on the trait under selection. The block is assumed to introgress under directional selection within a native population that is genetically homogeneous. We use individual-based simulations and a branching process approximation to compute various statistics of the introgressing block, and explore how these depend on parameters such as the map length and initial trait value associated with the introgressing block, the genetic variability along the block, and the strength of selection. Our results show that the introgression dynamics of a block under infinitesimal selection are qualitatively different from the dynamics of neutral introgression. We also find that in the long run, surviving descendant blocks are likely to have intermediate lengths, and clarify how their length is shaped by the interplay between linkage and infinitesimal selection. Our results suggest that it may be difficult to distinguish the long-term introgression of a block of genome with a single strongly selected locus from the introgression of a block with multiple, tightly linked and weakly selected loci. Copyright © 2018, Genetics.

  1. Big data integration for regional hydrostratigraphic mapping

    NASA Astrophysics Data System (ADS)

    Friedel, M. J.

    2013-12-01

    Numerical models provide a way to evaluate groundwater systems, but determining the hydrostratigraphic units (HSUs) used in devising these models remains subjective, nonunique, and uncertain. A novel geophysical-hydrogeologic data integration scheme is proposed to constrain the estimation of continuous HSUs. First, machine-learning and multivariate statistical techniques are used to simultaneously integrate borehole hydrogeologic (lithology, hydraulic conductivity, aqueous field parameters, dissolved constituents) and geophysical (gamma, spontaneous potential, and resistivity) measurements. Second, airborne electromagnetic measurements are numerically inverted to obtain subsurface resistivity structure at randomly selected locations. Third, the machine-learning algorithm is trained using the borehole hydrostratigraphic units and inverted airborne resistivity profiles. The trained machine-learning algorithm is then used to estimate HSUs at independent resistivity profile locations. We demonstrate efficacy of the proposed approach to map the hydrostratigraphy of a heterogeneous surficial aquifer in northwestern Nebraska.

  2. Accurate Mapping of Multilevel Rydberg Atoms on Interacting Spin-1 /2 Particles for the Quantum Simulation of Ising Models

    NASA Astrophysics Data System (ADS)

    de Léséleuc, Sylvain; Weber, Sebastian; Lienhard, Vincent; Barredo, Daniel; Büchler, Hans Peter; Lahaye, Thierry; Browaeys, Antoine

    2018-03-01

    We study a system of atoms that are laser driven to n D3 /2 Rydberg states and assess how accurately they can be mapped onto spin-1 /2 particles for the quantum simulation of anisotropic Ising magnets. Using nonperturbative calculations of the pair potentials between two atoms in the presence of electric and magnetic fields, we emphasize the importance of a careful selection of experimental parameters in order to maintain the Rydberg blockade and avoid excitation of unwanted Rydberg states. We benchmark these theoretical observations against experiments using two atoms. Finally, we show that in these conditions, the experimental dynamics observed after a quench is in good agreement with numerical simulations of spin-1 /2 Ising models in systems with up to 49 spins, for which numerical simulations become intractable.

  3. The efficacy of the 'mind map' study technique.

    PubMed

    Farrand, Paul; Hussain, Fearzana; Hennessy, Enid

    2002-05-01

    To examine the effectiveness of using the 'mind map' study technique to improve factual recall from written information. To obtain baseline data, subjects completed a short test based on a 600-word passage of text prior to being randomly allocated to two groups: 'self-selected study technique' and 'mind map'. After a 30-minute interval, the self-selected study technique group were exposed to the same passage of text previously seen and told to apply their existing study techniques. Subjects in the mind map group were trained in the mind map technique and told to apply it to the passage of text. Recall was measured after an interfering task and again a week later, and measures of motivation were taken. The setting was Barts and the London School of Medicine and Dentistry, University of London; the participants were 50 second- and third-year medical students. Recall of factual material improved for both the mind map and self-selected study technique groups at immediate test compared with baseline. However, this improvement was only robust after a week for those in the mind map group. At 1 week, factual knowledge in the mind map group was greater by 10% (adjusting for baseline; 95% CI -1% to 22%). However, motivation for the technique used was lower in the mind map group; had motivation been equal in the two groups, the improvement with mind mapping would have been 15% (95% CI 3% to 27%). Mind maps provide an effective study technique when applied to written material. However, before mind maps are generally adopted as a study technique, consideration has to be given to ways of improving motivation amongst users.

  4. Predicting habitat suitability for wildlife in southeastern Arizona using Geographic Information Systems: scaled quail, a case study

    Treesearch

    Kirby D. Bristow; Susan R. Boe; Richard A. Ockenfels

    2005-01-01

    Studies have used Geographic Information Systems (GIS) to evaluate habitat suitability for wildlife on a landscape scale, yet few have established the accuracy of these models. Based on documented habitat selection patterns of scaled quail (Callipepla squamata pallida), we produced GIS covers for several habitat parameters to create a map of...

  5. A comparison of the performance of threshold criteria for binary classification in terms of predicted prevalence and Kappa

    Treesearch

    Elizabeth A. Freeman; Gretchen G. Moisen

    2008-01-01

    Modelling techniques used in binary classification problems often result in a predicted probability surface, which is then translated into a presence - absence classification map. However, this translation requires a (possibly subjective) choice of threshold above which the variable of interest is predicted to be present. The selection of this threshold value can have...
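    A common way to make this threshold choice objective is to scan candidate cut-offs and keep the one maximising a criterion such as Cohen's kappa. A hedged sketch of that idea (the probabilities below are toy values, not the study's data):

```python
import numpy as np

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa for binary presence/absence predictions."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    po = np.mean(y_true == y_pred)                      # observed agreement
    p1 = np.mean(y_true) * np.mean(y_pred)              # chance agreement on presences
    p0 = (1 - np.mean(y_true)) * (1 - np.mean(y_pred))  # chance agreement on absences
    pe = p1 + p0
    return (po - pe) / (1 - pe)

def best_threshold(y_true, prob, candidates=np.linspace(0.05, 0.95, 19)):
    """Pick the cut-off that maximises kappa over a grid of candidates."""
    scores = [cohen_kappa(y_true, prob >= t) for t in candidates]
    return candidates[int(np.argmax(scores))], max(scores)

y = np.array([0, 0, 0, 1, 1, 1, 0, 1])
p = np.array([0.1, 0.3, 0.45, 0.55, 0.7, 0.9, 0.2, 0.6])
t, k = best_threshold(y, p)
print(t, k)
```

    Other criteria (e.g. matching the predicted prevalence to the observed prevalence) slot into the same scan by swapping out the scoring function.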

  6. Selective 2'-hydroxyl acylation analyzed by primer extension and mutational profiling (SHAPE-MaP) for direct, versatile and accurate RNA structure analysis.

    PubMed

    Smola, Matthew J; Rice, Greggory M; Busan, Steven; Siegfried, Nathan A; Weeks, Kevin M

    2015-11-01

    Selective 2'-hydroxyl acylation analyzed by primer extension (SHAPE) chemistries exploit small electrophilic reagents that react with 2'-hydroxyl groups to interrogate RNA structure at single-nucleotide resolution. Mutational profiling (MaP) identifies modified residues by using reverse transcriptase to misread a SHAPE-modified nucleotide and then counting the resulting mutations by massively parallel sequencing. The SHAPE-MaP approach measures the structure of large and transcriptome-wide systems as accurately as can be done for simple model RNAs. This protocol describes the experimental steps, implemented over 3 d, that are required to perform SHAPE probing and to construct multiplexed SHAPE-MaP libraries suitable for deep sequencing. Automated processing of MaP sequencing data is accomplished using two software packages. ShapeMapper converts raw sequencing files into mutational profiles, creates SHAPE reactivity plots and provides useful troubleshooting information. SuperFold uses these data to model RNA secondary structures, identify regions with well-defined structures and visualize probable and alternative helices, often in under 1 d. SHAPE-MaP can be used to make nucleotide-resolution biophysical measurements of individual RNA motifs, rare components of complex RNA ensembles and entire transcriptomes.
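    At the heart of the MaP analysis is converting per-nucleotide mutation rates into reactivities by background subtraction. The sketch below uses made-up mutation rates and the commonly cited (modified - untreated) / denatured form; the full ShapeMapper pipeline adds further normalisation and quality filtering:

```python
import numpy as np

# Hypothetical per-nucleotide mutation rates from the three SHAPE-MaP samples
modified = np.array([0.020, 0.004, 0.031, 0.006])   # reagent-treated
untreated = np.array([0.003, 0.002, 0.004, 0.003])  # no-reagent background
denatured = np.array([0.050, 0.045, 0.060, 0.040])  # fully denatured control

# Background-subtracted, denatured-normalised reactivity per nucleotide;
# high values indicate flexible (likely unpaired) positions
reactivity = (modified - untreated) / denatured
print(reactivity)
```

    Reactivity profiles like this are what SuperFold takes as pseudo-free-energy restraints when modelling secondary structure.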

  7. Individual predictions of eye-movements with dynamic scenes

    NASA Astrophysics Data System (ADS)

    Barth, Erhardt; Drewes, Jan; Martinetz, Thomas

    2003-06-01

    We present a model that predicts saccadic eye-movements and can be tuned to a particular human observer who is viewing a dynamic sequence of images. Our work is motivated by applications that involve gaze-contingent interactive displays on which information is displayed as a function of gaze direction. The approach therefore differs from standard approaches in two ways: (1) we deal with dynamic scenes, and (2) we provide means of adapting the model to a particular observer. As an indicator for the degree of saliency we evaluate the intrinsic dimension of the image sequence within a geometric approach implemented by using the structure tensor. Out of these candidate saliency-based locations, the currently attended location is selected according to a strategy found by supervised learning. The data are obtained with an eye-tracker and subjects who view video sequences. The selection algorithm receives candidate locations of current and past frames and a limited history of locations attended in the past. We use a linear mapping that is obtained by minimizing the quadratic difference between the predicted and the actually attended location by gradient descent. Being linear, the learned mapping can be quickly adapted to the individual observer.
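    The learned linear mapping, obtained by minimising the squared error between predicted and attended locations by gradient descent, can be sketched as follows (the feature dimensions and data are synthetic placeholders, not the paper's eye-tracking recordings):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the feature vector built from candidate locations
# of current and past frames plus a short history of attended locations
n, d = 500, 12
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, 2))                # unknown map to (x, y) gaze position
Y = X @ W_true + 0.01 * rng.normal(size=(n, 2))

# Learn the linear mapping by gradient descent on the mean squared error
W = np.zeros((d, 2))
lr = 0.05
for _ in range(500):
    grad = X.T @ (X @ W - Y) / n                # gradient of 0.5 * mean ||XW - Y||^2
    W -= lr * grad

mse = np.mean((X @ W - Y) ** 2)
print(mse)
```

    Because the model is linear, refitting W on a new observer's data is cheap, which is what makes per-observer adaptation practical.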

  8. Map the Permafrost and its Affected Soils and Vegetation on the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Zhao, L.; Sheng, Y.; Pang, Q.; Zou, D.; Wang, Z.; Li, W.; Wu, X.; Yue, G.; Fang, H.; Zhao, Y.

    2015-12-01

    A great number of studies have been published dealing with the actual distribution and changes of permafrost on the Tibetan Plateau (TP) on the basis of ground temperature datasets observed along the Qinghai-Xizang Highway and/or Railway (QXH/R) during the last several decades. However, very limited data are available in the eastern part of the QXH/R, and there are almost no observations in its western part, not only of permafrost itself but also of ground surface conditions such as soil and vegetation, which are used as model parameters, initial variables, or benchmark data sets for calibration, validation, and comparison in various Earth System Models (ESMs). To evaluate the status of permafrost and its environmental conditions, such as the distribution and thermal state of permafrost, soil and vegetation on the TP, detailed permafrost investigations were conducted in 5 regions with different climatic and geologic conditions over the whole plateau from 2009 to 2013, and more than 100 ground temperature (GT) monitoring boreholes were drilled and equipped with thermistors, of which 10 sites were also equipped with automatic meteorological stations. Geophysical prospecting methods, such as ground penetrating radar (GPR) and electromagnetic prospecting, were used at the same time to detect permafrost distribution and thickness. The monitoring data revealed that the thermal state of permafrost was well correlated with elevation, and regulated by annual precipitation and local geological, geomorphological and hydrological conditions through heat exchanges between the ground and the atmosphere. Different models, including a GT statistical model, the Common Land Surface Model (CoLM), the Noah land surface model and TTOP models, were used to map permafrost in the 5 selected regions and over the whole TP, while the investigated and monitored data were used for calibration and validation of all models. Finally, we compiled the permafrost map of the TP and the soil and vegetation maps within the permafrost regions of the TP. We also compiled the soil organic carbon density map of permafrost-affected soils on the TP. An overview of permafrost thickness, GTs and ice content was statistically summarized based on the investigation data.

  9. Stable learning of functional maps in self-organizing spiking neural networks with continuous synaptic plasticity

    PubMed Central

    Srinivasa, Narayan; Jiang, Qin

    2013-01-01

    This study describes a spiking model that self-organizes for stable formation and maintenance of orientation and ocular dominance maps in the visual cortex (V1). This self-organization process simulates three development phases: an early experience-independent phase, a late experience-independent phase and a subsequent refinement phase during which experience acts to shape the map properties. The ocular dominance maps that emerge accommodate the two sets of monocular inputs that arise from the lateral geniculate nucleus (LGN) to layer 4 of V1. The orientation selectivity maps that emerge feature well-developed iso-orientation domains and fractures. During the last two phases of development, the orientation preferences at some locations appear to rotate continuously through ±180° along circular paths and are referred to as pinwheel-like patterns, but without any corresponding point discontinuities in the orientation gradient maps. The formation of these functional maps is driven by balanced excitatory and inhibitory currents that are established via synaptic plasticity based on spike timing for both excitatory and inhibitory synapses. The stability and maintenance of the formed maps under continuous synaptic plasticity is enabled by homeostasis caused by inhibitory plasticity. However, a prolonged exposure to repeated stimuli does alter the formed maps over time due to plasticity. The results from this study suggest that continuous synaptic plasticity in both excitatory neurons and interneurons could play a critical role in the formation, stability, and maintenance of functional maps in the cortex. PMID:23450808

  10. Probability of detecting atrazine/desethyl-atrazine and elevated concentrations of nitrite plus nitrate as nitrogen in ground water in the Idaho part of the western Snake River Plain

    USGS Publications Warehouse

    Donato, Mary M.

    2000-01-01

    As ground water continues to provide an ever-growing proportion of Idaho's drinking water, concerns about the quality of that resource are increasing. Pesticides (most commonly, atrazine/desethyl-atrazine, hereafter referred to as atrazine) and nitrite plus nitrate as nitrogen (hereafter referred to as nitrate) have been detected in many aquifers in the State. To provide a sound hydrogeologic basis for atrazine and nitrate management in southern Idaho, the largest region of land and water use in the State, the U.S. Geological Survey produced maps showing the probability of detecting these contaminants in ground water in the upper Snake River Basin (published in a 1998 report) and the western Snake River Plain (published in this report). The atrazine probability map for the western Snake River Plain was constructed by overlaying ground-water quality data with hydrogeologic and anthropogenic data in a geographic information system (GIS). A data set was produced in which each well had corresponding information on land use, geology, precipitation, soil characteristics, regional depth to ground water, well depth, water level, and atrazine use. These data were analyzed by logistic regression using a statistical software package. Several preliminary multivariate models were developed and those that best predicted the detection of atrazine were selected. The multivariate models then were entered into a GIS and the probability maps were produced. Land use, precipitation, soil hydrologic group, and well depth were significantly correlated with atrazine detections in the western Snake River Plain. These variables also were important in the 1998 probability study of the upper Snake River Basin. The effectiveness of the probability models for atrazine might be improved if more detailed data were available for atrazine application. A preliminary atrazine probability map for the entire Snake River Plain in Idaho, based on a data set representing that region, also was produced. 
In areas where this map overlaps the 1998 map of the upper Snake River Basin, the two maps show broadly similar probabilities of detecting atrazine. Logistic regression also was used to develop a preliminary statistical model that predicts the probability of detecting elevated nitrate in the western Snake River Plain. A nitrate probability map was produced from this model. Results showed that elevated nitrate concentrations were correlated with land use, soil organic content, well depth, and water level. Detailed information on nitrate input, specifically fertilizer application, might have improved the effectiveness of this model.
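    The logistic-regression step that turns well attributes into a detection probability can be made concrete in a few lines. The predictors, coefficients, and Newton/IRLS fitting below are synthetic stand-ins for the GIS-derived variables used in the report:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical standardised well attributes (stand-ins for land use,
# precipitation, soil hydrologic group, well depth); 1 = atrazine detected
n = 400
X = rng.normal(size=(n, 4))
true_beta = np.array([1.2, 0.8, -0.5, -1.0])   # assumed effects, toy data only
p_true = 1 / (1 + np.exp(-(X @ true_beta - 0.3)))
y = (rng.uniform(size=n) < p_true).astype(float)

# Fit logistic regression by Newton's method (IRLS)
Xb = np.column_stack([np.ones(n), X])          # add an intercept column
beta = np.zeros(5)
for _ in range(25):
    p = 1 / (1 + np.exp(-(Xb @ beta)))
    w = p * (1 - p)                            # IRLS weights
    H = Xb.T @ (Xb * w[:, None]) + 1e-8 * np.eye(5)
    beta += np.linalg.solve(H, Xb.T @ (y - p))

def predict(X_new):
    """Detection probability for new locations (rows of predictor values)."""
    Z = np.column_stack([np.ones(len(X_new)), X_new])
    return 1 / (1 + np.exp(-(Z @ beta)))

probs = predict(X)
```

    Evaluating predict() over a georeferenced grid of predictor values is what yields the probability map in a GIS.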

  11. Spontaneously emerging direction selectivity maps in visual cortex through STDP.

    PubMed

    Wenisch, Oliver G; Noll, Joachim; Hemmen, J Leo van

    2005-10-01

    It is still an open question whether, and how, direction-selective neuronal responses in primary visual cortex are generated by feedforward thalamocortical or recurrent intracortical connections, or a combination of both. Here we present an investigation that concentrates on and, only for the sake of simplicity, restricts itself to intracortical circuits, in particular with respect to the developmental aspects of direction selectivity through spike-timing-dependent synaptic plasticity. We show that directional responses can emerge in a recurrent network model of visual cortex with spiking neurons that integrate inputs mainly from a particular direction, thus giving rise to an asymmetrically shaped receptive field. A moving stimulus that enters the receptive field from this (preferred) direction will activate a neuron most strongly because of the increased number and/or strength of inputs from this direction and because delayed isotropic inhibition will neither overlap with, nor cancel, excitation, as would be the case for other stimulus directions. It is demonstrated how direction-selective responses result from spatial asymmetries in the distribution of synaptic contacts or weights of inputs delivered to a neuron by slowly conducting intracortical axonal delay lines. By means of spike-timing-dependent synaptic plasticity with an asymmetric learning window, this kind of coupling asymmetry develops naturally in a recurrent network of stochastically spiking neurons in a scenario where the neurons are activated by unidirectionally moving bar stimuli, and even when only intrinsic spontaneous activity drives the learning process. We also present simulation results to show the ability of this model to produce direction preference maps similar to experimental findings.
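    The asymmetric learning window invoked here is typically modelled with two exponentials: pre-before-post spike pairs potentiate a synapse, post-before-pre pairs depress it. A sketch with illustrative amplitude and time-constant values (not parameters taken from the paper):

```python
import numpy as np

def stdp_window(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).

    Illustrative amplitudes and time constant: pre-before-post (dt > 0)
    potentiates, post-before-pre (dt < 0) depresses.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

print(stdp_window(5.0))    # small potentiation
print(stdp_window(-5.0))   # small depression
```

    With a_minus slightly larger than a_plus the window is net-depressing for uncorrelated spiking, which keeps weights bounded while correlated delay-line inputs are selectively strengthened.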

  12. Genome-wide association mapping identifies multiple loci for a canine SLE-related disease complex.

    PubMed

    Wilbe, Maria; Jokinen, Päivi; Truvé, Katarina; Seppala, Eija H; Karlsson, Elinor K; Biagi, Tara; Hughes, Angela; Bannasch, Danika; Andersson, Göran; Hansson-Hamlin, Helene; Lohi, Hannes; Lindblad-Toh, Kerstin

    2010-03-01

    The unique canine breed structure makes dogs an excellent model for studying genetic diseases. Within a dog breed, linkage disequilibrium is extensive, enabling genome-wide association (GWA) with only around 15,000 SNPs and fewer individuals than in human studies. Incidences of specific diseases are elevated in different breeds, indicating that a few genetic risk factors might have accumulated through drift or selective breeding. In this study, a GWA study with 81 affected dogs (cases) and 57 controls from the Nova Scotia duck tolling retriever breed identified five loci associated with a canine systemic lupus erythematosus (SLE)-related disease complex that includes both antinuclear antibody (ANA)-positive immune-mediated rheumatic disease (IMRD) and steroid-responsive meningitis-arteritis (SRMA). Fine mapping with twice as many dogs validated these loci. Our results indicate that the homogeneity of strong genetic risk factors within dog breeds allows multigenic disorders to be mapped with fewer than 100 cases and 100 controls, making dogs an excellent model in which to identify pathways involved in human complex diseases.

  13. Late emergence of the vibrissa direction selectivity map in the rat barrel cortex.

    PubMed

    Kremer, Yves; Léger, Jean-François; Goodman, Dan; Brette, Romain; Bourdieu, Laurent

    2011-07-20

    In the neocortex, neuronal selectivities for multiple sensorimotor modalities are often distributed in topographical maps thought to emerge during a restricted period in early postnatal development. Rodent barrel cortex contains a somatotopic map for vibrissa identity, but the existence of maps representing other tactile features has not been clearly demonstrated. We addressed the issue of the existence in the rat cortex of an intrabarrel map for vibrissa movement direction using in vivo two-photon imaging. We discovered that the emergence of a direction map in rat barrel cortex occurs long after all known critical periods in the somatosensory system. This map is remarkably specific, taking a pinwheel-like form centered near the barrel center and aligned to the barrel cortex somatotopy. We suggest that this map may arise from intracortical mechanisms and demonstrate by simulation that the combination of spike-timing-dependent plasticity at synapses between layer 4 and layer 2/3 and realistic pad stimulation is sufficient to produce such a map. Its late emergence long after other classical maps suggests that experience-dependent map formation and refinement continue throughout adult life.

  14. Chromosome segregation drives division site selection in Streptococcus pneumoniae.

    PubMed

    van Raaphorst, Renske; Kjos, Morten; Veening, Jan-Willem

    2017-07-18

    Accurate spatial and temporal positioning of the tubulin-like protein FtsZ is key for proper bacterial cell division. Streptococcus pneumoniae (pneumococcus) is an oval-shaped, symmetrically dividing opportunistic human pathogen lacking the canonical systems for division site control (nucleoid occlusion and the Min-system). Recently, the early division protein MapZ was identified and implicated in pneumococcal division site selection. We show that MapZ is important for proper division plane selection; thus, the question remains as to what drives pneumococcal division site selection. By mapping the cell cycle in detail, we show that directly after replication both chromosomal origin regions localize to the future cell division sites, before FtsZ. Interestingly, Z-ring formation occurs coincidently with initiation of DNA replication. Perturbing the longitudinal chromosomal organization by mutating the condensin SMC, by CRISPR/Cas9-mediated chromosome cutting, or by poisoning DNA decatenation resulted in mistiming of MapZ and FtsZ positioning and subsequent cell elongation. Together, we demonstrate an intimate relationship between DNA replication, chromosome segregation, and division site selection in the pneumococcus, providing a simple way to ensure equally sized daughter cells.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Bo; Abdelaziz, Omar; Shrestha, Som S.

    Based on the FY16 laboratory investigation of R-22 and R-410A alternative low-GWP refrigerants in two baseline rooftop air conditioners (RTUs), we used the DOE/ORNL Heat Pump Design Model to model the two RTUs and calibrated the models against the experimental data. Using the calibrated equipment models, we compared the compressor efficiencies and heat exchanger performances. An efficiency-based compressor mapping method was developed, which is able to predict compressor performances of the alternative low-GWP refrigerants accurately. Extensive model-based optimizations were conducted to provide a fair comparison between all the low-GWP candidates by selecting their preferred configurations at the same cooling capacity and compressor efficiencies.

  16. Development of Maximum Considered Earthquake Ground Motion Maps

    USGS Publications Warehouse

    Leyendecker, E.V.; Hunt, R.J.; Frankel, A.D.; Rukstales, K.S.

    2000-01-01

    The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings use a design procedure that is based on spectral response acceleration rather than the traditional peak ground acceleration, peak ground velocity, or zone factors. The spectral response accelerations are obtained from maps prepared following the recommendations of the Building Seismic Safety Council's (BSSC) Seismic Design Procedures Group (SDPG). The SDPG-recommended maps, the Maximum Considered Earthquake (MCE) Ground Motion Maps, are based on the U.S. Geological Survey (USGS) probabilistic hazard maps with additional modifications incorporating deterministic ground motions in selected areas and the application of engineering judgement. The MCE ground motion maps included with the 1997 NEHRP Provisions also serve as the basis for the ground motion maps used in the seismic design portions of the 2000 International Building Code and the 2000 International Residential Code. Additionally the design maps prepared for the 1997 NEHRP Provisions, combined with selected USGS probabilistic maps, are used with the 1997 NEHRP Guidelines for the Seismic Rehabilitation of Buildings.

  17. Evaluation of different machine learning models for predicting and mapping the susceptibility of gully erosion

    NASA Astrophysics Data System (ADS)

    Rahmati, Omid; Tahmasebipour, Nasser; Haghizadeh, Ali; Pourghasemi, Hamid Reza; Feizizadeh, Bakhtiar

    2017-12-01

    Gully erosion constitutes a serious problem for land degradation in a wide range of environments. The main objective of this research was to compare the performance of seven state-of-the-art machine learning models (SVM with four kernel types, BP-ANN, RF, and BRT) to model the occurrence of gully erosion in the Kashkan-Poldokhtar Watershed, Iran. In the first step, a gully inventory map consisting of 65 gully polygons was prepared through field surveys. Three different sample data sets (S1, S2, and S3), including both positive and negative cells (70% for training and 30% for validation), were randomly prepared to evaluate the robustness of the models. To model the gully erosion susceptibility, 12 geo-environmental factors were selected as predictors. Finally, the goodness-of-fit and prediction skill of the models were evaluated by different criteria, including efficiency percent, kappa coefficient, and the area under the ROC curves (AUC). In terms of accuracy, the RF, RBF-SVM, BRT, and P-SVM models performed excellently both in the degree of fitting and in predictive performance (AUC values well above 0.9), which resulted in accurate predictions. Therefore, these models can be used in other gully erosion studies, as they are capable of rapidly producing accurate and robust gully erosion susceptibility maps (GESMs) for decision-making and soil and water management practices. Furthermore, it was found that performance of RF and RBF-SVM for modelling gully erosion occurrence is quite stable when the learning and validation samples are changed.
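    Two of the evaluation ingredients, the random 70/30 train/validation split and the area under the ROC curve, are easy to make concrete. The sketch below computes AUC from the rank-sum (Mann-Whitney) statistic on toy scores; the study's actual models (SVM, BP-ANN, RF, BRT) are not reimplemented here:

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    y = np.asarray(y_true)
    s = np.asarray(scores, dtype=float)
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n_pos = int(y.sum())
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Random 70/30 split of cells into training and validation, as in the study
rng = np.random.default_rng(3)
n = 200
idx = rng.permutation(n)
train_idx, valid_idx = idx[:int(0.7 * n)], idx[int(0.7 * n):]

# One misranked pair out of four lowers AUC from 1.0 to 0.75
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

    Repeating the split (the study's S1, S2, S3 samples) and re-scoring checks whether model ranking is stable under resampling.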

  18. Thermal Texture Selection and Correction for Building Facade Inspection Based on Thermal Radiant Characteristics

    NASA Astrophysics Data System (ADS)

    Lin, D.; Jarzabek-Rychard, M.; Schneider, D.; Maas, H.-G.

    2018-05-01

    An automatic building façade thermal texture mapping approach using uncooled thermal camera data is proposed in this paper. First, a shutter-less radiometric thermal camera calibration method is implemented to remove the large offset deviations caused by the changing ambient environment. Then, a 3D façade model is generated from an RGB image sequence using structure-from-motion (SfM) techniques. Subsequently, for each triangle in the 3D model, the optimal texture is selected by taking into consideration local image scale, object incident angle, image viewing angle and occlusions. Afterwards, the selected textures can be further corrected using thermal radiant characteristics. Finally, a Gauss filter outperforms the voted-texture strategy at smoothing seams, thus helping, for instance, to reduce the false alarm rate in façade thermal leakage detection. Our approach is evaluated on a building row façade located in Dresden, Germany.

  19. Assortative Mating: Encounter-Network Topology and the Evolution of Attractiveness

    PubMed Central

    Dipple, S.; Jia, T.; Caraco, T.; Korniss, G.; Szymanski, B. K.

    2017-01-01

    We model a social-encounter network where linked nodes match for reproduction in a manner depending probabilistically on each node’s attractiveness. The developed model reveals that increasing either the network’s mean degree or the “choosiness” exercised during pair formation increases the strength of positive assortative mating. That is, we note that attractiveness is correlated among mated nodes. Their total number also increases with mean degree and selectivity during pair formation. By iterating over the model’s mapping of parents onto offspring across generations, we study the evolution of attractiveness. Selection mediated by exclusion from reproduction increases mean attractiveness, but is rapidly balanced by skew in the offspring distribution of highly attractive mated pairs. PMID:28345625
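    One ingredient of the model, exclusion from reproduction raising mean attractiveness among mated nodes, can be illustrated with a toy simulation. The encounter network and acceptance rule below are simplified placeholders, not the authors' exact model:

```python
import numpy as np

rng = np.random.default_rng(7)

# A population of nodes with uniformly distributed attractiveness
n, mean_degree = 2000, 8
attract = rng.uniform(size=n)

# Random encounters: each edge links two random nodes (a toy stand-in
# for the encounter network; multi-edges and self-loops are ignored)
edges = rng.integers(0, n, size=(n * mean_degree // 2, 2))

# An encounter becomes a mating only if each partner accepts, with
# acceptance probability equal to the other's attractiveness ("choosiness")
accept = (rng.uniform(size=len(edges)) < attract[edges[:, 1]]) & \
         (rng.uniform(size=len(edges)) < attract[edges[:, 0]])
mated = edges[accept]

# Selection via exclusion: mated nodes are more attractive on average
print(attract.mean(), attract[mated].mean())
```

    Iterating a mapping from mated pairs to offspring attractiveness on top of this machinery is what lets the full model track the evolution of attractiveness across generations.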

  20. Atlas Based Segmentation and Mapping of Organs at Risk from Planning CT for the Development of Voxel-Wise Predictive Models of Toxicity in Prostate Radiotherapy

    NASA Astrophysics Data System (ADS)

    Acosta, Oscar; Dowling, Jason; Cazoulat, Guillaume; Simon, Antoine; Salvado, Olivier; de Crevoisier, Renaud; Haigron, Pascal

    The prediction of toxicity is crucial to managing prostate cancer radiotherapy (RT). This prediction is classically organ-wise and based on the dose volume histograms (DVH) computed during the planning step, using for example the mathematical Lyman Normal Tissue Complication Probability (NTCP) model. However, these models lack spatial accuracy, do not take deformations into account, and may be inappropriate for explaining toxicity events related to the spatial distribution of the delivered dose. Producing voxel-wise statistical models of toxicity might help to explain the risks linked to the dose spatial distribution, but it is challenging due to the difficulties of mapping organs and dose onto a common template. In this paper we investigate the use of atlas-based methods to perform the non-rigid mapping and segmentation of the individuals' organs at risk (OAR) from CT scans. To build a labeled atlas, 19 CT scans were selected from a population of patients treated for prostate cancer by radiotherapy. The prostate and the OAR (rectum, bladder, bones) were then manually delineated by an expert and constituted the training data. After a number of affine and non-rigid registration iterations, an average image (template) representing the whole population was obtained. The amount of consensus between labels was used to generate probabilistic maps for each organ. We validated the accuracy of the approach by segmenting the organs using the training data in a leave-one-out scheme. The agreement between the volumes after deformable registration and the manually segmented organs was on average above 60% for the organs at risk. The proposed methodology provides a way to map the organs of a whole population onto a single template and sets the stage for further voxel-wise analysis. With this method, new and accurate predictive models of toxicity can be built.
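    The consensus step, turning a stack of registered atlas labels into per-organ probabilistic maps, can be sketched as follows (random toy labels on a small grid stand in for the 19 delineated, registered CT scans):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical stack of 19 registered binary label maps for one organ
# on a tiny 8x8 grid (1 = voxel labelled as the organ by that atlas)
labels = rng.integers(0, 2, size=(19, 8, 8))

# Probabilistic map: per-voxel fraction of atlases agreeing on the organ
prob_map = labels.mean(axis=0)

# Consensus segmentation by majority vote over the atlases
consensus = prob_map > 0.5

def dice(a, b):
    """Dice overlap between two binary segmentations."""
    a = a.astype(bool)
    b = b.astype(bool)
    return 2 * np.sum(a & b) / (np.sum(a) + np.sum(b))

# Agreement of the consensus with any single atlas can then be scored
print(dice(consensus, labels[0] > 0))
```

    In practice the same voting is done per organ on full 3D volumes, and overlap scores such as Dice quantify the agreement reported in the leave-one-out validation.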
