Sample records for mapping model ATAMM

  1. ATAMM enhancement and multiprocessor performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamoy; Obando, Rodrigo; Malekpour, Mahyar R.; Jones, Robert L., III; Mandala, Brij Mohan V.

    1991-01-01

    ATAMM (Algorithm To Architecture Mapping Model) enhancement and multiprocessor performance evaluation is discussed. The following topics are included: the ATAMM model; ATAMM enhancement; ADM (Advanced Development Model) implementation of ATAMM; and ATAMM support tools.

  2. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    NASA Technical Reports Server (NTRS)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large-grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.
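
    To make the firing discipline concrete, here is a minimal Python sketch of the marked-graph (decision-free Petri net) rule that ATAMM-style operating systems such as AMOS enforce; the class and function names are illustrative, not taken from AMOS itself.

      # Minimal marked-graph firing rule: a node is enabled when every input
      # edge holds a token; firing consumes one token per input edge and
      # produces one per output edge. Names are illustrative, not AMOS code.
      class Node:
          def __init__(self, name, inputs, outputs):
              self.name = name
              self.inputs = inputs      # edges this node consumes tokens from
              self.outputs = outputs    # edges this node places tokens on

      def can_fire(node, marking):
          return all(marking[e] > 0 for e in node.inputs)

      def fire(node, marking):
          for e in node.inputs:
              marking[e] -= 1
          for e in node.outputs:
              marking[e] += 1

      # Example: a two-node pipeline A -> B with one token on the source edge.
      a = Node("A", inputs=["in"], outputs=["ab"])
      b = Node("B", inputs=["ab"], outputs=["out"])
      marking = {"in": 1, "ab": 0, "out": 0}
      if can_fire(a, marking):
          fire(a, marking)              # marking: {"in": 0, "ab": 1, "out": 0}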

  3. Simulator for concurrent processing data flow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.; Stoughton, John W.; Mielke, Roland R.

    1992-01-01

    A software simulator capable of simulating the execution of an algorithm graph on a given system under the Algorithm to Architecture Mapping Model (ATAMM) rules is presented. ATAMM is capable of modeling the execution of large-grained algorithms on distributed data flow architectures. Investigating the behavior and determining the performance of an ATAMM-based system requires the aid of software tools. The ATAMM Simulator presented is capable of determining the performance of a system without having to build a hardware prototype. Case studies are performed on four algorithms to demonstrate the capabilities of the ATAMM Simulator. Simulated results are shown to be comparable to the experimental results of the Advanced Development Model System.

  4. Simulator for heterogeneous dataflow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    1993-01-01

    A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable, time-optimized steady-state performance. This simulator extends the ATAMM simulation capability from a homogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, the special case with only one processor type. The simulator forms one tool in an ATAMM Integrated Environment which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.
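
    As a rough illustration of the heterogeneous extension described above, the sketch below assigns a node to a free processor of its required type; a homogeneous architecture is the special case with one type. The identifiers are hypothetical, not taken from the simulator itself.

      # Hypothetical resource pool for a heterogeneous architecture: a node may
      # start only when a free unit of its required processor type exists.
      free_pool = {"dsp": 2, "cpu": 1}          # free units per processor type

      def try_assign(proc_type, pool):
          # Grant a unit of the required type if one is free; else the node waits.
          if pool.get(proc_type, 0) > 0:
              pool[proc_type] -= 1
              return True
          return False

      def release(proc_type, pool):
          pool[proc_type] = pool.get(proc_type, 0) + 1

      assert try_assign("dsp", free_pool)       # node starts on a DSP unit
      release("dsp", free_pool)                 # unit returns when the node ends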

  5. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamoy

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  6. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  7. ATAMM enhancement and multiprocessing performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.

    1994-01-01

    The Algorithm to Architecture Mapping Model (ATAMM) is a Petri net based model which provides a strategy for periodic execution of a class of real-time algorithms on multicomputer dataflow architectures. The execution of large-grained, decision-free algorithms on homogeneous processing elements is studied. ATAMM provides an analytical basis for calculating performance bounds on throughput characteristics. Extension of ATAMM as a strategy for cyclo-static scheduling provides for a truly distributed ATAMM multicomputer operating system. An ATAMM testbed consisting of a centralized graph manager and three processors, implemented with embedded firmware on 68HC11 microcontrollers, is described.

  8. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    Research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  9. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a spatially distributed computer environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  10. A Performance Prediction Model for a Fault-Tolerant Computer During Recovery and Restoration

    NASA Technical Reports Server (NTRS)

    Obando, Rodrigo A.; Stoughton, John W.

    1995-01-01

    The modeling and design of a fault-tolerant multiprocessor system is addressed. Of interest is the behavior of the system during recovery and restoration after a fault has occurred. The multiprocessor systems are based on the Algorithm to Architecture Mapping Model (ATAMM) and the fault considered is the death of a processor. The developed model is useful in the determination of performance bounds of the system during recovery and restoration. The performance bounds include time to recover from the fault, time to restore the system, and determination of any permanent delay in the input to output latency after the system has regained steady state. Implementation of an ATAMM based computer was developed for a four-processor generic VHSIC spaceborne computer (GVSC) as the target system. A simulation of the GVSC was also written on the code used in the ATAMM Multicomputer Operating System (AMOS). The simulation is used to verify the new model for tracking the propagation of the delay through the system and predicting the behavior of the transient state of recovery and restoration. The model is shown to accurately predict the transient behavior of an ATAMM based multicomputer during recovery and restoration.

  11. Resource utilization model for the algorithm to architecture mapping model

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Patel, Rakesh R.

    1993-01-01

    This research presents the analytical model for resource utilization, together with the variable node time and conditional node models, for the enhanced ATAMM model for a real-time data flow architecture. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirements when node times vary, is useful for extending the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of the resource-limited mode and its subsequent prevention. Graphs with conditional nodes are shown to reduce to equivalent graphs with time-varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs to illustrate the applicability of the analytical theories.
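
    As a sketch of what a resource envelope is, the following Python fragment counts how many computing elements are busy at each instant of a node schedule; the peak of this envelope is the maximum number of required resources. The schedule used here is made up for illustration.

      # Resource envelope: number of busy computing elements at each event time.
      def resource_envelope(intervals):
          events = []
          for start, finish in intervals:        # one (start, finish) per node
              events.append((start, +1))
              events.append((finish, -1))
          events.sort()                          # finishes sort before starts at a tie
          busy, envelope = 0, []
          for t, delta in events:
              busy += delta
              envelope.append((t, busy))
          return envelope

      env = resource_envelope([(0, 3), (1, 4), (2, 5)])
      print(max(n for _, n in env))              # 3: the peak resource requirement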

  12. A comparison of multiprocessor scheduling methods for iterative data flow architectures

    NASA Technical Reports Server (NTRS)

    Storch, Matthew

    1993-01-01

    A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.

  13. A Performance Prediction Model for a Fault-Tolerant Computer During Recovery and Restoration. Ph.D. Thesis Report, 1 Jan. - 31 Dec. 1992

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Obando, Rodrigo A.

    1993-01-01

    The modeling and design of a fault-tolerant multiprocessor system is addressed. In particular, the behavior of the system during recovery and restoration after a fault has occurred is investigated. Given that a multicomputer system is designed using the Algorithm to Architecture Mapping Model (ATAMM), and that a fault (death of a computing resource) occurs during its normal steady-state operation, a model is presented as a viable research tool for predicting the performance bounds of the system during its recovery and restoration phases. Furthermore, the bounds of the performance behavior of the system during this transient mode can be assessed. These bounds include the time to recover from the fault (t(sub rec)), the time to restore the system (t(sub res)), and whether there is a permanent delay in the system's Time Between Input and Output (TBIO) after the system has reached a steady state. An implementation of an ATAMM based computer was developed with the Generic VHSIC Spaceborne Computer (GVSC) as the target system. A simulation of the GVSC was also written based on the code used in the ATAMM Multicomputer Operating System (AMOS). The simulation is in turn used to validate the usefulness and accuracy of the new model in tracking the propagation of the delay through the system and predicting the behavior in the transient state of recovery and restoration. The model is validated as an accurate method to predict the transient behavior of an ATAMM based multicomputer during recovery and restoration.

  14. Modeling and optimum time performance for concurrent processing

    NASA Technical Reports Server (NTRS)

    Mielke, Roland R.; Stoughton, John W.; Som, Sukhamoy

    1988-01-01

    The development of a new graph theoretic model for describing the relation between a decomposed algorithm and its execution in a data flow environment is presented. Called ATAMM, the model consists of a set of Petri net marked graphs useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance time measures which determine computing speed and throughput capacity are defined, and the ATAMM model is used to develop lower bounds for these times. A concurrent processing operating strategy for achieving optimum time performance is presented and illustrated by example.
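
    One classical lower bound of this kind, used throughout the marked-graph literature that ATAMM draws on, is that the steady-state iteration period can be no smaller than the maximum over directed circuits of (total node time in the circuit) / (tokens initially in the circuit). A sketch using networkx follows; cycle enumeration is exponential in general, which is fine for small graphs, and the example graph is illustrative only.

      import networkx as nx

      # Edge attribute "tokens" is the initial marking; node times in a dict.
      G = nx.DiGraph()
      G.add_edge("A", "B", tokens=0)
      G.add_edge("B", "A", tokens=1)            # feedback edge carrying one token
      times = {"A": 2.0, "B": 3.0}

      def iteration_period_bound(G, times):
          bound = 0.0
          for cycle in nx.simple_cycles(G):
              t = sum(times[v] for v in cycle)
              tokens = sum(G[u][v]["tokens"]
                           for u, v in zip(cycle, cycle[1:] + cycle[:1]))
              if tokens > 0:
                  bound = max(bound, t / tokens)
          return bound

      print(iteration_period_bound(G, times))   # 5.0 time units per iteration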

  15. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such algorithms are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor with local program memory that communicates with a common global data memory. A new graph theoretic model called ATAMM, which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture, is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.

  16. Mentat/A: Medium grain parallel processing

    NASA Technical Reports Server (NTRS)

    Grimshaw, Andrew S.

    1992-01-01

    The objective of this project is to test the Algorithm to Architecture Mapping Model (ATAMM) firing rules using the Mentat run-time system and the Mentat Programming Language (MPL). A special version of Mentat, Mentat/A (Mentat/ATAMM), was constructed. This required three changes: (1) modifying the run-time system to control queue length and to inhibit actor firing until the required data tokens are available and space is available in the input queues of all direct descendant actors; (2) disallowing the specification of persistent object classes in the MPL; and (3) permitting only decision-free graphs in the MPL. We were successful in implementing the spirit of the plan, although some goals changed as we came to better understand the problem. We report on what we accomplished and the lessons we learned. The Mentat/A run-time system is discussed, and we briefly present the compiler. We present results for three applications and conclude with a summary and some observations. Appendix A contains a list of technical reports and published papers partially supported by the grant. Appendix B contains listings for the three applications.
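
    The modified firing rule in change (1) can be sketched in a few lines of Python; the data layout is hypothetical and only meant to make the condition explicit.

      # An actor may fire only when its own input tokens are present AND every
      # direct descendant still has room in its input queue (change (1) above).
      def can_fire(actor, queues, capacity):
          have_tokens = all(len(queues[q]) > 0 for q in actor["inputs"])
          have_space = all(len(queues[q]) < capacity for q in actor["outputs"])
          return have_tokens and have_space

      queues = {"in": [1], "ab": []}
      actor_a = {"inputs": ["in"], "outputs": ["ab"]}
      print(can_fire(actor_a, queues, capacity=1))   # True: token ready, room free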

  17. Parallel Processing with Digital Signal Processing Hardware and Software

    NASA Technical Reports Server (NTRS)

    Swenson, Cory V.

    1995-01-01

    The assembling and testing of a parallel processing system is described which will allow a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices. The system will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described followed by the installation procedure, research topics, and initial program development.

  18. Using Data-Driven Model-Brain Mappings to Constrain Formal Models of Cognition

    PubMed Central

    Borst, Jelmer P.; Nijboer, Menno; Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John R.

    2015-01-01

    In this paper we propose a method to create data-driven mappings from components of cognitive models to brain regions. Cognitive models are notoriously hard to evaluate, especially based on behavioral measures alone. Neuroimaging data can provide additional constraints, but this requires a mapping from model components to brain regions. Although such mappings can be based on the experience of the modeler or on a reading of the literature, a formal method is preferred to prevent researcher-based biases. In this paper we used model-based fMRI analysis to create a data-driven model-brain mapping for five modules of the ACT-R cognitive architecture. We then validated this mapping by applying it to two new datasets with associated models. The new mapping was at least as powerful as an existing mapping that was based on the literature, and indicated where the models were supported by the data and where they have to be improved. We conclude that data-driven model-brain mappings can provide strong constraints on cognitive models, and that model-based fMRI is a suitable way to create such mappings. PMID:25747601

  19. CrowdMapping: A Crowdsourcing-Based Terminology Mapping Method for Medical Data Standardization.

    PubMed

    Mao, Huajian; Chi, Chenyang; Huang, Boyu; Meng, Haibin; Yu, Jinghui; Zhao, Dongsheng

    2017-01-01

    Standardized terminology is the prerequisite of data exchange in the analysis of clinical processes. However, data from different electronic health record systems are based on idiosyncratic terminology systems, especially when the data come from different hospitals and healthcare organizations. Terminology standardization is necessary for medical data analysis. We propose a crowdsourcing-based terminology mapping method, CrowdMapping, to standardize the terminology in medical data. CrowdMapping uses a confidential model to determine how terminologies are mapped to a standard system, like ICD-10. The model uses mappings from different health care organizations and evaluates the diversity of the mappings to determine a more sophisticated mapping rule. Further, the CrowdMapping model enables users to rate the mapping results and interact with the model evaluation. CrowdMapping is a work-in-progress system; we present initial results of mapping terminologies.

  20. Procedures for adjusting regional regression models of urban-runoff quality using local data

    USGS Publications Warehouse

    Hoos, A.B.; Sisolak, J.K.

    1993-01-01

    Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of the calibration data set. As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
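
    As a sketch of the simplest procedure (the single-factor regression against the regional prediction P), the fragment below fits a line from local observations to regional predictions and uses it to adjust predictions at unmonitored sites; the numbers are invented for illustration.

      import numpy as np

      P = np.array([1.2, 2.0, 3.1, 4.5])        # regional-model predictions
      y = np.array([1.0, 2.4, 2.9, 5.0])        # observed local storm loads
      slope, intercept = np.polyfit(P, y, 1)    # calibration on the local data set

      def adjusted_prediction(p_regional):
          # Locally adjusted estimate for an unmonitored site.
          return intercept + slope * p_regional

      print(adjusted_prediction(2.5))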

  1. Tools for model-building with cryo-EM maps

    DOE PAGES

    Terwilliger, Thomas Charles

    2018-01-01

    There are new tools available to you in Phenix for interpreting cryo-EM maps. You can automatically sharpen (or blur) a map with phenix.auto_sharpen and you can segment a map with phenix.segment_and_split_map. If you have overlapping partial models for a map, you can merge them with phenix.combine_models. If you have a protein-RNA complex and protein chains have been accidentally built in the RNA region, you can try to remove them with phenix.remove_poor_fragments. You can put these together and automatically sharpen, segment and build a map with phenix.map_to_model.

  2. Tools for model-building with cryo-EM maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terwilliger, Thomas Charles

    There are new tools available to you in Phenix for interpreting cryo-EM maps. You can automatically sharpen (or blur) a map with phenix.auto_sharpen and you can segment a map with phenix.segment_and_split_map. If you have overlapping partial models for a map, you can merge them with phenix.combine_models. If you have a protein-RNA complex and protein chains have been accidentally built in the RNA region, you can try to remove them with phenix.remove_poor_fragments. You can put these together and automatically sharpen, segment and build a map with phenix.map_to_model.

  3. Mapping wildland fuels for fire management across multiple scales: integrating remote sensing, GIS, and biophysical modeling

    USGS Publications Warehouse

    Keane, Robert E.; Burgan, Robert E.; Van Wagtendonk, Jan W.

    2001-01-01

    Fuel maps are essential for computing spatial fire hazard and risk and simulating fire growth and intensity across a landscape. However, fuel mapping is an extremely difficult and complex process requiring expertise in remotely sensed image classification, fire behavior, fuels modeling, ecology, and geographical information systems (GIS). This paper first presents the challenges of mapping fuels: canopy concealment, fuelbed complexity, fuel type diversity, fuel variability, and fuel model generalization. Then, four approaches to mapping fuels are discussed with examples provided from the literature: (1) field reconnaissance; (2) direct mapping methods; (3) indirect mapping methods; and (4) gradient modeling. A fuel mapping method is proposed that uses current remote sensing and image processing technology. Future fuel mapping needs are also discussed which include better field data and fuel models, accurate GIS reference layers, improved satellite imagery, and comprehensive ecosystem models.

  4. Reducing the Dynamical Degradation by Bi-Coupling Digital Chaotic Maps

    NASA Astrophysics Data System (ADS)

    Liu, Lingfeng; Liu, Bocheng; Hu, Hanping; Miao, Suoxia

    A chaotic map which is realized on a computer will suffer dynamical degradation. Here, a coupled chaotic model is proposed to reduce the dynamical degradation. In this model, the state variable of one digital chaotic map is used to control the parameter of the other digital map. This coupled model is universal and can be used for all chaotic maps. In this paper, two coupled models (one is coupled by two logistic maps, the other is coupled by Chebyshev map and Baker map) are performed, and the numerical experiments show that the performances of these two coupled chaotic maps are greatly improved. Furthermore, a simple pseudorandom bit generator (PRBG) based on coupled digital logistic maps is proposed as an application for our method.
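
    A minimal sketch of the bi-coupling idea, with a simple PRBG on top: the state of each digital logistic map steers the other map's parameter. The coupling constants and bit-extraction rule here are illustrative; the paper's exact scheme may differ.

      def coupled_logistic_bits(x, y, n_bits):
          bits = []
          for _ in range(n_bits):
              # Each state perturbs the other map's parameter within [3.57, 4.0],
              # keeping both maps in their chaotic regime.
              rx = 3.57 + 0.43 * y
              ry = 3.57 + 0.43 * x
              x = rx * x * (1.0 - x)
              y = ry * y * (1.0 - y)
              bits.append(1 if x > y else 0)    # one pseudorandom bit per step
          return bits

      print(coupled_logistic_bits(0.345, 0.678, 16))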

  5. Experimentation of cooperative learning model Numbered Heads Together (NHT) type by concept maps and Teams Games Tournament (TGT) by concept maps in terms of students' logical mathematics intelligence

    NASA Astrophysics Data System (ADS)

    Irawan, Adi; Mardiyana; Retno Sari Saputro, Dewi

    2017-06-01

    This research aimed to determine the effect of learning model on learning achievement in terms of students' logical mathematics intelligence. The learning models compared were NHT with Concept Maps, TGT with Concept Maps, and the Direct Learning model. The research was quasi-experimental with a 3×3 factorial design. The population was all students of class XI Natural Sciences of Senior High School in Karanganyar regency in the academic year 2016/2017. The conclusions were: 1) students taught with the NHT model with Concept Maps achieved better than students taught with the TGT model with Concept Maps or the Direct Learning model, and students taught with the TGT model with Concept Maps achieved better than students taught with the Direct Learning model; 2) students with high logical mathematics intelligence achieved better than students with medium or low logical mathematics intelligence, and students with medium logical mathematics intelligence achieved better than students with low logical mathematics intelligence; 3) at each level of logical mathematics intelligence, students taught with NHT with Concept Maps achieved better than students taught with TGT with Concept Maps, students taught with NHT with Concept Maps achieved better than students taught with the Direct Learning model, and students taught with TGT with Concept Maps achieved better than students taught with the Direct Learning model; and 4) within each learning model, students with high logical mathematics intelligence achieved better than students with medium logical mathematics intelligence, and students with medium logical mathematics intelligence achieved better than students with low logical mathematics intelligence.

  6. Tiled vector data model for the geographical features of symbolized maps.

    PubMed

    Li, Lin; Hu, Wei; Zhu, Haihong; Li, You; Zhang, Hang

    2017-01-01

    Electronic maps (E-maps) provide people with convenience in real-world space. Although web map services can display maps on screens, a more important function is their ability to access geographical features. An E-map that is based on raster tiles is inferior to vector tiles in terms of interactive ability because vector maps provide a convenient and effective method to access and manipulate web map features. However, the critical issue regarding rendering tiled vector maps is that geographical features that are rendered in the form of map symbols via vector tiles may cause visual discontinuities, such as graphic conflicts and losses of data around the borders of tiles, which likely represent the main obstacles to exploring vector map tiles on the web. This paper proposes a tiled vector data model for geographical features in symbolized maps that considers the relationships among geographical features, symbol representations and map renderings. This model presents a method to tailor geographical features in terms of map symbols and 'addition' (join) operations on the following two levels: geographical features and map features. Thus, these maps can resolve the visual discontinuity problem based on the proposed model without weakening the interactivity of vector maps. The proposed model is validated by two map data sets, and the results demonstrate that the rendered (symbolized) web maps present smooth visual continuity.

  7. FEM: Feature-enhanced map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V.; Moriarty, Nigel W.; Mustyakimov, Marat

    A method is presented that modifies a 2mFobs − DFmodel σA-weighted map such that the resulting map can strengthen a weak signal, if present, and can reduce model bias and noise. The method consists of first randomizing the starting map and filling in missing reflections using multiple methods. This is followed by restricting the map to regions with convincing density and the application of sharpening. The final map is then created by combining a series of histogram-equalized intermediate maps. In the test cases shown, the maps produced in this way are found to have increased interpretability and decreased model bias compared with the starting 2mFobs − DFmodel σA-weighted map.

  8. FEM: feature-enhanced map

    PubMed Central

    Afonine, Pavel V.; Moriarty, Nigel W.; Mustyakimov, Marat; Sobolev, Oleg V.; Terwilliger, Thomas C.; Turk, Dusan; Urzhumtsev, Alexandre; Adams, Paul D.

    2015-01-01

    A method is presented that modifies a 2mFobs − DFmodel σA-weighted map such that the resulting map can strengthen a weak signal, if present, and can reduce model bias and noise. The method consists of first randomizing the starting map and filling in missing reflections using multiple methods. This is followed by restricting the map to regions with convincing density and the application of sharpening. The final map is then created by combining a series of histogram-equalized intermediate maps. In the test cases shown, the maps produced in this way are found to have increased interpretability and decreased model bias compared with the starting 2mFobs − DFmodel σA-weighted map. PMID:25760612

  9. FEM: Feature-enhanced map

    DOE PAGES

    Afonine, Pavel V.; Moriarty, Nigel W.; Mustyakimov, Marat; ...

    2015-02-26

    A method is presented that modifies a 2mFobs − DFmodel σA-weighted map such that the resulting map can strengthen a weak signal, if present, and can reduce model bias and noise. The method consists of first randomizing the starting map and filling in missing reflections using multiple methods. This is followed by restricting the map to regions with convincing density and the application of sharpening. The final map is then created by combining a series of histogram-equalized intermediate maps. In the test cases shown, the maps produced in this way are found to have increased interpretability and decreased model bias compared with the starting 2mFobs − DFmodel σA-weighted map.

  10. Impact of cell size on inventory and mapping errors in a cellular geographic information system

    NASA Technical Reports Server (NTRS)

    Wehde, M. E. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. The effect of grid position was found insignificant for maps but highly significant for isolated mapping units. A modelable relationship between mapping error and cell size was observed for the map segment analyzed. Map data structure was also analyzed with an interboundary distance distribution approach. Map data structure and the impact of cell size on that structure were observed. The existence of a model allowing prediction of mapping error based on map structure was hypothesized and two generations of models were tested under simplifying assumptions.

  11. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameter that controls earthquake-activity rates, compared with previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight with equal weighting for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  12. Rapid Crop Cover Mapping for the Conterminous United States.

    PubMed

    Dahal, Devendra; Wylie, Bruce; Howard, Danny

    2018-06-05

    Timely crop cover maps with sufficient resolution are important components of various environmental planning and research applications. Through the modification and use of a previously developed crop classification model (CCM), which was originally developed to generate historical annual crop cover maps, we hypothesized that such crop cover maps could be generated rapidly during the growing season. Through a process of incrementally removing weekly and monthly independent variables from the CCM and implementing a 'two model mapping' approach, we found it viable to generate conterminous United States-wide rapid crop cover maps at a resolution of 250 m for the current year by the month of September. In this approach, we divided the CCM into one 'crop type model' to handle the classification of nine specific crops and a second, binary model to classify the presence or absence of 'other' crops. Under the two model mapping approach, the training errors were 0.8% and 1.5% for the crop type and binary models, respectively, while test errors were 5.5% and 6.4%, respectively. With spatial mapping accuracies for annual maps reaching upwards of 70%, this approach demonstrated a strong potential for generating rapid crop cover maps by the 1st of September.
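
    A hedged sketch of the 'two model mapping' idea using scikit-learn: one classifier labels the nine specific crops, a second binary classifier flags 'other' crops, and the two outputs are combined per pixel. The features, class codes and classifier choice are stand-ins, not the CCM itself.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      X = np.random.rand(200, 6)                  # per-pixel predictor variables
      crop = np.random.randint(0, 9, 200)         # nine specific crop classes
      other = np.random.randint(0, 2, 200)        # 'other' crops present or not

      crop_model = RandomForestClassifier(n_estimators=50).fit(X, crop)
      other_model = RandomForestClassifier(n_estimators=50).fit(X, other)

      X_new = np.random.rand(5, 6)
      labels = np.where(other_model.predict(X_new) == 1,
                        -1,                         # code for 'other' crops
                        crop_model.predict(X_new))  # otherwise the specific crop
      print(labels)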

  13. Polder maps: Improving OMIT maps by excluding bulk solvent

    DOE PAGES

    Liebschner, Dorothee; Afonine, Pavel V.; Moriarty, Nigel W.; ...

    2017-02-01

    The crystallographic maps that are routinely used during the structure-solution workflow are almost always model-biased because model information is used for their calculation. As these maps are also used to validate the atomic models that result from model building and refinement, this constitutes an immediate problem: anything added to the model will manifest itself in the map and thus hinder the validation. OMIT maps are a common tool to verify the presence of atoms in the model. The simplest way to compute an OMIT map is to exclude the atoms in question from the structure, update the corresponding structure factors and compute a residual map. It is then expected that if these atoms are present in the crystal structure, the electron density for the omitted atoms will be seen as positive features in this map. This, however, is complicated by the flat bulk-solvent model which is almost universally used in modern crystallographic refinement programs. This model postulates constant electron density at any voxel of the unit-cell volume that is not occupied by the atomic model. Consequently, if the density arising from the omitted atoms is weak then the bulk-solvent model may obscure it further. A possible solution to this problem is to prevent bulk solvent from entering the selected OMIT regions, which may improve the interpretative power of residual maps. This approach is called a polder (OMIT) map. Polder OMIT maps can be particularly useful for displaying weak densities of ligands, solvent molecules, side chains, alternative conformations and residues both in terminal regions and in loops. As a result, the tools described in this manuscript have been implemented and are available in PHENIX.

  14. Landscape scale mapping of forest inventory data by nearest neighbor classification

    Treesearch

    Andrew Lister

    2009-01-01

    One of the goals of the Forest Service, U.S. Department of Agriculture's Forest Inventory and Analysis (FIA) program is large-area mapping. FIA scientists have tried many methods in the past, including geostatistical methods, linear modeling, nonlinear modeling, and simple choropleth and dot maps. Mapping methods that require individual model-based maps to be...

  15. Assessment of tropospheric delay mapping function models in Egypt: Using PTD database model

    NASA Astrophysics Data System (ADS)

    Abdelfatah, M. A.; Mousa, Ashraf E.; El-Fiky, Gamal S.

    2018-06-01

    For space geodetic measurements, estimates of tropospheric delays are highly correlated with site coordinates and receiver clock biases. Thus, it is important to use the most accurate models for the tropospheric delay to reduce errors in the estimates of the other parameters. Both the zenith delay value and the mapping function should be assigned correctly to reduce such errors. Several mapping function models can treat the troposphere slant delay. The recent models have not been evaluated for Egyptian local climate conditions, and an assessment of these models is needed to choose the most suitable one. The goal of this paper is to test which global mapping functions are most consistent with precise troposphere delay (PTD) mapping functions. The PTD model is derived from radiosonde data using ray tracing and is considered in this paper as the true value. The PTD mapping functions were compared with three recent total mapping function models and another three separate dry and wet mapping function models. The results of the research indicate that the models are very close up to a zenith angle of 80°. The Saastamoinen and 1/cos z models lag in accuracy, the Niell model is better than the VMF model, and the model of Black and Eisner is a good model. The results also indicate that the geometric range error has an insignificant effect on the slant delay and that the azimuth anti-symmetric fluctuation is about 1%.
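
    For reference, a mapping function m(z) simply scales the zenith delay to the slant direction; the 1/cos z model discussed above is the crudest form and is what degrades at large zenith angles. A minimal sketch:

      import math

      def slant_delay(zenith_delay_m, zenith_angle_deg):
          m = 1.0 / math.cos(math.radians(zenith_angle_deg))   # 1/cos z model
          return zenith_delay_m * m

      print(slant_delay(2.3, 60.0))   # ~4.6 m: the delay doubles at z = 60 deg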

  16. Map Resource Packet: Course Models for the History-Social Science Framework, Grade Seven.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento.

    This packet of maps is an auxiliary resource to the "World History and Geography: Medieval and Early Modern Times. Course Models for the History-Social Science Framework, Grade Seven." The set includes: outline, precipitation, and elevation maps; maps for locating key places; landform maps; and historical maps. The list of maps are…

  17. Automated map sharpening by maximization of detail and connectivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.

    An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement parameters; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.

  18. Automated map sharpening by maximization of detail and connectivity

    DOE PAGES

    Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.; ...

    2018-05-18

    An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement parameters; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
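
    The two ingredients of the adjusted surface area can be sketched with standard numpy/scipy/scikit-image calls; the random volume below stands in for a real density map, and the volume fraction is an arbitrary choice for illustration.

      import numpy as np
      from scipy import ndimage
      from skimage import measure

      rho = np.random.rand(32, 32, 32)                 # placeholder density map
      frac = 0.05                                      # fixed volume fraction
      level = np.quantile(rho, 1.0 - frac)             # iso-contour threshold

      n_regions = ndimage.label(rho > level)[1]        # connectivity measure
      verts, faces, _, _ = measure.marching_cubes(rho, level=level)
      area = measure.mesh_surface_area(verts, faces)   # detail measure

      # High detail = large area; high connectivity = few connected regions.
      print(area, n_regions)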

  19. Variability of Protein Structure Models from Electron Microscopy.

    PubMed

    Monroe, Lyman; Terashi, Genki; Kihara, Daisuke

    2017-04-04

    An increasing number of biomolecular structures are solved by electron microscopy (EM). However, the quality of the structure models determined from EM maps varies substantially. To understand to what extent structure models are supported by information embedded in EM maps, we used two computational structure refinement methods to examine how much structures can be refined using a dataset of 49 maps with accompanying structure models. The extent of structure modification, as well as the disagreement between refinement models produced by the two computational methods, scaled inversely with the global and local map resolutions. A general quantitative estimation of the deviations of structures for particular map resolutions is provided. Our results indicate that the observed discrepancy between the deposited map and the refined models is due to the lack of structural information present in EM maps and thus these annotations must be used with caution for further applications. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. From conceptual modeling to a map

    NASA Astrophysics Data System (ADS)

    Gotlib, Dariusz; Olszewski, Robert

    2018-05-01

    Nowadays almost every map is a component of an information system. The design and production of maps requires the use of specific rules for modeling information systems: conceptual, application and data modelling. While analyzing the various stages of cartographic modeling, the authors ask at what stage of this process a map comes into being. Can we say that the 'life of the map' begins even before someone defines its form of presentation? This question is particularly important at a time when the number of new geoinformation products is increasing exponentially. By analyzing the theory of cartography and the relations of the discipline to other fields of knowledge, an attempt has been made to define a few properties of cartographic modeling which distinguish the process from other methods of spatial modeling. Assuming that the map is a model of reality (created in the process of cartographic modeling supported by domain modeling), the article proposes an analogy between the process of cartographic modeling and the scheme of conceptual modeling presented in the ISO 19101 standard.

  1. Some issues in data model mapping

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Alsabbagh, Jamal R.

    1985-01-01

    Numerous data models have been reported in the literature since the early 1970's. They have been used as database interfaces and as conceptual design tools. The mapping between schemas expressed according to the same data model or according to different models is interesting for theoretical and practical purposes. This paper addresses some of the issues involved in such a mapping. Of special interest are the identification of the mapping parameters and some current approaches for handling the various situations that require a mapping.

  2. Integrating satellite imagery with simulation modeling to improve burn severity mapping

    Treesearch

    Eva C. Karau; Pamela G. Sikkink; Robert E. Keane; Gregory K. Dillon

    2014-01-01

    Both satellite imagery and spatial fire effects models are valuable tools for generating burn severity maps that are useful to fire scientists and resource managers. The purpose of this study was to test a new mapping approach that integrates imagery and modeling to create more accurate burn severity maps. We developed and assessed a statistical model that combines the...

  3. Construction of adhesion maps for contacts between a sphere and a half-space: Considering size effects of the sphere.

    PubMed

    Zhang, Yuyan; Wang, Xiaoli; Li, Hanqing; Yang, Weixu

    2015-11-15

    Previous adhesion maps, such as the JG (Johnson-Greenwood) and YCG (Yao-Ciavarella-Gao) maps, are used to guide the selection of the Bradley, DMT, M-D, JKR and Hertz models. However, when the size of the contact sphere decreases to small scales, the applicability of the JG and YCG maps is limited because the assumptions regarding the contact region profile, the interaction between contact bodies and the sphere shape in the classical models constituting these two maps are no longer valid. To avoid this limitation, in this paper, a new numerical model considering size effects of the sphere is established first and then introduced into the new adhesion maps together with the YGG (Yao-Guduru-Gao) model and the Hertz model. Regimes of these models in the new map under a given sphere radius are demarcated by criteria related to the relative force differences and the ratio of contact radius to sphere radius. In addition, the approaches at pull-off, jump-in and jump-out for different Tabor parameters and sphere radii are provided in the new maps. Finally, to make the new maps more practical, the numerical results for the approaches, force and contact radius involved in the maps are formularized by piecewise fitting. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. The psychological four-color mapping problem.

    PubMed

    Francis, Gregory; Bias, Keri; Shive, Joshua

    2010-06-01

    Mathematicians have proven that four colors are sufficient to color 2-D maps so that no neighboring regions share the same color. Here we consider the psychological 4-color problem: identifying which 4 colors should be used to make a map easy to use. We build a model of visual search for this design task and demonstrate how to apply it to the task of identifying the optimal colors for a map. We parameterized the model with a set of 7 colors using a visual search experiment in which human participants found a target region on a small map. We then used the model to predict search times for new maps and identified the color assignments that minimize or maximize average search time. The differences between these maps were predicted to be substantial. The model was then tested with a larger set of 31 colors on a map of English counties under conditions in which participants might memorize some aspects of the map. Empirical tests of the model showed that the optimally colored version of this map is searched 15% faster than the correspondingly worst colored map. Thus, the color assignment seems to affect search times in a way predicted by the model, and this effect persists even when participants might use other sources of knowledge about target location. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  5. NADM Conceptual Model 1.0 -- A Conceptual Model for Geologic Map Information

    USGS Publications Warehouse

    ,

    2004-01-01

    Executive Summary -- The NADM Data Model Design Team was established in 1999 by the North American Geologic Map Data Model Steering Committee (NADMSC) with the purpose of drafting a geologic map data model for consideration as a standard for developing interoperable geologic map-centered databases by state, provincial, and federal geological surveys. The model is designed to be a technology-neutral conceptual model that can form the basis for a web-based interchange format using evolving information technology (e.g., XML, RDF, OWL), and guide implementation of geoscience databases in a common conceptual framework. The intended purpose is to allow geologic information sharing between geologic map data providers and users, independent of local information system implementation. The model emphasizes geoscience concepts and relationships related to information presented on geologic maps. Design has been guided by an informal requirements analysis, documentation of existing databases, technology developments, and other standardization efforts in the geoscience and computer-science communities. A key aspect of the model is the notion that representation of the conceptual framework (ontology) that underlies geologic map data must be part of the model, because this framework changes with time and understanding, and varies between information providers. The top level of the model distinguishes geologic concepts, geologic representation concepts, and metadata. The geologic representation part of the model provides a framework for representing the ontology that underlies geologic map data through a controlled vocabulary, and for establishing the relationships between this vocabulary and a geologic map visualization or portrayal. Top-level geologic classes in the model are Earth material (substance), geologic unit (parts of the Earth), geologic age, geologic structure, fossil, geologic process, geologic relation, and geologic event.

  6. Species distribution modelling for plant communities: Stacked single species or multivariate modelling approaches?

    Treesearch

    Emilie B. Henderson; Janet L. Ohmann; Matthew J. Gregory; Heather M. Roberts; Harold S.J. Zald

    2014-01-01

    Landscape management and conservation planning require maps of vegetation composition and structure over large regions. Species distribution models (SDMs) are often used for individual species, but projects mapping multiple species are rarer. We compare maps of plant community composition assembled by stacking results from many SDMs with multivariate maps constructed...

  7. The mapping of eccentricity and meridional angle onto orthogonal axes in the primary visual cortex: an activity-dependent developmental model.

    PubMed

    Philips, Ryan T; Chakravarthy, V Srinivasa

    2015-01-01

    Primate vision research has shown that in the retinotopic map of the primary visual cortex, eccentricity and meridional angle are mapped onto two orthogonal axes: whereas the eccentricity is mapped onto the nasotemporal axis, the meridional angle is mapped onto the dorsoventral axis. Theoretically such a map has been approximated by a complex log map. Neural models with correlational learning have explained the development of other visual maps like orientation maps and ocular-dominance maps. In this paper it is demonstrated that activity based mechanisms can drive a self-organizing map (SOM) into such a configuration that dilations and rotations of a particular image (in this case a rectangular bar) are mapped onto orthogonal axes. We further demonstrate using the Laterally Interconnected Synergetically Self Organizing Map (LISSOM) model, with an appropriate boundary and realistic initial conditions, that a retinotopic map which maps eccentricity and meridional angle to the horizontal and vertical axes respectively can be developed. This developed map bears a strong resemblance to the complex log map. We also simulated lesion studies which indicate that the lateral excitatory connections play a crucial role in development of the retinotopic map.
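
    The complex log map mentioned above is compact enough to state directly: a visual-field point at eccentricity r and meridional angle theta is sent to w = log(z), so log-eccentricity and angle land on orthogonal cortical axes. A minimal numpy illustration:

      import numpy as np

      def complex_log_map(r, theta_rad):
          z = r * np.exp(1j * theta_rad)     # visual-field position
          w = np.log(z)                      # modeled cortical position
          return w.real, w.imag              # (log r, theta): orthogonal axes

      x, y = complex_log_map(2.0, np.pi / 4)
      print(x, y)                            # log-eccentricity vs. meridional angle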

  8. Averaged kick maps: less noise, more signal…and probably less bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; Afonine, Pavel V.; Gunčar, Gregor

    2009-09-01

    Averaged kick (AK) maps are the sum of a series of individual kick maps, where each kick map is calculated from atomic coordinates modified by random shifts. These maps offer the possibility of an improved and less model-biased map interpretation. The use of reliable density maps is crucial for rapid and successful crystal structure determination. Here, the averaged kick map approach is investigated, its application is generalized and it is compared with other map-calculation methods. As a numerical analogue of maximum-likelihood maps, AK maps can be unweighted or maximum-likelihood (σA) weighted. Analysis shows that they are comparable and correspond better to the final model than σA and simulated-annealing maps. The AK maps were challenged by a difficult structure-validation case, in which they were able to clarify the problematic region in the density without the need for model rebuilding. The conclusion is that AK maps can be useful throughout the entire progress of crystal structure determination, offering the possibility of improved map interpretation.
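
    The construction itself is short; in the sketch below, compute_map() is a hypothetical placeholder for the crystallographic map calculation, which is the part this fragment deliberately leaves out.

      import numpy as np

      def averaged_kick_map(coords, compute_map, n_kicks=20, kick_size=0.3):
          # Apply small random shifts to the model coordinates, compute one
          # kick map per perturbed model, and average; random errors cancel.
          maps = []
          for _ in range(n_kicks):
              kicked = coords + np.random.uniform(-kick_size, kick_size,
                                                  coords.shape)
              maps.append(compute_map(kicked))
          return np.mean(maps, axis=0)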

  9. A geomorphic approach to 100-year floodplain mapping for the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Jafarzadegan, Keighobad; Merwade, Venkatesh; Saksena, Siddharth

    2018-06-01

    Floodplain mapping using hydrodynamic models is difficult in data scarce regions. Additionally, using hydrodynamic models to map floodplains over large stream networks can be computationally challenging. Some of these limitations of floodplain mapping using hydrodynamic modeling can be overcome by developing computationally efficient statistical methods to identify floodplains in large and ungauged watersheds using publicly available data. This paper proposes a geomorphic model to generate probabilistic 100-year floodplain maps for the Conterminous United States (CONUS). The proposed model first categorizes the watersheds in the CONUS into three classes based on the height of the water surface corresponding to the 100-year flood above the streambed. Next, the probability that any watershed in the CONUS belongs to one of these three classes is computed through supervised classification using watershed characteristics related to topography, hydrography, land use and climate. The result of this classification is then fed into a probabilistic threshold binary classifier (PTBC) to generate the probabilistic 100-year floodplain maps. The supervised classification algorithm is trained using the 100-year Flood Insurance Rate Maps (FIRMs) from the U.S. Federal Emergency Management Agency (FEMA). FEMA FIRMs are also used to validate the performance of the proposed model in areas not included in the training. Additionally, HEC-RAS-generated flood inundation extents are used to validate the model performance at fifteen sites that lack FEMA maps. Validation results show that the probabilistic 100-year floodplain maps generated by the proposed model match well with both the FEMA and HEC-RAS maps. On average, the error of predicted flood extents is around 14% across the CONUS. The high accuracy of the validation results shows the reliability of the geomorphic model as an alternative approach for fast and cost-effective delineation of 100-year floodplains for the CONUS.
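
    A schematic of the classification-then-threshold pipeline described above might look as follows; the features, class thresholds, and the random forest are placeholder assumptions (the paper's exact PTBC formulation and training data are not reproduced here).

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical training set: watershed descriptors (topography,
    # hydrography, land use, climate) with FEMA-derived water-height classes.
    rng = np.random.default_rng(1)
    X_train = rng.random((500, 8))
    y_train = rng.integers(0, 3, 500)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Illustrative 100-yr water heights above the streambed per class (m).
    class_heights = np.array([2.0, 5.0, 10.0])

    def probabilistic_floodplain(features, hand_raster):
        """Weight a threshold binary classification of a height-above-stream
        raster by the predicted probability of each water-height class."""
        probs = clf.predict_proba(features.reshape(1, -1))[0]
        flood_prob = np.zeros_like(hand_raster, dtype=float)
        for p, h in zip(probs, class_heights):
            flood_prob += p * (hand_raster <= h)
        return flood_prob   # per-cell probability of the 100-yr floodplain
    ```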

  10. Measurable realistic image-based 3D mapping

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data is obtained for the construction of 3D visualized models.The3D map not only provides the capabilities of 3D measurements and knowledge mining, but also provides the virtual experienceof places of interest, such as demonstrated in the Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially for automatic implementation of 3D models and the representation of complicated surfacesthat still need improvements with in the visualisation techniques. The shortcoming of 3D model-based maps is the limitation of detailed coverage since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information of the real world than 3D model-based maps. The image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of stereo images. The panoramic function makes 3D maps more interactive with users but also creates an interesting immersive circumstance. Actually, unmeasurable image-based 3D maps already exist, such as Google street view, but only provide virtual experiences in terms of photos. The topographic and terrain attributes, such as shapes and heights though are omitted. This paper also discusses the potential for using a low cost land Mobile Mapping System (MMS) to implement realistic image 3D mapping, and evaluates the positioning accuracy that a measureable realistic image-based (MRI) system can produce. The major contribution here is the implementation of measurable images on 3D maps to obtain various measurements from real scenes.

  11. Improving Mixed Variable Optimization of Computational and Model Parameters Using Multiple Surrogate Functions

    DTIC Science & Technology

    2008-03-01

    multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the...correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space then a space mapping transformation is...required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space

  12. Evaluation of using digital gravity field models for zoning map creation

    NASA Astrophysics Data System (ADS)

    Loginov, Dmitry

    2018-05-01

    At the present time the digital cartographic models of geophysical fields are taking a special significance into geo-physical mapping. One of the important directions to their application is the creation of zoning maps, which allow taking into account the morphology of geophysical field in the implementation automated choice of contour intervals. The purpose of this work is the comparative evaluation of various digital models in the creation of integrated gravity field zoning map. For comparison were chosen the digital model of gravity field of Russia, created by the analog map with scale of 1 : 2 500 000, and the open global model of gravity field of the Earth - WGM2012. As a result of experimental works the four integrated gravity field zoning maps were obtained with using raw and processed data on each gravity field model. The study demonstrates the possibility of open data use to create integrated zoning maps with the condition to eliminate noise component of model by processing in specialized software systems. In this case, for solving problem of contour intervals automated choice the open digital models aren't inferior to regional models of gravity field, created for individual countries. This fact allows asserting about universality and independence of integrated zoning maps creation regardless of detail of a digital cartographic model of geo-physical fields.

  13. a Model Study of Small-Scale World Map Generalization

    NASA Astrophysics Data System (ADS)

    Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.

    2018-04-01

    With the globalization and rapid development every filed is taking an increasing interest in physical geography and human economics. There is a surging demand for small scale world map in large formats all over the world. Further study of automated mapping technology, especially the realization of small scale production on a large scale global map, is the key of the cartographic field need to solve. In light of this, this paper adopts the improved model (with the map and data separated) in the field of the mapmaking generalization, which can separate geographic data from mapping data from maps, mainly including cross-platform symbols and automatic map-making knowledge engine. With respect to the cross-platform symbol library, the symbol and the physical symbol in the geographic information are configured at all scale levels. With respect to automatic map-making knowledge engine consists 97 types, 1086 subtypes, 21845 basic algorithm and over 2500 relevant functional modules.In order to evaluate the accuracy and visual effect of our model towards topographic maps and thematic maps, we take the world map generalization in small scale as an example. After mapping generalization process, combining and simplifying the scattered islands make the map more explicit at 1 : 2.1 billion scale, and the map features more complete and accurate. Not only it enhance the map generalization of various scales significantly, but achieve the integration among map-makings of various scales, suggesting that this model provide a reference in cartographic generalization for various scales.

  14. Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks

    NASA Astrophysics Data System (ADS)

    Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.

    2015-03-01

    The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
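
    For the indirect strategy, the error propagation step is the standard first-order formula for a product of independent component estimates; the sketch below is illustrative (concentration × bulk density × depth, with made-up numbers), not the authors' exact component models.

    ```python
    import numpy as np

    def soc_stock_with_error(conc, bd, depth, s_conc, s_bd, s_depth):
        """SOC stock (kg m^-2) from concentration (g kg^-1), bulk density
        (kg m^-3) and depth (m), with first-order error propagation for a
        product: (s_S/S)^2 ~ (s_c/c)^2 + (s_bd/bd)^2 + (s_d/d)^2."""
        stock = conc / 1000.0 * bd * depth
        rel_var = (s_conc / conc) ** 2 + (s_bd / bd) ** 2 + (s_depth / depth) ** 2
        return stock, stock * np.sqrt(rel_var)

    stock, s_stock = soc_stock_with_error(15.0, 1400.0, 0.3, 2.0, 100.0, 0.02)
    # ~6.3 kg m^-2 with its propagated standard error
    ```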

  15. Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks

    NASA Astrophysics Data System (ADS)

    Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.

    2014-11-01

    The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.

  16. Landslide susceptibility mapping for a landslide-prone area (Findikli, NE of Turkey) by likelihood-frequency ratio and weighted linear combination models

    NASA Astrophysics Data System (ADS)

    Akgun, Aykut; Dag, Serhat; Bulut, Fikri

    2008-05-01

    Landslides are a very common natural problem in the Black Sea Region of Turkey due to the steep topography, improper land use and climatic conditions favourable to landslides. In the western part of the region, many studies have been carried out, especially in the last decade, on landslide susceptibility mapping using different evaluation methods such as deterministic approaches, landslide distribution, qualitative, statistical and distribution-free analyses. The purpose of this study is to produce landslide susceptibility maps of a landslide-prone area (Findikli district, Rize) located in the eastern part of the Black Sea Region of Turkey using the likelihood frequency ratio (LRM) model and the weighted linear combination (WLC) model, and to compare the results obtained. For this purpose, landslide inventory maps of the area were prepared for the years 1983 and 1995 by detailed field surveys and aerial-photography studies. Slope angle, slope aspect, lithology, distance from drainage lines, distance from roads and the land cover of the study area were considered as the landslide-conditioning parameters. The differences between the susceptibility maps derived by the LRM and WLC models are relatively minor when broad-based classifications are taken into account. However, the WLC map shows more detail, whereas the LRM-derived map gives weaker results; the reason is that the majority of pixels in the LRM map have higher values than in the WLC-derived susceptibility map. In order to validate the two susceptibility maps, both were compared with the landslide inventory map. Although no landslides occur in the very high susceptibility class of either map, 79% of the landslides fall into the high and very high susceptibility zones of the WLC map, against 49% for the LRM map. This shows that the WLC model exhibited higher performance than the LRM model.
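
    The frequency ratio half of the comparison follows directly from its definition; here is a minimal raster-style sketch (the array setup and the final summation over factors are assumptions about a typical workflow, not the authors' exact procedure).

    ```python
    import numpy as np

    def frequency_ratio(factor_class, landslide_mask):
        """FR per class of one conditioning factor:
        (share of landslide pixels in the class) / (share of area in it).
        FR > 1 marks an above-average association with landslides."""
        fr = {}
        n_pixels = factor_class.size
        n_slides = landslide_mask.sum()
        for c in np.unique(factor_class):
            in_class = factor_class == c
            fr[c] = (landslide_mask[in_class].sum() / n_slides) / (
                in_class.sum() / n_pixels)
        return fr

    # Susceptibility index: per pixel, sum the FR values of its classes
    # over all factors (slope angle, aspect, lithology, distances, cover).
    ```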

  17. Evaluation of bias associated with capture maps derived from nonlinear groundwater flow models

    USGS Publications Warehouse

    Nadler, Cara; Allander, Kip K.; Pohll, Greg; Morway, Eric D.; Naranjo, Ramon C.; Huntington, Justin

    2018-01-01

    The impact of groundwater withdrawal on surface water is a concern of water users and water managers, particularly in the arid western United States. Capture maps are useful tools to spatially assess the impact of groundwater pumping on water sources (e.g., streamflow depletion) and are being used more frequently for conjunctive management of surface water and groundwater. Capture maps have been derived using linear groundwater flow models and rely on the principle of superposition to demonstrate the effects of pumping in various locations on resources of interest. However, nonlinear models are often necessary to simulate head-dependent boundary conditions and unconfined aquifers. Capture maps developed using nonlinear models with the principle of superposition may over- or underestimate capture magnitude and spatial extent. This paper presents new methods for generating capture difference maps, which assess spatial effects of model nonlinearity on capture fraction sensitivity to pumping rate, and for calculating the bias associated with capture maps. The sensitivity of capture map bias to selected parameters related to model design and conceptualization for the arid western United States is explored. This study finds that the simulation of stream continuity, pumping rates, stream incision, well proximity to capture sources, aquifer hydraulic conductivity, and groundwater evapotranspiration extinction depth substantially affect capture map bias. Capture difference maps demonstrate that regions with large capture fraction differences are indicative of greater potential capture map bias. Understanding both spatial and temporal bias in capture maps derived from nonlinear groundwater flow models improves their utility and defensibility as conjunctive-use management tools.
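
    The quantities involved reduce to a simple ratio, which the sketch below computes from paired model runs (illustrative numbers; the authors' groundwater-model workflow is not reproduced).

    ```python
    def capture_fraction(q_source_no_pumping, q_source_pumping, pumping_rate):
        """Share of the pumping rate supplied by a source of interest,
        e.g. streamflow depletion divided by the pumping rate."""
        return (q_source_no_pumping - q_source_pumping) / pumping_rate

    # In a linear model, superposition makes the fraction rate-independent;
    # in a nonlinear model it is not, and the difference maps the bias risk.
    cf_low = capture_fraction(10.0, 9.2, 2.0)    # 0.40 at a low rate
    cf_high = capture_fraction(10.0, 6.0, 8.0)   # 0.50 at a high rate
    capture_difference = cf_high - cf_low        # 0.10: nonlinearity signal
    ```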
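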

  18. Ensemble Learning of QTL Models Improves Prediction of Complex Traits

    PubMed Central

    Bian, Yang; Holland, James B.

    2015-01-01

    Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve prediction power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383
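
    The thinning-and-aggregating scheme can be caricatured in a few lines; here a plain linear regression stands in for per-map QTL detection, and all data shapes are hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def tagging_predict(genotypes, phenotype, thin=10):
        """Fit one model per thinned marker map (every `thin`-th marker,
        one map per offset) and average the ensemble's predictions."""
        preds = []
        for offset in range(thin):
            cols = np.arange(offset, genotypes.shape[1], thin)
            model = LinearRegression().fit(genotypes[:, cols], phenotype)
            preds.append(model.predict(genotypes[:, cols]))
        return np.mean(preds, axis=0)

    rng = np.random.default_rng(2)
    G = rng.integers(0, 3, (200, 500)).astype(float)   # toy marker matrix
    y = G[:, 40] * 0.5 + rng.normal(0, 1, 200)         # one simulated QTL
    y_hat = tagging_predict(G, y)
    ```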

  19. Remanent magnetization and three-dimensional density model of the Kentucky anomaly region

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Existing software was modified to handle 3-D density and magnetization models of the Kentucky body and is being tested. Gravity and magnetic anomaly data sets are ready for use. A preliminary block model is under construction using the 1:1,000,000 maps. An x-y grid to overlay the 1:2,500,000 Albers maps and keyed to the 1:1,000,000 scale block models was created. Software was developed to generate a smoothed MAGSAT data set over this grid; this is to be input to an inversion program for generating the regional magnetization map. The regional scale 1:2,500,000 map mosaic is being digitized using previous magnetization models, the U.S. magnetic anomaly map, and regional tectonic maps as a guide.

  20. Choosing colors for map display icons using models of visual search.

    PubMed

    Shive, Joshua; Francis, Gregory

    2013-04-01

    We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.
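
    To make the optimization concrete, here is a toy version: the search-time function is a hypothetical stand-in (decreasing in target-distractor color distance, increasing in eccentricity; a, b, c would be fit to search data), and brute force over permutations replaces the paper's optimization procedure.

    ```python
    import numpy as np
    from itertools import permutations

    def predicted_search_time(target, others, ecc, a=0.4, b=60.0, c=0.02):
        """Hypothetical surrogate model: search is slower for icons far
        from fixation and close in color to their distractors."""
        d = min(np.linalg.norm(np.subtract(target, o)) for o in others)
        return a + b / d + c * ecc

    colors = [(60, 70, 20), (55, -50, 40), (40, 20, -60)]   # candidate L*a*b*
    eccentricities = [2.0, 6.0, 10.0]                       # one per icon

    def total_time(assignment):
        return sum(
            predicted_search_time(col, [o for o in assignment if o is not col], e)
            for col, e in zip(assignment, eccentricities))

    best = min(permutations(colors), key=total_time)   # color-to-icon assignment
    ```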

  1. A Global Orientation Map in the Primary Visual Cortex (V1): Could a Self Organizing Model Reveal Its Hidden Bias?

    PubMed Central

    Philips, Ryan T.; Chakravarthy, V. Srinivasa

    2017-01-01

    A remarkable accomplishment of self organizing models is their ability to simulate the development of feature maps in the cortex. Additionally, these models have been trained to tease out the differential causes of multiple feature maps, mapped on to the same output space. Recently, a Laterally Interconnected Synergetically Self Organizing Map (LISSOM) model has been used to simulate the mapping of eccentricity and meridional angle onto orthogonal axes in the primary visual cortex (V1). This model is further probed to simulate the development of the radial bias in V1, using a training set that consists of both radial (rectangular bars of random size and orientation) as well as non-radial stimuli. The radial bias describes the preference of the visual system toward orientations that match the angular position (meridional angle) of that orientation with respect to the point of fixation. Recent fMRI results have shown that there exists a coarse scale orientation map in V1, which resembles the meridional angle map, thereby providing a plausible neural basis for the radial bias. The LISSOM model, trained for the development of the retinotopic map, on probing for orientation preference, exhibits a coarse scale orientation map, consistent with these experimental results, quantified using the circular cross correlation (rc). The rc between the orientation map developed on probing with a thin annular ring containing sinusoidal gratings with a spatial frequency of 0.5 cycles per degree (cpd) and the corresponding meridional map for the same annular ring, has a value of 0.8894. The results also suggest that the radial bias goes beyond the current understanding of a node to node correlation between the two maps. PMID:28111542
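
    The abstract does not spell out its correlation formula, so the sketch below assumes the standard Jammalamadaka-SenGupta circular correlation between two angle maps sampled along the annular ring (angles in radians; orientation, being 180°-periodic, is doubled first).

    ```python
    import numpy as np

    def circ_mean(a):
        return np.arctan2(np.sin(a).mean(), np.cos(a).mean())

    def circular_cross_correlation(a, b):
        """Jammalamadaka-SenGupta circular correlation of two angle arrays."""
        da, db = a - circ_mean(a), b - circ_mean(b)
        return (np.sin(da) * np.sin(db)).sum() / np.sqrt(
            (np.sin(da) ** 2).sum() * (np.sin(db) ** 2).sum())

    # Orientation preference is pi-periodic, so double both maps' angles
    # before correlating the orientation map with the meridional-angle map.
    theta_orient = np.deg2rad(np.array([10.0, 50.0, 95.0, 140.0]))
    theta_merid = np.deg2rad(np.array([12.0, 48.0, 100.0, 135.0]))
    rc = circular_cross_correlation(2 * theta_orient, 2 * theta_merid)
    ```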

  2. A Global Orientation Map in the Primary Visual Cortex (V1): Could a Self Organizing Model Reveal Its Hidden Bias?

    PubMed

    Philips, Ryan T; Chakravarthy, V Srinivasa

    2016-01-01

    A remarkable accomplishment of self organizing models is their ability to simulate the development of feature maps in the cortex. Additionally, these models have been trained to tease out the differential causes of multiple feature maps, mapped on to the same output space. Recently, a Laterally Interconnected Synergetically Self Organizing Map (LISSOM) model has been used to simulate the mapping of eccentricity and meridional angle onto orthogonal axes in the primary visual cortex (V1). This model is further probed to simulate the development of the radial bias in V1, using a training set that consists of both radial (rectangular bars of random size and orientation) as well as non-radial stimuli. The radial bias describes the preference of the visual system toward orientations that match the angular position (meridional angle) of that orientation with respect to the point of fixation. Recent fMRI results have shown that there exists a coarse scale orientation map in V1, which resembles the meridional angle map, thereby providing a plausible neural basis for the radial bias. The LISSOM model, trained for the development of the retinotopic map, on probing for orientation preference, exhibits a coarse scale orientation map, consistent with these experimental results, quantified using the circular cross correlation (rc). The rc between the orientation map developed on probing with a thin annular ring containing sinusoidal gratings with a spatial frequency of 0.5 cycles per degree (cpd) and the corresponding meridional map for the same annular ring, has a value of 0.8894. The results also suggest that the radial bias goes beyond the current understanding of a node to node correlation between the two maps.

  3. Ecology and space: A case study in mapping harmful invasive species

    USGS Publications Warehouse

    Barnett, David T.; Jarnevich, Catherine S.; Chong, Geneva W.; Stohlgren, Thomas J.; Kumar, Sunil; Holcombe, Tracy R.; Brunn, Stanley D.; Dodge, Martin

    2017-01-01

    The establishment and invasion of non-native plant species have the ability to alter the composition of native species and functioning of ecological systems with financial costs resulting from mitigation and loss of ecological services. Spatially documenting invasions has applications for management and theory, but the utility of maps is challenged by availability and uncertainty of data, and the reliability of extrapolating mapped data in time and space. The extent and resolution of projections also impact the ability to inform invasive species science and management. Early invasive species maps were coarse-grained representations that underscored the phenomena, but had limited capacity to direct management aside from the development of watch lists of priorities for prevention and containment. Integrating mapped data sets with fine-resolution environmental variables in the context of species-distribution models allows a description of species-environment relationships and an understanding of how, why, and where invasions may occur. As with maps, the extent and resolution of models impact the resulting insight. Models of cheatgrass (Bromus tectorum) across a variety of spatial scales and grain result in divergent species-environment relationships. New data can improve models and efficiently direct further inventories. Mapping can target areas of greater model uncertainty or the bounds of modeled distribution to efficiently refine models and maps. This iterative process results in dynamic, living maps capable of describing the ongoing process of species invasions.

  4. InMAP: A model for air pollution interventions

    DOE PAGES

    Tessum, Christopher W.; Hill, Jason D.; Marshall, Julian D.; ...

    2017-04-19

    Mechanistic air pollution modeling is essential in air quality management, yet the extensive expertise and computational resources required to run most models prevent their use in many situations where their results would be useful. We present InMAP (Intervention Model for Air Pollution), which offers an alternative to comprehensive air quality models for estimating the air pollution health impacts of emission reductions and other potential interventions. InMAP estimates annual-average changes in primary and secondary fine particle (PM2.5) concentrations—the air pollution outcome generally causing the largest monetized health damages–attributable to annual changes in precursor emissions. InMAP leverages pre-processed physical and chemical information from the output of a state-of-the-science chemical transport model and a variable spatial resolution computational grid to perform simulations that are several orders of magnitude less computationally intensive than comprehensive model simulations. In comparisons we run, InMAP recreates comprehensive model predictions of changes in total PM2.5 concentrations with population-weighted mean fractional bias (MFB) of -17% and population-weighted R2 = 0.90. Although InMAP is not specifically designed to reproduce total observed concentrations, it is able to do so within published air quality model performance criteria for total PM2.5. Potential uses of InMAP include studying exposure, health, and environmental justice impacts of potential shifts in emissions for annual-average PM2.5. InMAP can be trained to run for any spatial and temporal domain given the availability of appropriate simulation output from a comprehensive model. The InMAP model source code and input data are freely available online under an open-source license.
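
    The two benchmark statistics quoted are easy to compute; the sketch below is one plausible reading of "population-weighted" (the paper's exact convention may differ).

    ```python
    import numpy as np

    def pop_weighted_mfb(model, reference, pop):
        """Population-weighted mean fractional bias of PM2.5 changes."""
        fb = 2.0 * (model - reference) / (model + reference)
        return np.average(fb, weights=pop)

    def pop_weighted_r2(model, reference, pop):
        """Population-weighted coefficient of determination."""
        w = pop / pop.sum()
        ref_mean = np.sum(w * reference)
        ss_res = np.sum(w * (reference - model) ** 2)
        ss_tot = np.sum(w * (reference - ref_mean) ** 2)
        return 1.0 - ss_res / ss_tot
    ```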

  5. InMAP: A model for air pollution interventions

    PubMed Central

    Hill, Jason D.; Marshall, Julian D.

    2017-01-01

    Mechanistic air pollution modeling is essential in air quality management, yet the extensive expertise and computational resources required to run most models prevent their use in many situations where their results would be useful. Here, we present InMAP (Intervention Model for Air Pollution), which offers an alternative to comprehensive air quality models for estimating the air pollution health impacts of emission reductions and other potential interventions. InMAP estimates annual-average changes in primary and secondary fine particle (PM2.5) concentrations—the air pollution outcome generally causing the largest monetized health damages–attributable to annual changes in precursor emissions. InMAP leverages pre-processed physical and chemical information from the output of a state-of-the-science chemical transport model and a variable spatial resolution computational grid to perform simulations that are several orders of magnitude less computationally intensive than comprehensive model simulations. In comparisons run here, InMAP recreates comprehensive model predictions of changes in total PM2.5 concentrations with population-weighted mean fractional bias (MFB) of −17% and population-weighted R2 = 0.90. Although InMAP is not specifically designed to reproduce total observed concentrations, it is able to do so within published air quality model performance criteria for total PM2.5. Potential uses of InMAP include studying exposure, health, and environmental justice impacts of potential shifts in emissions for annual-average PM2.5. InMAP can be trained to run for any spatial and temporal domain given the availability of appropriate simulation output from a comprehensive model. The InMAP model source code and input data are freely available online under an open-source license. PMID:28423049

  6. InMAP: A model for air pollution interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tessum, Christopher W.; Hill, Jason D.; Marshall, Julian D.

    Mechanistic air pollution modeling is essential in air quality management, yet the extensive expertise and computational resources required to run most models prevent their use in many situations where their results would be useful. We present InMAP (Intervention Model for Air Pollution), which offers an alternative to comprehensive air quality models for estimating the air pollution health impacts of emission reductions and other potential interventions. InMAP estimates annual-average changes in primary and secondary fine particle (PM2.5) concentrations—the air pollution outcome generally causing the largest monetized health damages–attributable to annual changes in precursor emissions. InMAP leverages pre-processed physical and chemical information from the output of a state-of-the-science chemical transport model and a variable spatial resolution computational grid to perform simulations that are several orders of magnitude less computationally intensive than comprehensive model simulations. In comparisons we run, InMAP recreates comprehensive model predictions of changes in total PM2.5 concentrations with population-weighted mean fractional bias (MFB) of -17% and population-weighted R2 = 0.90. Although InMAP is not specifically designed to reproduce total observed concentrations, it is able to do so within published air quality model performance criteria for total PM2.5. Potential uses of InMAP include studying exposure, health, and environmental justice impacts of potential shifts in emissions for annual-average PM2.5. InMAP can be trained to run for any spatial and temporal domain given the availability of appropriate simulation output from a comprehensive model. The InMAP model source code and input data are freely available online under an open-source license.

  7. Rapid crop cover mapping for the conterminous United States

    USGS Publications Warehouse

    Dahal, Devendra; Wylie, Bruce K.; Howard, Daniel

    2018-01-01

    Timely crop cover maps with sufficient resolution are important components of various environmental planning and research applications. Through the modification and use of a previously developed crop classification model (CCM), originally built to generate historical annual crop cover maps, we hypothesized that such crop cover maps could be generated rapidly during the growing season. Through a process of incrementally removing weekly and monthly independent variables from the CCM and implementing a ‘two model mapping’ approach, we found it viable to generate conterminous United States-wide rapid crop cover maps at a resolution of 250 m for the current year by the month of September. In this approach, we divided the CCM model into one ‘crop type model’ to handle the classification of nine specific crops and a second, binary model to classify the presence or absence of ‘other’ crops. Under the two model mapping approach, the training errors were 0.8% and 1.5% for the crop type and binary model, respectively, while test errors were 5.5% and 6.4%, respectively. With spatial mapping accuracies for annual maps reaching upwards of 70%, this approach demonstrated a strong potential for generating rapid crop cover maps by the 1st of September.
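
    The 'two model mapping' split can be sketched schematically; the predictors, labels and random forests below are placeholders, not the CCM's actual learner or inputs.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)
    X = rng.random((1000, 12))              # e.g. early-season composites
    crop_type = rng.integers(0, 9, 1000)    # the nine specific crops
    is_other = rng.integers(0, 2, 1000)     # presence/absence of 'other'

    # One multiclass model for the nine crops, one binary model for 'other'.
    type_model = RandomForestClassifier(random_state=0).fit(
        X[is_other == 0], crop_type[is_other == 0])
    other_model = RandomForestClassifier(random_state=0).fit(X, is_other)

    def predict_pixel(x):
        x = x.reshape(1, -1)
        return "other" if other_model.predict(x)[0] else type_model.predict(x)[0]
    ```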

  8. An integrated approach to flood hazard assessment on alluvial fans using numerical modeling, field mapping, and remote sensing

    USGS Publications Warehouse

    Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.

    2005-01-01

    Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In contrast, FEMA Flood Insurance Rate Maps (FIRMs) based on the FAN model predict uniformly high flood risk across the study areas without regard for small-scale topography and surficial geology. © 2005 Geological Society of America.
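
    The hydraulic core pairs mass continuity with Manning's equation; the latter, in SI form, is a one-liner worth checking against the text (the channel values below are illustrative).

    ```python
    def manning_velocity(n, hydraulic_radius, slope):
        """Manning's equation (SI): V = (1/n) * R^(2/3) * S^(1/2)."""
        return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

    v = manning_velocity(n=0.035, hydraulic_radius=0.4, slope=0.02)  # ~2.2 m/s
    q = v * 1.2   # continuity: discharge through a 1.2 m^2 flow section (m^3/s)
    ```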

  9. Road Map to Statewide Implementation of the Pyramid Model. Roadmap to Effective Intervention Practices #6

    ERIC Educational Resources Information Center

    Dunlap, Glen; Smith, Barbara J.; Fox, Lise; Blase, Karen

    2014-01-01

    This document is a guide--a "Road Map"--for implementing widespread use of the Pyramid Model for Promoting Social Emotional Competence in Infants and Young Children (http://www.challengingbehavior.org/do/pyramid_model.htm). It is a road map of systems change. The Road Map is written for statewide systems change, although it could be…

  10. Multi-Fidelity Simulation of a Turbofan Engine With Results Zoomed Into Mini-Maps for a Zero-D Cycle Simulation

    NASA Technical Reports Server (NTRS)

    Turner, Mark G.; Reed, John A.; Ryder, Robert; Veres, Joseph P.

    2004-01-01

    A Zero-D cycle simulation of the GE90-94B high bypass turbofan engine has been achieved utilizing mini-maps generated from a high-fidelity simulation. The simulation utilizes the Numerical Propulsion System Simulation (NPSS) thermodynamic cycle modeling system coupled to a high-fidelity full-engine model represented by a set of coupled 3D computational fluid dynamic (CFD) component models. Boundary conditions from the balanced, steady state cycle model are used to define component boundary conditions in the full-engine model. Operating characteristics of the 3D component models are integrated into the cycle model via partial performance maps generated from the CFD flow solutions using one-dimensional mean line turbomachinery programs. This paper highlights the generation of the high-pressure compressor, booster, and fan partial performance maps, as well as turbine maps for the high pressure and low pressure turbine. These are actually "mini-maps" in the sense that they are developed only for a narrow operating range of the component. Results are compared between actual cycle data at a take-off condition and the comparable condition utilizing these mini-maps. The mini-maps are also presented with comparison to actual component data where possible.

  11. Chaotic and stable perturbed maps: 2-cycles and spatial models

    NASA Astrophysics Data System (ADS)

    Braverman, E.; Haroutunian, J.

    2010-06-01

    As the growth rate parameter increases in the Ricker, logistic and some other maps, the models exhibit an irreversible period doubling route to chaos. If a constant positive perturbation is introduced, then the Ricker model (but not the classical logistic map) experiences period doubling reversals; the break of chaos finally gives birth to a stable two-cycle. We outline the maps which demonstrate a similar behavior and also study relevant discrete spatial models where the value in each cell at the next step is defined only by the values at the cell and its nearest neighbors. The stable 2-cycle in a scalar map does not necessarily imply 2-cyclic-type behavior in each cell for the spatial generalization of the map.
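
    The perturbed dynamics are easy to reproduce numerically; the sketch below counts distinct post-transient values to detect a stable 2-cycle (r and the c values are illustrative probes, not the paper's bifurcation thresholds).

    ```python
    import numpy as np

    def ricker_orbit(r, c, x0=0.5, n_transient=500, n_keep=64):
        """Post-transient orbit of the perturbed Ricker map
        x -> x * exp(r * (1 - x)) + c."""
        x = x0
        for _ in range(n_transient):
            x = x * np.exp(r * (1.0 - x)) + c
        orbit = []
        for _ in range(n_keep):
            x = x * np.exp(r * (1.0 - x)) + c
            orbit.append(round(x, 8))
        return orbit

    # Two distinct post-transient values signal a stable 2-cycle; the
    # unperturbed map (c = 0) is chaotic at r = 3.0.
    for c in (0.0, 0.1, 0.3, 0.5):
        print(c, len(set(ricker_orbit(3.0, c))))
    ```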

  12. Soil maps as data input for soil erosion models: errors related to map scales

    NASA Astrophysics Data System (ADS)

    van Dijk, Paul; Sauter, Joëlle; Hofstetter, Elodie

    2010-05-01

    Soil erosion rates depend in many ways on soil and soil-surface characteristics, which vary in space and in time. To account for spatial variations of soil features, most distributed soil erosion models require data input derived from soil maps. Ideally, the level of spatial detail contained in the applied soil map should correspond to the objective of the modelling study. However, the model user often has only one soil map available, which is then applied without questioning its suitability. The present study seeks to determine to what extent soil map scale can be a source of error in erosion model output. The study was conducted at two different spatial scales, each with a suitable soil erosion model: a) the catchment scale, using the physically based Limburg Soil Erosion Model (LISEM), and b) the regional scale, using the decision-tree expert model MESALES. The suitability of the applied soil map was evaluated with respect to an imaginary though realistic study objective for both models: at the catchment scale, the definition of erosion control measures at strategic locations; at the regional scale, the identification of target areas for the definition of control strategies. Two catchments were selected to test the sensitivity of LISEM to the spatial detail contained in soil maps: one with relatively little contrast in soil texture, dominated by loess-derived soil (southern Alsace), and one with strongly contrasted soils at the limit between the Alsatian piedmont and the loess-covered hills of the Kochersberg. LISEM was run for both catchments using different soil maps, ranging in scale from 1/25 000 to 1/100 000, to derive soil-related input parameters. The comparison of the output differences was used to quantify the impact of map scale on the quality of the model output. The sensitivity of MESALES was tested for the Haut-Rhin department, for which two soil maps are available for comparison: 1/50 000 and 1/100 000. The ranking of the resulting target areas (communes) was compared to evaluate the error induced by using the coarser soil data at 1/100 000. Results show that both models are sensitive to the soil map scale used for data input. Low sensitivity was found for the catchment with relatively homogeneous soil textures, for which the use of 1/100 000 soil maps appears acceptable. The results for the catchment with strong soil texture variations showed significant differences, depending on soil map scale, over 75% of the catchment area. Here, the use of the 1/100 000 soil map will indeed lead to wrong erosion diagnostics and will hamper the definition of a sound erosion control strategy. The regional-scale model MESALES proved to be very sensitive to soil information. The two soil-related model parameters (crusting sensitivity and soil erodibility) very often reacted in the same direction, thereby amplifying the change in the final erosion hazard class. The 1/100 000 soil map yielded different results over 40% of the sloping area compared to the 1/50 000 map. Significant differences in the ranking of target areas were found as well. The present study shows that the degree of sensitivity of model output to soil map scale is rather variable and depends partly on the spatial variability of soil texture within the study area. Soil (textural) diversity needs to be accounted for to assure a fruitful use of soil erosion models. In some situations this might imply that additional soil data need to be collected in the field to refine the available soil map.

  13. The Lunar Mapping and Modeling Project Update

    NASA Technical Reports Server (NTRS)

    Noble, S.; French, R.; Nall, M.; Muery, K.

    2010-01-01

    The Lunar Mapping and Modeling Project (LMMP) is managing the development of a suite of lunar mapping and modeling tools and data products that support lunar exploration activities, including the planning, design, development, test, and operations associated with crewed and/or robotic operations on the lunar surface. In addition, LMMP should prove to be a convenient and useful tool for scientific analysis and for education and public outreach (E/PO) activities. LMMP will utilize data predominately from the Lunar Reconnaissance Orbiter, but also historical and international lunar mission data (e.g. Lunar Prospector, Clementine, Apollo, Lunar Orbiter, Kaguya, and Chandrayaan-1) as available and appropriate. LMMP will provide such products as image mosaics, DEMs, hazard assessment maps, temperature maps, lighting maps and models, gravity models, and resource maps. We are working closely with the LRO team to prevent duplication of efforts and ensure the highest quality data products. A beta version of the LMMP software was released for limited distribution in December 2009, with the public release of version 1 expected in the Fall of 2010.

  14. Using concept maps to describe undergraduate students’ mental model in microbiology course

    NASA Astrophysics Data System (ADS)

    Hamdiyati, Y.; Sudargo, F.; Redjeki, S.; Fitriani, A.

    2018-05-01

    The purpose of this research was to describe students' mental models in a mental-model-based microbiology course, using concept maps as the assessment tool. Respondents were 5th-semester undergraduate students in Biology Education at Universitas Pendidikan Indonesia. Data were collected for the Bacteria subtopic. A concept map rubric was developed with a maximum score of 4, and quantitative scores were converted into qualitative levels to determine the mental model level, namely: emergent = score 1, transitional = score 2, close to extended = score 3, and extended = score 4. The results showed that before the implementation of the mental-model-based microbiology course, the mental model level for the Bacteria subtopic was transitional. After implementation, mental models were at the transitional, close to extended, and extended levels. This indicates an increase in the level of students' mental models after the implementation of a mental-model-based microbiology course using concept maps as the assessment tool.

  15. Development of AHPDST Vulnerability Indexing Model for Groundwater Vulnerability Assessment Using Hydrogeophysical Derived Parameters and GIS Application

    NASA Astrophysics Data System (ADS)

    Mogaji, K. A.

    2017-04-01

    A bias-free vulnerability assessment map model is greatly needed for planning groundwater quality protection schemes. This study developed a GIS-based AHPDST vulnerability index model for producing a groundwater vulnerability map in a hard-rock terrain of Nigeria by exploiting the potentials of the analytic hierarchy process (AHP) and Dempster-Shafer theory (DST) data mining models. Borehole and geophysical data acquired in the study area were processed to derive five groundwater vulnerability conditioning factors (GVCFs), namely recharge rate, aquifer transmissivity, hydraulic conductivity, transverse resistance and longitudinal conductance. The produced GVCF thematic maps were analyzed multi-criterially, employing the mechanisms of the AHP and DST models to determine, respectively, the normalized weight (W) parameter for the GVCFs and the mass function factor (MFF) parameter for the GVCF thematic maps' class boundaries. Based on the weighted linear average technique, the determined W and MFF parameters were synthesized to develop a groundwater vulnerability potential index (GVPI)-based AHPDST model algorithm. The developed model was applied to establish four GVPI mass/belief function indices. The estimates based on the applied GVPI belief function indices were processed in a GIS environment to create prospective groundwater vulnerability potential index maps. The most representative of the resulting vulnerability maps (the GVPIBel map) was used to produce the groundwater vulnerability potential zones (GVPZ) map for the area. The produced GVPZ map assigns 48% of the areal extent to low/moderate and 52% to high vulnerability zones. The success and prediction rates of the produced GVPZ map, determined using the relative operating characteristic technique, are 82.3 and 77.7%, respectively. The results reveal that the developed GVPI-based AHPDST model algorithm is capable of producing an efficient groundwater vulnerability potential zone prediction map and of characterizing the predicted zones' uncertainty via the DST mechanism. The produced GVPZ map can be used by decision makers to formulate appropriate groundwater management strategies, and the approach may be adopted in other hard-rock regions of the world, especially in economically poor nations.
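
    The final combination step is a weighted linear average; with hypothetical AHP weights for the five conditioning factors and belief values for one cell, the index reduces to:

    ```python
    # Hypothetical AHP weights for the five GVCFs (normalized to sum to 1).
    weights = {"recharge": 0.30, "transmissivity": 0.25, "conductivity": 0.20,
               "transverse_resistance": 0.15, "longitudinal_conductance": 0.10}

    def gvpi(belief_by_factor):
        """Weighted linear average of DST belief (mass) values assigned to
        each factor's class at one grid cell."""
        return sum(weights[f] * m for f, m in belief_by_factor.items())

    cell = {"recharge": 0.8, "transmissivity": 0.6, "conductivity": 0.7,
            "transverse_resistance": 0.3, "longitudinal_conductance": 0.4}
    print(gvpi(cell))   # higher index -> more vulnerable zone
    ```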

  16. A comparative survey of current and proposed tropospheric refraction-delay models for DSN radio metric data calibration

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Sovers, O. J.

    1994-01-01

    The standard tropospheric calibration model implemented in the operational Orbit Determination Program is the seasonal model developed by C. C. Chao in the early 1970s. The seasonal model has seen only slight modification since its release, particularly in the format and content of the zenith delay calibrations. Chao's most recent standard mapping tables, which are used to project the zenith delay calibrations along the station-to-spacecraft line of sight, have not been modified since they were first published in late 1972. This report focuses principally on proposed upgrades to the zenith delay mapping process, although modeling improvements to the zenith delay calibration process are also discussed. A number of candidate approximation models for the tropospheric mapping are evaluated, including the semi-analytic mapping function of Lanyi and the semi-empirical mapping functions of Davis et al. ('CfA-2.2'), of Ifadis (global solution model), of Herring ('MTT'), and of Niell ('NMF'). All of the candidate mapping functions are superior to the Chao standard mapping tables and approximation formulas when evaluated against the current Deep Space Network Mark 3 intercontinental very long baseline interferometry database.
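
    Mapping functions of this family share a continued-fraction form. The sketch below uses coefficients commonly cited for Chao's dry and wet models; treat the numerical values as assumptions to verify against the original tables.

    ```python
    import numpy as np

    def chao_mapping(elev_rad, a, b):
        """Continued-fraction mapping function m(E) = 1/(sin E + a/(tan E + b)),
        scaling a zenith delay to elevation angle E. The later functions
        (Lanyi, CfA-2.2, Ifadis, MTT, NMF) refine a and b with site,
        season and weather dependence."""
        return 1.0 / (np.sin(elev_rad) + a / (np.tan(elev_rad) + b))

    E = np.deg2rad(10.0)
    m_dry = chao_mapping(E, 0.00143, 0.0445)   # slant/zenith ratio, ~5.6
    m_wet = chao_mapping(E, 0.00035, 0.017)
    ```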

  17. Modeling Research Project Risks with Fuzzy Maps

    ERIC Educational Resources Information Center

    Bodea, Constanta Nicoleta; Dascalu, Mariana Iuliana

    2009-01-01

    The authors propose a risk evaluation model for research projects. The model is based on fuzzy inference. The knowledge base for the fuzzy process is built with a causal and cognitive map of risks. The map was developed especially for research projects, taking into account their typical lifecycle. The model was applied to an e-testing research…

  18. The MAP program: building the digital terrain model.

    Treesearch

    R.H. Twito; R.W. Mifflin; R.J. McGaughey

    1987-01-01

    PLANS, a software package for integrated timber-harvest planning, uses digital terrain models to provide the topographic data needed to fit harvest and transportation designs to specific terrain. MAP, an integral program in the PLANS package, is used to construct the digital terrain models required by PLANS. MAP establishes digital terrain models using digitizer-traced...

  19. Statistical density modification using local pattern matching

    DOEpatents

    Terwilliger, Thomas C.

    2007-01-23

    A computer implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided and standard templates of electron density are created from the selected experimental and model electron density maps by clustering and averaging values of electron density in a spherical region about each point in a grid that defines each selected known experimental and model electron density maps. Histograms are also created from the selected experimental and model electron density maps that relate the value of electron density at the center of each of the spherical regions to a correlation coefficient of a density surrounding each corresponding grid point in each one of the standard templates. The standard templates and the histograms are applied to grid points on the experimental electron density map to form new estimates of electron density at each grid point in the experimental electron density map.

  20. Execution models for mapping programs onto distributed memory parallel computers

    NASA Technical Reports Server (NTRS)

    Sussman, Alan

    1992-01-01

    The problem of exploiting the parallelism available in a program to efficiently employ the resources of the target machine is addressed. The problem is discussed in the context of building a mapping compiler for a distributed memory parallel machine. The paper describes using execution models to drive the process of mapping a program in the most efficient way onto a particular machine. Through analysis of the execution models for several mapping techniques for one class of programs, we show that the selection of the best technique for a particular program instance can make a significant difference in performance. On the other hand, the results of benchmarks from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.

  1. MaMR: High-performance MapReduce programming model for material cloud applications

    NASA Astrophysics Data System (ADS)

    Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng

    2017-02-01

    With the increasing data size in materials science, existing programming models no longer satisfy the application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data sets, and its processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined a programming model for material cloud applications, called MaMR, that supports multiple different Map and Reduce functions running concurrently on a hybrid shared-memory BSP substrate. An optimized data sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework achieve effective performance improvements compared to previous work.
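
    The pattern itself (several map/reduce pipelines over one shared data set, then a merge phase) can be mimicked in-process. This toy is only an illustration of the idea, not the MaMR framework or its BSP runtime.

    ```python
    from collections import defaultdict

    def run_mapreduce(records, map_fn, reduce_fn):
        """Minimal in-process MapReduce: map, shuffle by key, reduce."""
        groups = defaultdict(list)
        for rec in records:
            for key, value in map_fn(rec):
                groups[key].append(value)
        return {k: reduce_fn(k, vs) for k, vs in groups.items()}

    # Two pipelines run over the same shared data, then a merge phase
    # combines their per-key outputs (hypothetical material samples).
    shared = [("Fe", 7.8), ("Al", 2.7), ("Fe", 7.9), ("Al", 2.6)]

    mean_density = run_mapreduce(shared, lambda r: [r],
                                 lambda k, vs: sum(vs) / len(vs))
    sample_count = run_mapreduce(shared, lambda r: [(r[0], 1)],
                                 lambda k, vs: sum(vs))
    merged = {k: (mean_density[k], sample_count[k]) for k in mean_density}
    ```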

  2. A Comparison of Fuzzy Models in Similarity Assessment of Misregistered Area Class Maps

    NASA Astrophysics Data System (ADS)

    Brown, Scott

    Spatial uncertainty refers to unknown error and vagueness in geographic data. It is relevant to land change and urban growth modelers, soil and biome scientists, geological surveyors and others, who must assess thematic maps for similarity, or categorical agreement. In this paper I build upon prior map comparison research, testing the effectiveness of similarity measures on misregistered data. Though several methods compare uncertain thematic maps, few have been tested on misregistration. My objective is to test five map comparison methods for sensitivity to misregistration, including sub-pixel errors in both position and rotation. The methods included four fuzzy categorical models: the fuzzy kappa model, fuzzy inference, cell aggregation, and the epsilon band. The fifth method used conventional crisp classification. I applied these methods to a case study map and simulated data in two sets: a test set with misregistration error, and a control set with equivalent uniform random error. For all five methods, I used raw accuracy or the kappa statistic to measure similarity. Rough-set epsilon bands report the greatest similarity increase in test maps relative to control data. Conversely, the fuzzy inference model reports a decrease in test map similarity.

  3. Automating the selection of standard parallels for conic map projections

    NASA Astrophysics Data System (ADS)

Šavrič, Bojan; Jenny, Bernhard

    2016-05-01

    Conic map projections are appropriate for mapping regions at medium and large scales with east-west extents at intermediate latitudes. Conic projections are appropriate for these cases because they show the mapped area with less distortion than other projections. In order to minimize the distortion of the mapped area, the two standard parallels of conic projections need to be selected carefully. Rules of thumb exist for placing the standard parallels based on the width-to-height ratio of the map. These rules of thumb are simple to apply, but do not result in maps with minimum distortion. There also exist more sophisticated methods that determine standard parallels such that distortion in the mapped area is minimized. These methods are computationally expensive and cannot be used for real-time web mapping and GIS applications where the projection is adjusted automatically to the displayed area. This article presents a polynomial model that quickly provides the standard parallels for the three most common conic map projections: the Albers equal-area, the Lambert conformal, and the equidistant conic projection. The model defines the standard parallels with polynomial expressions based on the spatial extent of the mapped area. The spatial extent is defined by the length of the mapped central meridian segment, the central latitude of the displayed area, and the width-to-height ratio of the map. The polynomial model was derived from 3825 maps, each with a different spatial extent and with computationally determined standard parallels that minimize the mean scale distortion index. The resulting model is computationally simple and can be used for the automatic selection of the standard parallels of conic map projections in GIS software and web mapping applications.
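
    For contrast, the rule of thumb that the fitted polynomial model improves on takes one line; k is conventionally chosen between about 5 and 7 (the polynomial expressions themselves are in the article and are not reproduced here).

    ```python
    def standard_parallels(lat_min, lat_max, k=7.0):
        """Rule-of-thumb placement: standard parallels 1/k of the latitude
        range inside the top and bottom edges of the mapped area."""
        span = lat_max - lat_min
        return lat_min + span / k, lat_max - span / k

    phi1, phi2 = standard_parallels(24.0, 49.0)   # conterminous US, roughly
    ```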

  4. Comparison of an Atomic Model and Its Cryo-EM Image at the Central Axis of a Helix

    PubMed Central

    He, Jing; Zeil, Stephanie; Hallak, Hussam; McKaig, Kele; Kovacs, Julio; Wriggers, Willy

    2016-01-01

    Cryo-electron microscopy (cryo-EM) is an important biophysical technique that produces three-dimensional (3D) density maps at different resolutions. Because more and more models are being produced from cryo-EM density maps, validation of the models is becoming important. We propose a method for measuring local agreement between a model and the density map using the central axis of the helix. This method was tested using 19 helices from cryo-EM density maps between 5.5 Å and 7.2 Å resolution and 94 helices from simulated density maps. This method distinguished most of the well-fitting helices, although challenges exist for shorter helices. PMID:27280059

  5. Development of Probabilistic Flood Inundation Mapping For Flooding Induced by Dam Failure

    NASA Astrophysics Data System (ADS)

    Tsai, C.; Yeh, J. J. J.

    2017-12-01

    A primary function of flood inundation mapping is to forecast flood hazards and assess potential losses. However, uncertainties limit the reliability of inundation hazard assessments. Major sources of uncertainty should be taken into consideration in an optimal flood management strategy. This study focuses on the 20 km reach downstream of the Shihmen Reservoir in Taiwan. A dam failure induced flood provides the upstream boundary conditions for flood routing. The two major sources of uncertainty considered in the hydraulic model and the flood inundation mapping are uncertainty in the dam break model and uncertainty in the roughness coefficient. The perturbance moment method is applied to a dam break model and the hydrosystem model to develop probabilistic flood inundation maps. Various numbers of uncertain variables can be considered in these models, and the variability of the outputs can be quantified. Probabilistic flood inundation maps for dam break induced floods can thus be developed, with the variability of the output represented, using the commonly used HEC-RAS model. Different probabilistic flood inundation maps are discussed and compared. These maps are expected to provide new physical insight in support of evaluating areas at risk of flooding downstream of reservoirs.
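
    The study itself uses the perturbance moment method with a HEC-RAS hydraulic model; as a simplified stand-in, the Python sketch below propagates Manning roughness uncertainty to a per-cell inundation probability with plain Monte Carlo sampling. All numbers are illustrative.

```python
# Illustrative Monte Carlo stand-in for probabilistic inundation mapping.
# Uncertain Manning roughness n -> uncertain water depth -> probability
# that each cell of a toy 1-D floodplain transect is inundated.
import numpy as np

rng = np.random.default_rng(42)
Q, S, B = 5000.0, 0.001, 200.0          # peak flow (m^3/s), slope, width (m)
ground = np.linspace(0.0, 12.0, 50)     # transect elevations above channel (m)

def normal_depth(n):
    # wide-channel Manning relation: Q = (1/n) * B * h^(5/3) * sqrt(S)
    return (Q * n / (B * np.sqrt(S))) ** (3.0 / 5.0)

n_samples = rng.normal(loc=0.035, scale=0.007, size=5000).clip(0.015, 0.08)
depths = normal_depth(n_samples)                  # one depth per sample
inundated = depths[:, None] > ground[None, :]     # samples x cells
prob_map = inundated.mean(axis=0)                 # P(inundation) per cell
print(np.round(prob_map[::10], 2))
```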

  6. A tool for teaching three-dimensional dermatomes combined with distribution of cutaneous nerves on the limbs.

    PubMed

    Kooloos, Jan G M; Vorstenbosch, Marc A T M

    2013-01-01

    A teaching tool that facilitates student understanding of a three-dimensional (3D) integration of dermatomes with peripheral cutaneous nerve field distributions is described. This model is inspired by the confusion in novice learners between dermatome maps and nerve field distribution maps. This confusion leads to the misconception that these two distribution maps fully overlap, and may stem from three sources: (1) the differences in dermatome maps in anatomical textbooks, (2) the limited views in the figures of dermatome maps and cutaneous nerve field maps, hampering the acquisition of a 3D picture, and (3) the lack of figures showing both maps together. To clarify this concept, the learning process can be facilitated by transforming the 2D drawings in textbooks to a 3D hands-on model and by merging the information from the separate maps. Commercially available models were covered with white cotton pantyhose, and borders between dermatomes were marked using the drawings from the students' required study material. Distribution maps of selected peripheral nerves were cut out from color transparencies. Both the model and the cut-out nerve fields were then at the students' disposal during a laboratory exercise. The students were instructed to affix the transparencies in the right place according to the textbook's figures. This model facilitates integrating the spatial relationships of the two types of nerve distributions. By highlighting the spatial relationship and aiming to provoke student enthusiasm, this model follows the advantages of other low-fidelity models. © 2013 American Association of Anatomists.

  7. Highly dissipative Hénon map behavior in the four-level model of the CO 2 laser with modulated losses

    NASA Astrophysics Data System (ADS)

    Pando L., C. L.; Acosta, G. A. Luna; Meucci, R.; Ciofini, M.

    1995-02-01

    We show that the four-level model for the CO 2 laser with modulated losses behaves in a qualitatively similar way to the highly dissipative Hénon map. The ubiquity of elements of the universal sequence, their related symbolic dynamics, and the presence of reverse bifurcations of chaotic bands in the model are reminiscent of the logistic map, which is the limit of the Hénon map when the Jacobian equals zero. The coexistence of attractors, their dynamics related to the contraction of volumes in phase space, and the associated return maps can be correlated with those of the highly dissipative Hénon map.
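
    A minimal numerical sketch of the map in question: iterating the Hénon map and shrinking the Jacobian parameter b toward the highly dissipative (logistic) limit discussed above.

```python
# Hénon map x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n. The Jacobian
# determinant is -b, so |b| -> 0 is the highly dissipative limit in which
# the map collapses onto the one-dimensional logistic-like map
# x_{n+1} = 1 - a*x_n^2.
def henon_orbit(a=1.4, b=0.05, x=0.1, y=0.0, n=1000, transient=500):
    orbit = []
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= transient:
            orbit.append((x, y))
    return orbit

for a, b in [(1.4, 0.3), (1.4, 0.05)]:   # classic vs. highly dissipative
    pts = henon_orbit(a=a, b=b)
    spread = max(p[1] for p in pts) - min(p[1] for p in pts)
    print(f"b={b}: y-extent of attractor = {spread:.3f}")
```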

  8. Functional-to-form mapping for assembly design automation

    NASA Astrophysics Data System (ADS)

    Xu, Z. G.; Liu, W. M.; Shen, W. D.; Yang, D. Y.; Liu, T. T.

    2017-11-01

    Assembly-level function-to-form mapping is the most effective procedure towards design automation. The research work mainly includes the assembly-level function definitions, the product network model, and the two-step mapping mechanism. The function-to-form mapping is divided into two steps: the first-step mapping, from function to behavior, and the second-step mapping, from behavior to structure. After the first-step mapping, the three-dimensional transmission chain (or 3D sketch) is studied, and feasible design computing tools are developed. The mapping procedure is relatively easy to implement interactively but quite difficult to complete automatically, so manual, semi-automatic, and automatic approaches, along with interactive modification of the mapping model, are studied. A mechanical hand F-F mapping process is illustrated to verify the design methodologies.

  9. The 2008 U.S. Geological Survey national seismic hazard models and maps for the central and eastern United States

    USGS Publications Warehouse

    Petersen, Mark D.; Frankel, Arthur D.; Harmsen, Stephen C.; Mueller, Charles S.; Boyd, Oliver S.; Luco, Nicolas; Wheeler, Russell L.; Rukstales, Kenneth S.; Haller, Kathleen M.

    2012-01-01

    In this paper, we describe the scientific basis for the source and ground-motion models applied in the 2008 National Seismic Hazard Maps, the development of new products that are used for building design and risk analyses, relationships between the hazard maps and design maps used in building codes, and potential future improvements to the hazard maps.

  10. Similarity and accuracy of mental models formed during nursing handovers: A concept mapping approach.

    PubMed

    Drach-Zahavy, Anat; Broyer, Chaya; Dagan, Efrat

    2017-09-01

    Shared mental models are crucial for constructing mutual understanding of the patient's condition during a clinical handover. Yet scant research, if any, has empirically explored the mental models of the parties involved in a clinical handover. This study aimed to examine the similarities among the mental models of incoming and outgoing nurses, and to test their accuracy by comparing them with the mental models of expert nurses. A cross-sectional study explored nurses' mental models via the concept mapping technique across 40 clinical handovers. Data were collected via concept mapping of the incoming, outgoing, and expert nurses' mental models (a total of 120 concept maps). Similarity and accuracy indexes for concepts and associations were calculated to compare the different maps. About one fifth of the concepts emerged in both outgoing and incoming nurses' concept maps (concept similarity=23%±10.6). Concept accuracy indexes were 35%±18.8 for incoming and 62%±19.6 for outgoing nurses' maps. Although incoming nurses absorbed fewer concepts and associations (23% and 12%, respectively), they partially closed the gap (35% and 22%, respectively) relative to the expert nurses' maps. The correlations between concept similarity and the concept accuracy of both incoming and outgoing nurses were significant (r=0.43, p<0.01; r=0.68, p<0.01, respectively). Finally, in 90% of the maps, outgoing nurses added information concerning the processes enacted during the shift, beyond the expert nurses' gold standard. Two seemingly contradictory processes in the handover were identified: "information loss", captured by the low similarity indexes between the mental models of incoming and outgoing nurses; and "information restoration", based on the accuracy indexes of the incoming nurses' mental models. Based on mental model theory, we propose possible explanations for these processes and derive implications for improving clinical handovers. Copyright © 2017 Elsevier Ltd. All rights reserved.
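
    A sketch of how such similarity and accuracy indexes can be computed as set overlaps between concept maps; the exact index definitions in the study may differ, and the concepts below are invented.

```python
# Sketch of concept-map similarity/accuracy indexes as set overlaps.
def concept_similarity(map_a, map_b):
    # share of all concepts that appear in both maps (one common choice)
    return 100.0 * len(map_a & map_b) / len(map_a | map_b)

outgoing = {"pain level", "IV fluids", "family visit", "lab results"}
incoming = {"pain level", "IV fluids", "discharge plan"}
expert   = {"pain level", "IV fluids", "lab results", "allergies"}

print(f"similarity   = {concept_similarity(outgoing, incoming):.0f}%")
# accuracy: share of a nurse's concepts that also appear in the expert map
accuracy = 100.0 * len(incoming & expert) / len(incoming)
print(f"accuracy(in) = {accuracy:.0f}%")
```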

  11. Improved spatial accuracy of functional maps in the rat olfactory bulb using supervised machine learning approach.

    PubMed

    Murphy, Matthew C; Poplawsky, Alexander J; Vazquez, Alberto L; Chan, Kevin C; Kim, Seong-Gi; Fukuda, Mitsuhiro

    2016-08-15

    Functional MRI (fMRI) is a popular and important tool for noninvasive mapping of neural activity. As fMRI measures the hemodynamic response, the resulting activation maps do not perfectly reflect the underlying neural activity. The purpose of this work was to design a data-driven model to improve the spatial accuracy of fMRI maps in the rat olfactory bulb. This system is an ideal choice for this investigation since the bulb circuit is well characterized, allowing for an accurate definition of activity patterns in order to train the model. We generated models for both cerebral blood volume weighted (CBVw) and blood oxygen level dependent (BOLD) fMRI data. The results indicate that the spatial accuracy of the activation maps is either significantly improved or at worst not significantly different when using the learned models compared to a conventional general linear model approach, particularly for BOLD images and activity patterns involving deep layers of the bulb. Furthermore, the activation maps computed by CBVw and BOLD data show increased agreement when using the learned models, lending more confidence to their accuracy. The models presented here could have an immediate impact on studies of the olfactory bulb, but perhaps more importantly, demonstrate the potential for similar flexible, data-driven models to improve the quality of activation maps calculated using fMRI data. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Custom map projections for regional groundwater models

    USGS Publications Warehouse

    Kuniansky, Eve L.

    2017-01-01

    For regional groundwater flow models (areas greater than 100,000 km2), improper choice of map projection parameters can result in model error for boundary conditions dependent on area (recharge or evapotranspiration simulated by applying a rate to the cell area from model discretization) and length (rivers simulated with a head-dependent flux boundary). Smaller model areas can use local map coordinates, such as State Plane (United States) or Universal Transverse Mercator (correct zone), without introducing large errors. Map projections vary in order to preserve one or more of the following properties: area, shape, distance (length), or direction. Numerous map projections have been developed for different purposes, as all four properties cannot be preserved simultaneously. Preservation of area and length is most critical for groundwater models. The Albers equal-area conic projection with custom standard parallels, selected by dividing the north-south extent by 6 and placing the standard parallels 1/6th of that length above the southern extent and 1/6th below the northern extent, preserves both area and length for continental areas at mid latitudes oriented east-west. Custom map projection parameters can also minimize area and length error in non-ideal projections. Additionally, one must use consistent vertical and horizontal datums for all geographic data. The generalized polygon for the Floridan aquifer system study area (306,247.59 km2) is used to provide quantitative examples of the effect of map projections on length and area with different projections and parameter choices. Use of an improper map projection is one model construction problem easily avoided.
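
    The standard-parallel rule described above reduces to a few lines of code; the proj-string comment is included only to show where the computed parallels would be used.

```python
# Standard-parallel rule from the text: divide the model area's
# north-south extent by 6 and place the parallels 1/6 of that extent
# inside the southern and northern limits.
def albers_standard_parallels(lat_south, lat_north):
    sixth = (lat_north - lat_south) / 6.0
    return lat_south + sixth, lat_north - sixth

# e.g. an extent spanning 25N to 35N
sp1, sp2 = albers_standard_parallels(25.0, 35.0)
print(sp1, sp2)   # 26.67 and 33.33
# would be used as, e.g., "+proj=aea +lat_1=26.67 +lat_2=33.33 ..."
```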

  13. Methods of Technological Forecasting,

    DTIC Science & Technology

    1977-05-01

    Trend Extrapolation; Progress Curve; Analogy; Trend Correlation; Substitution Analysis or Substitution Growth Curves; Envelope Curve; Advances in the State of the Art; Technological Mapping; Contextual Mapping; Matrix Input-Output Analysis; Mathematical Models; Simulation Models; Dynamic Modelling. CHAPTER IV ... Generation; Interaction between Needs and Possibilities; Map of the Technological Future; Cross-Impact Matrix; Discovery Matrix; Morphological Analysis

  14. Painting a picture across the landscape with ModelMap

    Treesearch

    Brian Cooke; Elizabeth Freeman; Gretchen Moisen; Tracey Frescino

    2017-01-01

    Scientists and statisticians working for the Rocky Mountain Research Station have created a software package that simplifies and automates many of the processes needed for converting models into maps. This software package, called ModelMap, has helped a variety of specialists and land managers to quickly convert data into easily understood graphical images. The...

  15. SE Great Basin Play Fairway Analysis

    DOE Data Explorer

    Adam Brandt

    2015-11-15

    This submission includes a map of the probability that Na/K geothermometer temperatures exceed 200 deg C, as well as two play fairway analysis (PFA) models. The probability map acts as a composite risk segment for the PFA models. The PFA models differ in their application of magnetotelluric conductors as composite risk segments. These PFA models map out the geothermal potential of the SE Great Basin region of Utah.

  16. Landscape patterns from mathematical morphology on maps with contagion

    Treesearch

    Kurt Riitters; Peter Vogt; Pierre Soille; Christine Estreguil

    2009-01-01

    The perceived realism of simulated maps with contagion (spatial autocorrelation) has led to their use for comparing landscape pattern metrics and as habitat maps for modeling organism movement across landscapes. The objective of this study was to conduct a neutral model analysis of pattern metrics defined by morphological spatial pattern analysis (MSPA) on maps with...

  17. Analyzing the Broken Ridge area of the Indian Ocean using magnetic and gravity anomaly maps and geoid undulation and bathymetry data

    NASA Technical Reports Server (NTRS)

    Lazarewicz, A. R.; Sailor, R. V. (Principal Investigator)

    1982-01-01

    A higher resolution anomaly map of the Broken Ridge area (2 degree dipole spacing) was produced and reduced to the pole using quiet time data for this area. The map was compared with equally scaled maps of gravity anomaly, geoid undulation, and bathymetry. The ESMAP results were compared with a NASA MAGSAT map derived by averaging data in two-degree bins. A survey simulation was developed to model the accuracy of MAGSAT anomaly maps as a function of satellite altitude, instrument noise level, external noise model, and crustal anomaly field model. A preliminary analysis of the geophysical structure of Broken Ridge is presented and unresolved questions are listed.

  18. Cartographic mapping study

    NASA Technical Reports Server (NTRS)

    Wilson, C.; Dye, R.; Reed, L.

    1982-01-01

    The errors associated with planimetric mapping of the United States using satellite remote sensing techniques are analyzed. Assumptions concerning the state of the art achievable for satellite mapping systems and platforms in the 1995 time frame are made. An analysis of these performance parameters is made using an interactive cartographic satellite computer model, after first validating the model using LANDSAT 1 through 3 performance parameters. An investigation of current large scale (1:24,000) US National mapping techniques is made. Using the results of this investigation, and current national mapping accuracy standards, the 1995 satellite mapping system is evaluated for its ability to meet US mapping standards for planimetric and topographic mapping at scales of 1:24,000 and smaller.

  19. An atomic model of brome mosaic virus using direct electron detection and real-space optimization.

    PubMed

    Wang, Zhao; Hryc, Corey F; Bammes, Benjamin; Afonine, Pavel V; Jakana, Joanita; Chen, Dong-Hua; Liu, Xiangan; Baker, Matthew L; Kao, Cheng; Ludtke, Steven J; Schmid, Michael F; Adams, Paul D; Chiu, Wah

    2014-09-04

    Advances in electron cryo-microscopy have enabled structure determination of macromolecules at near-atomic resolution. However, structure determination, even using de novo methods, remains susceptible to model bias and overfitting. Here we describe a complete workflow for data acquisition, image processing, all-atom modelling and validation of brome mosaic virus, an RNA virus. Data were collected with a direct electron detector in integrating mode and an exposure beyond the traditional radiation damage limit. The final density map has a resolution of 3.8 Å as assessed by two independent data sets and maps. We used the map to derive an all-atom model with a newly implemented real-space optimization protocol. The validity of the model was verified by its match with the density map and a previous model from X-ray crystallography, as well as the internal consistency of models from independent maps. This study demonstrates a practical approach to obtain a rigorously validated atomic resolution electron cryo-microscopy structure.

  20. SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model

    NASA Astrophysics Data System (ADS)

    Grelle, Gerardo; Bonito, Laura; Lampasi, Alessandro; Revellino, Paola; Guerriero, Luigi; Sappa, Giuseppe; Guadagno, Francesco Maria

    2016-04-01

    The SiSeRHMap (simulator for mapped seismic response using a hybrid model) is a computerized methodology capable of elaborating prediction maps of seismic response in terms of acceleration spectra. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (geographic information system) cubic model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A meta-modelling process confers a hybrid nature to the methodology. In this process, the one-dimensional (1-D) linear equivalent analysis produces acceleration response spectra for a specified number of site profiles using one or more input motions. The shear wave velocity-thickness profiles, defined as trainers, are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Emul-spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated evolutionary algorithm (EA) and the Levenberg-Marquardt algorithm (LMA) as the final optimizer. In the final step, the GCM maps executor module produces a serial map set of stratigraphic seismic response at different periods by grid-solving the calibrated Emul-spectra model. In addition, the topographic amplification of the spectra is also computed by means of a 3-D validated numerical prediction model. This model is built to match the results of numerical simulations of isolated reliefs, using GIS morphometric data. In this way, different sets of seismic response maps are developed, on which maps of design acceleration response spectra are also defined by means of an enveloping technique.

  1. Mediterranean maquis fuel model development and mapping to support fire modeling

    NASA Astrophysics Data System (ADS)

    Bacciu, V.; Arca, B.; Pellizzaro, G.; Salis, M.; Ventura, A.; Spano, D.; Duce, P.

    2009-04-01

    Fuel load data and fuel model maps are critical inputs for fire spread and behaviour modeling. The availability of accurate input data at different spatial and temporal scales allows detailed analysis and prediction of fire hazard and fire effects across a landscape. Fuel model data are used in spatially explicit fire growth models to obtain fire behaviour information for fuel management in prescribed fires, fire management applications, firefighter training, smoke emission estimates, etc. However, fuel type characteristics are difficult to parameterize because of their complexity and variability: live and dead materials of different sizes contribute in different ways to fire spread and behaviour. In recent decades, considerable help has come from the use of remote sensing imagery at high spatial and spectral resolution. Such techniques can capture fine-scale fuel distributions for accurate fire growth projections. Several European efforts have been devoted to fuel classification and map characterization. In Italy, fuel load estimation and fuel model definition are still critical issues to be addressed because of the lack of detailed information. In this perspective, the aim of the present work was to propose an integrated approach based on field data collection, fuel model development, and fuel model mapping to provide fuel models for the Mediterranean maquis associations. Field data needed for the development of fuel models were collected using destructive and non-destructive measurements in experimental plots located in Northern Sardinia (Italy). Statistical tests were used to identify the main fuel types, which were classified into four custom fuel models. Subsequently, a supervised classification using the Maximum Likelihood algorithm was applied to IKONOS images to identify and map the different types of maquis vegetation. The corresponding fuel model was then associated with each vegetation type to obtain the fuel model map. The results show the potential of this approach to achieve reasonable accuracy in fuel model development and mapping; fine-scale fuel model maps can be helpful in obtaining realistic predictions of fire behaviour and fire effects.

  2. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density (PSD) model, such re-sampling does not introduce the aliasing and interpolation errors caused by conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. The new method also automatically eliminates measurement noise and other measurement errors such as artificial discontinuities. The development cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specifications on the optical quality of individual optics before they are fabricated, through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match the gridded data format of a model validation tool, and (2) eliminating surface measurement noise or measurement errors so that the resulting surface height map is continuous or smoothly varying. To date, the preferred method for re-sampling a surface map has been two-dimensional interpolation. The main problem with this method is that the same pixel can take different values depending on the interpolation method chosen, such as the "nearest," "linear," "cubic," and "spline" fits in Matlab. The conventional FFT-based spatial filtering method used to eliminate surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low-spatial-frequency characteristics of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains the mid- and high-spatial-frequency characteristics by using a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because the new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter aliasing effects and does not introduce discontinuities in the resulting surface map.
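
    A sketch of the low-spatial-frequency half of this idea: fit a few low-order Zernike-like terms to a measured map by least squares, then evaluate the analytic fit on the target grid. The mid/high-frequency PSD component is omitted for brevity, and the basis below is a simplified Cartesian stand-in for a proper Zernike set.

```python
# Analytic re-sampling sketch: least-squares fit of low-order terms on the
# unit square, then noise-free evaluation on a finer grid.
import numpy as np

def zernike_basis(x, y):
    # first six Zernike-like terms in Cartesian form (piston, tilts,
    # defocus, two astigmatisms)
    return np.stack([np.ones_like(x), x, y,
                     2*(x**2 + y**2) - 1, x**2 - y**2, 2*x*y], axis=-1)

def resample(surface, n_out):
    n_in = surface.shape[0]
    grid = lambda n: np.linspace(-1, 1, n)
    xi, yi = np.meshgrid(grid(n_in), grid(n_in))
    coeffs, *_ = np.linalg.lstsq(
        zernike_basis(xi, yi).reshape(-1, 6), surface.ravel(), rcond=None)
    xo, yo = np.meshgrid(grid(n_out), grid(n_out))
    return zernike_basis(xo, yo) @ coeffs

rng = np.random.default_rng(1)
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
measured = 0.5*x + 0.2*(2*(x**2 + y**2) - 1) + 0.01*rng.standard_normal(x.shape)
upsampled = resample(measured, 256)   # noise-free analytic re-sampling
print(upsampled.shape)                # (256, 256)
```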

  3. Lunar and Vesta Web Portals

    NASA Astrophysics Data System (ADS)

    Law, E.; JPL Luna Mapping; Modeling Project Team

    2015-06-01

    The Lunar Mapping and Modeling Project offers Lunar Mapping and Modeling Portal (http://lmmp.nasa.gov) and Vesta Trek Portal (http://vestatrek.jpl.nasa.gov) providing interactive visualization and analysis tools to enable users to access mapped Lunar and Vesta data products.

  4. Two-component Thermal Dust Emission Model: Application to the Planck HFI Maps

    NASA Astrophysics Data System (ADS)

    Meisner, Aaron M.; Finkbeiner, Douglas P.

    2014-06-01

    We present full-sky, 6.1 arcminute resolution maps of dust optical depth and temperature derived by fitting the Finkbeiner et al. (1999) two-component dust emission model to the Planck HFI and IRAS 100 micron maps. This parametrization of the far infrared thermal dust SED as the sum of two modified blackbodies serves as an important alternative to the commonly adopted single modified blackbody dust emission model. We expect our Planck-based maps of dust temperature and optical depth to form the basis for a next-generation, high-resolution extinction map which will additionally incorporate small-scale detail from WISE imaging.
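
    A sketch of the two-component parametrization: the SED is the sum of two modified blackbodies (power-law emissivity times the Planck function). The parameter values below are adapted from the Finkbeiner et al. (1999) fits and should be treated as illustrative; the published model also ties the component amplitudes to an opacity ratio omitted here.

```python
# Two-component modified-blackbody dust SED sketch (illustrative values).
import numpy as np

H, K_B, C = 6.62607e-34, 1.380649e-23, 2.99792458e8

def planck(nu, T):
    # Planck function B_nu(T) in SI units
    return 2*H*nu**3 / C**2 / np.expm1(H*nu / (K_B*T))

def two_component_sed(nu, tau, f1=0.0363, beta1=1.67, T1=9.4,
                      beta2=2.70, T2=16.2, nu0=3.0e12):
    # cold and warm components with power-law emissivities, scaled at nu0
    comp1 = f1 * (nu/nu0)**beta1 * planck(nu, T1)
    comp2 = (1 - f1) * (nu/nu0)**beta2 * planck(nu, T2)
    return tau * (comp1 + comp2)

nu = np.array([100e9, 217e9, 353e9, 857e9])   # Planck HFI bands (Hz)
print(two_component_sed(nu, tau=1e-5))
```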

  5. Integrating recent land cover mapping efforts to update the National Gap Analysis Program's species habitat map

    USGS Publications Warehouse

    McKerrow, Alexa; Davidson, A.; Earnhardt, Todd; Benson, Abigail L.; Toth, Charles; Holm, Thomas; Jutz, Boris

    2014-01-01

    Over the past decade, great progress has been made in developing national-extent land cover mapping products to address natural resource issues. One of the core products of the GAP Program is range-wide species distribution models for nearly 2000 terrestrial vertebrate species in the U.S. We rely on deductive modeling of habitat affinities using these products to create models of habitat availability. That approach requires a thematically rich and ecologically meaningful map legend to support the modeling effort. In this work, we tested the integration of the Multi-Resolution Land Characteristics Consortium's National Land Cover Database 2011 and LANDFIRE's Disturbance products to update the 2001 National GAP Vegetation Dataset to reflect 2011 conditions. The revised product can then be used to update the species models. We tested the update approach in three geographic areas (Northeast, Southeast, and Interior Northwest). We used the NLCD product to identify areas where the cover type mapped in 2011 differed from the 2001 land cover map. We used Google Earth and ArcGIS base maps as reference imagery to label areas identified as "changed" with the appropriate class from our map legend. Areas mapped as urban or water in the 2011 NLCD map that were mapped differently in the 2001 GAP map were accepted without further validation and recoded to the corresponding GAP class. We used LANDFIRE's Disturbance products to identify changes resulting from recent disturbance and to inform the reassignment of areas to their updated thematic label. We ran species habitat models for three species: Lewis's woodpecker (Melanerpes lewis), the white-tailed jackrabbit (Lepus townsendii), and the brown-headed nuthatch (Sitta pusilla). For each of the three vertebrate species we found important differences in the amount and location of suitable habitat between the 2001 and 2011 habitat maps. Specifically, brown-headed nuthatch habitat changed by −14% relative to the 2001 modeled habitat, whereas Lewis's woodpecker habitat increased by 4%. The white-tailed jackrabbit had a net change of −1% (an 11% decline and a 10% gain). For that species, the locally most important transitions were the opening of forest due to burning and shrub regeneration following harvest. In the Southeast, updates related to timber management and urbanization were locally important.

  6. A Method of Surrogate Model Construction which Leverages Lower-Fidelity Information using Space Mapping Techniques

    DTIC Science & Technology

    2014-03-27

    fidelity. This pairing is accomplished through the use of a space mapping technique, which is a process where the design space of a lower fidelity model...is aligned with that of a higher fidelity model. The intent of applying space mapping techniques to the field of surrogate construction is to leverage the

  7. InMAP: a new model for air pollution interventions

    NASA Astrophysics Data System (ADS)

    Tessum, C. W.; Hill, J. D.; Marshall, J. D.

    2015-10-01

    Mechanistic air pollution models are essential tools in air quality management. Widespread use of such models is hindered, however, by the extensive expertise or computational resources needed to run most models. Here, we present InMAP (Intervention Model for Air Pollution), which offers an alternative to comprehensive air quality models for estimating the air pollution health impacts of emission reductions and other potential interventions. InMAP estimates annual-average changes in primary and secondary fine particle (PM2.5) concentrations - the air pollution outcome generally causing the largest monetized health damages - attributable to annual changes in precursor emissions. InMAP leverages pre-processed physical and chemical information from the output of a state-of-the-science chemical transport model (WRF-Chem) within an Eulerian modeling framework, to perform simulations that are several orders of magnitude less computationally intensive than comprehensive model simulations. InMAP uses a variable resolution grid that focuses on human exposures by employing higher spatial resolution in urban areas and lower spatial resolution in rural and remote locations and in the upper atmosphere; and by directly calculating steady-state, annual average concentrations. In comparisons run here, InMAP recreates WRF-Chem predictions of changes in total PM2.5 concentrations with population-weighted mean fractional error (MFE) and bias (MFB) < 10 % and population-weighted R2 ~ 0.99. Among individual PM2.5 species, the best predictive performance is for primary PM2.5 (MFE: 16 %; MFB: 13 %) and the worst predictive performance is for particulate nitrate (MFE: 119 %; MFB: 106 %). Potential uses of InMAP include studying exposure, health, and environmental justice impacts of potential shifts in emissions for annual-average PM2.5. Features planned for future model releases include a larger spatial domain, more temporal information, and the ability to predict ground-level ozone (O3) concentrations. The InMAP model source code and input data are freely available online.

  8. A Semiparametric Approach for Composite Functional Mapping of Dynamic Quantitative Traits

    PubMed Central

    Yang, Runqing; Gao, Huijiang; Wang, Xin; Zhang, Ji; Zeng, Zhao-Bang; Wu, Rongling

    2007-01-01

    Functional mapping has emerged as a powerful tool for mapping quantitative trait loci (QTL) that control developmental patterns of complex dynamic traits. Original functional mapping has been constructed within the context of simple interval mapping, without consideration of separate multiple linked QTL for a dynamic trait. In this article, we present a statistical framework for mapping QTL that affect dynamic traits by capitalizing on the strengths of functional mapping and composite interval mapping. Within this so-called composite functional-mapping framework, functional mapping models the time-dependent genetic effects of a QTL tested within a marker interval using a biologically meaningful parametric function, whereas composite interval mapping models the time-dependent genetic effects of the markers outside the test interval to control the genome background using a flexible nonparametric approach based on Legendre polynomials. Such a semiparametric framework was formulated by a maximum-likelihood model and implemented with the EM algorithm, allowing for the estimation and the test of the mathematical parameters that define the QTL effects and the regression coefficients of the Legendre polynomials that describe the marker effects. Simulation studies were performed to investigate the statistical behavior of composite functional mapping and compare its advantage in separating multiple linked QTL as compared to functional mapping. We used the new mapping approach to analyze a genetic mapping example in rice, leading to the identification of multiple QTL, some of which are linked on the same chromosome, that control the developmental trajectory of leaf age. PMID:17947431
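
    A sketch of the nonparametric ingredient described above: approximating a time-varying marker effect with a low-order Legendre polynomial regression (toy data; the full model embeds this in an EM-fitted likelihood).

```python
# Legendre-polynomial approximation of a time-dependent effect.
import numpy as np
from numpy.polynomial import legendre

ages = np.linspace(0, 1, 12)                  # time points, scaled to [0, 1]
effect = 0.8*ages**2 - 0.3*ages + 0.1         # toy time-varying effect
t = 2*ages - 1                                # map to Legendre domain [-1, 1]
coeffs = legendre.legfit(t, effect, deg=3)    # regression coefficients
print(np.round(legendre.legval(t, coeffs) - effect, 6))  # near-zero residuals
```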

  9. Ensemble of ground subsidence hazard maps using fuzzy logic

    NASA Astrophysics Data System (ADS)

    Park, Inhye; Lee, Jiyeong; Saro, Lee

    2014-06-01

    Hazard maps of ground subsidence around abandoned underground coal mines (AUCMs) in Samcheok, Korea, were constructed using fuzzy ensemble techniques and a geographical information system (GIS). To evaluate the factors related to ground subsidence, a spatial database was constructed from topographic, geologic, mine tunnel, land use, groundwater, and ground subsidence maps. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 70/30 for training and validation of the models. The relationships between the detected ground-subsidence area and the factors were identified and quantified by frequency ratio (FR), logistic regression (LR), and artificial neural network (ANN) models. The relationships were used as factor ratings in the overlay analysis to create ground-subsidence hazard indexes and maps. The three GSH maps were then used as new input factors and integrated using fuzzy-ensemble methods to make better hazard maps. All of the hazard maps were validated by comparison with known subsidence areas that were not used directly in the analysis. As a result, the ensemble model was found to be more effective in terms of prediction accuracy than the individual models.
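
    A sketch of the fuzzy-ensemble overlay step, assuming the three hazard indexes have been rescaled to [0, 1]; the gamma operator shown is one common choice, not necessarily the one used in the study.

```python
# Fuzzy ensemble overlay of three hazard-index maps (FR, LR, ANN stand-ins).
import numpy as np

def fuzzy_gamma(maps, gamma=0.8):
    maps = np.stack(maps)
    fuzzy_and = np.prod(maps, axis=0)             # algebraic product
    fuzzy_or = 1 - np.prod(1 - maps, axis=0)      # algebraic sum
    return fuzzy_and**(1 - gamma) * fuzzy_or**gamma

rng = np.random.default_rng(7)
fr, lr, ann = (rng.random((50, 50)) for _ in range(3))  # stand-in indexes
ensemble = fuzzy_gamma([fr, lr, ann], gamma=0.8)
print(float(ensemble.min()), float(ensemble.max()))
```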

  10. Use of slope, aspect, and elevation maps derived from digital elevation model data in making soil surveys

    USGS Publications Warehouse

    Klingebiel, A.A.; Horvath, E.H.; Moore, D.G.; Reybold, W.U.

    1987-01-01

    Maps showing different classes of slope, aspect, and elevation were developed from U.S. Geological Survey digital elevation model data. The classes were displayed on clear Mylar at 1:24 000-scale and registered with topographic maps and orthophotos. The maps were used with aerial photographs, topographic maps, and other resource data to determine their value in making order-three soil surveys. They were tested on over 600 000 ha in Wyoming, Idaho, and Nevada under various climatic and topographic conditions. Field evaluations showed that the maps developed from digital elevation model data were accurate, except for slope class maps where slopes were <4%. The maps were useful to soil scientists, especially where (i) class boundaries coincided with soil changes, landform delineations, land use and management separations, and vegetation changes, and (ii) rough terrain and dense vegetation made it difficult to traverse the area. In hot, arid areas of sparse vegetation, the relationship of slope classes to kinds of soil and vegetation was less significant.
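
    A sketch of deriving slope and aspect classes from gridded elevation data with finite differences; the aspect convention shown is one common choice, and the class breaks are illustrative.

```python
# Slope and aspect classes from a digital elevation model (DEM).
import numpy as np

def slope_aspect(dem, cell_size):
    dz_dy, dz_dx = np.gradient(dem, cell_size)       # elevation gradients
    slope_pct = 100.0 * np.hypot(dz_dx, dz_dy)       # slope in percent
    # compass-like aspect in degrees (one common convention)
    aspect = (np.degrees(np.arctan2(-dz_dx, dz_dy)) + 360) % 360
    return slope_pct, aspect

dem = np.outer(np.linspace(0, 90, 10), np.ones(10))  # 10x10 toy DEM (m)
slope_pct, aspect = slope_aspect(dem, cell_size=30.0)
slope_class = np.digitize(slope_pct, [4, 8, 15, 30]) # breaks in percent
print(slope_pct[0, 0], aspect[0, 0], slope_class[0, 0])
```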

  11. Simulation of seagrass bed mapping by satellite images based on the radiative transfer model

    NASA Astrophysics Data System (ADS)

    Sagawa, Tatsuyuki; Komatsu, Teruhisa

    2015-06-01

    Seagrass and seaweed beds play important roles in coastal marine ecosystems. They are food sources and habitats for many marine organisms, and they influence the physical, chemical, and biological environment. They are sensitive to human impacts such as reclamation and pollution, so their management and preservation are necessary for a healthy coastal environment. Satellite remote sensing is a useful tool for mapping and monitoring seagrass beds. The efficiency of seagrass mapping, and of seagrass bed classification in particular, has been evaluated in terms of mapping accuracy using an error matrix. However, mapping accuracies are influenced by coastal conditions such as seawater transparency, bathymetry, and substrate type. Coastal management requires sufficient accuracy and an understanding of mapping limitations for monitoring coastal habitats, including seagrass beds. Previous studies are mainly based on case studies in specific regions and seasons, and generalising assessments of classification accuracy from such case studies requires extensive data, which has proven difficult to obtain. This study aims to build a simulator based on a radiative transfer model to produce modelled satellite images and to assess the visual detectability of seagrass beds under different transparencies and seagrass coverages, as well as to examine mapping limitations and classification accuracy. Our simulations led to a model of the mapping depth limits as a function of water transparency and indicated the possibility of seagrass density mapping under certain ideal conditions. The results show that modelled satellite images are useful in evaluating classification accuracy and provide a reliable basis for establishing seagrass bed monitoring by remote sensing.
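
    A sketch of the kind of shallow-water reflectance relation at the core of such simulators (a two-flow approximation in the spirit of Maritorena et al., 1994); the attenuation and albedo values are illustrative, not the study's.

```python
# Shallow-water reflectance sketch: observed reflectance mixes bottom and
# water-column signals with exponential attenuation over depth.
import numpy as np

def shallow_reflectance(depth_m, bottom_albedo, r_deep=0.02, k_d=0.15):
    atten = np.exp(-2.0 * k_d * depth_m)          # two-way attenuation
    return r_deep + (bottom_albedo - r_deep) * atten

depths = np.array([1.0, 5.0, 10.0, 20.0])
seagrass, sand = 0.05, 0.30                       # bottom albedos
contrast = (shallow_reflectance(depths, sand)
            - shallow_reflectance(depths, seagrass))
print(np.round(contrast, 4))   # contrast fades with depth -> mapping limit
```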

  12. Temporal expansion of annual crop classification layers for the CONUS using the C5 decision tree classifier

    USGS Publications Warehouse

    Friesz, Aaron M.; Wylie, Bruce K.; Howard, Daniel M.

    2017-01-01

    Crop cover maps have become widely used in a range of research applications, and multiple crop cover maps have been developed to suit particular research interests. The National Agricultural Statistics Service (NASS) Cropland Data Layers (CDL) are a series of commonly used crop cover maps for the conterminous United States (CONUS) that span 2008 to 2013. In this investigation, we sought to contribute to the availability of consistent CONUS crop cover maps by extending the temporal coverage of the NASS CDL archive back eight additional years, to 2000, by creating annual NASS CDL-like crop cover maps derived from a classification tree model algorithm. We used over 11 million records to train a classification tree algorithm and develop a crop classification model (CCM). The model was used to create crop cover maps for the CONUS for the years 2000-2013 at 250 m spatial resolution. The CCM and the maps for the years 2008-2013 were assessed for accuracy relative to resampled NASS CDLs. The CCM performed well against a withheld test data set, with a model prediction accuracy of over 90%. The assessment of the crop cover maps indicated that the model performed well spatially, placing crop cover pixels within their known domains; however, the model showed a bias towards the 'Other' crop cover class, which caused frequent misclassifications of pixels around the periphery of large crop cover patch clusters and of pixels that form small, sparsely dispersed crop cover patches.
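
    A sketch of the classification-tree workflow with scikit-learn's CART as a stand-in for the C5 algorithm used in the study; the features and labels are synthetic placeholders for the spectral inputs.

```python
# Classification-tree sketch with a withheld test set (CART stand-in for C5).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.random((5000, 6))                        # e.g. NDVI metrics per pixel
y = (X[:, 0] + 0.5*X[:, 1] > 0.9).astype(int)    # 1 = crop, 0 = 'Other'

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier(max_depth=8).fit(X_tr, y_tr)
print(f"withheld-test accuracy: {model.score(X_te, y_te):.3f}")
```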

  13. [Modeling developmental aspects of sensorimotor control of speech production].

    PubMed

    Kröger, B J; Birkholz, P; Neuschaefer-Rube, C

    2007-05-01

    Detailed knowledge of the neurophysiology of speech acquisition is important for understanding the developmental aspects of speech perception and production and for understanding developmental disorders of speech perception and production. A computer-implemented neural model of sensorimotor control of speech production was developed. The model is capable of demonstrating in detail the neural functions of different cortical areas during speech production. (i) Two sensory and two motor maps or neural representations, together with the appertaining neural mappings or projections, establish the sensorimotor feedback control system. These maps and mappings are already formed and trained during the prelinguistic phase of speech acquisition. (ii) The feedforward sensorimotor control system comprises the lexical map (representations of sounds, syllables, and words of the first language) and the mappings from the lexical to the sensory and motor maps. The training of the appertaining mappings forms the linguistic phase of speech acquisition. (iii) Three prelinguistic learning phases, i.e., silent mouthing, quasi-stationary vocalic articulation, and realisation of articulatory protogestures, can be defined on the basis of our simulation studies using the computational neural model. These learning phases can be associated with temporal phases of prelinguistic speech acquisition obtained from natural data. The neural model illuminates the detailed function of specific cortical areas during speech production. In particular, it can be shown that developmental disorders of speech production may result from a delayed or incorrect process within one of the prelinguistic learning phases defined by the neural model.

  14. Cartographic Modeling: Computer-assisted Analysis of Spatially Defined Neighborhoods

    NASA Technical Reports Server (NTRS)

    Berry, J. K.; Tomlin, C. D.

    1982-01-01

    Cartographic models addressing a wide variety of applications are composed of fundamental map processing operations. These primitive operations are neither data base nor application-specific. By organizing the set of operations into a mathematical-like structure, the basis for a generalized cartographic modeling framework can be developed. Among the major classes of primitive operations are those associated with reclassifying map categories, overlaying maps, determining distance and connectivity, and characterizing cartographic neighborhoods. The conceptual framework of cartographic modeling is established and techniques for characterizing neighborhoods are used as a means of demonstrating some of the more sophisticated procedures of computer-assisted map analysis. A cartographic model for assessing effective roundwood supply is briefly described as an example of a computer analysis. Most of the techniques described have been implemented as part of the map analysis package developed at the Yale School of Forestry and Environmental Studies.

  15. Lyman-α Models for LRO LAMP from MESSENGER MASCS and SOHO SWAN Data

    NASA Astrophysics Data System (ADS)

    Pryor, Wayne R.; Holsclaw, Gregory M.; McClintock, William E.; Snow, Martin; Vervack, Ronald J.; Gladstone, G. Randall; Stern, S. Alan; Retherford, Kurt D.; Miles, Paul F.

    From models of the interplanetary Lyman-α glow derived from Mercury Atmospheric and Surface Composition Spectrometer (MASCS) interplanetary Lyman-α data obtained in 2009-2011 on the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft mission, daily all-sky Lyman-α maps were generated for use by the Lyman Alpha Mapping Project (LAMP) experiment on the Lunar Reconnaissance Orbiter (LRO). These models were then compared with Solar and Heliospheric Observatory (SOHO) Solar Wind ANisotropies (SWAN) Lyman-α maps when available. Although the empirical agreement across the sky between the scaled model and the SWAN maps is adequate for LAMP mapping purposes, the model brightness values agree best with the SWAN values in 2008 and 2009. SWAN's observations show a systematic decline in 2010 and 2011 relative to the model. It is not clear whether the decline represents a failure of the model or a decline in SWAN's sensitivity in 2010 and 2011. The MESSENGER MASCS and SOHO SWAN Lyman-α calibrations differ systematically in comparison with the model, with MASCS reporting Lyman-α values some 30% lower than SWAN.

  16. Malaria Disease Mapping in Malaysia based on Besag-York-Mollie (BYM) Model

    NASA Astrophysics Data System (ADS)

    Azah Samat, Nor; Mey, Liew Wan

    2017-09-01

    Disease mapping is the visual representation of the geographical distribution of disease, giving an overview of disease incidence within a population based on spatial epidemiological data. The resulting maps help in monitoring and planning resource needs at all levels of health care and in designing appropriate interventions, targeted at areas that deserve closer scrutiny or communities that warrant further investigation to identify important risk factors. The choice of statistical model used for relative risk estimation is therefore important, because the production of a disease risk map relies on the model used. This paper proposes the Besag-York-Mollie (BYM) model to estimate the relative risk of malaria in Malaysia. The analysis used malaria case counts obtained from the Ministry of Health Malaysia. The outcomes of the analysis are displayed as graphs and maps, including a malaria disease risk map constructed from the estimated relative risks. The distribution of high- and low-risk areas of malaria occurrence for all states in Malaysia can be identified from the risk map.
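
    A sketch of the raw ingredients of such an analysis: observed and expected case counts give a standardized morbidity ratio (SMR) per area, which the BYM model then smooths with spatial and unstructured random effects (not shown). All numbers are invented.

```python
# Raw relative-risk inputs for a BYM-style analysis: SMR per area.
cases = {"Sabah": 420, "Sarawak": 310, "Selangor": 35}
population = {"Sabah": 3.9e6, "Sarawak": 2.8e6, "Selangor": 6.5e6}

overall_rate = sum(cases.values()) / sum(population.values())
for state in cases:
    expected = overall_rate * population[state]   # cases under uniform risk
    print(f"{state}: SMR = {cases[state] / expected:.2f}")
```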

  17. Subsite mapping of enzymes. Application of the depolymerase computer model to two alpha-amylases.

    PubMed Central

    Allen, J D; Thoma, J A

    1976-01-01

    In the preceding paper (Allen and Thoma, 1976) we developed a depolymerase computer model, which uses a minimization routine to establish a subsite map for a depolymerase. In the present paper we show how the model is applied to experimental data for two alpha-amylases. Michaelis parameters and bond-cleavage frequencies for substrates of chain lengths up to twelve glucosyl units have been reported for Bacillus amyloliquefaciens, and a subsite map has been proposed for this enzyme [Thoma et al. (1971) J. Biol. Chem. 246, 5621-5635]. By applying the computer model to the experimental data, we have arrived at a ten-subsite map. We find that a significant improvement in this map is achieved by allowing the hydrolytic rate coefficient to vary as a function of the number of occupied subsites comprising the enzyme-binding region. From the bond-cleavage frequencies, the second enzyme is found to have eight subsites. A partial subsite map is arrived at, but the entire binding region cannot be mapped because the Michaelis parameters are complicated by transglycosylation reactions. The hydrolytic rate coefficients for this enzyme are not constant. PMID:999630

  18. Building Interoperable FHIR-Based Vocabulary Mapping Services: A Case Study of OHDSI Vocabularies and Mappings.

    PubMed

    Jiang, Guoqian; Kiefer, Richard; Prud'hommeaux, Eric; Solbrig, Harold R

    2017-01-01

    The OHDSI Common Data Model (CDM) is a deep information model, in which its vocabulary component plays a critical role in enabling consistent coding and query of clinical data. The objective of the study is to create methods and tools to expose the OHDSI vocabularies and mappings as the vocabulary mapping services using two HL7 FHIR core terminology resources ConceptMap and ValueSet. We discuss the benefits and challenges in building the FHIR-based terminology services.
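
    For illustration, a minimal FHIR R4-style ConceptMap resource built as a Python dict; the systems and codes are placeholders, not actual OHDSI content.

```python
# Minimal FHIR R4-style ConceptMap: one source code mapped to one target.
import json

concept_map = {
    "resourceType": "ConceptMap",
    "status": "draft",
    "group": [{
        "source": "http://example.org/source-vocabulary",
        "target": "http://example.org/target-vocabulary",
        "element": [{
            "code": "SRC-001",
            "target": [{"code": "TGT-042", "equivalence": "equivalent"}],
        }],
    }],
}
print(json.dumps(concept_map, indent=2))
```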

  19. Near-real-time simulation and internet-based delivery of forecast-flood inundation maps using two-dimensional hydraulic modeling--A pilot study for the Snoqualmie River, Washington

    USGS Publications Warehouse

    Jones, Joseph L.; Fulford, Janice M.; Voss, Frank D.

    2002-01-01

    A system of numerical hydraulic modeling, geographic information system processing, and Internet map serving, supported by new data sources and application automation, was developed that generates inundation maps for forecast floods in near real time and makes them available through the Internet. Forecasts for flooding are generated by the National Weather Service (NWS) River Forecast Center (RFC); these forecasts are retrieved automatically by the system and prepared for input to a hydraulic model. The model, TrimR2D, is a new, robust, two-dimensional model capable of simulating wide varieties of discharge hydrographs and relatively long stream reaches. TrimR2D was calibrated for a 28-kilometer reach of the Snoqualmie River in Washington State, and is used to estimate flood extent, depth, arrival time, and peak time for the RFC forecast. The results of the model are processed automatically by a Geographic Information System (GIS) into maps of flood extent, depth, and arrival and peak times. These maps subsequently are processed into formats acceptable by an Internet map server (IMS). The IMS application is a user-friendly interface to access the maps over the Internet; it allows users to select what information they wish to see presented and allows the authors to define scale-dependent availability of map layers and their symbology (appearance of map features). For example, the IMS presents a background of a digital USGS 1:100,000-scale quadrangle at smaller scales, and automatically switches to an ortho-rectified aerial photograph (a digital photograph that has camera angle and tilt distortions removed) at larger scales so viewers can see ground features that help them identify their area of interest more effectively. For the user, the option exists to select either background at any scale. Similar options are provided for both the map creator and the viewer for the various flood maps. This combination of a robust model, emerging IMS software, and application interface programming should allow the technology developed in the pilot study to be applied to other river systems where NWS forecasts are provided routinely.

  20. Full-sky, High-resolution Maps of Interstellar Dust

    NASA Astrophysics Data System (ADS)

    Meisner, Aaron Michael

    We present full-sky, high-resolution maps of interstellar dust based on data from the Wide-field Infrared Survey Explorer (WISE) and Planck missions. We describe our custom processing of the entire WISE 12 micron All-Sky imaging data set, and present the resulting 15 arcsecond resolution, full-sky map of diffuse Galactic dust emission, free of compact sources and other contaminating artifacts. Our derived 12 micron dust map offers angular resolution far superior to that of all other existing full-sky, infrared dust emission maps, revealing a wealth of small-scale filamentary structure. We also apply the Finkbeiner et al. (1999) two-component thermal dust emission model to the Planck HFI maps. We derive full-sky 6.1 arcminute resolution maps of dust optical depth and temperature by fitting this two-component model to Planck 217-857 GHz along with DIRBE/IRAS 100 micron data. In doing so, we obtain the first ever full-sky 100-3000 GHz Planck-based thermal dust emission model, as well as a dust temperature correction with ~10 times enhanced angular resolution relative to DIRBE-based temperature maps. Analyzing the joint Planck/DIRBE dust spectrum, we show that two-component models provide a better fit to the 100-3000 GHz emission than do single-MBB models, though by a lesser margin than found by Finkbeiner et al. (1999) based on FIRAS and DIRBE. We find that, in diffuse sky regions, our two-component 100-217 GHz predictions are on average accurate to within 2.2%, while extrapolating the Planck Collaboration (2013) single-MBB model systematically underpredicts emission by 18.8% at 100 GHz, 12.6% at 143 GHz and 7.9% at 217 GHz. We calibrate our two-component optical depth to reddening, and compare with reddening estimates based on stellar spectra. We find the dominant systematic problems in our temperature/reddening maps to be zodiacal light on large angular scales and the cosmic infrared background anisotropy on small angular scales. Future work will focus on combining our WISE 12 micron dust map and Planck dust model to create a next-generation, full-sky dust extinction map with angular resolution several times better than Schlegel et al. (1998).

  1. The evolution of mapping habitat for northern spotted owls (Strix occidentalis caurina): A comparison of photo-interpreted, Landsat-based, and lidar-based habitat maps

    USGS Publications Warehouse

    Ackers, Steven H.; Davis, Raymond J.; Olsen, K.; Dugger, Catherine

    2015-01-01

    Wildlife habitat mapping has evolved at a rapid pace over the last few decades. Beginning with simple, often subjective, hand-drawn maps, habitat mapping now involves complex species distribution models (SDMs) using mapped predictor variables derived from remotely sensed data. For species that inhabit large geographic areas, remote sensing technology is often essential for producing range-wide maps. Habitat monitoring for northern spotted owls (Strix occidentalis caurina), whose geographic range covers about 23 million ha, is based on SDMs that use Landsat Thematic Mapper imagery to create forest vegetation data layers using gradient nearest neighbor (GNN) methods. Vegetation data layers derived from GNN are modeled relationships between forest inventory plot data, climate and topographic data, and the spectral signatures acquired by the satellite. When these layers are used as predictor variables for SDMs, some of the GNN modeling error transfers to the final habitat map. Recent increases in the use of light detection and ranging (lidar) data, coupled with the need to produce spatially accurate and detailed forest vegetation maps, have spurred interest in its use for SDMs and habitat mapping. Instead of modeling predictor variables from remotely sensed spectral data, lidar provides direct measurements of vegetation height for use in SDMs. We expect a SDM habitat map produced from directly measured predictor variables to be more accurate than one produced from modeled predictors. We used maximum entropy (Maxent) SDM modeling software to compare predictive performance and estimates of habitat area between Landsat-based and lidar-based northern spotted owl SDMs and habitat maps. We explored the differences and similarities between these maps and a pre-existing habitat map produced by local wildlife biologists using aerial photo-interpretation. The lidar-based map had the highest predictive performance based on 10 bootstrapped replicate models (AUC = 0.809 ± 0.011), but the performance of the Landsat-based map was within acceptable limits (AUC = 0.717 ± 0.021). As is common with photo-interpreted maps, there was no accuracy assessment available for comparison. The photo-interpreted map produced the highest and lowest estimates of habitat area, depending on which habitat classes were included (nesting, roosting, and foraging habitat = 9962 ha, nesting habitat only = 6036 ha). The Landsat-based map produced an estimate of habitat area that was within this range (95% CI: 6679–9592 ha), while the lidar-based map produced an area estimate similar to what was interpreted by local wildlife biologists as nesting (i.e., high quality) habitat using aerial imagery (95% CI: 5453–7216). Confidence intervals of habitat area estimates from the SDMs based on Landsat and lidar overlapped. We concluded that both Landsat- and lidar-based SDMs produced reasonable maps and area estimates for northern spotted owl habitat within the study area. The lidar-based map was more precise and spatially similar to what local wildlife biologists considered spotted owl nesting habitat. The Landsat-based map provided a less precise spatial representation of habitat within the relatively small geographic confines of the study area, but habitat area estimates were similar to both the photo-interpreted and lidar-based maps. Photo-interpreted maps are time consuming to produce, subjective in nature, and difficult to replicate.
SDMs provide a framework for efficiently producing habitat maps that can be replicated as habitat conditions change over time, provided that comparable remotely sensed data are available. When the SDM uses predictor variables extracted from lidar data, it can produce a habitat map that is both accurate and useful at large and small spatial scales. In comparison, SDMs using Landsat-based data are more appropriate for large scale analyses of amounts and general spatial patterns of habitat at regional scales.

  2. Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.

    PubMed

    Dastmalchi, Pouya; Veronis, Georgios

    2013-12-30

    We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges quickly to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time compared to direct optimization of the fine FDFD model.
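
    The core loop is easy to state. Below is a minimal sketch of an aggressive space mapping iteration under toy assumptions: coarse and fine are hypothetical linear response models standing in for the transmission-line and FDFD solvers, and R_TARGET is an assumed design specification. Only the structure (one expensive evaluation per iteration, plus a cheap parameter extraction) mirrors the method described above.

    ```python
    # Sketch of aggressive space mapping with toy models (names assumed).
    from scipy.optimize import brentq

    R_TARGET = 5.0                      # desired device response (assumed spec)

    def coarse(x):                      # cheap surrogate (illustrative form)
        return 2.0 * x + 1.0

    def fine(x):                        # expensive model (illustrative form)
        return 2.0 * x + 1.6

    # Step 1: coarse-model design meeting the spec.
    x_star = brentq(lambda x: coarse(x) - R_TARGET, -10, 10)

    x = x_star
    for k in range(10):                 # one fine evaluation per iteration
        r = fine(x)
        # Parameter extraction: coarse-space point with the same response.
        z = brentq(lambda t: coarse(t) - r, -10, 10)
        step = x_star - z               # aggressive space mapping update
        if abs(step) < 1e-9:
            break
        x += step
    print(f"fine-model design x = {x:.3f}, response = {fine(x):.3f}")
    ```

    With these toy linear models the loop lands on the corrected design after a single fine-model evaluation; with realistic nonlinear models the same structure typically needs only a handful, which is the source of the reported time savings.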

  3. Comparative mapping of Pluto's sub-Charon hemisphere - Three least squares models based on mutual event lightcurves

    NASA Technical Reports Server (NTRS)

    Young, Eliot F.; Binzel, Richard P.

    1993-01-01

    Observations of Charon transits are used here to derive preliminary maps of Pluto's sub-Charon hemisphere. Three models are used to describe the brightness of Pluto's surface as functions of latitude and longitude. Mapping results are presented using spherical harmonic functions, polynomial functions, and finite elements. A smoothing algorithm applied to the maps is described, and the validity and resolution of the maps are tested by reconstruction from synthetic data. A preliminary finding from the maps is that the south polar region has the highest albedo of any location on the planet.
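
    For the spherical-harmonic variant, the fit reduces to linear least squares once the basis is evaluated at the observed surface points. A minimal sketch with synthetic data (the lightcurve geometry and the smoothing step are omitted; all arrays and the degree cutoff are illustrative):

    ```python
    # Least-squares fit of a low-order real spherical-harmonic brightness map.
    import numpy as np
    from scipy.special import sph_harm

    rng = np.random.default_rng(0)
    n_obs = 200
    lon = rng.uniform(0.0, 2.0 * np.pi, n_obs)    # azimuth (radians)
    colat = rng.uniform(0.0, np.pi, n_obs)        # colatitude (radians)
    albedo = 0.5 + 0.3 * np.cos(colat) + 0.02 * rng.standard_normal(n_obs)

    # Design matrix: real parts of Y_l^m up to degree L.
    L = 2
    cols = [sph_harm(m, l, lon, colat).real
            for l in range(L + 1) for m in range(0, l + 1)]
    A = np.column_stack(cols)

    coeffs, *_ = np.linalg.lstsq(A, albedo, rcond=None)
    print("fitted harmonic coefficients:", np.round(coeffs, 3))
    ```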

  4. A radiation hybrid map of the European sea bass (Dicentrarchus labrax) based on 1581 markers: Synteny analysis with model fish genomes.

    PubMed

    Guyon, Richard; Senger, Fabrice; Rakotomanga, Michaelle; Sadequi, Naoual; Volckaert, Filip A M; Hitte, Christophe; Galibert, Francis

    2010-10-01

    The selective breeding of fish for aquaculture purposes requires an understanding of the genetic basis of traits such as growth, behaviour, resistance to pathogens and sex determinism. Access to well-developed genomic resources is a prerequisite to improve the knowledge of these traits. With this aim in mind, a radiation hybrid (RH) panel of European sea bass (Dicentrarchus labrax) was constructed from splenocytes irradiated at 3000 rad, allowing the construction of a 1581-marker RH map. A total of 1440 gene markers, providing ~4400 anchors with the genomes of three-spined stickleback, medaka, pufferfish and zebrafish, helped establish synteny relationships with these model species. The identification of Conserved Segments Ordered (CSO) between sea bass and model species allows the position of any sea bass gene to be anticipated from its location in the model genomes. Synteny relationships between sea bass and gilthead seabream were addressed by mapping 37 orthologous markers. The sea bass genetic linkage map was integrated in the RH map through the mapping of 141 microsatellites. We are thus able to present the first complete gene map of sea bass. It will facilitate linkage studies and the identification of candidate genes and Quantitative Trait Loci (QTL). The RH map further positions sea bass as a genetic and evolutionary model of Perciformes and supports their ongoing aquaculture expansion. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. LiDAR-Derived Flood-Inundation Maps for Real-Time Flood-Mapping Applications, Tar River Basin, North Carolina

    USGS Publications Warehouse

    Bales, Jerad D.; Wagner, Chad R.; Tighe, Kirsten C.; Terziotti, Silvia

    2007-01-01

    Flood-inundation maps were created for selected streamgage sites in the North Carolina Tar River basin. Light detection and ranging (LiDAR) data with a vertical accuracy of about 20 centimeters, provided by the Floodplain Mapping Information System of the North Carolina Floodplain Mapping Program, were processed to produce topographic data for the inundation maps. Bare-earth mass point LiDAR data were reprocessed into a digital elevation model with regularly spaced 1.5-meter by 1.5-meter cells. A tool was developed as part of this project to connect flow paths, or streams, that were inappropriately disconnected in the digital elevation model by such features as a bridge or road crossing. The Hydraulic Engineering Center-River Analysis System (HEC-RAS) model, developed by the U.S. Army Corps of Engineers, was used for hydraulic modeling at each of the study sites. Eleven individual hydraulic models were developed for the Tar River basin sites. Seven models were developed for reaches with a single gage, and four models were developed for reaches of the Tar River main stem that receive flow from major gaged tributaries, or reaches in which multiple gages were near one another. Combined, the Tar River hydraulic models included 272 kilometers of streams in the basin, including about 162 kilometers on the Tar River main stem. The hydraulic models were calibrated to the most current stage-discharge relations at 11 long-term streamgages where rating curves were available. Medium- to high-flow discharge measurements were made at some of the sites without rating curves, and high-water marks from Hurricanes Fran and Floyd were available for high-stage calibration. Simulated rating curves matched measured curves over the full range of flows. Differences between measured and simulated water levels for a specified flow were no more than 0.44 meter and typically were less. The calibrated models were used to generate a set of water-surface profiles for each of the 11 modeled reaches at 0.305-meter increments for water levels ranging from bankfull to approximately the highest recorded water level at the downstream-most gage in each modeled reach. Inundated areas were identified by subtracting the water-surface elevation in each 1.5-meter by 1.5-meter grid cell from the land-surface elevation in the cell through an automated routine that was developed to identify all inundated cells hydraulically connected to the cell at the downstream-most gage in the model domain. Inundation maps showing transportation networks and orthoimagery were prepared for display on the Internet. These maps also are linked to the U.S. Geological Survey North Carolina Water Science Center real-time streamflow website. Hence, a user can determine the near real-time stage and water-surface elevation at a U.S. Geological Survey streamgage site in the Tar River basin and link directly to the flood-inundation maps for a depiction of the estimated inundated area at the current water level. Although the flood-inundation maps represent distinct boundaries of inundated areas, some uncertainties are associated with these maps. These are uncertainties in the topographic data for the hydraulic model computational grid and inundation maps, effective friction values (Manning's n), model-validation data, and forecast hydrographs, if used. The Tar River flood-inundation maps were developed by using a steady-flow hydraulic model. 
The steady-flow assumption has less of an effect on inundation maps produced for low flows than for high flows, when it typically takes more time to inundate areas. A flood in which water levels peak and fall slowly most likely will result in more inundation than a similar flood in which water levels peak and fall quickly. Limitations associated with the steady-flow assumption for hydraulic modeling vary from site to site. The one-dimensional modeling approach used in this study resulted in good agreement between measurements and simulations.
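
    The connectivity step described above is, at its core, a flood fill seeded at the gage cell. A minimal sketch with a synthetic DEM (the function and variable names are ours, not the report's routine):

    ```python
    # Flag cells as inundated only if wet AND hydraulically connected to the
    # seed cell at the downstream-most gage (4-neighbor breadth-first fill).
    from collections import deque
    import numpy as np

    def connected_inundation(dem, water_elev, seed):
        """Return a boolean grid of inundated cells connected to `seed`."""
        wet = water_elev > dem                  # candidate cells
        inundated = np.zeros_like(wet)
        if not wet[seed]:
            return inundated
        queue = deque([seed])
        inundated[seed] = True
        nrow, ncol = dem.shape
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrow and 0 <= cc < ncol \
                        and wet[rr, cc] and not inundated[rr, cc]:
                    inundated[rr, cc] = True
                    queue.append((rr, cc))
        return inundated

    dem = np.array([[2.0, 2.0, 2.0, 0.5],       # isolated low spot at right
                    [2.0, 1.0, 1.2, 2.0],
                    [2.0, 1.1, 1.3, 2.0]])
    print(connected_inundation(dem, water_elev=1.5, seed=(1, 1)))
    ```

    In the toy DEM the low cell in the upper right lies below the water level but stays dry, because no connected path links it to the seed cell.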

  6. Spatial downscaling of soil prediction models based on weighted generalized additive models in smallholder farm settings.

    PubMed

    Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P; Nair, Vimala D

    2017-09-11

    Digital soil mapping (DSM) is gaining momentum as a technique to help smallholder farmers secure soil security and food security in developing regions. However, communication of digital soil mapping information between diverse audiences becomes problematic due to the inconsistent scale of DSM information. Spatial downscaling can make use of accessible soil information at relatively coarse spatial resolution to provide valuable soil information at relatively fine spatial resolution. The objective of this research was to disaggregate coarse spatial resolution base maps of soil exchangeable potassium (Kex) and soil total nitrogen (TN) into fine spatial resolution downscaled soil maps using weighted generalized additive models (GAMs) in two smallholder villages in South India. By incorporating fine spatial resolution spectral indices in the downscaling process, the downscaled soil maps not only conserve the spatial information of the coarse spatial resolution soil maps but also depict the spatial details of soil properties at fine spatial resolution. The results of this study demonstrated that the difference between the fine spatial resolution downscaled maps and fine spatial resolution base maps is smaller than the difference between coarse spatial resolution base maps and fine spatial resolution base maps. An appropriate and economical strategy for promoting the DSM technique on smallholder farms is to develop relatively coarse spatial resolution soil prediction maps, or utilize available coarse spatial resolution soil maps at the regional scale, and to disaggregate these maps to fine spatial resolution downscaled soil maps at the farm scale.
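
    A minimal sketch of the weighted-GAM step, assuming the pyGAM package; the soil property, spectral indices, and weights are synthetic stand-ins, and the paper's exact predictor set and weighting scheme are not reproduced here:

    ```python
    # Weighted GAM: regress coarse-pixel soil values on fine-resolution
    # spectral indices, then evaluate the fitted surface at fine pixels.
    import numpy as np
    from pygam import LinearGAM, s

    rng = np.random.default_rng(1)
    n_coarse = 300
    ndvi = rng.uniform(0.1, 0.8, n_coarse)          # fine-res index, averaged
    brightness = rng.uniform(0.2, 0.9, n_coarse)    # to coarse cells
    soil_k = 40 + 25 * ndvi - 10 * brightness + rng.normal(0, 2, n_coarse)
    w = rng.uniform(0.5, 1.0, n_coarse)             # e.g., coarse-cell purity

    X = np.column_stack([ndvi, brightness])
    gam = LinearGAM(s(0) + s(1)).fit(X, soil_k, weights=w)

    # Evaluate at fine-resolution pixels (here: new index values).
    X_fine = np.column_stack([rng.uniform(0.1, 0.8, 5),
                              rng.uniform(0.2, 0.9, 5)])
    print(gam.predict(X_fine))
    ```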

  7. BatSLAM: Simultaneous localization and mapping using biomimetic sonar.

    PubMed

    Steckel, Jan; Peremans, Herbert

    2013-01-01

    We propose to combine a biomimetic navigation model which solves a simultaneous localization and mapping task with a biomimetic sonar mounted on a mobile robot to address two related questions. First, can robotic sonar sensing lead to intelligent interactions with complex environments? Second, can we model sonar based spatial orientation and the construction of spatial maps by bats? To address these questions we adapt the mapping module of RatSLAM, a previously published navigation system based on computational models of the rodent hippocampus. We analyze the performance of the proposed robotic implementation operating in the real world. We conclude that the biomimetic navigation model operating on the information from the biomimetic sonar allows an autonomous agent to map unmodified (office) environments efficiently and consistently. Furthermore, these results also show that successful navigation does not require the readings of the biomimetic sonar to be interpreted in terms of individual objects/landmarks in the environment. We argue that the system has applications in robotics as well as in the field of biology as a simple, first order, model for sonar based spatial orientation and map building.

  8. BatSLAM: Simultaneous Localization and Mapping Using Biomimetic Sonar

    PubMed Central

    Steckel, Jan; Peremans, Herbert

    2013-01-01

    We propose to combine a biomimetic navigation model which solves a simultaneous localization and mapping task with a biomimetic sonar mounted on a mobile robot to address two related questions. First, can robotic sonar sensing lead to intelligent interactions with complex environments? Second, can we model sonar based spatial orientation and the construction of spatial maps by bats? To address these questions we adapt the mapping module of RatSLAM, a previously published navigation system based on computational models of the rodent hippocampus. We analyze the performance of the proposed robotic implementation operating in the real world. We conclude that the biomimetic navigation model operating on the information from the biomimetic sonar allows an autonomous agent to map unmodified (office) environments efficiently and consistently. Furthermore, these results also show that successful navigation does not require the readings of the biomimetic sonar to be interpreted in terms of individual objects/landmarks in the environment. We argue that the system has applications in robotics as well as in the field of biology as a simple, first order, model for sonar based spatial orientation and map building. PMID:23365647

  9. Lunar Terrain and Albedo Reconstruction from Apollo Imagery

    NASA Technical Reports Server (NTRS)

    Nefian, Ara V.; Kim, Taemin; Broxton, Michael; Moratto, Zach

    2010-01-01

    Generating accurate three-dimensional planetary models and albedo maps is becoming increasingly important as NASA plans more robotic missions to the Moon in the coming years. This paper describes a novel approach for the separation of topography and albedo maps from orbital lunar images. Our method uses an optimal Bayesian correlator to refine the stereo disparity map and generate a set of accurate digital elevation models (DEMs). The albedo maps are obtained using a multi-image formation model that relies on the derived DEMs and the Lunar-Lambert reflectance model. The method is demonstrated on a set of high-resolution scanned images from the Apollo-era missions.

  10. Hysteresis compensation of the Prandtl-Ishlinskii model for piezoelectric actuators using modified particle swarm optimization with chaotic map.

    PubMed

    Long, Zhili; Wang, Rui; Fang, Jiwen; Dai, Xufei; Li, Zuohua

    2017-07-01

    Piezoelectric actuators invariably exhibit hysteresis nonlinearities that tend to become significant under the open-loop condition and could cause oscillations and errors in nanometer-positioning tasks. Chaotic map modified particle swarm optimization (MPSO) is proposed and implemented to identify the Prandtl-Ishlinskii model for piezoelectric actuators. Hysteresis compensation is attained through application of an inverse Prandtl-Ishlinskii model, in which the parameters are formulated based on the original model with chaotic map MPSO. To strengthen the diversity and improve the searching ergodicity of the swarm, an initial method of adaptive inertia weight based on a chaotic map is proposed. To compare and prove that the swarm's convergence occurs before stochastic initialization and to attain an optimal particle swarm optimization algorithm, the parameters of a proportional-integral-derivative controller are searched using self-tuning, and the simulated results are used to verify the search effectiveness of chaotic map MPSO. The results show that chaotic map MPSO is superior to its competitors for identifying the Prandtl-Ishlinskii model and that the inverse Prandtl-Ishlinskii model can provide hysteresis compensation under different conditions in a simple and effective manner.
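
    The optimizer core lends itself to a short sketch. The following is a generic particle swarm optimizer with a logistic-map chaotic inertia weight, using an illustrative objective in place of the Prandtl-Ishlinskii fitting error; the constants are not the authors' settings.

    ```python
    # PSO with a chaotic (logistic-map) inertia weight.
    import numpy as np

    def objective(x):                      # stand-in for model-fit error
        return np.sum(x ** 2, axis=1)

    rng = np.random.default_rng(2)
    n, dim, iters = 30, 4, 200
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmin(pbest_val)]

    z = 0.7                                # logistic-map state, z in (0, 1)
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)            # chaotic update
        w = 0.4 + 0.5 * z                  # chaotic inertia weight in [0.4, 0.9]
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
        pos = pos + vel
        val = objective(pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)]

    print("best fit error:", objective(gbest[None, :])[0])
    ```

    The chaotic sequence keeps the inertia weight varying ergodically instead of decaying on a fixed schedule, which is the diversity-preserving mechanism the abstract describes.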

  11. The Voronoi spatio-temporal data structure

    NASA Astrophysics Data System (ADS)

    Mioc, Darka

    2002-04-01

    Current GIS models cannot easily integrate the temporal dimension of spatial data. Indeed, current GISs do not support incremental (local) addition and deletion of spatial objects, and they cannot support the temporal evolution of spatial data. Spatio-temporal facilities would be very useful in many GIS applications: harvesting and forest planning, cadastre, urban and regional planning, and emergency planning. The spatio-temporal model that can overcome these problems is based on a topological model, the Voronoi data structure. Voronoi diagrams are irregular tessellations of space that adapt to spatial objects, and therefore they are a synthesis of raster and vector spatial data models. The main advantage of the Voronoi data structure is its local and sequential map updates, which allow us to automatically record each event and performed map update within the system. These map updates are executed through map construction commands that are composed of atomic actions (geometric algorithms for addition, deletion, and motion of spatial objects) on the dynamic Voronoi data structure. The formalization of map commands led to the development of a spatial language comprising a set of atomic operations or constructs on spatial primitives (points and lines), powerful enough to define complex operations. This resulted in a new formal model for spatio-temporal change representation, where each update is uniquely characterized by the numbers of newly created and inactivated Voronoi regions. This is used for the extension of the model towards the hierarchical Voronoi data structure. In this model, spatio-temporal changes induced by map updates are preserved in a hierarchical data structure that combines events and corresponding changes in topology. This hierarchical Voronoi data structure has an implicit time ordering of events visible through changes in topology, and it is equivalent to an event structure that can support temporal data without precise temporal information. This formal model of spatio-temporal change representation is currently applied to retroactive map updates and visualization of map evolution. It offers new possibilities in the domains of temporal GIS, transaction processing, spatio-temporal queries, spatio-temporal analysis, map animation and map visualization.
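
    SciPy exposes exactly this kind of incremental construction, which makes the event-by-event bookkeeping easy to prototype. A minimal sketch (the region-level event log a full implementation would keep is omitted):

    ```python
    # Incremental Voronoi updates: each add_points call plays the role of
    # one atomic map-update event in the model described above.
    import numpy as np
    from scipy.spatial import Voronoi

    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    vor = Voronoi(pts, incremental=True)
    print("regions before update:", len(vor.regions))

    vor.add_points(np.array([[0.5, 0.5]]))   # one local update (event)
    print("regions after update: ", len(vor.regions))
    vor.close()                              # finalize incremental mode
    ```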

  12. Manifestation of a neuro-fuzzy model to produce landslide susceptibility map using remote sensing data derived parameters

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet; Lee, Saro; Buchroithner, Manfred

    Landslides are the most common natural hazards in Malaysia. Preparation of landslide susceptibility maps is important for engineering geologists and geomorphologists. However, due to the complex nature of landslides, producing a reliable susceptibility map is not easy. In this study, a new attempt is made to produce a landslide susceptibility map for a part of the Cameron Valley of Malaysia. This paper develops an adaptive neuro-fuzzy inference system (ANFIS) within a geographic information system (GIS) environment for landslide susceptibility mapping. To obtain the neuro-fuzzy relations for producing the landslide susceptibility map, landslide locations were identified from interpretation of aerial photographs and high resolution satellite images, field surveys and historical inventory reports. Landslide conditioning factors such as slope, plan curvature, distance to drainage lines, soil texture, lithology, and distance to lineaments were extracted from topographic, soil, and lineament maps. Landslide susceptible areas were analyzed by the ANFIS model and mapped using the conditioning factors. Furthermore, we applied various membership functions (MFs) and fuzzy relations to produce landslide susceptibility maps. The prediction performance of the susceptibility map was checked against actual landslides in the study area. Results show that triangular, trapezoidal, and polynomial MFs were the best individual MFs for modelling landslide susceptibility maps (86

  13. Linking biophysical models and public preferences for ecosystem service assessments: a case study for the Southern Rocky Mountains

    USGS Publications Warehouse

    Bagstad, Kenneth J.; Reed, James; Semmens, Darius J.; Sherrouse, Ben C.; Troy, Austin

    2016-01-01

    Through extensive research, ecosystem services have been mapped using both survey-based and biophysical approaches, but comparative mapping of public values and those quantified using models has been lacking. In this paper, we mapped hot and cold spots for perceived and modeled ecosystem services by synthesizing results from a social-values mapping study of residents living near the Pike–San Isabel National Forest (PSI), located in the Southern Rocky Mountains, with corresponding biophysically modeled ecosystem services. Social-value maps for the PSI were developed using the Social Values for Ecosystem Services tool, providing statistically modeled continuous value surfaces for 12 value types, including aesthetic, biodiversity, and life-sustaining values. Biophysically modeled maps of carbon sequestration and storage, scenic viewsheds, sediment regulation, and water yield were generated using the Artificial Intelligence for Ecosystem Services tool. Hotspots for both perceived and modeled services were disproportionately located within the PSI’s wilderness areas. Additionally, we used regression analysis to evaluate spatial relationships between perceived biodiversity and cultural ecosystem services and corresponding biophysical model outputs. Our goal was to determine whether publicly valued locations for aesthetic, biodiversity, and life-sustaining values relate meaningfully to results from corresponding biophysical ecosystem service models. We found weak relationships between perceived and biophysically modeled services, indicating that public perception of ecosystem service provisioning regions is limited. We believe that biophysical and social approaches to ecosystem service mapping can serve as methodological complements that can advance ecosystem services-based resource management, benefitting resource managers by showing potential locations of synergy or conflict between areas supplying ecosystem services and those valued by the public.

  14. Validation Workshop of the DRDC Concept Map Knowledge Model: Issues in Intelligence Analysis

    DTIC Science & Technology

    2010-06-29

    group noted problems with grammar, and a more standard approach to the grammar of the linking term (e.g., use only active tense) would certainly have... A Knowledge Model is distinct from a Concept Map. A Concept Map is a single map, probably presented in one view, while a Knowledge Model is a set of... Agenda: the workshop followed the agenda presented in Table 2-3 (Workshop Agenda), which lists Time and Title columns: 13:00 – 13:15 Registration; 13:15 – 13:45 ...

  15. Charge to Road Map Development Sessions

    NASA Technical Reports Server (NTRS)

    Barth, Janet

    2004-01-01

    Develop a road map for new standard radiation belt models. Model applications: spacecraft and instruments. Goals: reduce risk, reduce cost, improve performance, increase system lifetime, and reduce risk to astronauts.

  16. Exploring discrepancies between quantitative validation results and the geomorphic plausibility of statistical landslide susceptibility maps

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas

    2016-06-01

    Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of subsequent landslide susceptibility maps, with a particular emphasis on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km²), which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation techniques. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated by different inventories, classifiers and predictors appeared different, while holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies of the modelling results while additionally providing evidence for slightly overfitted machine learning-based models. However, the highest predictive performances were obtained for maps that explicitly expressed geomorphically implausible relationships, indicating that the predictive performance of a model might be misleading in cases where a predictor systematically relates to a spatially consistent bias of the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces, while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performances cannot be equated with a high plausibility and applicability of subsequent landslide susceptibility maps. We suggest that greater emphasis should be placed on identifying confounding factors and biases in landslide inventories. A joint discussion between modelers and decision makers of the spatial pattern of the final susceptibility maps in the field might increase their acceptance and applicability.
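
    The spatial cross-validation idea can be sketched compactly: hold out whole spatial blocks rather than random rows, so autocorrelated neighbors cannot sit on both sides of the split. A minimal sketch with synthetic data and scikit-learn (the classifier and block scheme are illustrative, not the study's setup):

    ```python
    # AUROC under spatial (block) cross-validation via GroupKFold.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import GroupKFold

    rng = np.random.default_rng(3)
    n = 1000
    X = rng.normal(size=(n, 5))                           # predictor set
    y = (X[:, 0] + rng.normal(0, 1, n) > 0).astype(int)   # landslide / none
    blocks = rng.integers(0, 10, n)                       # spatial block ids

    aucs = []
    for train, test in GroupKFold(n_splits=5).split(X, y, groups=blocks):
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        aucs.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))
    print("spatial CV AUROC: %.3f +/- %.3f" % (np.mean(aucs), np.std(aucs)))
    ```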

  17. Relative risk estimation of Chikungunya disease in Malaysia: An analysis based on Poisson-gamma model

    NASA Astrophysics Data System (ADS)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2015-05-01

    Disease mapping is a method to display the geographical distribution of disease occurrence, which generally involves the usage and interpretation of a map to show the incidence of certain diseases. Relative risk (RR) estimation is one of the most important issues in disease mapping. This paper begins by providing a brief overview of Chikungunya disease. This is followed by a review of the classical model used in disease mapping, based on the standardized morbidity ratio (SMR), which we then apply to our Chikungunya data. We then fit an extension of the classical model, which we refer to as a Poisson-Gamma model, in which prior distributions for the relative risks are assumed known. Both results are displayed and compared using maps; the Poisson-Gamma model yields a smoother map with fewer extreme values of estimated relative risk. Extensions of this work will consider other methods that overcome the drawbacks of the existing ones, in order to inform and direct government strategy for monitoring and controlling Chikungunya disease.
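
    A minimal worked example of the two estimators, with illustrative counts and an assumed Gamma(a, b) prior; the Poisson-Gamma posterior mean shrinks each region's SMR toward the prior, which is what smooths the map:

    ```python
    # SMR vs. Poisson-Gamma relative risk (conjugate posterior mean).
    import numpy as np

    observed = np.array([0, 3, 12, 1, 40])     # cases per region (illustrative)
    expected = np.array([2.0, 2.5, 10.0, 0.8, 35.0])

    smr = observed / expected                  # classical estimate

    a, b = 2.0, 2.0                            # Gamma(a, b) prior on RR
    # O_i ~ Poisson(E_i * RR_i), RR ~ Gamma(a, b)  =>  posterior mean:
    rr_pg = (observed + a) / (expected + b)

    print("SMR:          ", np.round(smr, 2))
    print("Poisson-Gamma:", np.round(rr_pg, 2))
    ```

    Note how the small region with 1 observed case against 0.8 expected is pulled strongly toward the prior mean, while the large region (40 vs. 35) barely moves: exactly the smoothing of extreme values described above.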

  18. A trace map comparison algorithm for the discrete fracture network models of rock masses

    NASA Astrophysics Data System (ADS)

    Han, Shuai; Wang, Gang; Li, Mingchao

    2018-06-01

    Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem, since it determines whether the model can be used for analysis. Current validation methods include numerical validation and graphical validation. However, graphical validation, which estimates the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFN is developed. Four main indicators are presented to assess the similarity between two trace maps: total gray, gray grade curve, characteristic direction, and gray density distribution curve. A modified Radon transform and a loop cosine similarity are presented, based on the Radon transform and cosine similarity respectively. In addition, how Bézier curves can be used to reduce the edge effect is described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real trace map.
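
    A simplified sketch of the Radon-based comparison, assuming scikit-image: each trace map is reduced to a directional-energy profile, and the profiles are compared with plain cosine similarity. The paper's modified Radon transform and loop cosine similarity are more elaborate than this.

    ```python
    # Directional comparison of two binary trace maps via the Radon transform.
    import numpy as np
    from skimage.transform import radon

    def direction_profile(trace_map):
        theta = np.arange(180.0)                     # projection angles (deg)
        sino = radon(trace_map.astype(float), theta=theta, circle=False)
        return (sino ** 2).sum(axis=0)               # energy per angle

    rng = np.random.default_rng(4)
    real = (rng.random((64, 64)) > 0.95).astype(float)       # stand-in maps
    simulated = (rng.random((64, 64)) > 0.95).astype(float)

    p, q = direction_profile(real), direction_profile(simulated)
    cos_sim = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
    print("directional cosine similarity: %.3f" % cos_sim)
    ```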

  19. Statistical mapping of count survey data

    USGS Publications Warehouse

    Royle, J. Andrew; Link, W.A.; Sauer, J.R.; Scott, J. Michael; Heglund, Patricia J.; Morrison, Michael L.; Haufler, Jonathan B.; Wall, William A.

    2002-01-01

    We apply a Poisson mixed model to the problem of mapping (or predicting) bird relative abundance from counts collected from the North American Breeding Bird Survey (BBS). The model expresses the logarithm of the Poisson mean as a sum of a fixed term (which may depend on habitat variables) and a random effect which accounts for remaining unexplained variation. The random effect is assumed to be spatially correlated, thus providing a more general model than the traditional Poisson regression approach. Consequently, the model is capable of improved prediction when data are autocorrelated. Moreover, formulation of the mapping problem in terms of a statistical model facilitates a wide variety of inference problems which are cumbersome or even impossible using standard methods of mapping. For example, assessment of prediction uncertainty, including the formal comparison of predictions at different locations, or through time, using the model-based prediction variance is straightforward under the Poisson model (not so with many nominally model-free methods). Also, ecologists may generally be interested in quantifying the response of a species to particular habitat covariates or other landscape attributes. Proper accounting for the uncertainty in these estimated effects is crucially dependent on specification of a meaningful statistical model. Finally, the model may be used to aid in sampling design, by modifying the existing sampling plan in a manner which minimizes some variance-based criterion. Model fitting under this model is carried out using a simulation technique known as Markov Chain Monte Carlo. Application of the model is illustrated using Mourning Dove (Zenaida macroura) counts from Pennsylvania BBS routes. We produce both a model-based map depicting relative abundance, and the corresponding map of prediction uncertainty. We briefly address the issue of spatial sampling design under this model. Finally, we close with some discussion of mapping in relation to habitat structure. Although our models were fit in the absence of habitat information, the resulting predictions show a strong inverse relation with a map of forest cover in the state, as expected. Consequently, the results suggest that the correlated random effect in the model is broadly representing ecological variation, and that BBS data may be generally useful for studying bird-habitat relationships, even in the presence of observer errors and other widely recognized deficiencies of the BBS.
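
    The model structure can be stated compactly. A sketch in our notation, assuming (as one common choice; the abstract does not specify the correlation function) an exponentially decaying spatial covariance:

    ```latex
    % Hierarchical Poisson mixed model for route counts y_i (notation ours).
    \begin{align*}
      y_i \mid \lambda_i &\sim \operatorname{Poisson}(\lambda_i), \\
      \log \lambda_i &= \mathbf{x}_i^{\top}\boldsymbol{\beta} + u_i, \\
      \mathbf{u} &\sim \operatorname{MVN}(\mathbf{0}, \Sigma),
      \qquad \Sigma_{ij} = \sigma^{2} \exp(-d_{ij}/\phi),
    \end{align*}
    ```

    where x_i holds habitat covariates (or just an intercept, as fitted here), d_ij is the distance between routes i and j, and MCMC supplies the posterior for the regression coefficients, the spatial parameters, and the predictions with their uncertainty.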

  20. Accounting for Errors in Low Coverage High-Throughput Sequencing Data When Constructing Genetic Maps Using Biparental Outcrossed Populations

    PubMed Central

    Bilton, Timothy P.; Schofield, Matthew R.; Black, Michael A.; Chagné, David; Wilcox, Phillip L.; Dodds, Ken G.

    2018-01-01

    Next-generation sequencing is an efficient method that allows for substantially more markers than previous technologies, providing opportunities for building high-density genetic linkage maps, which facilitate the development of nonmodel species’ genomic assemblies and the investigation of their genes. However, constructing genetic maps using data generated via high-throughput sequencing technology (e.g., genotyping-by-sequencing) is complicated by the presence of sequencing errors and genotyping errors resulting from missing parental alleles due to low sequencing depth. If unaccounted for, these errors lead to inflated genetic maps. In addition, map construction in many species is performed using full-sibling family populations derived from the outcrossing of two individuals, where unknown parental phase and varying segregation types further complicate construction. We present a new methodology for modeling low coverage sequencing data in the construction of genetic linkage maps using full-sibling populations of diploid species, implemented in a package called GUSMap. Our model is based on the Lander–Green hidden Markov model but extended to account for errors present in sequencing data. We were able to obtain accurate estimates of the recombination fractions and overall map distance using GUSMap, while most existing mapping packages produced inflated genetic maps in the presence of errors. Our results demonstrate the feasibility of using low coverage sequencing data to produce genetic maps without requiring extensive filtering of potentially erroneous genotypes, provided that the associated errors are correctly accounted for in the model. PMID:29487138

  1. Accounting for Errors in Low Coverage High-Throughput Sequencing Data When Constructing Genetic Maps Using Biparental Outcrossed Populations.

    PubMed

    Bilton, Timothy P; Schofield, Matthew R; Black, Michael A; Chagné, David; Wilcox, Phillip L; Dodds, Ken G

    2018-05-01

    Next-generation sequencing is an efficient method that allows for substantially more markers than previous technologies, providing opportunities for building high-density genetic linkage maps, which facilitate the development of nonmodel species' genomic assemblies and the investigation of their genes. However, constructing genetic maps using data generated via high-throughput sequencing technology (e.g., genotyping-by-sequencing) is complicated by the presence of sequencing errors and genotyping errors resulting from missing parental alleles due to low sequencing depth. If unaccounted for, these errors lead to inflated genetic maps. In addition, map construction in many species is performed using full-sibling family populations derived from the outcrossing of two individuals, where unknown parental phase and varying segregation types further complicate construction. We present a new methodology for modeling low coverage sequencing data in the construction of genetic linkage maps using full-sibling populations of diploid species, implemented in a package called GUSMap. Our model is based on the Lander-Green hidden Markov model but extended to account for errors present in sequencing data. We were able to obtain accurate estimates of the recombination fractions and overall map distance using GUSMap, while most existing mapping packages produced inflated genetic maps in the presence of errors. Our results demonstrate the feasibility of using low coverage sequencing data to produce genetic maps without requiring extensive filtering of potentially erroneous genotypes, provided that the associated errors are correctly accounted for in the model. Copyright © 2018 Bilton et al.
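
    The error model's key ingredient can be sketched in a few lines: the likelihood of the observed reads given a true allele dosage, with sequencing error folded into a binomial success probability. This follows the general genotyping-by-sequencing error model rather than GUSMap's exact implementation; the names and error rate are illustrative.

    ```python
    # Read-count likelihood under sequencing error: low depth yields flat
    # likelihoods instead of hard (possibly wrong) genotype calls.
    from math import comb

    def read_likelihood(k, d, g, eps=0.01):
        """P(k reference reads out of d | true allele dosage g in {0, 1, 2})."""
        p = {0: eps, 1: 0.5, 2: 1.0 - eps}[g]
        return comb(d, k) * p**k * (1.0 - p) ** (d - k)

    # At depth 1, a single ref read barely distinguishes dosage 1 from 2:
    for g in (0, 1, 2):
        print(g, round(read_likelihood(1, 1, g), 4))
    # At depth 20 the likelihood is sharply peaked:
    for g in (0, 1, 2):
        print(g, round(read_likelihood(18, 20, g), 6))
    ```

    Plugging such emission probabilities into the Lander-Green style HMM is what lets the recombination fractions be estimated without first calling, and potentially mis-calling, genotypes.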

  2. Locally Contractive Dynamics in Generalized Integrate-and-Fire Neurons*

    PubMed Central

    Jimenez, Nicolas D.; Mihalas, Stefan; Brown, Richard; Niebur, Ernst; Rubin, Jonathan

    2013-01-01

    Integrate-and-fire models of biological neurons combine differential equations with discrete spike events. In the simplest case, the reset of the neuronal voltage to its resting value is the only spike event. The response of such a model to constant input injection is limited to tonic spiking. We here study a generalized model in which two simple spike-induced currents are added. We show that this neuron exhibits not only tonic spiking at various frequencies but also the commonly observed neuronal bursting. Using analytical and numerical approaches, we show that this model can be reduced to a one-dimensional map of the adaptation variable and that this map is locally contractive over a broad set of parameter values. We derive a sufficient analytical condition on the parameters for the map to be globally contractive, in which case all orbits tend to a tonic spiking state determined by the fixed point of the return map. We then show that bursting is caused by a discontinuity in the return map, in which case the map is piecewise contractive. We perform a detailed analysis of a class of piecewise contractive maps that we call bursting maps and show that they robustly generate stable bursting behavior. To the best of our knowledge, this work is the first to point out the intimate connection between bursting dynamics and piecewise contractive maps. Finally, we discuss bifurcations in this return map, which cause transitions between spiking patterns. PMID:24489486
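
    The reduction the authors describe can be illustrated with a toy return map: piecewise contractive (each branch has slope below one) with a discontinuity, so orbits cannot settle on a fixed point and instead cycle across branches, the map analogue of bursting. The map and constants below are illustrative, not the paper's fitted model.

    ```python
    # A piecewise contractive 1D return map for the adaptation variable.
    def return_map(w):
        if w < 0.5:                 # branch visited during spiking
            return 0.6 * w + 0.45   # slope 0.6 < 1: locally contractive
        return 0.6 * w - 0.25       # discontinuous second branch

    w = 0.1
    orbit = []
    for _ in range(20):
        w = return_map(w)
        orbit.append(round(w, 3))
    print(orbit)                    # settles onto a periodic (bursting) cycle
    ```

    Neither branch contains its own fixed point, so despite local contraction the orbit is forced back and forth across the discontinuity, producing a stable cycle.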

  3. Model-based local density sharpening of cryo-EM maps

    PubMed Central

    Jakobi, Arjen J; Wilmanns, Matthias

    2017-01-01

    Atomic models based on high-resolution density maps are the ultimate result of the cryo-EM structure determination process. Here, we introduce a general procedure for local sharpening of cryo-EM density maps based on prior knowledge of an atomic reference structure. The procedure optimizes the contrast of cryo-EM densities by amplitude scaling against the radially averaged local falloff estimated from a windowed reference model. By testing the procedure on six cryo-EM structures of TRPV1, β-galactosidase, γ-secretase, the ribosome-EF-Tu complex, the 20S proteasome and RNA polymerase III, we illustrate how local sharpening can increase the interpretability of density maps, in particular in cases of resolution variation, and how it facilitates model building and atomic model refinement. PMID:29058676

  4. Accurate model annotation of a near-atomic resolution cryo-EM map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hryc, Corey F.; Chen, Dong-Hua; Afonine, Pavel V.

    Electron cryomicroscopy (cryo-EM) has been used to determine the atomic coordinates (models) from density maps of biological assemblies. These models can be assessed by their overall fit to the experimental data and stereochemical information. However, these models do not annotate the actual density values of the atoms nor their positional uncertainty. Here, we introduce a computational procedure to derive an atomic model from a cryo-EM map with annotated metadata. The accuracy of such a model is validated by a faithful replication of the experimental cryo-EM map computed using the coordinates and associated metadata. The functional interpretation of any structural features in the model and its utilization for future studies can be made in the context of its measure of uncertainty. We applied this protocol to the 3.3-Å map of the mature P22 bacteriophage capsid, a large and complex macromolecular assembly. With this protocol, we identify and annotate previously undescribed molecular interactions between capsid subunits that are crucial to maintain stability in the absence of cementing proteins or cross-linking, as occur in other bacteriophages.

  5. Comparison of simulation modeling and satellite techniques for monitoring ecological processes

    NASA Technical Reports Server (NTRS)

    Box, Elgene O.

    1988-01-01

    In 1985 improvements were made in the world climatic data base for modeling and predictive mapping; in individual process models and the overall carbon-balance models; and in the interface software for mapping the simulation results. Statistical analysis of the data base was begun. In 1986 mapping was shifted to NASA-Goddard. The initial approach involving pattern comparisons was modified to a more statistical approach. A major accomplishment was the expansion and improvement of a global data base of measurements of biomass and primary production, to complement the simulation data. The main accomplishments during 1987 included: production of a master tape with all environmental and satellite data and model results for the 1600 sites; development of a complete mapping system used for the initial color maps comparing annual and monthly patterns of Normalized Difference Vegetation Index (NDVI), actual evapotranspiration, net primary productivity, gross primary productivity, and net ecosystem production; collection of more biosphere measurements for eventual improvement of the biological models; and development of some initial monthly models for primary productivity, based on satellite data.

  6. Accurate model annotation of a near-atomic resolution cryo-EM map.

    PubMed

    Hryc, Corey F; Chen, Dong-Hua; Afonine, Pavel V; Jakana, Joanita; Wang, Zhao; Haase-Pettingell, Cameron; Jiang, Wen; Adams, Paul D; King, Jonathan A; Schmid, Michael F; Chiu, Wah

    2017-03-21

    Electron cryomicroscopy (cryo-EM) has been used to determine the atomic coordinates (models) from density maps of biological assemblies. These models can be assessed by their overall fit to the experimental data and stereochemical information. However, these models do not annotate the actual density values of the atoms nor their positional uncertainty. Here, we introduce a computational procedure to derive an atomic model from a cryo-EM map with annotated metadata. The accuracy of such a model is validated by a faithful replication of the experimental cryo-EM map computed using the coordinates and associated metadata. The functional interpretation of any structural features in the model and its utilization for future studies can be made in the context of its measure of uncertainty. We applied this protocol to the 3.3-Å map of the mature P22 bacteriophage capsid, a large and complex macromolecular assembly. With this protocol, we identify and annotate previously undescribed molecular interactions between capsid subunits that are crucial to maintain stability in the absence of cementing proteins or cross-linking, as occur in other bacteriophages.

  7. Accurate model annotation of a near-atomic resolution cryo-EM map

    PubMed Central

    Hryc, Corey F.; Chen, Dong-Hua; Afonine, Pavel V.; Jakana, Joanita; Wang, Zhao; Haase-Pettingell, Cameron; Jiang, Wen; Adams, Paul D.; King, Jonathan A.; Schmid, Michael F.; Chiu, Wah

    2017-01-01

    Electron cryomicroscopy (cryo-EM) has been used to determine the atomic coordinates (models) from density maps of biological assemblies. These models can be assessed by their overall fit to the experimental data and stereochemical information. However, these models do not annotate the actual density values of the atoms nor their positional uncertainty. Here, we introduce a computational procedure to derive an atomic model from a cryo-EM map with annotated metadata. The accuracy of such a model is validated by a faithful replication of the experimental cryo-EM map computed using the coordinates and associated metadata. The functional interpretation of any structural features in the model and its utilization for future studies can be made in the context of its measure of uncertainty. We applied this protocol to the 3.3-Å map of the mature P22 bacteriophage capsid, a large and complex macromolecular assembly. With this protocol, we identify and annotate previously undescribed molecular interactions between capsid subunits that are crucial to maintain stability in the absence of cementing proteins or cross-linking, as occur in other bacteriophages. PMID:28270620

  8. Accurate model annotation of a near-atomic resolution cryo-EM map

    DOE PAGES

    Hryc, Corey F.; Chen, Dong-Hua; Afonine, Pavel V.; ...

    2017-03-07

    Electron cryomicroscopy (cryo-EM) has been used to determine the atomic coordinates (models) from density maps of biological assemblies. These models can be assessed by their overall fit to the experimental data and stereochemical information. However, these models do not annotate the actual density values of the atoms nor their positional uncertainty. Here, we introduce a computational procedure to derive an atomic model from a cryo-EM map with annotated metadata. The accuracy of such a model is validated by a faithful replication of the experimental cryo-EM map computed using the coordinates and associated metadata. The functional interpretation of any structural features in the model and its utilization for future studies can be made in the context of its measure of uncertainty. We applied this protocol to the 3.3-Å map of the mature P22 bacteriophage capsid, a large and complex macromolecular assembly. With this protocol, we identify and annotate previously undescribed molecular interactions between capsid subunits that are crucial to maintain stability in the absence of cementing proteins or cross-linking, as occur in other bacteriophages.

  9. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
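
    The middle, ill-posed step is the interesting one. A minimal sketch of the inverse-filtering option with a truncated kernel (the split Bregman TV solver the authors favor is more involved); the phantom, threshold, and grid size are illustrative:

    ```python
    # Susceptibility from a field map by truncated inversion of the unit
    # dipole kernel in k-space.
    import numpy as np

    n = 64
    k = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                              # avoid 0/0 at the DC term
    D = 1.0 / 3.0 - kz**2 / k2                     # unit dipole kernel, B0 || z

    chi_true = np.zeros((n, n, n))
    chi_true[24:40, 24:40, 24:40] = 1.0            # synthetic susceptibility box
    field = np.fft.ifftn(D * np.fft.fftn(chi_true)).real   # forward model

    t = 0.1                                        # truncation threshold
    mask = np.abs(D) > t
    D_inv = np.zeros_like(D)
    D_inv[mask] = 1.0 / D[mask]                    # truncated inverse filter
    chi_rec = np.fft.ifftn(D_inv * np.fft.fftn(field)).real

    print("correlation with truth: %.3f"
          % np.corrcoef(chi_true.ravel(), chi_rec.ravel())[0, 1])
    ```

    The kernel vanishes on the magic-angle cone, which is exactly where the truncation acts; TV regularization recovers what the zeroed cone discards, at the cost of the iterative solve.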

  10. Investigating the Use of 3d Geovisualizations for Urban Design in Informal Settlement Upgrading in South Africa

    NASA Astrophysics Data System (ADS)

    Rautenbach, V.; Coetzee, S.; Çöltekin, A.

    2016-06-01

    Informal settlements are a common occurrence in South Africa, and to improve in-situ circumstances of communities living in informal settlements, upgrades and urban design processes are necessary. Spatial data and maps are essential throughout these processes to understand the current environment, plan new developments, and communicate the planned developments. All stakeholders need to understand maps to actively participate in the process. However, previous research demonstrated that map literacy was relatively low for many planning professionals in South Africa, which might hinder effective planning. Because 3D visualizations resemble the real environment more than traditional maps, many researchers posited that they would be easier to interpret. Thus, our goal is to investigate the effectiveness of 3D geovisualizations for urban design in informal settlement upgrading in South Africa. We consider all involved processes: 3D modelling, visualization design, and cognitive processes during map reading. We found that procedural modelling is a feasible alternative to time-consuming manual modelling, and can produce high quality models. When investigating the visualization design, the visual characteristics of 3D models and relevance of a subset of visual variables for urban design activities of informal settlement upgrades were qualitatively assessed. The results of three qualitative user experiments contributed to understanding the impact of various levels of complexity in 3D city models and map literacy of future geoinformatics and planning professionals when using 2D maps and 3D models. The research results can assist planners in designing suitable 3D models that can be used throughout all phases of the process.

  11. Modeling the occurrence of Mycobacterium avium subsp. paratuberculosis in bulk raw milk and the impact of management options for exposure mitigation.

    PubMed

    Boulais, Christophe; Wacker, Ron; Augustin, Jean-Christophe; Cheikh, Mohamed Hedi Ben; Peladan, Fabrice

    2011-07-01

    Mycobacterium avium subsp. paratuberculosis (MAP) is the causal agent of paratuberculosis (Johne's disease) in cattle and other farm ruminants. The potential role of MAP in Crohn's disease in humans and the contribution of dairy products to human exposure to MAP continue to be the subject of scientific debate. The occurrence of MAP in bulk raw milk from dairy herds was assessed using a stochastic modeling approach. Raw milk samples were collected from bulk tanks in dairy plants and tested for the presence of MAP. Results from this analytical screening were used in a Bayesian network to update the model prediction. Of the 83 raw milk samples tested, 4 were positive for MAP by culture and PCR. We estimated that the level of MAP in bulk tanks ranged from 0 CFU/ml for the 2.5th percentile to 65 CFU/ml for the 97.5th percentile, with 95% credibility intervals of [0, 0] and [16, 326], respectively. The model was used to evaluate the effect of measures aimed at reducing the occurrence of MAP in raw milk. Reducing the prevalence of paratuberculosis has less of an effect on the occurrence of MAP in bulk raw milk than does managing clinically infected animals through good farming practices. Copyright © International Association for Food Protection.

  12. A Web-based Visualization System for Three Dimensional Geological Model using Open GIS

    NASA Astrophysics Data System (ADS)

    Nemoto, T.; Masumoto, S.; Nonogaki, S.

    2017-12-01

    A three-dimensional geological model is important information in various fields such as environmental assessment, urban planning, resource development, waste management and disaster mitigation. In this study, we have developed a web-based visualization system for 3D geological models using free and open source software. The system has been successfully implemented by integrating the web mapping engine MapServer and the geographic information system GRASS. MapServer handles the mapping of horizontal cross sections of the 3D geological model and a topographic map. GRASS provides the core components for management, analysis and image processing of the geological model. Online access to GRASS functions is enabled using PyWPS, an implementation of the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard. The system has two main functions. The two-dimensional visualization function allows users to generate horizontal and vertical cross sections of the 3D geological model. These images are delivered via the OGC WMS (Web Map Service) and WPS standards. Horizontal cross sections are overlaid on the topographic map. A vertical cross section is generated by clicking a start point and an end point on the map. The three-dimensional visualization function allows users to visualize geological boundary surfaces and a panel diagram from various angles by mouse operation. WebGL, a web technology that brings hardware-accelerated 3D graphics to the browser without installing additional software, is utilized for the 3D visualization. The geological boundary surfaces can be downloaded to incorporate the geologic structure into CAD designs and into models for various simulations. This study was supported by JSPS KAKENHI Grant Number JP16K00158.
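
    From the client side, such a system is driven through standard OGC requests. A minimal sketch using the OWSLib package; the endpoint URL, layer name, and bounding box are placeholders, not the authors' actual service:

    ```python
    # Requesting a horizontal cross-section image via the OGC WMS standard.
    from owslib.wms import WebMapService

    # Hypothetical endpoint; substitute a real MapServer URL to run this.
    wms = WebMapService("https://example.org/cgi-bin/mapserv", version="1.1.1")

    img = wms.getmap(
        layers=["geology_section_z-50m"],      # hypothetical layer name
        styles=[""],
        srs="EPSG:4326",
        bbox=(139.6, 35.5, 139.9, 35.8),       # lon/lat extent of interest
        size=(600, 600),
        format="image/png",
    )
    with open("section.png", "wb") as f:
        f.write(img.read())
    ```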

  13. Novice to Expert Cognition During Geologic Bedrock Mapping

    NASA Astrophysics Data System (ADS)

    Petcovic, H. L.; Libarkin, J.; Hambrick, D. Z.; Baker, K. M.; Elkins, J. T.; Callahan, C. N.; Turner, S.; Rench, T. A.; LaDue, N.

    2011-12-01

    Bedrock geologic mapping is a complex and cognitively demanding task. Successful mapping requires domain-specific content knowledge, visuospatial ability, navigation through the field area, creating a mental model of the geology that is consistent with field data, and metacognition. Most post-secondary geology students in the United States receive training in geologic mapping, however, not much is known about the cognitive processes that underlie successful bedrock mapping, or about how these processes change with education and experience. To better understand cognition during geologic mapping, we conducted a 2-year research study in which 67 volunteers representing a range from undergraduate sophomore to 20+ years professional experience completed a suite of cognitive measures plus a 1-day bedrock mapping task in the Rocky Mountains, Montana, USA. In addition to participants' geologic maps and field notes, the cognitive suite included tests and questionnaires designed to measure: (1) prior geologic experience, via a self-report survey; (2) geologic content knowledge, via a modified version of the Geoscience Concept Inventory; (3) visuospatial ability, working memory capacity, and perceptual speed, via paper-and-pencil and computerized tests; (4) use of space and time during mapping via GPS tracking; and (5) problem-solving in the field via think-aloud audio logs during mapping and post-mapping semi-structured interviews. Data were examined for correlations between performance on the mapping task and other measures. We found that both geological knowledge and spatial visualization ability correlated positively with accuracy in the field mapping task. More importantly, we found a Visuospatial Ability × Geological Knowledge interaction, such that visuospatial ability positively predicted mapping performance at low, but not high, levels of geological knowledge. In other words, we found evidence to suggest that visuospatial ability mattered for bedrock mapping for the novices in our sample, but not for the experts. For experienced mappers, we found a significant correlation between GCI scores and the thoroughness with which they covered the map area, plus a relationship between speed and map accuracy such that faster mappers produced better maps. However, fast novice mappers tended to produce the worst maps. Successful mappers formed a mental model of the underlying geologic structure immediately to early in the mapping task, then spent field time collecting observations to confirm, disconfirm, or modify their initial model. In contrast, the least successful mappers (all inexperienced) rarely generated explanations or models of the underlying geologic structure in the field.

  14. Dispersal kernel estimation: A comparison of empirical and modelled particle dispersion in a coastal marine system

    NASA Astrophysics Data System (ADS)

    Hrycik, Janelle M.; Chassé, Joël; Ruddick, Barry R.; Taggart, Christopher T.

    2013-11-01

    Early life-stage dispersal influences recruitment and is of significance in explaining the distribution and connectivity of marine species. Motivations for quantifying dispersal range from biodiversity conservation to the design of marine reserves and the mitigation of species invasions. Here we compare estimates of real particle dispersion in a coastal marine environment with similar estimates provided by hydrodynamic modelling. We do so by using a system of magnetically attractive particles (MAPs) and a magnetic-collector array that provides measures of Lagrangian dispersion based on the time-integration of MAPs dispersing through the array. MAPs released as a point source in a coastal marine location dispersed through the collector array over a 5-7 d period. A virtual release and observed (real-time) environmental conditions were used in a high-resolution three-dimensional hydrodynamic model to estimate the dispersal of virtual particles (VPs). The number of MAPs captured throughout the collector array and the number of VPs that passed through each corresponding model location were enumerated and compared. Although VP dispersal reflected several aspects of the observed MAP dispersal, the comparisons demonstrated model sensitivity to the small-scale (random-walk) particle diffusivity parameter (Kp). The one-dimensional dispersal kernel for the MAPs had an e-folding scale estimate in the range of 5.19-11.44 km, while those from the model simulations were comparable at 1.89-6.52 km, and also demonstrated sensitivity to Kp. Variations among comparisons are related to the value of Kp used in modelling and are postulated to be related to MAP losses from the water column and (or) shear dispersion acting on the MAPs; a process that is constrained in the model. Our demonstration indicates a promising new way of 1) quantitatively and empirically estimating the dispersal kernel in aquatic systems, and 2) quantitatively assessing and (or) improving regional hydrodynamic models.
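
    The e-folding scale itself comes from a one-parameter-family fit. A minimal worked example with synthetic collector counts (the study's kernel estimation handles geometry and particle losses that are ignored here):

    ```python
    # Fit an exponential 1D dispersal kernel K(d) = K0 * exp(-d / L);
    # L is the e-folding scale reported above.
    import numpy as np
    from scipy.optimize import curve_fit

    def kernel(d, k0, L):
        return k0 * np.exp(-d / L)

    dist_km = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 16.0])
    counts = np.array([950.0, 820.0, 640.0, 420.0, 180.0, 90.0, 40.0])

    (k0, L), _ = curve_fit(kernel, dist_km, counts, p0=(1000.0, 5.0))
    print("e-folding scale: %.2f km" % L)
    ```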

  15. The influence of uncertain map features on risk beliefs and perceived ambiguity for maps of modeled cancer risk from air pollution

    PubMed Central

    Myers, Jeffrey D.

    2012-01-01

    Maps are often used to convey information generated by models, for example, modeled cancer risk from air pollution. The concrete nature of images, such as maps, may convey more certainty than warranted for modeled information. Three map features were selected to communicate the uncertainty of modeled cancer risk: (a) map contours appeared in or out of focus, (b) one or three colors were used, and (c) a verbal-relative or numeric risk expression was used in the legend. Study aims were to assess how these features influenced risk beliefs and the ambiguity of risk beliefs at four assigned map locations that varied by risk level. We applied an integrated conceptual framework to conduct this full factorial experiment with 32 maps that varied by the three dichotomous features and four risk levels; 826 university students participated. Data were analyzed using structural equation modeling. Unfocused contours and the verbal-relative risk expression generated more ambiguity than their counterparts. Focused contours generated stronger risk beliefs for higher risk levels and weaker beliefs for lower risk levels. Number of colors had minimal influence. The magnitude of risk level, conveyed using incrementally darker shading, had a substantial dose-response influence on the strength of risk beliefs. Personal characteristics of prior beliefs and numeracy also had substantial influences. Bottom-up and top-down information processing suggest why iconic visual features of incremental shading and contour focus had the strongest visual influences on risk beliefs and ambiguity. Variations in contour focus and risk expression show promise for fostering appropriate levels of ambiguity. PMID:22985196

  16. A new strategy for developing Vs30 maps

    USGS Publications Warehouse

    Wald, David J.; McWhirter, Leslie; Thompson, Eric; Hering, Amanda S.

    2011-01-01

    Despite obvious limitations as a proxy for site amplification, the use of time-averaged shear-wave velocity over the top 30 m (Vs30) is useful and widely practiced, most notably through its use as an explanatory variable in ground motion prediction equations (and thus hazard maps and ShakeMaps, among other applications). Local, regional, and global Vs30 maps thus have diverse and fundamental uses in earthquake and engineering seismology. As such, we are developing an improved strategy for producing Vs30 maps given the common observational constraints available in any region for various spatial scales. We investigate a hierarchical approach to mapping Vs30, where the baseline model is derived from topographic slope because it is available globally, but geological maps and Vs30 observations contribute, where available. Using the abundant measured Vs30 values in Taiwan as an example, we analyze Vs30 versus slope per geologic unit and observe minor trends that indicate potential interaction of geologic and slope terms. We then regress Vs30 against the geologic-unit medians, topographic slope, and cross terms to obtain a hybrid model. The residuals of this hybrid model still exhibit a strong spatial correlation structure, so we use the kriging-with-a-trend method (the trend is the hybrid model) to further refine the Vs30 map so as to honor the Vs30 observations. Unlike the geology or slope models alone, this strategy takes advantage of the predictive capabilities of the two models, yet effectively defaults to ordinary kriging in the vicinity of the observed data, thereby achieving consistency with the observations.
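
    A sketch of the kriging-with-a-trend idea follows, assuming a stand-in hybrid trend function and an exponential covariance with invented parameters; the paper's actual regression coefficients and variogram are not reproduced here.

    ```python
    # Kriging with a trend for Vs30: predict with a hybrid (geology + slope)
    # trend, then krige the trend residuals so the map honors observations
    # near data. All functions and numbers below are illustrative stand-ins.
    import numpy as np

    rng = np.random.default_rng(1)

    def trend(slope, geo_median):
        # stand-in for the regressed hybrid model (geologic median x slope)
        return geo_median * (1.0 + 0.15 * np.log10(slope))

    xy = rng.uniform(0, 50, size=(30, 2))            # site coordinates (km)
    slope = rng.uniform(0.001, 0.2, 30)              # topographic slope (m/m)
    geo_med = rng.choice([250.0, 400.0, 600.0], 30)  # geologic-unit median Vs30
    vs30_obs = trend(slope, geo_med) + rng.normal(0, 40, 30)
    resid = vs30_obs - trend(slope, geo_med)         # residuals to be kriged

    def cov_exp(h, sill=1600.0, range_km=10.0):
        return sill * np.exp(-h / range_km)          # exponential covariance

    target = np.array([25.0, 25.0])
    H = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    K = cov_exp(H) + 1e-9 * np.eye(len(xy))          # jitter stabilizes solve
    k0 = cov_exp(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(K, k0)                       # simple-kriging weights

    vs30_pred = trend(0.05, 400.0) + w @ resid       # trend + kriged residual
    print(f"Vs30 at target: {vs30_pred:.0f} m/s")
    ```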

  17. Geological maps and models: are we certain how uncertain they are?

    NASA Astrophysics Data System (ADS)

    Mathers, Steve; Waters, Colin; McEvoy, Fiona

    2014-05-01

    Geological maps and latterly 3D models provide the spatial framework for geology at diverse scales or resolutions. As demands continue to rise for sustainable use of the subsurface, use of these maps and models is informing decisions on the management of natural resources, hazards and environmental change. Inaccuracies and uncertainties in geological maps and models can impact substantially on the perception, assessment and management of opportunities and the associated risks. Lithostratigraphical classification schemes predominate, and are used in most geological mapping and modelling. The definition of unit boundaries, as 2D lines or 3D surfaces, is the prime objective. The intervening area or volume is rarely described other than by its bulk attributes, those relating to the whole unit. Where sufficient data exist on the spatial and/or statistical distribution of properties, it can be gridded or voxelated with integrity. Here we only discuss the uncertainty involved in defining the boundary conditions. The primary uncertainty of any geological map or model is the accuracy of the geological boundaries, i.e. tops, bases, limits, fault intersections etc. Traditionally these have been depicted on BGS maps using three line styles that reflect the uncertainty of the boundary, e.g. observed, inferred, conjectural. Most geological maps tend to neglect the subsurface expression (subcrops etc). Models could also be built with subsurface geological boundaries (as digital node strings) tagged with levels of uncertainty; initial experience suggests three levels may again be practicable. Once tagged, these values could be used to autogenerate uncertainty plots. Whilst maps are predominantly explicit and based upon evidence and the conceptual understanding of the geologist, models of this type are less common and tend to be restricted to certain software methodologies. Many modelling packages are implicit, being driven by simple statistical interpolation or complex algorithms for building surfaces in ways that are invisible to, and so not controlled by, the working geologist. Such models have the advantage of being replicable within a software package and so can discount some interpretational differences between modellers. They can, however, create geologically implausible results unless good geological rules and control are established prior to model calculation. Comparisons of results from varied software packages yield surprisingly diverse results. This is a significant and often overlooked source of uncertainty in models. Expert elicitation is commonly employed to establish values used in statistical treatments of model uncertainty. However, this introduces another possible source of uncertainty created by the differing judgements of the modellers. The pragmatic solution appears to be using panels of experienced geologists to elicit the values. Treatments of uncertainty in maps and models yield relative rather than absolute values, even though many of these are expressed numerically. This makes it extremely difficult to devise standard methodologies to determine uncertainty or to propose fixed numerical scales for expressing the results. Furthermore, these may give a misleading impression of greater certainty than actually exists. This contribution outlines general perceptions with regard to uncertainty in our maps and models and presents results from recent BGS studies.

  18. POSTERIOR PREDICTIVE MODEL CHECKS FOR DISEASE MAPPING MODELS. (R827257)

    EPA Science Inventory

    Disease incidence or disease mortality rates for small areas are often displayed on maps. Maps of raw rates, disease counts divided by the total population at risk, have been criticized as unreliable due to non-constant variance associated with heterogeneity in base population si...

  19. A Electronic Map Data Model Based on PDF

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaodong; Yang, Chuncheng; Meng, Nina; Peng, Peng

    2018-05-01

    In this paper, we propose the PDFEMAP (PDF electronic map), a new kind of electronic map product that addresses the current situation of, and demand for, electronic map products. We first give the definition and characteristics of the PDFEMAP, then describe in detail the data model and the method for generating it, and finally discuss its application modes, whose feasibility and effectiveness are verified.

  20. Predictive mapping of soil organic carbon in wet cultivated lands using classification-tree based models: the case study of Denmark.

    PubMed

    Bou Kheir, Rania; Greve, Mogens H; Bøcher, Peder K; Greve, Mette B; Larsen, René; McCloy, Keith

    2010-05-01

    Soil organic carbon (SOC) is one of the most important carbon stocks globally and has large potential to affect global climate. Distribution patterns of SOC in Denmark constitute a nation-wide baseline for studies on soil carbon changes (with respect to the Kyoto protocol). This paper predicts and maps the geographic distribution of SOC across Denmark using remote sensing (RS), geographic information systems (GISs) and decision-tree modeling (un-pruned and pruned classification trees). Seventeen parameters, i.e. parent material, soil type, landscape type, elevation, slope gradient, slope aspect, mean curvature, plan curvature, profile curvature, flow accumulation, specific catchment area, tangent slope, tangent curvature, steady-state wetness index, Normalized Difference Vegetation Index (NDVI), Normalized Difference Wetness Index (NDWI) and Soil Color Index (SCI) were generated to statistically explain SOC field measurements in the area of interest (Denmark). A large number of tree-based classification models (588) were developed using (i) all of the parameters, (ii) all Digital Elevation Model (DEM) parameters only, (iii) the primary DEM parameters only, (iv) the remote sensing (RS) indices only, (v) selected pairs of parameters, (vi) soil type, parent material and landscape type only, and (vii) the parameters having a high impact on SOC distribution in built pruned trees. The three best classification tree models, having both the lowest misclassification error (ME) and the lowest number of nodes (N), are: (i) the tree (T1) combining all of the parameters (ME=29.5%; N=54); (ii) the tree (T2) based on the parent material, soil type and landscape type (ME=31.5%; N=14); and (iii) the tree (T3) constructed using parent material, soil type, landscape type, elevation, tangent slope and SCI (ME=30%; N=39). The SOC maps produced at 1:50,000 cartographic scale using these trees agree closely, with coincidence values of 90.5% (Map T1/Map T2), 95% (Map T1/Map T3) and 91% (Map T2/Map T3). The overall accuracies of these maps, when compared with field observations, were estimated to be 69.54% (Map T1), 68.87% (Map T2) and 69.41% (Map T3). The proposed tree models are relatively simple and may also be applied to other areas. Copyright 2010 Elsevier Ltd. All rights reserved.
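
    For readers unfamiliar with the method, a pruned classification tree of the kind used here can be grown in a few lines; the covariates and SOC classes below are synthetic stand-ins, not the Danish dataset.

    ```python
    # Pruned classification tree in the spirit of the SOC study: predict a
    # SOC class from terrain/parent-material covariates and report the
    # misclassification error (ME) and tree size. Synthetic data only.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n = 1000
    X = np.column_stack([
        rng.integers(0, 5, n),       # parent material (coded)
        rng.integers(0, 8, n),       # soil type (coded)
        rng.uniform(0, 150, n),      # elevation (m)
        rng.uniform(0, 15, n),       # slope gradient (deg)
    ])
    y = (X[:, 2] < 40).astype(int) + (X[:, 0] == 0)  # synthetic classes 0..2

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    tree = DecisionTreeClassifier(ccp_alpha=0.002, random_state=0)  # pruned
    tree.fit(X_tr, y_tr)
    me = 1.0 - tree.score(X_te, y_te)
    print(f"ME = {me:.1%}, leaves = {tree.get_n_leaves()}")
    ```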

  1. Soil mapping and processes modelling for sustainable land management: a review

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Brevik, Eric; Muñoz-Rojas, Miriam; Miller, Bradley; Smetanova, Anna; Depellegrin, Daniel; Misiune, Ieva; Novara, Agata; Cerda, Artemi

    2017-04-01

    Soil maps and models are fundamental for correct and sustainable land management (Pereira et al., 2017). They are important for the assessment of territory and the implementation of sustainable measures in urban areas, agriculture, forests, and ecosystem services, among others. Soil maps represent an important basis for the evaluation and restoration of degraded areas, an important issue for our society as a consequence of climate change and the increasing pressure of humans on ecosystems (Brevik et al. 2016; Depellegrin et al., 2016). The understanding of soil spatial variability and the phenomena that influence its dynamics is crucial to the implementation of sustainable practices that prevent degradation and decrease the economic costs of soil restoration. In this context, soil maps and models are important to identify areas affected by degradation and to optimize the resources available to restore them. Overall, soil data, alone or integrated with data from other sciences, are an important part of sustainable land management. This information is extremely important for land managers and decision makers implementing sustainable land management policies. The objective of this work is to present a review of the advantages of soil mapping and process modeling for sustainable land management. References Brevik, E., Calzolari, C., Miller, B., Pereira, P., Kabala, C., Baumgarten, A., Jordán, A. (2016) Historical perspectives and future needs in soil mapping, classification and pedological modelling, Geoderma, 264, Part B, 256-274. Depellegrin, D.A., Pereira, P., Misiune, I., Egarter-Vigl, L. (2016) Mapping Ecosystem Services in Lithuania. International Journal of Sustainable Development and World Ecology, 23, 441-455. Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B., Smetanova, A., Depellegrin, D., Misiune, I., Novara, A., Cerda, A. (2017) Soil mapping and process modelling for sustainable land management. In: Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B. (Eds.) Soil mapping and process modelling for sustainable land use management (Elsevier Publishing House) ISBN: 9780128052006

  2. Quantifying soil burn severity for hydrologic modeling to assess post-fire effects on sediment delivery

    NASA Astrophysics Data System (ADS)

    Dobre, Mariana; Brooks, Erin; Lew, Roger; Kolden, Crystal; Quinn, Dylan; Elliot, William; Robichaud, Pete

    2017-04-01

    Soil erosion is a secondary fire effect with great implications for many ecosystem resources. Depending on the burn severity, topography, and the weather immediately after the fire, soil erosion can impact municipal water supplies, degrade water quality, and reduce reservoirs' storage capacity. Scientists and managers use field and remotely sensed data to quickly assess post-fire burn severity in ecologically-sensitive areas. From these assessments, mitigation activities are implemented to minimize post-fire flood and soil erosion and to facilitate post-fire vegetation recovery. Alternatively, land managers can use fire behavior and spread models (e.g. FlamMap, FARSITE, FOFEM, or CONSUME) to identify sensitive areas a priori, and apply strategies such as fuel reduction treatments to proactively minimize the risk of wildfire spread and increased burn severity. There is a growing interest in linking fire behavior and spread models with hydrology-based soil erosion models to provide site-specific assessment of mitigation treatments on post-fire runoff and erosion. The challenge remains, however, that many burn severity mapping and modeling products quantify vegetation loss rather than measuring soil burn severity. Wildfire burn severity is spatially heterogeneous and depends on the pre-fire vegetation cover, fuel load, topography, and weather. Severities also differ depending on the variable of interest (e.g. soil, vegetation). In the United States, Burned Area Reflectance Classification (BARC) maps, derived from Landsat satellite images, are used as an initial burn severity assessment. BARC maps are classified from either a Normalized Burn Ratio (NBR) or differenced Normalized Burned Ratio (dNBR) scene into four classes (Unburned, Low, Moderate, and High severity). The development of soil burn severity maps requires further manual field validation efforts to transform the BARC maps into a product more applicable for post-fire soil rehabilitation activities. Alternative spectral indices and modeled output approaches may prove better predictors of soil burn severity and hydrologic effects, but these have not yet been assessed in a model framework. In this project we compare field-verified soil burn severity maps to satellite-derived and modeled burn severity maps. We quantify the extent to which there are systematic differences in these mapping products. We then use the Water Erosion Prediction Project (WEPP) hydrologic soil erosion model to assess sediment delivery from these fires using the predicted and observed soil burn severity maps. Finally, we discuss differences in observed and predicted soil burn severity maps and application to watersheds in the Pacific Northwest to estimate post-fire sediment delivery.
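
    The NBR/dNBR arithmetic behind a BARC-style classification is short; a sketch with toy reflectance rasters follows. The class thresholds used are commonly cited illustrative values, not an official BARC specification.

    ```python
    # Compute dNBR from pre- and post-fire NIR/SWIR bands and bin it into
    # the four BARC-style classes (Unburned, Low, Moderate, High).
    # Toy rasters and illustrative thresholds only.
    import numpy as np

    def nbr(nir, swir2):
        return (nir - swir2) / (nir + swir2 + 1e-12)

    nir_pre = np.full((3, 3), 0.45)             # pre-fire reflectance
    swir_pre = np.full((3, 3), 0.15)
    nir_post = np.array([[0.40, 0.30, 0.20]] * 3)
    swir_post = np.array([[0.18, 0.30, 0.45]] * 3)

    dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
    # 0 = unburned, 1 = low, 2 = moderate, 3 = high severity
    classes = np.digitize(dnbr, [0.10, 0.27, 0.66])
    print(dnbr.round(2))
    print(classes)
    ```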

  3. Evaluation of Empirical Tropospheric Models Using Satellite-Tracking Tropospheric Wet Delays with Water Vapor Radiometer at Tongji, China

    PubMed Central

    Wang, Miaomiao; Li, Bofeng

    2016-01-01

    An empirical tropospheric delay model, together with a mapping function, is commonly used to correct tropospheric errors in global navigation satellite system (GNSS) processing. As is well known, the accuracy of tropospheric delay models relies mainly on the correction efficiency for tropospheric wet delays. In this paper, we evaluate the accuracy of three tropospheric delay models, together with five mapping functions, in wet delay calculation. The evaluations are conducted by comparing their slant wet delays with those measured by a water vapor radiometer based on its satellite-tracking function (collected data with large liquid water path are removed). For all 15 combinations of three tropospheric models and five mapping functions, their accuracies as a function of elevation are statistically analyzed using nine days of data in two scenarios, with and without meteorological data. The results show that (1) with or without meteorological data, there is no practical difference among the mapping functions, i.e., Chao, Ifadis, Vienna Mapping Function 1 (VMF1), Niell Mapping Function (NMF), and MTT Mapping Function (MTT); (2) without meteorological data, UNB3 is much better than the Saastamoinen and Hopfield models, while the Saastamoinen model performs slightly better than the Hopfield model; (3) with meteorological data, the accuracies of all three tropospheric delay models improve to become comparable, especially at lower elevations. In addition, kinematic precise point positioning, in which no parameter is estimated for residual tropospheric delay, is conducted to further evaluate the performance of the tropospheric delay models in positioning accuracy. It is shown that the UNB3 model is best and can achieve about 10 cm accuracy for the N and E coordinate components and about 20 cm for the U component, whether or not meteorological data are available. This accuracy can be obtained with the Saastamoinen model only when meteorological data are available, and degrades to 46 cm for the U component when they are not. PMID:26848662
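
    The basic correction has the form slant delay = zenith delay × mapping function. A sketch using the Saastamoinen zenith hydrostatic delay and the crude 1/sin(elevation) mapping function follows; the mapping functions evaluated in the paper (VMF1, NMF, etc.) are far more elaborate at low elevations.

    ```python
    # Slant delay = zenith delay x mapping function: a minimal sketch with
    # the Saastamoinen zenith hydrostatic delay and a naive 1/sin(elev)
    # mapping function. Inputs (pressure, latitude, height) are invented.
    import numpy as np

    def saastamoinen_zhd(p_hpa, lat_rad, h_m):
        """Zenith hydrostatic delay (m) from surface pressure."""
        return 0.0022768 * p_hpa / (
            1.0 - 0.00266 * np.cos(2 * lat_rad) - 2.8e-7 * h_m)

    zhd = saastamoinen_zhd(p_hpa=1013.25, lat_rad=np.radians(31.2), h_m=50.0)
    for elev_deg in (90, 30, 15, 5):
        slant = zhd / np.sin(np.radians(elev_deg))  # crude mapping function
        print(f"elev {elev_deg:2d} deg: slant hydrostatic delay = {slant:.2f} m")
    ```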

  4. Modeling of depth to base of Last Glacial Maximum and seafloor sediment thickness for the California State Waters Map Series, eastern Santa Barbara Channel, California

    USGS Publications Warehouse

    Wong, Florence L.; Phillips, Eleyne L.; Johnson, Samuel Y.; Sliter, Ray W.

    2012-01-01

    Models of the depth to the base of Last Glacial Maximum and sediment thickness over the base of Last Glacial Maximum for the eastern Santa Barbara Channel are a key part of the maps of shallow subsurface geology and structure for offshore Refugio to Hueneme Canyon, California, in the California State Waters Map Series. A satisfactory interpolation of the two datasets that accounted for regional geologic structure was developed using geographic information systems modeling and graphics software tools. Regional sediment volumes were determined from the model. Source data files suitable for geographic information systems mapping applications are provided.

  5. Mapping Resource Selection Functions in Wildlife Studies: Concerns and Recommendations

    PubMed Central

    Morris, Lillian R.; Proffitt, Kelly M.; Blackburn, Jason K.

    2018-01-01

    Predicting the spatial distribution of animals is an important and widely used tool with applications in wildlife management, conservation, and population health. Wildlife telemetry technology coupled with the availability of spatial data and GIS software have facilitated advancements in species distribution modeling. There are also challenges related to these advancements, including the accurate and appropriate implementation of species distribution modeling methodology. Resource Selection Function (RSF) modeling is a commonly used approach for understanding species distributions and habitat usage, and mapping the RSF results can enhance study findings and make them more accessible to researchers and wildlife managers. Currently, there is no consensus in the literature on the most appropriate method for mapping RSF results, methods are frequently not described, and mapping approaches are not always related to accuracy metrics. We conducted a systematic review of the RSF literature to summarize the methods used to map RSF outputs and to discuss the relationship between mapping approaches and accuracy metrics, performed a case study on the implications of employing different mapping methods, and provide recommendations as to appropriate mapping techniques for RSF studies. We found extensive variability in methodology for mapping RSF results. Our case study revealed that the most commonly used approaches for mapping RSF results led to notable differences in the visual interpretation of RSF results, and there is a concerning disconnect between accuracy metrics and mapping methods. We make 5 recommendations for researchers mapping the results of RSF studies, which are focused on carefully selecting and describing the method used to map RSF results, and on relating mapping approaches to accuracy metrics. PMID:29887652
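
    One widely used workflow the review covers — fit a used/available logistic model, exponentiate the linear predictor, and bin the result into quantile classes — can be sketched as follows, with synthetic covariates in place of real habitat layers:

    ```python
    # A common RSF mapping workflow: used/available logistic regression,
    # relative selection w(x) = exp(x' beta), then 10 equal-count (quantile)
    # classes for the map. Synthetic data; not the paper's case study.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 2000
    X = rng.normal(size=(n, 2))                  # e.g., elevation, forest cover
    used = rng.random(n) < 1 / (1 + np.exp(-(1.2 * X[:, 0] - 0.8 * X[:, 1])))

    beta = LogisticRegression().fit(X, used).coef_.ravel()
    w = np.exp(X @ beta)                         # RSF values (relative, unitless)
    bins = np.quantile(w, np.linspace(0, 1, 11)) # 10 equal-count classes
    rsf_class = np.digitize(w, bins[1:-1]) + 1   # classes 1..10 for the map
    print(np.bincount(rsf_class)[1:])            # ~200 cells per mapped class
    ```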

  6. A Model to Aid Topo-Map Interpretation

    ERIC Educational Resources Information Center

    Westerback, Mary

    1976-01-01

    Describes how to construct models of contour lines from flexible, colored bell wire. These models are used to illustrate three-dimensional terrain characteristics represented by contour lines printed on a flat map. (MLH)

  7. EMRinger: side chain–directed model and map validation for 3D cryo-electron microscopy

    DOE PAGES

    Barad, Benjamin A.; Echols, Nathaniel; Wang, Ray Yu-Ruei; ...

    2015-08-17

    Advances in high-resolution cryo-electron microscopy (cryo-EM) require the development of validation metrics to independently assess map quality and model geometry. We report EMRinger, a tool that assesses the precise fitting of an atomic model into the map during refinement and shows how radiation damage alters scattering from negatively charged amino acids. EMRinger (https://github.com/fraser-lab/EMRinger) will be useful for monitoring progress in resolving and modeling high-resolution features in cryo-EM.

  8. The multiscale coarse-graining method. XI. Accurate interactions based on the centers of charge of coarse-grained sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Zhen; Voth, Gregory A., E-mail: gavoth@uchicago.edu

    It is essential to be able to systematically construct coarse-grained (CG) models that can efficiently and accurately reproduce key properties of higher-resolution models such as all-atom. To fulfill this goal, a mapping operator is needed to transform the higher-resolution configuration to a CG configuration. Certain mapping operators, however, may lose information related to the underlying electrostatic properties. In this paper, a new mapping operator based on the centers of charge of CG sites is proposed to address this issue. Four example systems are chosen to demonstrate this concept. Within the multiscale coarse-graining framework, CG models that use this mapping operator are found to better reproduce the structural correlations of atomistic models. The present work also demonstrates the flexibility of the mapping operator and the robustness of the force matching method. For instance, important functional groups can be isolated and emphasized in the CG model.
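
    The center-of-charge operator replaces the usual mass weighting with |charge| weighting when placing a CG site. A toy comparison follows; the coordinates, masses, and partial charges are invented, not taken from a real force field.

    ```python
    # Center-of-charge vs. center-of-mass mapping for one CG site. Weighting
    # atom positions by |partial charge| keeps the CG site where the charge
    # actually sits, which is the paper's motivation. Toy numbers only.
    import numpy as np

    pos = np.array([[0.0, 0.0, 0.0],    # C
                    [1.1, 0.0, 0.0],    # O (carbonyl-like)
                    [-0.6, 0.9, 0.0]])  # H
    mass = np.array([12.011, 15.999, 1.008])
    charge = np.array([0.5, -0.6, 0.1])

    r_com = mass @ pos / mass.sum()                      # center of mass
    r_coc = np.abs(charge) @ pos / np.abs(charge).sum()  # center of charge
    print("COM:", r_com.round(3), " COC:", r_coc.round(3))
    ```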

  9. Development of a transformation model to derive general population-based utility: Mapping the pruritus-visual analog scale (VAS) to the EQ-5D utility.

    PubMed

    Park, Sun-Young; Park, Eun-Ja; Suh, Hae Sun; Ha, Dongmun; Lee, Eui-Kyung

    2017-08-01

    Although nonpreference-based disease-specific measures are widely used in clinical studies, they cannot generate utilities for economic evaluation. A solution to this problem is to estimate utilities from disease-specific instruments using the mapping function. This study aimed to develop a transformation model for mapping the pruritus-visual analog scale (VAS) to the EuroQol 5-Dimension 3-Level (EQ-5D-3L) utility index in pruritus. A cross-sectional survey was conducted with a sample (n = 268) drawn from the general population of South Korea. Data were randomly divided into 2 groups, one for estimating and the other for validating mapping models. To select the best model, we developed and compared 3 separate models using demographic information and the pruritus-VAS as independent variables. The predictive performance was assessed using the mean absolute deviation and root mean square error in a separate dataset. Among the 3 models, model 2 using age, age squared, sex, and the pruritus-VAS as independent variables had the best performance based on the goodness of fit and model simplicity, with a log likelihood of 187.13. The 3 models had similar precision errors based on mean absolute deviation and root mean square error in the validation dataset. No statistically significant difference was observed between the mean observed and predicted values in all models. In conclusion, model 2 was chosen as the preferred mapping model. Outcomes measured as the pruritus-VAS can be transformed into the EQ-5D-3L utility index using this mapping model, which makes an economic evaluation possible when only pruritus-VAS data are available. © 2017 John Wiley & Sons, Ltd.
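
    A mapping function of this kind is, at core, a regression from the disease-specific score plus demographics to utility. A sketch with model 2's covariate set (age, age squared, sex, VAS) follows, on synthetic data with hypothetical coefficients; it is not the published model.

    ```python
    # Mapping-function sketch: regress EQ-5D utility on pruritus-VAS plus
    # demographics, then judge predictive error (MAD, RMSE) on a holdout
    # split. Synthetic data and invented coefficients only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 268
    df = pd.DataFrame({
        "age": rng.uniform(20, 75, n),
        "sex": rng.integers(0, 2, n),
        "vas": rng.uniform(0, 10, n),  # pruritus-VAS, 0 (none) to 10 (worst)
    })
    df["utility"] = (0.95 - 0.02 * df.vas - 0.001 * (df.age - 45)
                     + rng.normal(0, 0.05, n))

    train, test = df.iloc[:215], df.iloc[215:]
    fit = smf.ols("utility ~ vas + age + I(age**2) + sex", data=train).fit()
    pred = fit.predict(test)
    mad = np.mean(np.abs(pred - test.utility))
    rmse = np.sqrt(np.mean((pred - test.utility) ** 2))
    print(f"MAD = {mad:.3f}, RMSE = {rmse:.3f}")
    ```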

  10. Interactive Web Interface to the Global Strain Rate Map Project

    NASA Astrophysics Data System (ADS)

    Meertens, C. M.; Estey, L.; Kreemer, C.; Holt, W.

    2004-05-01

    An interactive web interface allows users to explore the results of a global strain rate and velocity model and to compare them to other geophysical observations. The most recent model, an updated version of Kreemer et al., 2003, has 25 independent rigid plate-like regions separated by deformable boundaries covered by about 25,000 grid areas. A least-squares fit was made to 4900 geodetic velocities from 79 different geodetic studies. In addition, Quaternary fault slip rate data are used to infer geologic strain rate estimates (currently only for central Asia). Information about the style and direction of expected strain rate is inferred from the principal axes of the seismic strain rate field. The current model, as well as source data, references and an interactive map tool, are located at the International Lithosphere Program (ILP) "A Global Strain Rate Map (ILP II-8)" project website: http://www-world-strain-map.org. The purpose of the ILP GSRM project is to provide new information from this, and other investigations, that will contribute to a better understanding of continental dynamics and to the quantification of seismic hazards. A unique aspect of the GSRM interactive Java map tool is that the user can zoom in and make custom views of the model grid and results for any area of the globe, selecting strain rate and style contour plots and principal axes, observed and model velocity fields in specified frames of reference, and geologic fault data. The results can be displayed with other data sets such as Harvard CMT earthquake focal mechanisms, stress directions from the ILP World Stress Map Project, and topography. With the GSRM Java map tool, the user views custom maps generated by a Generic Mapping Tool (GMT) server. These interactive capabilities greatly extend what is possible to present in a published paper. A JavaScript version, using pre-constructed maps, as well as a related information site have also been created for broader education and outreach access. The GSRM map tool will be demonstrated and the latest model GSRM 1.1 results, containing important new data for Asia, Iran, the western Pacific, and Southern California, will be presented.

  11. Comparative physical mapping between wheat chromosome arm 2BL and rice chromosome 4.

    PubMed

    Lee, Tong Geon; Lee, Yong Jin; Kim, Dae Yeon; Seo, Yong Weon

    2010-12-01

    Physical maps of chromosomes provide a framework for organizing and integrating diverse genetic information. DNA microarrays are a valuable technique for physical mapping and can also be used to facilitate the discovery of single feature polymorphisms (SFPs). Wheat chromosome arm 2BL was physically mapped using a Wheat Genome Array onto near-isogenic lines (NILs) with the aid of wheat-rice synteny and mapped wheat EST information. Using high variance probe set (HVP) analysis, 314 HVPs constituting genes present on 2BL were identified. The 314 HVPs were grouped into 3 categories: HVPs that match only rice chromosome 4 (298 HVPs), those that match only wheat ESTs mapped on 2BL (1), and those that match both rice chromosome 4 and wheat ESTs mapped on 2BL (15). All HVPs were converted into gene sets, which represented either unique rice gene models or mapped wheat ESTs that matched identified HVPs. Comparative physical maps were constructed for 16 wheat gene sets and 271 rice gene sets. Of the 271 rice gene sets, 257 were mapped to the 18-35 Mb regions on rice chromosome 4. Based on HVP analysis and sequence similarity between the gene models in the rice chromosomes and mapped wheat ESTs, the outermost rice gene model that limits the translocation breakpoint to orthologous regions was identified.

  12. International Maps | Geospatial Data Science | NREL

    Science.gov Websites

    This map collection provides examples of how geographic information system modeling is used in international resource analysis.

  13. A mapping closure for turbulent scalar mixing using a time-evolving reference field

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.

    1992-01-01

    A general mapping-closure approach for modeling scalar mixing in homogeneous turbulence is developed. This approach is different from the previous methods in that the reference field also evolves according to the same equations as the physical scalar field. The use of a time-evolving Gaussian reference field results in a model that is similar to the mapping closure model of Pope (1991), which is based on the methodology of Chen et al. (1989). Both models yield identical relationships between the scalar variance and higher-order moments, which are in good agreement with heat conduction simulation data and can be consistent with any type of epsilon(phi) evolution. The present methodology can be extended to any reference field whose behavior is known. The possibility of a beta-pdf reference field is explored. The shortcomings of the mapping closure methods are discussed, and the limit at which the mapping becomes invalid is identified.
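
    Schematically, the central object in a mapping closure is a mapping X of a reference field with known statistics onto the physical scalar. In notation assumed here for illustration (not taken from the paper):

    ```latex
    % Mapping closure in brief: the physical scalar \phi is represented
    % through a mapping X of a reference field \theta with known (here
    % Gaussian) statistics, with both X and \theta evolving in time.
    \[
      \phi(\mathbf{x},t) = X\big(\theta(\mathbf{x},t),\, t\big),
      \qquad \theta \sim \mathcal{N}\big(0,\, \sigma_\theta^2(t)\big),
    \]
    % so that, for monotone X, the scalar PDF follows by change of variables:
    \[
      f_\phi(\psi;t) = f_\theta\big(X^{-1}(\psi,t)\big)\,
      \left|\frac{\partial X}{\partial\theta}\right|^{-1}_{\theta = X^{-1}(\psi,t)}.
    \]
    ```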

  14. Northern Forest Ecosystem Dynamics Using Coupled Models and Remote Sensing

    NASA Technical Reports Server (NTRS)

    Ranson, K. J.; Sun, G.; Knox, R. G.; Levine, E. R.; Weishampel, J. F.; Fifer, S. T.

    1999-01-01

    Forest ecosystem dynamics modeling, remote sensing data analysis, and a geographical information system (GIS) were used together to determine the possible growth and development of a northern forest in Maine, USA. Field measurements and airborne synthetic aperture radar (SAR) data were used to produce maps of forest cover type and above ground biomass. These forest attribute maps, along with a conventional soils map, were used to identify the initial conditions for forest ecosystem model simulations. Using this information along with ecosystem model results enabled the development of predictive maps of forest development. The results obtained were consistent with observed forest conditions and expected successional trajectories. The study demonstrated that ecosystem models might be used in a spatial context when parameterized and used with georeferenced data sets.

  15. Analysis of Error Propagation Within Hierarchical Air Combat Models

    DTIC Science & Technology

    2016-06-01

    [Extracted fragments of thesis front matter and references. Acronyms: MANA, Map Aware Non-Uniform Automata; MCET, Mine Warfare Capabilities and Effectiveness Tool; MOE, measure of effectiveness; MOP, measure of performance. The thesis develops a model for a two-versus-two air engagement between jet fighters in the stochastic, agent-based MANA simulation (Master's thesis, Naval Postgraduate School, Monterey, CA; cf. McIntosh, G. C. (2009). MANA-V (Map aware non-uniform automata – Vector) supplementary manual).]

  16. Model-Mapped RPA for Determining the Effective Coulomb Interaction

    NASA Astrophysics Data System (ADS)

    Sakakibara, Hirofumi; Jang, Seung Woo; Kino, Hiori; Han, Myung Joon; Kuroki, Kazuhiko; Kotani, Takao

    2017-04-01

    We present a new method to obtain a model Hamiltonian from first-principles calculations. The effective interaction contained in the model is determined on the basis of random phase approximation (RPA). In contrast to previous methods such as projected RPA and constrained RPA (cRPA), the new method named "model-mapped RPA" takes into account the long-range part of the polarization effect to determine the effective interaction in the model. After discussing the problems of cRPA, we present the formulation of the model-mapped RPA, together with a numerical test for the single-band Hubbard model of HgBa2CuO4.

  17. Spatio-temporal water quality mapping from satellite images using geographically and temporally weighted regression

    NASA Astrophysics Data System (ADS)

    Chu, Hone-Jay; Kong, Shish-Jeng; Chang, Chih-Hua

    2018-03-01

    The turbidity (TB) of a water body varies with time and space. Water quality is traditionally estimated via linear regression based on satellite images. However, estimating and mapping water quality require a spatio-temporally nonstationary model, so TB mapping here uses geographically and temporally weighted regression (GTWR) and geographically weighted regression (GWR) models, both of which are more precise than linear regression. Among the nonstationary models for mapping water quality, GTWR offers the best option for estimating regional water quality. Compared with GWR, GTWR provides highly reliable information for water quality mapping, achieves a relatively high goodness of fit, improves the explained variance from 44% to 87%, and shows sufficient space-time explanatory power. The seasonal patterns of TB and the main spatial patterns of TB variability can be identified using the TB maps estimated from GTWR and by conducting an empirical orthogonal function (EOF) analysis.
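
    The core of GTWR is a locally weighted least-squares fit at each target location and time, with weights that decay in both space and time. A minimal numpy sketch with toy data follows; the Gaussian kernel and bandwidths are arbitrary choices (in practice they are tuned, e.g., by cross-validation).

    ```python
    # GTWR in miniature: at each target (location, time), solve weighted
    # least squares with spatio-temporal kernel weights, so the regression
    # coefficients vary over space and time. Toy data and bandwidths only.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 300
    xy = rng.uniform(0, 10, (n, 2))         # station coordinates
    t = rng.uniform(0, 12, n)               # acquisition time (months)
    band = rng.normal(size=n)               # satellite reflectance predictor
    tb = 2.0 + (0.5 + 0.1 * xy[:, 0]) * band + rng.normal(0, 0.3, n)

    def gtwr_coef(target_xy, target_t, hs=2.0, ht=3.0):
        d2 = (((xy - target_xy) ** 2).sum(1) / hs**2
              + (t - target_t) ** 2 / ht**2)
        w = np.exp(-0.5 * d2)               # spatio-temporal Gaussian kernel
        X = np.column_stack([np.ones(n), band])
        XtW = X.T * w
        return np.linalg.solve(XtW @ X, XtW @ tb)  # local [intercept, slope]

    print(gtwr_coef(np.array([1.0, 5.0]), 6.0))    # slope near 0.6 expected
    print(gtwr_coef(np.array([9.0, 5.0]), 6.0))    # slope near 1.4 expected
    ```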

  18. Alteration, slope-classified alteration, and potential lahar inundation maps of volcanoes for the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Volcano Archive

    USGS Publications Warehouse

    Mars, John C.; Hubbard, Bernard E.; Pieri, David; Linick, Justin

    2015-01-01

    This study was undertaken during 2012–2013 in cooperation with the National Aeronautics and Space Administration (NASA). Since completion of this study, a new lahar modeling program (LAHAR_pz) has been released, which may produce slightly different modeling results from the LAHARZ model used in this study. The maps and data from this study should not be used in place of existing volcano hazard maps published by local authorities. For volcanoes without hazard maps and (or) published lahar-related hazard studies, this work will provide a starting point from which more accurate hazard maps can be produced. This is the first dataset to provide digital maps of altered volcanoes and adjacent watersheds that can be used for assessing volcanic hazards, hydrothermal alteration, and other volcanic processes in future studies.

  19. Virtual optical network mapping and core allocation in elastic optical networks using multi-core fibers

    NASA Astrophysics Data System (ADS)

    Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli

    2017-11-01

    Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces some challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping, and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation, and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping, and core allocation. The simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.

  20. A Mathematical Model for Storage and Recall of Images using Targeted Synchronization of Coupled Maps.

    PubMed

    Palaniyandi, P; Rangarajan, Govindan

    2017-08-21

    We propose a mathematical model for storage and recall of images using coupled maps. We start by theoretically investigating targeted synchronization in coupled map systems wherein only a desired (partial) subset of the maps is made to synchronize. A simple method is introduced to specify coupling coefficients such that targeted synchronization is ensured. The principle of this method is extended to storage/recall of images using coupled Rulkov maps. The process of adjusting coupling coefficients between Rulkov maps (often used to model neurons) for the purpose of storing a desired image mimics the process of adjusting synaptic strengths between neurons to store memories. Our method uses both synchronization and synaptic weight modification, as the human brain is thought to do. The stored image can be recalled by providing an initial random pattern to the dynamical system. The storage and recall of the standard image of Lena is explicitly demonstrated.
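
    For concreteness, here is a pair of diffusively coupled Rulkov maps driven to synchronize, using one common parameterization of the map; the coupling strength and parameters are illustrative, and the paper's targeted scheme generalizes this idea to selected subsets of a larger network.

    ```python
    # Two diffusively coupled Rulkov maps (a common chaotic parameterization)
    # converging to synchrony. Parameter values and coupling strength are
    # illustrative only; they are not the paper's targeted-coupling scheme.
    import numpy as np

    def rulkov_step(x, y, alpha=4.1, mu=0.001, sigma=-1.5):
        x_new = alpha / (1.0 + x * x) + y   # fast (spiking) variable
        y_new = y - mu * (x - sigma)        # slow variable
        return x_new, y_new

    x = np.array([-1.0, 1.2])
    y = np.array([-3.0, -2.8])
    eps = 0.35                              # diffusive coupling strength
    for _ in range(3000):
        x, y = rulkov_step(x, y)
        x = x + eps * (x[::-1] - x)         # mutual coupling of the pair
    print("final |x1 - x2| =", abs(x[0] - x[1]))  # ~0 => synchronized
    ```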

  1. Retrieval and Mapping of Heavy Metal Concentration in Soil Using Time Series Landsat 8 Imagery

    NASA Astrophysics Data System (ADS)

    Fang, Y.; Xu, L.; Peng, J.; Wang, H.; Wong, A.; Clausi, D. A.

    2018-04-01

    Heavy metal pollution is a critical global environmental problem of long-standing concern. The traditional approach to obtaining heavy metal concentrations, relying on field sampling and laboratory testing, is expensive and time consuming. Although many related studies use spectrometer data to build a relational model between heavy metal concentration and spectral information, and then use the model for prediction from hyperspectral imagery, this approach can hardly map the soil metal concentration of an area quickly and accurately because of the discrepancies between spectrometer data and remote sensing imagery. Taking advantage of the easy accessibility of Landsat 8 data, this study utilizes Landsat 8 imagery to retrieve soil Cu concentrations and map their distribution in the study area. To enlarge the spectral information for more accurate retrieval and mapping, 11 single-date Landsat 8 images from 2013-2017 are selected to form a time series. Three regression methods, partial least square regression (PLSR), artificial neural network (ANN) and support vector regression (SVR), are used for model construction. By comparing these models without bias, the best model is selected for mapping the Cu concentration distribution. The produced distribution map shows good spatial autocorrelation and consistency with the mining area locations.
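
    The model comparison step can be made unbiased with cross-validation. A sketch comparing the three regression families on synthetic spectra follows; the band counts and Cu values are invented, and the hyperparameters are placeholders.

    ```python
    # Compare PLSR, ANN, and SVR on the same spectra-to-concentration task
    # with 5-fold cross-validation. Synthetic bands and Cu values only.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    n, bands = 200, 77                  # e.g., 11 dates x 7 Landsat 8 bands
    X = rng.normal(size=(n, bands))
    cu = X[:, :5].sum(1) * 8 + 60 + rng.normal(0, 4, n)  # synthetic Cu (mg/kg)

    models = {
        "PLSR": PLSRegression(n_components=5),
        "ANN": make_pipeline(StandardScaler(),
                             MLPRegressor((32,), max_iter=2000, random_state=0)),
        "SVR": make_pipeline(StandardScaler(), SVR(C=10.0)),
    }
    for name, m in models.items():
        r2 = cross_val_score(m, X, cu, cv=5, scoring="r2").mean()
        print(f"{name}: mean CV R^2 = {r2:.2f}")
    ```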

  2. Collapse susceptibility mapping in karstified gypsum terrain (Sivas basin - Turkey) by conditional probability, logistic regression, artificial neural network models

    NASA Astrophysics Data System (ADS)

    Yilmaz, Isik; Keskin, Inan; Marschalko, Marian; Bednarik, Martin

    2010-05-01

    This study compares GIS-based collapse susceptibility mapping methods, namely conditional probability (CP), logistic regression (LR), and artificial neural networks (ANN), applied to gypsum rock masses in the Sivas basin (Turkey). A Digital Elevation Model (DEM) was first constructed using GIS software. Collapse-related factors, directly or indirectly related to the causes of collapse occurrence, such as distance from faults, slope angle and aspect, topographical elevation, distance from drainage, topographic wetness index (TWI), stream power index (SPI), Normalized Difference Vegetation Index (NDVI) by means of vegetation cover, and distance from roads and settlements were used in the collapse susceptibility analyses. In the last stage of the analyses, collapse susceptibility maps were produced from the CP, LR and ANN models and were then compared by means of validation. Area Under Curve (AUC) values obtained from all three methodologies showed that the map obtained from the ANN model appears more accurate than those from the other models, and the results also showed that artificial neural networks are a useful tool for preparing collapse susceptibility maps and are highly compatible with GIS operations. Key words: Collapse; doline; susceptibility map; gypsum; GIS; conditional probability; logistic regression; artificial neural networks.

  3. Laminar development of receptive fields, maps and columns in visual cortex: the coordinating role of the subplate.

    PubMed

    Grossberg, Stephen; Seitz, Aaron

    2003-08-01

    How is development of cortical maps in V1 coordinated across cortical layers to form cortical columns? Previous neural models propose how maps of orientation (OR), ocular dominance (OD), and related properties develop in V1. These models show how spontaneous activity, before eye opening, combined with correlation learning and competition, can generate maps similar to those found in vivo. These models have not discussed laminar architecture or how cells develop and coordinate their connections across cortical layers. This is an important problem since anatomical evidence shows that clusters of horizontal connections form, between iso-oriented regions, in layer 2/3 before being innervated by layer 4 afferents. How are orientations in different layers aligned before these connections form? Anatomical evidence demonstrates that thalamic afferents wait in the subplate for weeks before innervating layer 4. Other evidence shows that ablation of the cortical subplate interferes with the development of OR and OD columns. The model proposes how the subplate develops OR and OD maps, which then entrain and coordinate the development of maps in other lamina. The model demonstrates how these maps may develop in layer 4 by using a known transient subplate-to-layer 4 circuit as a teacher. The model subplate also guides the early clustering of horizontal connections in layer 2/3, and the formation of the interlaminar circuitry that forms cortical columns. It is shown how layer 6 develops and helps to stabilize the network when the subplate atrophies. Finally the model clarifies how brain-derived neurotrophic factor (BDNF) manipulations may influence cortical development.

  4. Topsoil organic carbon content of Europe, a new map based on a generalised additive model

    NASA Astrophysics Data System (ADS)

    de Brogniez, Delphine; Ballabio, Cristiano; Stevens, Antoine; Jones, Robert J. A.; Montanarella, Luca; van Wesemael, Bas

    2014-05-01

    There is an increasing demand for up-to-date spatially continuous organic carbon (OC) data for global environment and climatic modeling. Whilst the current map of topsoil organic carbon content for Europe (Jones et al., 2005) was produced by applying expert-knowledge based pedo-transfer rules on large soil mapping units, the aim of this study was to replace it by applying digital soil mapping techniques on the first European harmonised geo-referenced topsoil (0-20 cm) database, which arises from the LUCAS (land use/cover area frame statistical survey) survey. A generalized additive model (GAM) was calibrated on 85% of the dataset (ca. 17 000 soil samples) and a backward stepwise approach selected slope, land cover, temperature, net primary productivity, latitude and longitude as environmental covariates (500 m resolution). The validation of the model (applied to 15% of the dataset) gave an R2 of 0.27. We observed that most organic soils were under-predicted by the model and that soils of Scandinavia were also poorly predicted. The model showed an RMSE of 42 g kg-1 for mineral soils and of 287 g kg-1 for organic soils. The map of predicted OC content showed the lowest values in Mediterranean countries and in croplands across Europe, whereas the highest OC contents were predicted in wetlands, woodlands, and mountainous areas. The map of standard error of the OC model predictions showed high values in northern latitudes, wetlands, moors and heathlands, whereas low uncertainty was mostly found in croplands. A comparison of our results with the map of Jones et al. (2005) showed a general agreement on the prediction of mineral soils' OC content, most probably because the models use some common covariates, namely land cover and temperature. Our model however failed to predict values of OC content greater than 200 g kg-1, which we explain by the imposed unimodal distribution of our model, whose mean is tilted towards the majority of soils, which are mineral. Finally, average OC content predictions for each land cover class compared well between models, with our model always showing smaller standard deviations. We concluded that the chosen model and covariates are appropriate for the prediction of OC content in European mineral soils. We presented in this work the first map of topsoil OC content at European scale based on a harmonised soil dataset. The associated uncertainty map shall support the end-users in a careful use of the predictions.

  5. Key science issues in the central and eastern United States for the next version of the USGS National Seismic Hazard Maps

    USGS Publications Warehouse

    Peterson, M.D.; Mueller, C.S.

    2011-01-01

    The USGS National Seismic Hazard Maps are updated about every six years by incorporating newly vetted science on earthquakes and ground motions. The 2008 hazard maps for the central and eastern United States region (CEUS) were updated by using revised New Madrid and Charleston source models, an updated seismicity catalog and an estimate of magnitude uncertainties, a distribution of maximum magnitudes, and several new ground-motion prediction equations. The new models resulted in significant ground-motion changes at 5 Hz and 1 Hz spectral acceleration with 5% damping compared to the 2002 version of the hazard maps. The 2008 maps have now been incorporated into the 2009 NEHRP Recommended Provisions, the 2010 ASCE-7 Standard, and the 2012 International Building Code. The USGS is now planning the next update of the seismic hazard maps, which will be provided to the code committees in December 2013. Science issues that will be considered for introduction into the CEUS maps include: 1) updated recurrence models for New Madrid sources, including new geodetic models and magnitude estimates; 2) new earthquake sources and techniques considered in the 2010 model developed by the nuclear industry; 3) new NGA-East ground-motion models (currently under development); and 4) updated earthquake catalogs. We will hold a regional workshop in late 2011 or early 2012 to discuss these and other issues that will affect the seismic hazard evaluation in the CEUS.

  6. Soil mapping and process modeling for sustainable land use management: a brief historical review

    NASA Astrophysics Data System (ADS)

    Brevik, Eric C.; Pereira, Paulo; Muñoz-Rojas, Miriam; Miller, Bradley A.; Cerdà, Artemi; Parras-Alcántara, Luis; Lozano-García, Beatriz

    2017-04-01

    Basic soil management goes back to the earliest days of agricultural practices, approximately 9,000 BCE. Through time humans developed soil management techniques of ever-increasing complexity, including plows, contour tillage, terracing, and irrigation. Spatial soil patterns were being recognized as early as 3,000 BCE, but the first soil maps did not appear until the 1700s and the first soil models finally arrived in the 1880s (Brevik et al., in press). The beginning of the 20th century saw an increase in standardization in many soil science methods and wide-spread soil mapping in many parts of the world, particularly in developed countries. However, the classification systems used, mapping scale, and national coverage varied considerably from country to country. Major advances were made in pedologic modeling starting in the 1940s, and in erosion modeling starting in the 1950s. In the 1970s and 1980s advances in computing power, remote and proximal sensing, geographic information systems (GIS), global positioning systems (GPS), and statistics and spatial statistics among other numerical techniques significantly enhanced our ability to map and model soils (Brevik et al., 2016). These types of advances positioned soil science to make meaningful contributions to sustainable land use management as we moved into the 21st century. References Brevik, E., Pereira, P., Muñoz-Rojas, M., Miller, B., Cerda, A., Parras-Alcantara, L., Lozano-Garcia, B. Historical perspectives on soil mapping and process modelling for sustainable land use management. In: Pereira, P., Brevik, E., Muñoz-Rojas, M., Miller, B. (eds) Soil mapping and process modelling for sustainable land use management (In press). Brevik, E., Calzolari, C., Miller, B., Pereira, P., Kabala, C., Baumgarten, A., Jordán, A. 2016. Historical perspectives and future needs in soil mapping, classification and pedological modelling, Geoderma, 264, Part B, 256-274.

  7. Mapping soil texture targeting predefined depth range or synthetizing from standard layers?

    NASA Astrophysics Data System (ADS)

    Laborczi, Annamária; Dezső Kaposi, András; Szatmári, Gábor; Takács, Katalin; Pásztor, László

    2017-04-01

    There are increasing demands nowadays on spatial soil information to support environment-related and land use management decisions. Physical soil properties, especially particle size distribution, play an important role in this context. A few of the requirements can be satisfied by the sand-, silt-, and clay-content maps compiled according to global standards such as GlobalSoilMap (GSM) or SoilGrids. Soil texture classes (e.g., according to the USDA classification) can be derived from these three fraction data, so a texture map can be compiled from the corresponding fraction maps. Soil texture class as well as fraction information represent direct inputs to crop, meteorological, and hydrological models. The model inputs frequently require maps representing soil features over the 0-30 cm depth, which is covered by three consecutive depth intervals according to standard specifications: 0-5 cm, 5-15 cm, and 15-30 cm. As GSM and SoilGrids have become the most detailed freely available spatial soil data sources, typical model users (e.g., meteorologists, agronomists, or hydrologists) would produce the input map from (the weighted mean of) these three layers. However, if the basic soil data and proper knowledge are obtainable, a soil texture map targeting the 0-30 cm layer directly can be compiled independently. In our work we compared Hungary's soil texture maps compiled using the same reference and auxiliary data and inference methods but for differing layer distributions. We produced the 0-30 cm clay, silt, and sand maps as well as the maps for the three standard layers (0-5 cm, 5-15 cm, 15-30 cm). Maps of sand, silt, and clay percentage were computed through regression kriging (RK) applying the additive log-ratio (alr) transformation. In addition to the Hungarian Soil Information and Monitoring System as reference soil data, a digital elevation model and its derived components, soil physical property maps, remotely sensed images, and land use, geological, and meteorological data were applied as auxiliary variables. We compared the directly compiled and the synthesized clay-content, sand-content, and texture class maps with different tools. In addition to pairwise comparison of basic statistical features (histograms, scatter plots), we examined the spatial distribution of the differences, and we quantified the taxonomic distances of the textural classes in order to investigate the differences between the map-pairs. We concluded that the directly computed and the synthesized maps show various differences. In the case of the clay- and sand-content maps, the map-pairs have to be considered statistically different; on the other hand, the differences between the texture class maps are not significant. In all cases, however, the differences mostly concern the extreme ranges and categories. Use of synthesized maps can amplify such extremes through error propagation in models and scenarios. Based on our results, we suggest using the directly compiled maps.
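
    Both operations discussed — synthesizing a 0-30 cm value from the standard depth intervals and the alr transform applied before regression kriging of compositional data — are short; a sketch with toy values:

    ```python
    # (1) Depth-weighted synthesis of a 0-30 cm property from the standard
    # layers, and (2) the additive log-ratio (alr) transform and its inverse
    # for compositional sand/silt/clay data. Toy values only.
    import numpy as np

    # (1) 0-30 cm clay as the thickness-weighted mean of the standard layers
    clay = np.array([24.0, 27.0, 31.0])      # % clay at 0-5, 5-15, 15-30 cm
    weights = np.array([5, 10, 15]) / 30.0   # layer thickness / total depth
    clay_0_30 = weights @ clay
    print(f"synthesized 0-30 cm clay = {clay_0_30:.1f} %")

    # (2) alr transform (clay as denominator); the two alr fields are what
    # would be kriged, then back-transformed so fractions sum to 100 %
    sand, silt = 45.0, 28.0
    clay_d = 100.0 - sand - silt
    alr = np.log([sand / clay_d, silt / clay_d])
    back = np.exp(alr)
    total = back.sum() + 1.0
    sand_b, silt_b = 100 * back / total      # recovers 45.0 and 28.0
    print(alr.round(3), sand_b.round(1), silt_b.round(1))
    ```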

  8. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    PubMed

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

    Current methods for the development of pelvic finite element (FE) models generally are based upon specimen specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R(2)=0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
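
    Landmark-based morphing can be sketched with a thin-plate-spline interpolant fit to corresponding landmarks and then applied to every node of the source mesh; the real pipeline additionally maps nodes onto the bone boundary. The landmark coordinates below are invented.

    ```python
    # Landmark-based mesh morphing in miniature: fit a thin-plate-spline
    # warp from source landmarks to target landmarks, then apply it to all
    # source FE nodes. Coordinates are invented, not CT-derived.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    src_lm = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10],
                       [10, 10, 0], [10, 0, 10], [0, 10, 10]], float)
    tgt_lm = src_lm * 1.07 + np.array([0.5, -0.3, 0.2])  # target landmarks

    warp = RBFInterpolator(src_lm, tgt_lm, kernel="thin_plate_spline")
    mesh_nodes = np.random.default_rng(7).uniform(0, 10, (1000, 3))
    morphed = warp(mesh_nodes)               # source nodes in target space
    print(morphed.shape, morphed[0].round(2))
    ```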

  9. Real-Time Large-Scale Dense Mapping with Surfels

    PubMed Central

    Fu, Xingyin; Zhu, Feng; Wu, Qingxiao; Sun, Yunlei; Lu, Rongrong; Yang, Ruigang

    2018-01-01

    Real-time dense mapping systems have been developed since the birth of consumer RGB-D cameras. Currently, there are two commonly used models in dense mapping systems: truncated signed distance function (TSDF) and surfel. State-of-the-art dense mapping systems usually work well in small-sized regions. The generated dense surface may be unsatisfactory around loop closures when system tracking drift grows large. In addition, the efficiency of a surfel-based system degrades as the number of model points in the map grows. In this paper, we propose to use two maps in the dense mapping system. The RGB-D images are integrated into a local surfel map. Old surfels that were reconstructed earlier and lie far from the camera frustum are moved from the local map to the global map. The number of surfels updated in the local map as each frame arrives is thus kept bounded. Therefore, the scene that can be reconstructed by our system is very large, and its frame rate remains high. We detect loop closures and optimize the pose graph to distribute system tracking drift. The positions and normals of the surfels in the map are also corrected using an embedded deformation graph so that they are consistent with the updated poses. In order to deal with large surface deformations, we propose a new method for constructing constraints with system trajectories and loop closure keyframes. The proposed new method stabilizes large-scale surface deformation. Experimental results show that our novel system performs better than prior state-of-the-art dense mapping systems. PMID:29747450

  10. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. VIII. A WIDE-AREA, HIGH-RESOLUTION MAP OF DUST EXTINCTION IN M31

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalcanton, Julianne J.; Fouesneau, Morgan; Weisz, Daniel R.

    We map the distribution of dust in M31 at 25 pc resolution using stellar photometry from the Panchromatic Hubble Andromeda Treasury survey. The map is derived with a new technique that models the near-infrared color–magnitude diagram (CMD) of red giant branch (RGB) stars. The model CMDs combine an unreddened foreground of RGB stars with a reddened background population viewed through a log-normal column density distribution of dust. Fits to the model constrain the median extinction, the width of the extinction distribution, and the fraction of reddened stars in each 25 pc cell. The resulting extinction map has a factor of ≳4 better resolution than maps of dust emission, while providing a more direct measurement of the dust column. There is superb morphological agreement between the new map and maps of the extinction inferred from dust emission by Draine et al. However, the widely used Draine and Li dust models overpredict the observed extinction by a factor of ∼2.5, suggesting that M31's true dust mass is lower and that dust grains are significantly more emissive than assumed in Draine et al. The observed factor of ∼2.5 discrepancy is consistent with similar findings in the Milky Way by the Planck Collaboration et al., but we find a more complex dependence on parameters from the Draine and Li dust models. We also show that the discrepancy with the Draine et al. map is lowest where the current interstellar radiation field has a harder spectrum than average. We discuss possible improvements to the CMD dust mapping technique, and explore further applications in both M31 and other galaxies.
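
    A toy version of the CMD reddening model, not the authors' fitting code: a fraction of model RGB stars is placed behind a log-normal extinction column, shifting their colors and magnitudes along an assumed reddening vector. All parameter values here are illustrative.

    ```python
    # Toy CMD dust model: fraction f_red of stars sits behind a log-normal
    # dust column with median extinction A_med and width sigma; the rest are
    # unreddened foreground. Reddening-vector coefficients are invented.
    import numpy as np

    rng = np.random.default_rng(1)

    def redden_cmd(color, mag, f_red=0.5, A_med=1.0, sigma=0.3,
                   c_coeff=0.4, m_coeff=0.3):
        """Apply log-normal extinction to a random fraction f_red of stars."""
        n = color.size
        reddened = rng.random(n) < f_red
        A_v = rng.lognormal(mean=np.log(A_med), sigma=sigma, size=n)
        color = color + np.where(reddened, c_coeff * A_v, 0.0)
        mag = mag + np.where(reddened, m_coeff * A_v, 0.0)
        return color, mag

    # Fitting A_med, sigma and f_red per 25 pc cell against the observed CMD
    # is then a standard histogram-matching / likelihood problem.
    ```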

  11. Using remote sensing in support of environmental management: A framework for selecting products, algorithms and methods.

    PubMed

    de Klerk, Helen M; Gilbertson, Jason; Lück-Vogel, Melanie; Kemp, Jaco; Munch, Zahn

    2016-11-01

    Traditionally, to map environmental features using remote sensing, practitioners will use training data to develop models on various satellite data sets using a number of classification approaches and use test data to select a single 'best performer' from which the final map is made. We instead use an omission/commission plot to evaluate the various results and compile a probability map from models that perform consistently strongly across a range of standard accuracy measures. We suggest that this easy-to-use approach can be applied in any study using remote sensing to map natural features for management action. We demonstrate this approach using optical remote sensing products of different spatial and spectral resolution to map the endemic and threatened flora of quartz patches in the Knersvlakte, South Africa. Quartz patches can be mapped using either SPOT 5 (used due to its relatively fine spatial resolution) or Landsat 8 imagery (used because it is freely accessible and has higher spectral resolution). Of the variety of classification algorithms available, we tested maximum likelihood and support vector machine classifiers, and applied these to raw spectral data, the first three PCA summaries of the data, and the standard normalised difference vegetation index. We found that there is no 'one size fits all' solution to the choice of a 'best fit' model (i.e. combination of classification algorithm and data sets), which is in agreement with the literature that classifier performance varies with data properties. We feel this lends support to our suggestion that, rather than identifying a 'single best' model and basing the map on that result alone, a probability map built from the range of consistently top-performing models provides a rigorous solution to environmental mapping. Copyright © 2016 Elsevier Ltd. All rights reserved.
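
    A minimal sketch of the probability-map idea: screen the candidate models on omission/commission error and average the binary maps of the survivors, so each pixel carries the fraction of retained models that predict the feature. The arrays and thresholds are placeholders.

    ```python
    # Probability map from consistently strong models rather than one 'best
    # performer': average the binary maps of all models that pass an
    # omission/commission screen.
    import numpy as np

    def probability_map(binary_maps, omission, commission, max_error=0.2):
        """binary_maps: (n_models, H, W) 0/1 maps; errors: per-model fractions."""
        keep = (omission <= max_error) & (commission <= max_error)
        if not keep.any():
            raise ValueError("no model passed the omission/commission screen")
        # Per-pixel fraction of retained models predicting the feature.
        return binary_maps[keep].mean(axis=0)

    maps = np.random.default_rng(2).integers(0, 2, (6, 100, 100))
    p_map = probability_map(maps, np.array([.1, .3, .15, .05, .4, .2]),
                            np.array([.2, .1, .1, .15, .3, .05]))
    ```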

  12. Incorporating Land-Use Mapping Uncertainty in Remote Sensing Based Calibration of Land-Use Change Models

    NASA Astrophysics Data System (ADS)

    Cockx, K.; Van de Voorde, T.; Canters, F.; Poelmans, L.; Uljee, I.; Engelen, G.; de Jong, K.; Karssenberg, D.; van der Kwast, J.

    2013-05-01

    Building urban growth models typically involves a process of historic calibration based on historic time series of land-use maps, usually obtained from satellite imagery. Both the remote sensing data analysis to infer land use and the subsequent modelling of land-use change are subject to uncertainties, which may have an impact on the accuracy of future land-use predictions. Our research aims to quantify and reduce these uncertainties by means of a particle filter data assimilation approach that incorporates uncertainty in land-use mapping and land-use model parameter assessment into the calibration process. This paper focuses on part of this work, in particular the modelling of uncertainties associated with the impervious surface cover estimation and urban land-use classification adopted in the land-use mapping approach. Both stages are subjected to a Monte Carlo simulation to assess their relative contribution to and their combined impact on the uncertainty in the derived land-use maps. The approach was applied to the central part of the Flanders region (Belgium), using a time series of Landsat/SPOT-HRV data covering the years 1987, 1996, 2005 and 2012. Although the most likely land-use map obtained from the simulation is very similar to the original classification, it is shown that the errors related to the impervious surface sub-pixel fraction estimation have a strong impact on the land-use map's uncertainty. Hence, incorporating uncertainty in the land-use change model calibration through particle filter data assimilation is proposed to address the uncertainty observed in the derived land-use maps and to reduce uncertainty in future land-use predictions.
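
    A minimal sketch of the Monte Carlo stage, assuming a Gaussian error model on the impervious-surface fraction and a stand-in threshold rule for the land-use classification; the study's actual classifier and error model are more elaborate.

    ```python
    # Monte Carlo propagation of impervious-fraction uncertainty into the
    # land-use classification: perturb, re-classify, accumulate per-pixel
    # class frequencies.
    import numpy as np

    rng = np.random.default_rng(3)

    def monte_carlo_landuse(imperv_fraction, n_runs=500, sigma=0.1):
        """Return per-pixel probability of the 'urban' class over n_runs."""
        urban_counts = np.zeros_like(imperv_fraction)
        for _ in range(n_runs):
            sample = np.clip(imperv_fraction +
                             rng.normal(0, sigma, imperv_fraction.shape), 0, 1)
            urban_counts += sample > 0.5   # stand-in for the real rule set
        return urban_counts / n_runs

    p_urban = monte_carlo_landuse(rng.random((200, 200)))
    ```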

  13. Modelling Multi Hazard Mapping in Semarang City Using GIS-Fuzzy Method

    NASA Astrophysics Data System (ADS)

    Nugraha, A. L.; Awaluddin, M.; Sasmito, B.

    2018-02-01

    One important aspect of disaster mitigation planning is hazard mapping. Hazard mapping can provide spatial information on the distribution of locations threatened by disaster. Semarang City, the capital of Central Java Province, is one of the cities with high natural disaster intensity; tidal floods, river floods, landslides, and droughts occur frequently. Therefore, Semarang City needs the spatial information provided by multi-hazard mapping to support its disaster mitigation planning. The multi-hazard map can be modelled from parameters such as slope, rainfall, land use, and soil type maps. The modelling is done using a GIS method with scoring and overlay techniques; however, the accuracy of the modelling is better when the GIS method is combined with fuzzy logic, which provides a better classification in determining disaster threats. The resulting GIS-Fuzzy multi-hazard map of Semarang City delivers results with good accuracy and an appropriate spread of threat classes, providing disaster information for the city's mitigation planning. Among the membership functions tested in the GIS-Fuzzy multi-hazard modelling, the Gaussian type gave the best accuracy, with the smallest RMSE (0.404) and the largest VAF (72.909%).
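
    A short sketch of the fuzzy-overlay idea with the Gaussian membership function that scored best above, plus the two reported accuracy measures (RMSE and VAF, variance accounted for). The factor set, membership centres/widths, and equal weights are invented.

    ```python
    # Gaussian fuzzy membership + weighted overlay, with RMSE and VAF checks.
    import numpy as np

    def gauss_mf(x, c, s):
        """Gaussian membership: degree of hazard for factor value x."""
        return np.exp(-0.5 * ((x - c) / s) ** 2)

    def fuzzy_hazard(slope, rainfall, weights=(0.5, 0.5)):
        """Overlay two factor layers into one hazard score in [0, 1]."""
        return (weights[0] * gauss_mf(slope, c=30.0, s=10.0)
                + weights[1] * gauss_mf(rainfall, c=250.0, s=80.0))

    def rmse(pred, obs):
        return np.sqrt(np.mean((pred - obs) ** 2))

    def vaf(pred, obs):
        """Variance accounted for, in percent."""
        return (1.0 - np.var(obs - pred) / np.var(obs)) * 100.0
    ```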

  14. Map and map database of susceptibility to slope failure by sliding and earthflow in the Oakland area, California

    USGS Publications Warehouse

    Pike, R.J.; Graymer, R.W.; Roberts, Sebastian; Kalman, N.B.; Sobieszczyk, Steven

    2001-01-01

    Map data that predict the varying likelihood of landsliding can help public agencies make informed decisions on land use and zoning. This map, prepared in a geographic information system from a statistical model, estimates the relative likelihood of local slopes to fail by two processes common to an area of diverse geology, terrain, and land use centered on metropolitan Oakland. The model combines the following spatial data: (1) 120 bedrock and surficial geologic-map units, (2) ground slope calculated from a 30-m digital elevation model, (3) an inventory of 6,714 old landslide deposits (not distinguished by age or type of movement and excluding debris flows), and (4) the locations of 1,192 post-1970 landslides that damaged the built environment. The resulting index of likelihood, or susceptibility, plotted as a 1:50,000-scale map, is computed as a continuous variable over a large area (872 km²) at a comparatively fine (30 m) resolution. This new model complements landslide inventories by estimating susceptibility between existing landslide deposits, and improves upon prior susceptibility maps by quantifying the degree of susceptibility within those deposits. Susceptibility is defined for each geologic-map unit as the spatial frequency (areal percentage) of terrain occupied by old landslide deposits, adjusted locally by steepness of the topography. Susceptibility of terrain between the old landslide deposits is read directly from a slope histogram for each geologic-map unit, as the percentage (0.00 to 0.90) of 30-m cells in each one-degree slope interval that coincides with the deposits. Susceptibility within landslide deposits (0.00 to 1.33) is this same percentage raised by a multiplier (1.33) derived from the comparative frequency of recent failures within and outside the old deposits. Positive results from two evaluations of the model encourage its extension to the 10-county San Francisco Bay region and elsewhere. A similar map could be prepared for any area where the three basic constituents, a geologic map, a landslide inventory, and a slope map, are available in digital form. Added predictive power of the new susceptibility model may reside in attributes that remain to be explored, among them seismic shaking, distance to nearest road, and terrain elevation, aspect, relief, and curvature.
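
    The susceptibility index is described concretely enough to sketch directly: per geologic unit and one-degree slope bin, susceptibility is the fraction of 30-m cells coinciding with old landslide deposits, and cells inside mapped deposits are raised by the 1.33 multiplier. The inputs are assumed to be co-registered grids flattened to 1-D arrays.

    ```python
    # Susceptibility = areal percentage of old landslide deposits per
    # (geologic unit, one-degree slope bin), x1.33 inside mapped deposits.
    import numpy as np

    def susceptibility(geo_unit, slope_deg, in_old_deposit, multiplier=1.33):
        """All inputs are flat arrays over the 30-m grid cells of one map."""
        sus = np.zeros_like(slope_deg, dtype=float)
        slope_bin = slope_deg.astype(int)            # one-degree intervals
        for unit in np.unique(geo_unit):
            for b in np.unique(slope_bin[geo_unit == unit]):
                cells = (geo_unit == unit) & (slope_bin == b)
                frac = in_old_deposit[cells].mean()  # areal fraction, 0..0.90
                sus[cells] = frac
        sus[in_old_deposit] *= multiplier            # 0..1.33 inside deposits
        return sus
    ```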

  15. The feature-weighted receptive field: an interpretable encoding model for complex feature spaces.

    PubMed

    St-Yves, Ghislain; Naselaris, Thomas

    2017-06-20

    We introduce the feature-weighted receptive field (fwRF), an encoding model designed to balance expressiveness, interpretability and scalability. The fwRF is organized around the notion of a feature map: a transformation of visual stimuli into visual features that preserves the topology of visual space (but not necessarily the native resolution of the stimulus). The key assumption of the fwRF model is that activity in each voxel encodes variation in a spatially localized region across multiple feature maps. This region is fixed for all feature maps; however, the contribution of each feature map to voxel activity is weighted. Thus, the model has two separable sets of parameters: "where" parameters that characterize the location and extent of pooling over visual features, and "what" parameters that characterize tuning to visual features. The "where" parameters are analogous to classical receptive fields, while "what" parameters are analogous to classical tuning functions. By treating these as separable parameters, the fwRF model complexity is independent of the resolution of the underlying feature maps. This makes it possible to estimate models with thousands of high-resolution feature maps from relatively small amounts of data. Once a fwRF model has been estimated from data, spatial pooling and feature tuning can be read off directly with no (or very little) additional post-processing or in-silico experimentation. We describe an optimization algorithm for estimating fwRF models from data acquired during standard visual neuroimaging experiments. We then demonstrate the model's application to two distinct sets of features: Gabor wavelets and features supplied by a deep convolutional neural network. We show that when Gabor feature maps are used, the fwRF model recovers receptive fields and spatial frequency tuning functions consistent with known organizational principles of the visual cortex. We also show that a fwRF model can be used to regress entire deep convolutional networks against brain activity. The ability to use whole networks in a single encoding model yields state-of-the-art prediction accuracy. Our results suggest a wide variety of uses for the feature-weighted receptive field model, from retinotopic mapping with natural scenes, to regressing the activities of whole deep neural networks onto measured brain activity. Copyright © 2017. Published by Elsevier Inc.
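
    A compact sketch of the fwRF forward pass as described: a single Gaussian pooling field ("where") shared across all feature maps, and one weight per map ("what"). Because pooling reduces each map to a scalar, model size is independent of feature-map resolution. Variable names are ours.

    ```python
    # fwRF forward model: voxel activity = weighted sum over feature maps of
    # a shared, spatially localized (Gaussian) pooling of each map.
    import numpy as np

    def fwrf_predict(feature_maps, x0, y0, sigma, weights):
        """feature_maps: (K, H, W). 'Where': x0, y0, sigma. 'What': weights (K,)."""
        K, H, W = feature_maps.shape
        ys, xs = np.mgrid[0:H, 0:W]
        g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
        g /= g.sum()                       # shared pooling field for all maps
        pooled = (feature_maps * g).sum(axis=(1, 2))   # one scalar per map
        return weights @ pooled            # predicted voxel activity
    ```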

  16. Distribution of submerged aquatic vegetation in the St. Louis River estuary: Maps and models

    EPA Science Inventory

    In late summer of 2011 and 2012 we used echo-sounding gear to map the distribution of submerged aquatic vegetation (SAV) in the St. Louis River Estuary (SLRE). From these data we produced maps of SAV distribution and we created logistic models to predict the probability of occurr...

  17. A Developmental Mapping Program Integrating Geography and Mathematics.

    ERIC Educational Resources Information Center

    Muir, Sharon Pray; Cheek, Helen Neely

    Presented and discussed is a model which can be used by educators who want to develop an interdisciplinary map skills program in geography and mathematics. The model assumes that most children in elementary schools perform cognitively at Piaget's concrete operational stage, that readiness for map skills can be assessed with Piagetian or…

  18. New ShakeMaps for Georgia Resulting from Collaboration with EMME

    NASA Astrophysics Data System (ADS)

    Kvavadze, N.; Tsereteli, N. S.; Varazanashvili, O.; Alania, V.

    2015-12-01

    Correct assessment of probabilistic seismic hazard and risk maps is the first step in advance planning and action to reduce seismic risk. Seismic hazard maps for Georgia were calculated based on a modern approach developed in the frame of the EMME (Earthquake Model for the Middle East region) project. EMME was one of GEM's successful endeavors at the regional level. With EMME and GEM assistance, the regional models were analyzed to identify the information and additional work needed for the preparation of national hazard models. A probabilistic seismic hazard (PSH) map provides the critical basis for improved building codes and construction. The most serious deficiency in PSH assessment for the territory of Georgia is the lack of high-quality ground motion data. Because of this, an initial hybrid empirical ground motion model was developed for PGA and SA at selected periods, and these ground motion models were used in the probabilistic seismic hazard assessment. The obtained seismic hazard maps show that there were gaps in earlier seismic hazard assessments and that the present normative seismic hazard map needs a careful recalculation.

  19. ShakeMap Atlas 2.0: an improved suite of recent historical earthquake ShakeMaps for global hazard analyses and loss model calibration

    USGS Publications Warehouse

    Garcia, D.; Mah, R.T.; Johnson, K.L.; Hearne, M.G.; Marano, K.D.; Lin, K.-W.; Wald, D.J.

    2012-01-01

    We introduce the second version of the U.S. Geological Survey ShakeMap Atlas, which is an openly available compilation of nearly 8,000 ShakeMaps of the most significant global earthquakes between 1973 and 2011. This revision of the Atlas includes: (1) a new version of the ShakeMap software that improves data usage and uncertainty estimations; (2) an updated earthquake source catalogue that includes regional locations and finite fault models; (3) a refined strategy to select prediction and conversion equations based on a new seismotectonic regionalization scheme; and (4) vastly more macroseismic intensity and ground-motion data from regional agencies. All these changes make the new Atlas a self-consistent, calibrated ShakeMap catalogue that constitutes an invaluable resource for investigating near-source strong ground motion, as well as for seismic hazard, scenario, risk, and loss-model development. To this end, the Atlas will provide a hazard base layer for PAGER loss calibration and for the Earthquake Consequences Database within the Global Earthquake Model initiative.

  20. Implementation of structure-mapping inference by event-file binding and action planning: a model of tool-improvisation analogies.

    PubMed

    Fields, Chris

    2011-03-01

    Structure-mapping inferences are generally regarded as dependent upon relational concepts that are understood and expressible in language by subjects capable of analogical reasoning. However, tool-improvisation inferences are executed by members of a variety of non-human primate and other species. Tool improvisation requires correctly inferring the motion and force-transfer affordances of an object; hence tool improvisation requires structure mapping driven by relational properties. Observational and experimental evidence can be interpreted to indicate that structure-mapping analogies in tool improvisation are implemented by multi-step manipulation of event files by binding and action-planning mechanisms that act in a language-independent manner. A functional model of language-independent event-file manipulations that implement structure mapping in the tool-improvisation domain is developed. This model provides a mechanism by which motion and force representations commonly employed in tool-improvisation structure mappings may be sufficiently reinforced to be available to inwardly directed attention and hence conceptualization. Predictions and potential experimental tests of this model are outlined.

  1. Fractional Snow Cover Mapping by Artificial Neural Networks and Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Çiftçi, B. B.; Kuter, S.; Akyürek, Z.; Weber, G.-W.

    2017-11-01

    Snow is an important land cover whose distribution over space and time plays a significant role in various environmental processes. Hence, snow cover mapping with high accuracy is necessary to have a real understanding for present and future climate, water cycle, and ecological changes. This study aims to investigate and compare the design and use of artificial neural networks (ANNs) and support vector machines (SVMs) algorithms for fractional snow cover (FSC) mapping from satellite data. ANN and SVM models with different model building settings are trained by using Moderate Resolution Imaging Spectroradiometer surface reflectance values of bands 1-7, normalized difference snow index and normalized difference vegetation index as predictor variables. Reference FSC maps are generated from higher spatial resolution Landsat ETM+ binary snow cover maps. Results on the independent test data set indicate that the developed ANN model with hyperbolic tangent transfer function in the output layer and the SVM model with radial basis function kernel produce high FSC mapping accuracies with the corresponding values of R = 0.93 and R = 0.92, respectively.
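
    A rough analogue of the two compared regressors using scikit-learn, with synthetic placeholder data. Note that MLPRegressor applies the tanh activation to hidden layers only (its output layer is linear), so the reported tanh output layer is only approximated here.

    ```python
    # ANN vs SVM regression for fractional snow cover (FSC). X would hold
    # MODIS band 1-7 reflectances plus NDSI and NDVI; y the reference FSC.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(4)
    X = rng.random((1000, 9))                  # 7 bands + NDSI + NDVI
    y = np.clip(X[:, 7] * 0.8 + rng.normal(0, 0.05, 1000), 0, 1)

    ann = MLPRegressor(hidden_layer_sizes=(20,), activation='tanh',
                       max_iter=2000, random_state=0).fit(X, y)
    svm = SVR(kernel='rbf', C=10.0, epsilon=0.01).fit(X, y)

    for name, model in [('ANN', ann), ('SVM', svm)]:
        r = np.corrcoef(model.predict(X), y)[0, 1]
        print(f'{name}: R = {r:.2f}')          # illustrative, training-set R
    ```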

  2. Documentation for the 2014 update of the United States national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter M.; Mueller, Charles S.; Haller, Kathleen M.; Frankel, Arthur D.; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen C.; Boyd, Oliver S.; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nico; Wheeler, Russell L.; Williams, Robert A.; Olsen, Anna H.

    2014-01-01

    The national seismic hazard maps for the conterminous United States have been updated to account for new methods, models, and data that have been obtained since the 2008 maps were released (Petersen and others, 2008). The input models are improved from those implemented in 2008 by using new ground motion models that have incorporated about twice as many earthquake strong ground shaking data and by incorporating many additional scientific studies that indicate broader ranges of earthquake source and ground motion models. These time-independent maps are shown for 2-percent and 10-percent probability of exceedance in 50 years for peak horizontal ground acceleration as well as 5-hertz and 1-hertz spectral accelerations with 5-percent damping on a uniform firm rock site condition (760 meters per second shear wave velocity in the upper 30 m, VS30). In this report, the 2014 updated maps are compared with the 2008 version of the maps and indicate changes of plus or minus 20 percent over wide areas, with larger changes locally, caused by the modifications to the seismic source and ground motion inputs.

  3. How well should probabilistic seismic hazard maps work?

    NASA Astrophysics Data System (ADS)

    Vanneste, K.; Stein, S.; Camelbeeck, T.; Vleminckx, B.

    2016-12-01

    Recent large earthquakes that gave rise to shaking much stronger than shown in earthquake hazard maps have stimulated discussion about how well these maps forecast future shaking. These discussions have brought home the fact that although the maps are designed to achieve certain goals, we know little about how well they actually perform. As for any other forecast, this question involves verification and validation. Verification involves assessing how well the algorithm used to produce hazard maps implements the conceptual PSHA model ("have we built the model right?"). Validation asks how well the model forecasts the shaking that actually occurs ("have we built the right model?"). We explore the verification issue by simulating the shaking history of an area with assumed distribution of earthquakes, frequency-magnitude relation, temporal occurrence model, and ground-motion prediction equation. We compare the "observed" shaking at many sites over time to that predicted by a hazard map generated for the same set of parameters. PSHA predicts that the fraction of sites at which shaking will exceed that mapped is p = 1 - exp(-t/T), where t is the duration of observations and T is the map's return period. This implies that shaking in large earthquakes is typically greater than shown on hazard maps, as has occurred in a number of cases. A large number of simulated earthquake histories yield distributions of shaking consistent with this forecast, with a scatter about this value that decreases as t/T increases. The median results are somewhat lower than predicted for small values of t/T and approach the predicted value for larger values of t/T. Hence, the algorithm appears to be internally consistent and can be regarded as verified for this set of simulations. Validation is more complicated because a real observed earthquake history can yield a fractional exceedance significantly higher or lower than that predicted while still being consistent with the hazard map in question. As a result, given that in the real world we have only a single sample, it is hard to assess whether a misfit between a map and observations arises by chance or reflects a biased map.
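
    A minimal verification experiment of the kind described, assuming exceedances at each site arrive as a Poisson process with the map's return period T: the simulated fraction of sites exceeding the mapped level should approach p = 1 - exp(-t/T).

    ```python
    # Verification sketch: simulate exceedance counts at many sites and
    # compare the fraction of sites exceeding the map to 1 - exp(-t/T).
    import numpy as np

    rng = np.random.default_rng(5)

    def exceedance_fraction(n_sites=10000, t=50.0, T=475.0):
        # Poisson exceedance counts in t years; >= 1 means the site saw
        # shaking above the mapped value at least once.
        counts = rng.poisson(t / T, size=n_sites)
        return (counts >= 1).mean()

    t, T = 50.0, 475.0
    print('simulated:', exceedance_fraction(t=t, T=T))
    print('predicted:', 1 - np.exp(-t / T))
    ```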

  4. Recent advances in parametric neuroreceptor mapping with dynamic PET: basic concepts and graphical analyses.

    PubMed

    Seo, Seongho; Kim, Su Jin; Lee, Dong Soo; Lee, Jae Sung

    2014-10-01

    Tracer kinetic modeling in dynamic positron emission tomography (PET) has been widely used to investigate the characteristic distribution patterns or dysfunctions of neuroreceptors in brain diseases. Its practical goal has progressed from regional data quantification to parametric mapping that produces images of kinetic-model parameters by fully exploiting the spatiotemporal information in dynamic PET data. Graphical analysis (GA) is a major parametric mapping technique that is independent of any compartmental model configuration, robust to noise, and computationally efficient. In this paper, we provide an overview of recent advances in the parametric mapping of neuroreceptor binding based on GA methods. The associated basic concepts in tracer kinetic modeling are presented, including commonly used compartment models and major parameters of interest. Technical details of GA approaches for reversible and irreversible radioligands are described, considering both plasma input and reference tissue input models. Their statistical properties are discussed in view of parametric imaging.
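
    As a concrete instance of GA for a reversible radioligand with plasma input, here is a sketch of the Logan plot (one widely used GA method, not necessarily the review's focus): after an equilibration time t*, the normalized integrals fall on a line whose slope estimates the total distribution volume VT. Applied voxelwise, this yields a parametric map.

    ```python
    # Logan graphical analysis: slope of the late-time linear segment of
    # int(Ct)/Ct versus int(Cp)/Ct estimates VT.
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    def logan_vt(t, ct, cp, t_star=20.0):
        """t: frame mid-times (min); ct: tissue TAC; cp: plasma input."""
        int_ct = cumulative_trapezoid(ct, t, initial=0)
        int_cp = cumulative_trapezoid(cp, t, initial=0)
        late = t >= t_star
        x = int_cp[late] / ct[late]
        y = int_ct[late] / ct[late]
        slope, _ = np.polyfit(x, y, 1)
        return slope            # total distribution volume VT
    # Running logan_vt on every voxel's TAC produces a parametric VT map.
    ```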

  5. A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits

    PubMed Central

    Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling

    2013-01-01

    Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762

  6. Methods for landslide susceptibility modelling in Lower Austria

    NASA Astrophysics Data System (ADS)

    Bell, Rainer; Petschko, Helene; Glade, Thomas; Leopold, Philip; Heiss, Gerhard; Proske, Herwig; Granica, Klaus; Schweigl, Joachim; Pomaroli, Gilbert

    2010-05-01

    Landslide susceptibility modelling and implementation of the resulting maps is still a challenge for geoscientists, spatial and infrastructure planners. Particularly on a regional scale landslide processes and their dynamics are poorly understood. Furthermore, the availability of appropriate spatial data in high resolution is often a limiting factor for modelling high quality landslide susceptibility maps for large study areas. However, these maps form an important basis for preventive spatial planning measures. Thus, new methods have to be developed, especially focussing on the implementation of final maps into spatial planning processes. The main objective of the project "MoNOE" (Method development for landslide susceptibility modelling in Lower Austria) is to design a method for landslide susceptibility modelling for a large study area (about 10,200 km²) and to produce landslide susceptibility maps which are finally implemented in the spatial planning strategies of the Federal state of Lower Austria. The project focuses primarily on the landslide types fall and slide. To enable susceptibility modelling, landslide inventories for the respective landslide types must be compiled and relevant data has to be gathered, prepared and homogenized. Based on this data new methods must be developed to tackle the needs of the spatial planning strategies. Considerable efforts will also be spent on the validation of the resulting maps for each landslide type. A great challenge will be the combination of the susceptibility maps for slides and falls in just one single susceptibility map (which is requested by the government) and the definition of the final visualisation. Since numerous landslides have been favoured or even triggered by human impact, the human influence on landslides will also have to be investigated. Furthermore possibilities to integrate respective findings in regional susceptibility modelling will be explored. According to these objectives the project is structured in four work packages namely data preparation and homogenization (WP1), susceptibility modelling and validation (WP2), integrative susceptibility assessment (WP3) and human impact (WP4). The expected results are a landslide inventory map covering all endangered parts of the Federal state of Lower Austria, a land cover map of Lower Austria with high spatial resolution, processed spatial input data and an optimized integrative susceptibility map visualized at a scale of 1:25,000. The structure of the research project, research strategies as well as first results will be presented at the conference. The project is funded by the Federal state government of Lower Austria.

  7. Literature Review on Systems of Systems (SoS): A Methodology With Preliminary Results

    DTIC Science & Technology

    2013-11-01

    [Record excerpt contains front-matter fragments only: Appendix H, The Enhanced ISAAC Neural Simulation Toolkit (EINSTein); Appendix I, The Map Aware Nonuniform Automata (MANA) Agent-Based Model; and quadrant charts addressing SoS and SoSA designs for the MANA agent-based model.]

  8. A Watered-Down Topographic Map. Submarine Ring of Fire--Grades 6-8. Topographic and Bathymetric Maps.

    ERIC Educational Resources Information Center

    National Oceanic and Atmospheric Administration (DOC), Rockville, MD.

    This activity is designed to teach about topographic maps and bathymetric charts. Students are expected to create a topographic map from a model landform, interpret a simple topographic map, and explain the difference between topographic and bathymetric maps. The activity provides learning objectives, a list of needed materials, key vocabulary…

  9. Thermal and albedo mapping of the north and south polar regions of Mars

    NASA Technical Reports Server (NTRS)

    Paige, D. A.; Keegan, K. D.

    1991-01-01

    The first thermal and albedo maps of the north and south polar regions of Mars are presented. The thermal properties of the midlatitude regions from -60 deg to +60 deg latitude were mapped in previous studies; the maps presented here complete the mapping of the entire planet. The maps for the north and south polar regions were derived from Viking Infrared Thermal Mapper (IRTM) observations. Best-fit thermal inertias were determined by comparing the available IRTM 20 micron channel brightness temperatures within a given region to surface temperatures computed by a diurnal and seasonal thermal model. The model assumes no atmospheric contributions to the surface heat balance. The resulting maps show apparent thermal inertia and average IRTM-measured solar channel Lambert albedo for the north and south polar regions, from the poles to +/- 60 deg latitude.
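
    A minimal sketch of the best-fit step: per map cell, pick the thermal inertia whose modelled 20-micron brightness temperature is closest to the IRTM observation. The lookup table below stands in for the diurnal/seasonal thermal model and is invented.

    ```python
    # Grid-search fit of thermal inertia against a precomputed model table.
    import numpy as np

    inertias = np.linspace(50, 800, 76)       # candidate thermal inertias
    # model_T20[i] = modelled 20-micron brightness temperature for
    # inertias[i] at this cell's latitude, season and albedo (placeholder).
    model_T20 = 260.0 - 30.0 * np.log10(inertias / 50.0)

    def best_fit_inertia(observed_T20):
        return inertias[np.argmin(np.abs(model_T20 - observed_T20))]
    ```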

  10. Optimization of Causative Factors for Landslide Susceptibility Evaluation Using Remote Sensing and GIS Data in Parts of Niigata, Japan.

    PubMed

    Dou, Jie; Tien Bui, Dieu; Yunus, Ali P; Jia, Kun; Song, Xuan; Revhaug, Inge; Xia, Huan; Zhu, Zhongfan

    2015-01-01

    This paper assesses the potential of certainty factor (CF) models for extracting the most suitable causative factors for landslide susceptibility mapping on Sado Island, Niigata Prefecture, Japan. To test the applicability of CF, a landslide inventory map provided by the National Research Institute for Earth Science and Disaster Prevention (NIED) was split into two subsets: (i) 70% of the landslides in the inventory, used for building the CF-based model; and (ii) 30% of the landslides, used for validation. A spatial database with fifteen landslide causative factors was then constructed by processing ALOS satellite images, aerial photos, and topographical and geological maps. The CF model was then applied to select the best subset from the fifteen factors. Using all fifteen factors and the best-subset factors, landslide susceptibility maps were produced using statistical index (SI) and logistic regression (LR) models. The susceptibility maps were validated and compared using the landslide locations in the validation data. The prediction performance of the two susceptibility maps was estimated using the Receiver Operating Characteristics (ROC). The results show that the area under the ROC curve (AUC) for the LR model (AUC = 0.817) is slightly higher than that obtained from the SI model (AUC = 0.801). Further, the SI and LR models using the best subset outperform the models using the fifteen original factors. Therefore, we conclude that the optimized factor model using CF is more accurate in predicting landslide susceptibility and yields a more homogeneous classification map. Our findings show that in mountainous regions suffering from data scarcity, it is possible to select key factors related to landslide occurrence based on CF models in a GIS platform, so that scenarios for future risk-mitigation planning can be developed efficiently.
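
    A sketch of the certainty factor in its common landslide-susceptibility form; the record does not reproduce the formula, so this is the standard definition, with ppa the conditional landslide density in one class of a causative factor and pps the prior density over the whole study area.

    ```python
    # Certainty factor for factor screening: positive CF marks classes
    # favourable to landsliding, negative CF unfavourable ones.
    import numpy as np

    def certainty_factor(ppa, pps):
        ppa, pps = np.asarray(ppa, float), np.asarray(pps, float)
        return np.where(ppa >= pps,
                        (ppa - pps) / (ppa * (1 - pps)),
                        (ppa - pps) / (pps * (1 - ppa)))

    # Example: landslide density 0.08 in one slope class vs a 0.03 prior.
    print(certainty_factor(0.08, 0.03))   # > 0: favourable class
    # Factors whose classes all have near-zero |CF| are candidates to drop
    # before building the SI/LR susceptibility models.
    ```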

  11. Optimization of Causative Factors for Landslide Susceptibility Evaluation Using Remote Sensing and GIS Data in Parts of Niigata, Japan

    PubMed Central

    Dou, Jie; Tien Bui, Dieu; P. Yunus, Ali; Jia, Kun; Song, Xuan; Revhaug, Inge; Xia, Huan; Zhu, Zhongfan

    2015-01-01

    This paper assesses the potential of certainty factor (CF) models for extracting the most suitable causative factors for landslide susceptibility mapping on Sado Island, Niigata Prefecture, Japan. To test the applicability of CF, a landslide inventory map provided by the National Research Institute for Earth Science and Disaster Prevention (NIED) was split into two subsets: (i) 70% of the landslides in the inventory, used for building the CF-based model; and (ii) 30% of the landslides, used for validation. A spatial database with fifteen landslide causative factors was then constructed by processing ALOS satellite images, aerial photos, and topographical and geological maps. The CF model was then applied to select the best subset from the fifteen factors. Using all fifteen factors and the best-subset factors, landslide susceptibility maps were produced using statistical index (SI) and logistic regression (LR) models. The susceptibility maps were validated and compared using the landslide locations in the validation data. The prediction performance of the two susceptibility maps was estimated using the Receiver Operating Characteristics (ROC). The results show that the area under the ROC curve (AUC) for the LR model (AUC = 0.817) is slightly higher than that obtained from the SI model (AUC = 0.801). Further, the SI and LR models using the best subset outperform the models using the fifteen original factors. Therefore, we conclude that the optimized factor model using CF is more accurate in predicting landslide susceptibility and yields a more homogeneous classification map. Our findings show that in mountainous regions suffering from data scarcity, it is possible to select key factors related to landslide occurrence based on CF models in a GIS platform, so that scenarios for future risk-mitigation planning can be developed efficiently. PMID:26214691

  12. SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model

    NASA Astrophysics Data System (ADS)

    Grelle, G.; Bonito, L.; Lampasi, A.; Revellino, P.; Guerriero, L.; Sappa, G.; Guadagno, F. M.

    2015-06-01

    SiSeRHMap is a computerized methodology capable of drawing up prediction maps of seismic response. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (Geographic Information System) Cubic Model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A metamodeling process confers a hybrid nature on the methodology. In this process, one-dimensional linear equivalent analysis produces acceleration response spectra of shear wave velocity-thickness profiles, defined as trainers, which are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated Evolutionary Algorithm (EA) and the Levenberg-Marquardt Algorithm (LMA) as the final optimizer. In the final step, the GCM Maps Executor module produces a serial map-set of stratigraphic seismic response at different periods, grid-solving the calibrated Spectra model. In addition, the spectral topographic amplification is also computed by means of a numerical prediction model, built to match the results of numerical simulations related to isolated reliefs using GIS topographic attributes. In this way, different sets of seismic response maps are developed, from which maps of seismic design response spectra are also derived by means of an enveloping technique.
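
    A toy sketch of the final optimization step named above: Levenberg-Marquardt fitting of a parametric response-spectrum shape to a trainer spectrum, via scipy's least_squares with method='lm'. The three-parameter spectral shape is purely illustrative and is not the Spectra model itself.

    ```python
    # LM fit of a parametric acceleration response spectrum to a trainer
    # spectrum from 1-D linear-equivalent analysis.
    import numpy as np
    from scipy.optimize import least_squares

    def spectral_shape(T, pga, amp, T0):
        """Toy spectrum: PGA-anchored, single-peak shape, corner period T0."""
        return pga * (1 + amp * (T / T0) * np.exp(1 - T / T0))

    def fit_spectrum(T, sa_trainer, p0=(0.2, 1.5, 0.4)):
        resid = lambda p: spectral_shape(T, *p) - sa_trainer
        return least_squares(resid, p0, method='lm').x

    T = np.linspace(0.05, 2.0, 60)
    sa = spectral_shape(T, 0.25, 2.0, 0.5) \
         + np.random.default_rng(6).normal(0, 0.01, 60)
    pga, amp, T0 = fit_spectrum(T, sa)
    ```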

  13. Marine Air Penetration: The Effect of Synoptic-scale Change on Regional Climate

    NASA Astrophysics Data System (ADS)

    Wang, M.; Ullrich, P. A.

    2016-12-01

    Marine air penetration (MAP) around the California San Francisco Bay Delta region has a pronounced impact on local temperature and air quality, and is highly correlated with inland wind penetration and hence wind power generation. Observational MAP criteria are defined as a 900 hPa across-shore wind speed greater than or equal to 3 m/s at the Oakland radiosonde station, and a surface temperature difference greater than or equal to 7 degrees Celsius between two California Irrigation Management Information System (CIMIS) stations at Fresno, CA and Lodi, CA. This choice reflects marine cooling of Lodi, and was found to be highly correlated with inland specific humidity and breeze front activity. The observational MAP criteria were tuned to account for small biases in the Climate Forecast System Reanalysis (CFSR), and MAP days selected from CFSR were used to identify synoptic-scale indicators associated with MAP events. A multivariate logistic regression model constructed from the five selected synoptic indicators from CFSR demonstrated good performance. Two synoptic-scale patterns were identified and analyzed out of the 32 categories from the regression model, suggesting a strong influence of the offshore trough and the inland thermal ridge on MAP events. Future projections of MAP events were assessed using the 21st-century Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations and the variable-resolution Community Earth System Model (VR-CESM). Both showed no statistically significant trend in MAP events through the end of this century under either Representative Concentration Pathway (RCP) 2.6 or RCP 8.5.

  14. Application of a neuro-fuzzy model to landslide-susceptibility mapping for shallow landslides in a tropical hilly area

    NASA Astrophysics Data System (ADS)

    Oh, Hyun-Joo; Pradhan, Biswajeet

    2011-09-01

    This paper presents landslide-susceptibility mapping using an adaptive neuro-fuzzy inference system (ANFIS) within a geographic information system (GIS) environment. In the first stage, landslide locations in the study area were identified by interpreting aerial photographs, supported by an extensive field survey. In the second stage, landslide-related conditioning factors such as altitude, slope angle, plan curvature, distance to drainage, distance to road, soil texture and stream power index (SPI) were extracted from the topographic and soil maps. Landslide-susceptible areas were then analyzed by the ANFIS approach and mapped using the landslide-conditioning factors. In particular, various membership functions (MFs) were applied for the landslide-susceptibility mapping and their results were compared with the field-verified landslide locations. Additionally, receiver operating characteristic (ROC) curves for all landslide susceptibility maps were drawn and the areas under the curves were calculated. The ROC technique is based on plotting model sensitivity (true positive fraction values calculated for different threshold values) against model specificity (true negative fraction values). Landslide test locations that were not used during ANFIS modeling were used to validate the landslide susceptibility maps. The validation results revealed that the susceptibility maps constructed by the ANFIS predictive models using triangular, trapezoidal, generalized bell and polynomial MFs produced reasonable results (84.39%), which can be used for preliminary land-use planning. Finally, the authors concluded that ANFIS is a very useful and effective tool in regional landslide susceptibility assessment.

  15. Application of Ifsar Technology in Topographic Mapping: JUPEM's Experience

    NASA Astrophysics Data System (ADS)

    Zakaria, Ahamad

    2018-05-01

    The application of Interferometric Synthetic Aperture Radar (IFSAR) in topographic mapping has increased during the past decades. This is due to the advantages that IFSAR technology offers in solving data-acquisition problems in tropical regions. Unlike aerial photography, radar penetrates cloud cover, fog and haze, so images can be made free of such atmospheric defects. In Malaysia, the Department of Survey and Mapping Malaysia (JUPEM) has been utilizing IFSAR products since 2009 to update topographic maps at the 1:50,000 map scale. Orthorectified radar imagery (ORI), Digital Surface Models (DSM) and Digital Terrain Models (DTM) procured under the project have been further processed before being ingested into a revamped mapping workflow consisting of stereo and mono digitizing processes. The paper highlights the experience of the Department of Survey and Mapping Malaysia (DSMM/JUPEM) in using this technology to speed up map production.

  16. Remote sensing sensors and applications in environmental resources mapping and modeling

    USGS Publications Warehouse

    Melesse, Assefa M.; Weng, Qihao; Thenkabail, Prasad S.; Senay, Gabriel B.

    2007-01-01

    The history of remote sensing and the development of different sensors for environmental and natural resources mapping and data acquisition are reviewed. Application examples are discussed in urban studies, hydrological modeling (such as land-cover and floodplain mapping), fractional vegetation cover and impervious surface area mapping, and surface energy flux and micro-topography correlation studies. The review also discusses the use of remotely sensed rainfall and potential evapotranspiration for estimating a crop water requirement satisfaction index, which provides early warning information for growers. The review is not an exhaustive treatment of remote sensing techniques but rather a summary of some important applications in environmental studies and modeling.

  17. Developing Vs30 site-condition maps by combining observations with geologic and topographic constraints

    USGS Publications Warehouse

    Thompson, E.M.; Wald, D.J.

    2012-01-01

    Despite obvious limitations as a proxy for site amplification, the use of time-averaged shear-wave velocity over the top 30 m (VS30) remains widely practiced, most notably through its use as an explanatory variable in ground motion prediction equations (and thus hazard maps and ShakeMaps, among other applications). As such, we are developing an improved strategy for producing VS30 maps given the common observational constraints. Using the abundant VS30 measurements in Taiwan, we compare alternative mapping methods that combine topographic slope, surface geology, and spatial correlation structure. The different VS30 mapping algorithms are distinguished by the way that slope and geology are combined to define a spatial model of VS30. We consider the globally applicable slope-only model as a baseline to which we compare two methods of combining both slope and geology. For both hybrid approaches, we model spatial correlation structure of the residuals using the kriging-with-a-trend technique, which brings the map into closer agreement with the observations. Cross validation indicates that we can reduce the uncertainty of the VS30 map by up to 16% relative to the slope-only approach.
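
    A minimal regression-kriging sketch in the spirit of the kriging-with-a-trend step: a linear trend of log(VS30) on log-slope, with spatially correlated residuals modelled by a Gaussian process (equivalent to kriging the residuals under the fitted covariance). Geology dummies are omitted, and the kernel scales and data are synthetic placeholders.

    ```python
    # Regression kriging: deterministic trend + spatially correlated residual.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(7)
    coords = rng.uniform(0, 100, (300, 2))        # site locations (km)
    log_slope = rng.normal(-4, 1, (300, 1))
    log_vs30 = 6.0 + 0.3 * log_slope[:, 0] + rng.normal(0, 0.2, 300)

    trend = LinearRegression().fit(log_slope, log_vs30)
    resid = log_vs30 - trend.predict(log_slope)

    gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(0.05),
                                  normalize_y=True).fit(coords, resid)

    # Map prediction at a new grid point = trend + kriged residual.
    new_xy, new_slope = np.array([[50.0, 50.0]]), np.array([[-4.2]])
    pred = trend.predict(new_slope) + gp.predict(new_xy)
    ```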

  18. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and percent agreement between each pair of regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
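
    One plausible reading of the comparison statistics quoted above, assuming per-class pixel counts are compared between the two co-registered CCM rasters; the areal weighting scheme is an assumption.

    ```python
    # Percent agreement between per-class pixel counts of two CCM maps,
    # with an areally weighted mean/std and a Pearson correlation.
    import numpy as np

    def regional_count_agreement(map_a, map_b):
        classes = np.union1d(np.unique(map_a), np.unique(map_b))
        counts_a = np.array([(map_a == c).sum() for c in classes])
        counts_b = np.array([(map_b == c).sum() for c in classes])
        agree = 100.0 * np.minimum(counts_a, counts_b) \
                      / np.maximum(counts_a, counts_b)
        w = counts_a + counts_b            # weight each class by its extent
        mean = np.average(agree, weights=w)
        std = np.sqrt(np.average((agree - mean) ** 2, weights=w))
        r = np.corrcoef(counts_a, counts_b)[0, 1]
        return mean, std, r
    ```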

  19. Issues of tsunami hazard maps revealed by the 2011 Tohoku tsunami

    NASA Astrophysics Data System (ADS)

    Sugimoto, M.

    2013-12-01

    Tsunami scientists have been given responsibility for selecting people's tsunami evacuation sites after the 2011 Tohoku tsunami in Japan. Many adults died outside the hazard zones shown on tsunami hazard maps, while in Kamaishi City students survived by evacuating on their own judgment. The tsunami hazard maps were based on numerical models that assumed magnitudes smaller than the actual magnitude 9 event. How can we bridge the gap between hazard maps and future disasters? We have to discuss whether tsunami numerical models are being used well enough to contribute to tsunami hazard maps, and how the maps should be improved. Tsunami hazard maps should be revised to include the possibility of uplift or subsidence after earthquakes, as well as social information: the ground sank 1.14 m below sea level in Ayukawa town, Tohoku. Research by the Ministry of Land, Infrastructure, Transport and Tourism shows that only around 10% of people in Japan know about tsunami hazard maps. However, people know their evacuation places (buildings) through drills experienced once a year, even though most do not know the hazard map itself. We need a wider spread of tsunami hazard information together with the contingency of science (see the bottom disaster handbook material's URL). A California Emergency Management Agency (CEMA) team offers one good practice and solution. I followed their field trip on Catalina Island, California, in September 2011. The team members are multidisciplinary specialists: a geologist, a GIS specialist, oceanographers from USC (tsunami numerical modelers) and a private company, a local policeman, a disaster manager, a local authority, and so on. They check the field based on their own specialties, conducting on-the-spot inspections of ambiguous locations where the tsunami numerical model and real field conditions disagree, since data always become older. They pay attention not only to topographical conditions but also to social conditions: vulnerable people, elementary schools, and so on. It takes a long time to check such field information, but tsunami hazard maps based on numerical models should go through this process. Tsunami scientists should not fall into the inhumane business of using tsunami numerical models without such care; the work includes accountability to society, so scientists need scientific ethics and humanitarian attention. Should only tsunami scientists bear responsibility for human life? A multidisciplinary approach, like CEMA's expert team for tsunami hazard mapping, is essential for mitigation. I run a hazard-map training course for disaster management officers from developing countries within a JICA training course, and in this presentation I would like to discuss how to improve tsunami hazard maps after the 2011 Tohoku tsunami experience.

  20. Mapping local and global variability in plant trait distributions

    DOE PAGES

    Butler, Ethan E.; Datta, Abhirup; Flores-Moreno, Habacuc; ...

    2017-12-01

    Accurate trait-environment relationships and global maps of plant trait distributions represent a needed stepping stone in global biogeography and are critical constraints of key parameters for land models. Here, we use a global data set of plant traits to map trait distributions closely coupled to photosynthesis and foliar respiration: specific leaf area (SLA), and dry mass-based concentrations of leaf nitrogen (Nm) and phosphorus (Pm). We propose two models to extrapolate geographically sparse point data to continuous spatial surfaces. The first is a categorical model using species mean trait values, categorized into plant functional types (PFTs) and extrapolating to PFT occurrence ranges identified by remote sensing. The second is a Bayesian spatial model that incorporates information about PFT, location and environmental covariates to estimate trait distributions. Both models are further stratified by varying the number of PFTs. The performance of the models was evaluated based on their explanatory and predictive ability. The Bayesian spatial model leveraging the largest number of PFTs produced the best maps. The interpolation of full trait distributions enables a wider diversity of vegetation to be represented across the land surface. These maps may be used as input to Earth System Models and to evaluate other estimates of functional diversity.

  1. An imaging-based stochastic model for simulation of tumour vasculature

    NASA Astrophysics Data System (ADS)

    Adhikarla, Vikram; Jeraj, Robert

    2012-10-01

    A mathematical model which reconstructs the structure of existing vasculature using patient-specific anatomical, functional and molecular imaging as input was developed. The vessel structure is modelled according to empirical vascular parameters, such as the mean vessel branching angle. The model is calibrated such that the resultant oxygen map modelled from the simulated microvasculature stochastically matches the input oxygen map to a high degree of accuracy (R² ≈ 1). The calibrated model was successfully applied to preclinical imaging data. Starting from the anatomical vasculature image (obtained from contrast-enhanced computed tomography), a representative map of the complete vasculature was stochastically simulated as determined by the oxygen map (obtained from hypoxia [64Cu]Cu-ATSM positron emission tomography). The simulated microscopic vasculature and the calculated oxygenation map successfully represent the imaged hypoxia distribution (R² = 0.94). The model elicits the parameters required to simulate vasculature consistent with imaging and provides a key mathematical relationship relating vessel volume to tissue oxygen tension. Apart from providing an excellent framework for visualizing the gap between microscopic and macroscopic imaging, the model has the potential to be extended as a tool to study the dynamics between the tumour and the vasculature in a patient-specific manner, and has an application in the simulation of anti-angiogenic therapies.

  2. Linking in situ LAI and fine resolution remote sensing data to map reference LAI over cropland and grassland using geostatistical regression method

    NASA Astrophysics Data System (ADS)

    He, Yaqian; Bo, Yanchen; Chai, Leilei; Liu, Xiaolong; Li, Aihua

    2016-08-01

    Leaf Area Index (LAI) is an important parameter of vegetation structure. A number of moderate resolution LAI products have been produced to meet the urgent need for large-scale vegetation monitoring, and high resolution LAI reference maps are necessary to validate these products. This study used a geostatistical regression (GR) method to estimate LAI reference maps by linking in situ LAI with Landsat TM/ETM+ and SPOT-HRV data over two cropland and two grassland sites. To explore the effect of different vegetation indices (VIs) on the estimated LAI reference maps, this study established GR models for the difference vegetation index (DVI), the normalized difference vegetation index (NDVI), and the ratio vegetation index (RVI). To further assess the performance of the GR model, the results from the GR and Reduced Major Axis (RMA) models were compared. The results show that the performance of the GR model varies between the cropland and grassland sites. At the cropland sites, the GR model based on DVI provides the best estimation, while at the grassland sites, the GR model based on DVI performs poorly. Compared to the RMA model, the GR model improves the accuracy of the reference LAI maps in terms of root mean square error (RMSE) and bias.
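
    For reference, a sketch of the RMA baseline: unlike ordinary least squares, reduced major axis regression treats both in situ LAI and the vegetation index as error-prone, giving slope = sign(r)·sd(y)/sd(x).

    ```python
    # Reduced major axis (RMA) regression of LAI on a vegetation index.
    import numpy as np

    def rma_fit(x, y):
        r = np.corrcoef(x, y)[0, 1]
        slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
        intercept = np.mean(y) - slope * np.mean(x)
        return slope, intercept

    # The GR model differs by adding a spatially correlated residual term to
    # the regression, so nearby prediction errors are no longer independent.
    ```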

  3. Mapping local and global variability in plant trait distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, Ethan E.; Datta, Abhirup; Flores-Moreno, Habacuc

    Accurate trait-environment relationships and global maps of plant trait distributions represent a needed stepping stone in global biogeography and are critical constraints of key parameters for land models. Here, we use a global data set of plant traits to map trait distributions closely coupled to photosynthesis and foliar respiration: specific leaf area (SLA), and dry mass-based concentrations of leaf nitrogen (Nm) and phosphorus (Pm). We propose two models to extrapolate geographically sparse point data to continuous spatial surfaces. The first is a categorical model using species mean trait values, categorized into plant functional types (PFTs) and extrapolating to PFT occurrence ranges identified by remote sensing. The second is a Bayesian spatial model that incorporates information about PFT, location and environmental covariates to estimate trait distributions. Both models are further stratified by varying the number of PFTs. The performance of the models was evaluated based on their explanatory and predictive ability. The Bayesian spatial model leveraging the largest number of PFTs produced the best maps. The interpolation of full trait distributions enables a wider diversity of vegetation to be represented across the land surface. These maps may be used as input to Earth System Models and to evaluate other estimates of functional diversity.

  4. Chromatic Image Analysis For Quantitative Thermal Mapping

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1995-01-01

    Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.
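
    A toy sketch of the two-wavelength principle: per-pixel temperature is recovered by interpolating the measured brightness ratio on a calibration curve. The calibration table here is invented.

    ```python
    # Two-wavelength ratio thermometry: brightness ratio -> temperature.
    import numpy as np

    cal_T = np.array([290.0, 310.0, 330.0, 350.0, 370.0])   # K
    cal_ratio = np.array([0.55, 0.70, 0.88, 1.10, 1.38])    # I(l1)/I(l2)

    def temperature_map(img_l1, img_l2):
        ratio = img_l1 / np.maximum(img_l2, 1e-6)
        return np.interp(ratio, cal_ratio, cal_T)           # per-pixel T (K)
    ```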

  5. A framework for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps

    NASA Astrophysics Data System (ADS)

    Xu, Jin; Li, Zheng; Li, Shuliang; Zhang, Yanyan

    2015-07-01

    There is still a lack of effective paradigms and tools for analysing and discovering the contents and relationships of project knowledge contexts in the field of project management. In this paper, a new framework for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps under big data environments is proposed and developed. The conceptual paradigm, theoretical underpinning, extended topic model, and illustration examples of the ontology model for project knowledge maps are presented, with further research work envisaged.
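
    A minimal sketch of the topic-extraction stage under the stated framework, using LDA from scikit-learn on a placeholder corpus; the resulting document-topic weights are the kind of quantity a knowledge-map layer could link on. The paper's extended topic model is not reproduced here.

    ```python
    # LDA topic extraction over project documents; doc_topics could feed a
    # dynamic knowledge-map layer.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = ["risk register and mitigation plan for phase two",
            "stakeholder communication schedule and escalation path",
            "risk owner assigned for schedule slippage"]

    tf = CountVectorizer(stop_words='english').fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(tf)
    doc_topics = lda.transform(tf)   # per-document topic weights -> map edges
    ```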

  6. Mesh versus bathtub - effects of flood models on exposure analysis in Switzerland

    NASA Astrophysics Data System (ADS)

    Röthlisberger, Veronika; Zischg, Andreas; Keiler, Margreth

    2016-04-01

    In Switzerland, mainly two types of maps that indicate potential flood zones are available for flood exposure analyses: 1) Aquaprotect, a nationwide overview provided by the Federal Office for the Environment and 2) communal flood hazard maps available from the 26 cantons. The model used to produce Aquaprotect can be described as a bathtub approach or linear superposition method with three main parameters, namely the horizontal and vertical distance of a point to water features and the size of the river sub-basin. Whereas the determination of flood zones in Aquaprotect is based on a uniform, nationwide model, the communal flood hazard maps are less homogeneous, as they have been elaborated at either communal or cantonal level. Yet their basic content (i.e. indication of potential flood zones for three recurrence periods, with differentiation of at least three inundation depths) is described in national directives, and the vast majority of communal flood hazard maps are based on 2D inundation simulations using meshes. Apart from the methodical differences between Aquaprotect and the communal flood hazard maps (and among different communal flood hazard maps), all of these maps include a layer with a similar recurrence period (i.e. Aquaprotect 250 years, flood hazard maps 300 years) beyond the intended protection level of installed structural systems. In our study, we compare the resulting exposure by overlaying the two types of flood maps with a complete, harmonized, and nationwide dataset of building polygons. We assess the different exposure at the national level, and also consider differences among the 26 cantons and the six biogeographical regions, respectively. It was observed that while the nationwide exposure rates for both types of flood maps are similar, the differences within certain cantons and biogeographical regions are remarkable. We conclude that flood maps based on bathtub models are appropriate for assessments at national levels, while maps based on 2D simulations are preferable at sub-national levels.
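
    A bathtub-style classifier in the spirit of the three-parameter model described above might look like the following sketch; the thresholds and the stage-area relation are illustrative assumptions, not the Aquaprotect parameterisation.

    ```python
    import numpy as np

    def bathtub_flood_zone(horiz_dist_m, vert_dist_m, subbasin_km2,
                           max_horiz_m=500.0, coef=0.5):
        """Flag cells as potentially flooded with a bathtub-style rule.

        A cell is flagged when it lies close enough to a water feature and its
        elevation above the water is below a level that grows with sub-basin size.
        """
        flood_level_m = coef * np.log1p(subbasin_km2)   # assumed stage-area relation
        return (horiz_dist_m <= max_horiz_m) & (vert_dist_m <= flood_level_m)

    horiz = np.array([120.0, 800.0, 300.0])   # horizontal distance to water (m)
    vert = np.array([1.2, 0.5, 4.0])          # vertical distance to water (m)
    basin = np.array([250.0, 250.0, 250.0])   # sub-basin size (km^2)
    print(bathtub_flood_zone(horiz, vert, basin))   # [ True False False]
    ```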

  7. Application of an adaptive neuro-fuzzy inference system to ground subsidence hazard mapping

    NASA Astrophysics Data System (ADS)

    Park, Inhye; Choi, Jaewon; Jin Lee, Moung; Lee, Saro

    2012-11-01

    We constructed hazard maps of ground subsidence around abandoned underground coal mines (AUCMs) in Samcheok City, Korea, using an adaptive neuro-fuzzy inference system (ANFIS) and a geographical information system (GIS). To evaluate the factors related to ground subsidence, a spatial database was constructed from topographic, geologic, mine tunnel, land use, and ground subsidence maps. An attribute database was also constructed from field investigations and reports on existing ground subsidence areas at the study site. Five major factors causing ground subsidence were extracted: (1) depth of drift; (2) distance from drift; (3) slope gradient; (4) geology; and (5) land use. The adaptive ANFIS model with different types of membership functions (MFs) was then applied for ground subsidence hazard mapping in the study area. Two ground subsidence hazard maps were prepared using the different MFs. Finally, the resulting ground subsidence hazard maps were validated using the ground subsidence test data which were not used for training the ANFIS. The validation results showed 95.12% accuracy using the generalized bell-shaped MF model and 94.94% accuracy using the Sigmoidal2 MF model. These accuracy results show that an ANFIS can be an effective tool in ground subsidence hazard mapping. Analysis of ground subsidence with the ANFIS model suggests that quantitative analysis of ground subsidence near AUCMs is possible.

  8. Measuring novices' field mapping abilities using an in-class exercise based on expert task analysis

    NASA Astrophysics Data System (ADS)

    Caulkins, J. L.

    2010-12-01

    We are interested in developing a model of expert-like behavior for improving the teaching methods of undergraduate field geology. Our aim is to assist students in mastering the process of field mapping more efficiently and effectively and to improve their ability to think creatively in the field. To examine expert mapping behavior, a cognitive task analysis was conducted with expert geologic mappers in an attempt to define the process of geologic mapping (i.e. to understand how experts carry out geological mapping). The task analysis indicates that expert mappers have a wealth of geologic scenarios at their disposal that they compare against examples seen in the field, experiences that most undergraduate mappers will not have had. While presenting students with many geological examples in class may increase their understanding of geologic processes, novices still struggle when presented with a novel field situation. Based on the task analysis, a short (45-minute) paper-map-based exercise was designed and tested with 14 pairs of 3rd-year geology students. The exercise asks students to generate probable geologic models based on a series of four (4) data sets. Each data set represents a day’s worth of data; after the first “day,” new sheets simply include current and previously collected data (e.g. the “Day 2” data set includes data from “Day 1” plus the new “Day 2” data). As the geologic complexity increases, students must adapt, reject or generate new geologic models in order to fit the growing data set. Preliminary results of the exercise indicate that students who produced more probable geologic models, and higher ratios of probable to improbable models, tended to perform better on the mapping exercises at the 3rd-year field school. These results suggest that students with more cognitively available geologic models may be better able to use those models in field settings than students who cannot readily draw on such models. Giving students practice at generating geologic models to explain data may be useful in preparing them for field mapping exercises.

  9. A comparative study on the predictive ability of the decision tree, support vector machine and neuro-fuzzy models in landslide susceptibility mapping using GIS

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet

    2013-02-01

    The purpose of the present study is to compare the prediction performances of three different approaches, decision tree (DT), support vector machine (SVM) and adaptive neuro-fuzzy inference system (ANFIS), for landslide susceptibility mapping in the Penang Hill area, Malaysia. The necessary input parameters for the landslide susceptibility assessments were obtained from various sources. At first, landslide locations were identified from aerial photographs and field surveys, and a total of 113 landslide locations were compiled. The study area contains 340,608 pixels, of which 8403 pixels contain landslides. The landslide inventory was randomly partitioned into two subsets: (1) part 1, containing 50% (4000 landslide grid cells), was used in the training phase of the models; (2) part 2, the remaining 50% (4000 landslide grid cells), was used to validate the three models and confirm their accuracy. The digitally processed images of the input parameters were combined in GIS. Finally, landslide susceptibility maps were produced, and the performances were assessed and discussed. A total of fifteen landslide susceptibility maps were produced using the DT, SVM and ANFIS based models, and the resultant maps were validated using the landslide locations. Prediction performances of these maps were checked by receiver operating characteristics (ROC) using both success rate curves and prediction rate curves. The validation results showed that the area under the ROC curve for the fifteen models produced using DT, SVM and ANFIS varied from 0.8204 to 0.9421 for the success rate curves and from 0.7580 to 0.8307 for the prediction rate curves. Moreover, the prediction curves revealed that model 5 of DT has slightly higher prediction performance (83.07), whereas the success rate showed that model 5 of ANFIS has the best prediction capability (94.21) among all models. The results of this study showed that landslide susceptibility mapping in the Penang Hill area using the three approaches (i.e., DT, SVM and ANFIS) is viable. As far as the performance of the models is concerned, the results appeared to be quite satisfactory, i.e., the zones determined on the map being zones of relative susceptibility.
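
    The success-rate/prediction-rate distinction comes down to which cells the ROC curve is computed against: the training cells or the held-out validation cells. A minimal sketch with synthetic scores, using scikit-learn's roc_auc_score and mirroring the 50/50 partition:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Hypothetical susceptibility scores for landslide (1) and stable (0) cells.
    y_true = rng.integers(0, 2, size=5000)
    scores = (y_true * rng.normal(0.7, 0.2, 5000)
              + (1 - y_true) * rng.normal(0.4, 0.2, 5000))

    # Success rate: AUC against the cells used for training;
    # prediction rate: AUC against held-out validation cells.
    train, valid = slice(0, 2500), slice(2500, 5000)
    print("success rate AUC:   ", roc_auc_score(y_true[train], scores[train]))
    print("prediction rate AUC:", roc_auc_score(y_true[valid], scores[valid]))
    ```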

  10. Assimilation of optical and radar remote sensing data in 3D mapping of soil properties over large areas.

    PubMed

    Poggio, Laura; Gimona, Alessandro

    2017-02-01

    Soil is very important for many land functions. To achieve sustainability it is important to understand how soils vary over space in the landscape. Remote sensing data can be instrumental in mapping and spatial modelling of soil properties, resources and their variability. The aims of this study were to compare satellite sensors (MODIS, Landsat, Sentinel-1 and Sentinel-2) with varying spatial, temporal and spectral resolutions for Digital Soil Mapping (DSM) of a set of soil properties in Scotland, evaluate the potential benefits of adding Sentinel-1 data to DSM models, select the best-suited mix of sensors for DSM to map the considered set of soil properties, and validate the results of topsoil (2D) and whole-profile (3D) models. The results showed that a mixture of sensors proved more effective for modelling and mapping soil properties than single sensors. The use of radar Sentinel-1 data proved useful for all soil properties, improving the prediction capability of models with only optical bands. The use of MODIS time series provided stronger relationships than the use of temporal snapshots. The results showed good validation statistics, with an RMSE below 20% of the range for all considered soil properties. The RMSE improved on previous studies that included only the MODIS sensor and used a coarser prediction grid. The performance of the models was similar to previous studies at regional, national or continental scale. A mix of optical and radar data proved useful to map soil properties along the profile. The produced maps of soil properties, describing both lateral and vertical variability with associated uncertainty, are important for further modelling and management of soil resources and ecosystem services. Coupled with further data, the soil property maps could be used to assess soil functions and therefore the conditions and suitability of soils for a range of purposes.

  11. Design and analysis for thematic map accuracy assessment: Fundamental principles

    Treesearch

    Stephen V. Stehman; Raymond L. Czaplewski

    1998-01-01

    Land-cover maps are used in numerous natural resource applications to describe the spatial distribution and pattern of land-cover, to estimate areal extent of various cover classes, or as input into habitat suitability models, land-cover change analyses, hydrological models, and risk analyses. Accuracy assessment quantifies data quality so that map users may evaluate...

  12. Mapping marine habitat suitability and uncertainty of Bayesian networks: a case study using Pacific benthic macrofauna

    Treesearch

    Andrea Havron; Chris Goldfinger; Sarah Henkel; Bruce G. Marcot; Chris Romsos; Lisa Gilbane

    2017-01-01

    Resource managers increasingly use habitat suitability map products to inform risk management and policy decisions. Modeling habitat suitability of data-poor species over large areas requires careful attention to assumptions and limitations. Resulting habitat suitability maps can harbor uncertainties from data collection and modeling processes; yet these limitations...

  13. Burn severity mapping using simulation modeling and satellite imagery

    Treesearch

    Eva C. Karau; Robert E. Keane

    2010-01-01

    Although burn severity maps derived from satellite imagery provide a landscape view of fire impacts, fire effects simulation models can provide spatial fire severity estimates and add a biotic context in which to interpret severity. In this project, we evaluated two methods of mapping burn severity in the context of rapid post-fire assessment for four wildfires in...

  14. The research of selection model based on LOD in multi-scale display of electronic map

    NASA Astrophysics Data System (ADS)

    Zhang, Jinming; You, Xiong; Liu, Yingzhen

    2008-10-01

    This paper proposes a selection model based on LOD to aid the multi-scale display of electronic maps. The ratio of display scale to map scale is regarded as an LOD operator. Rules for setting the LOD operator, namely the categorization rule, classification rule, elementary rule and spatial geometry character rule, are also presented.
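
    A minimal sketch of such an LOD operator, with invented thresholds standing in for the paper's rules:

    ```python
    def lod_level(display_scale_den, map_scale_den, thresholds=(0.5, 1.0, 2.0)):
        """Pick a level of detail from the display-scale / map-scale ratio.

        Scales are given as denominators (1:25,000 -> 25_000); the thresholds
        are illustrative, not the rules derived in the paper.
        """
        ratio = display_scale_den / map_scale_den
        for level, limit in enumerate(thresholds):
            if ratio <= limit:
                return level        # smaller ratio -> zoomed in -> more detail
        return len(thresholds)      # zoomed far out -> coarsest representation

    # A 1:50,000 map displayed at 1:25,000 (ratio 0.5) keeps full detail.
    print(lod_level(display_scale_den=25_000, map_scale_den=50_000))  # 0
    ```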

  15. Integrating Vegetation Classification, Mapping, and Strategic Inventory for Forest Management

    Treesearch

    C. K. Brewer; R. Bush; D. Berglund; J. A. Barber; S. R. Brown

    2006-01-01

    Many of the analyses needed to address multiple resource issues are focused on vegetation pattern and process relationships and most rely on the data models produced from vegetation classification, mapping, and/or inventory. The Northern Region Vegetation Mapping Project (R1-VMP) data models are based on these three integrally related, yet separate processes. This...

  16. TESTING TREE-CLASSIFIER VARIANTS AND ALTERNATE MODELING METHODOLOGIES IN THE EAST GREAT BASIN MAPPING UNIT OF THE SOUTHWEST REGIONAL GAP ANALYSIS PROJECT (SW REGAP)

    EPA Science Inventory

    We tested two methods for dataset generation and model construction, and three tree-classifier variants to identify the most parsimonious and thematically accurate mapping methodology for the SW ReGAP project. Competing methodologies were tested in the East Great Basin mapping un...

  17. Restoration of distorted depth maps calculated from stereo sequences

    NASA Technical Reports Server (NTRS)

    Damour, Kevin; Kaufman, Howard

    1991-01-01

    A model-based Kalman estimator is developed for spatial-temporal filtering of noise and other degradations in velocity and depth maps derived from image sequences or cinema. As an illustration of the proposed procedures, edge information from image sequences of rigid objects is used in the processing of the velocity maps by selecting from a series of models for directional adaptive filtering. Adaptive filtering then allows for noise reduction while preserving sharpness in the velocity maps. Results from several synthetic and real image sequences are given.
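
    A per-pixel scalar Kalman filter conveys the temporal-filtering core of such an estimator. This is a minimal sketch assuming a random-walk state model and white measurement noise; the paper's edge-driven directional adaptation is omitted and all parameter values are illustrative.

    ```python
    import numpy as np

    def kalman_filter_depth(depth_seq, q=1e-3, r=1e-2):
        """Per-pixel scalar Kalman filter over a sequence of noisy depth maps.

        q: assumed process variance (random-walk state model);
        r: assumed measurement-noise variance.
        """
        x = depth_seq[0].astype(float)          # state estimate
        p = np.full_like(x, 1.0)                # estimate variance
        for z in depth_seq[1:]:
            p = p + q                           # predict (random walk)
            k = p / (p + r)                     # Kalman gain
            x = x + k * (z - x)                 # update with the new depth map
            p = (1.0 - k) * p
        return x

    rng = np.random.default_rng(1)
    truth = np.ones((4, 4)) * 5.0
    frames = [truth + rng.normal(0, 0.1, truth.shape) for _ in range(20)]
    print(np.abs(kalman_filter_depth(frames) - truth).mean())   # noise reduced
    ```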

  18. Composite annotations: requirements for mapping multiscale data and models to biomedical ontologies

    PubMed Central

    Cook, Daniel L.; Mejino, Jose L. V.; Neal, Maxwell L.; Gennari, John H.

    2009-01-01

    Current methods for annotating biomedical data resources rely on simple mappings between data elements and the contents of a variety of biomedical ontologies and controlled vocabularies. Here we point out that such simple mappings are inadequate for large-scale multiscale, multidomain integrative “virtual human” projects. For such integrative challenges, we describe a “composite annotation” schema that is simple yet sufficiently extensible for mapping the biomedical content of a variety of data sources and biosimulation models to available biomedical ontologies. PMID:19964601

  19. How much expert knowledge is it worth to put in conceptual hydrological models?

    NASA Astrophysics Data System (ADS)

    Antonetti, Manuel; Zappa, Massimiliano

    2017-04-01

    Both modellers and experimentalists agree on using expert knowledge to improve conceptual hydrological simulations in ungauged basins. However, they use expert knowledge differently, both for hydrologically mapping the landscape and for parameterising a given hydrological model. Modellers generally use very simplified (e.g. topography-based) mapping approaches and invest most of their knowledge in constraining the model by defining parameter and process relational rules. In contrast, experimentalists tend to invest all their detailed and qualitative knowledge about processes to obtain a spatial distribution of areas with different dominant runoff generation processes (DRPs) that is as realistic as possible, and to define plausible narrow value ranges for each model parameter. Since the modelling goal is usually just to simulate runoff at a specific site, even strongly simplified hydrological classifications can lead to satisfying results, owing to the equifinality of hydrological models, overfitting problems and the numerous uncertainty sources affecting runoff simulations. Therefore, to test to what extent expert knowledge can improve simulation results under uncertainty, we applied a typical modellers' framework, relying on parameter and process constraints defined from expert knowledge, to several catchments on the Swiss Plateau. To map the spatial distribution of the DRPs, mapping approaches with increasing involvement of expert knowledge were used. Simulation results highlighted the potential added value of using all the expert knowledge available on a catchment. Combinations of event types and landscapes where even a simplified mapping approach can lead to satisfying results were also identified. Finally, the uncertainty originating from the different mapping approaches was compared with that linked to meteorological input data and catchment initial conditions.

  20. Multi-frequency parameter mapping of electrical impedance scanning using two kinds of circuit model.

    PubMed

    Liu, Ruigang; Dong, Xiuzhen; Fu, Feng; You, Fusheng; Shi, Xuetao; Ji, Zhenyu; Wang, Kan

    2007-07-01

    Electrical impedance scanning (EIS) is a promising bio-impedance measurement technology, especially for aiding the diagnosis of breast cancer in women. By changing the frequency of the driving signal in turn while keeping the other conditions stable, multi-frequency measurement results for the object can be obtained. Using the least squares method and circuit theory, the parameters of two circuit models are deduced from data measured at multiple driving frequencies. The arcs, in the real and imaginary parts of a trans-admittance coordinate, made by the evaluated parameters fit well the data measured by our EIS device on female subjects. The Cole-Cole model in the form of admittance is closer to the measured data than the three-element model. Based on the evaluation of the multi-frequency parameters, we present parameter mapping of EIS using two kinds of circuit model: one is the three-element model in the form of admittance and the other is the Cole-Cole model in the form of admittance. Compared with classical admittance mapping at a single frequency, multi-frequency parameter mapping provides a novel way to study EIS. The multi-frequency approach can provide mappings of four parameters, which is helpful for distinguishing different diseases that share a similar characteristic in classical EIS mapping. From plots of the real and imaginary parts of the admittance, it is easy to determine whether abnormal tissue is present.
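
    As a rough illustration of fitting the Cole-Cole model in admittance form to multi-frequency data, the sketch below uses one common parameterisation, Y(ω) = Y∞ + (Y0 − Y∞)/(1 + (jωτ)^(1−α)), and a least-squares fit of the stacked real and imaginary parts. The parameterisation and all values are assumptions, not the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def cole_cole_admittance(omega, y0, yinf, tau, alpha):
        """Cole-Cole model in admittance form (one common parameterisation)."""
        return yinf + (y0 - yinf) / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

    def stacked(omega, y0, yinf, tau, alpha):
        y = cole_cole_admittance(omega, y0, yinf, tau, alpha)
        return np.concatenate([y.real, y.imag])      # curve_fit needs real output

    # Hypothetical multi-frequency measurements (synthetic, lightly noisy).
    rng = np.random.default_rng(0)
    omega = 2 * np.pi * np.logspace(2, 6, 30)
    true = cole_cole_admittance(omega, 2e-3, 8e-3, 1e-4, 0.2)
    meas = np.concatenate([true.real, true.imag]) * (1 + 0.01 * rng.standard_normal(60))

    popt, _ = curve_fit(stacked, omega, meas, p0=[1e-3, 1e-2, 1e-4, 0.1],
                        bounds=([0, 0, 1e-7, 0], [1, 1, 1, 0.9]))
    print("fitted y0, yinf, tau, alpha:", popt)
    ```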

  1. Decoding natural images from evoked brain activities using encoding models with invertible mapping.

    PubMed

    Li, Chao; Xu, Junhai; Liu, Baolin

    2018-05-21

    Recent studies have built encoding models of the early visual cortex, and reliable mappings have been made between the low-level visual features of stimuli and brain activities. However, these mappings are irreversible, so the features cannot be directly decoded. To solve this problem, we designed a sparse-framework-based encoding model that predicted brain activities from a complete feature representation. Moreover, according to the distribution and activation rules of neurons in the primary visual cortex (V1), three key transformations were introduced into the basic feature to improve the model performance. In this setting, the mapping was simple enough that it could be inverted using a closed-form formula. Using this mapping, we designed a hybrid identification method based on the support vector machine (SVM), and tested it on a published functional magnetic resonance imaging (fMRI) dataset. The experiments confirmed the rationality of our encoding model, and the identification accuracies for the two subjects increased from 92% and 72% to 98% and 92%, with a chance level of only 0.8%.

  2. SModelS v1.1 user manual: Improving simplified model constraints with efficiency maps

    NASA Astrophysics Data System (ADS)

    Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Magerl, Veronika; Sonneveld, Jory; Traub, Michael; Waltenberger, Wolfgang

    2018-06-01

    SModelS is an automated tool for the interpretation of simplified model results from the LHC. It allows one to decompose models of new physics obeying a Z2 symmetry into simplified model components, and to compare these against a large database of experimental results. The first release of SModelS, v1.0, used only cross section upper limit maps provided by the experimental collaborations. In this new release, v1.1, we extend the functionality of SModelS to efficiency maps. This increases the constraining power of the software, as efficiency maps allow one to combine contributions to the same signal region from different simplified models. Other new features of version 1.1 include likelihood and χ2 calculations, extended information on the topology coverage, an extended database of experimental results, and major speed upgrades for both the code and the database. We describe in detail the concepts and procedures used in SModelS v1.1, explaining in particular how upper limits and efficiency map results are dealt with in parallel. Detailed instructions for code usage are also provided.

  3. Empirical forecast of quiet time ionospheric Total Electron Content maps over Europe

    NASA Astrophysics Data System (ADS)

    Badeke, Ronny; Borries, Claudia; Hoque, Mainul M.; Minkwitz, David

    2018-06-01

    An accurate forecast of the atmospheric Total Electron Content (TEC) is helpful for investigating space weather influences on the ionosphere and on technical applications such as satellite-receiver radio links. The purpose of this work is to compare four empirical methods for a 24-h forecast of vertical TEC maps over Europe under geomagnetically quiet conditions. TEC map data are obtained from the Space Weather Application Center Ionosphere (SWACI) and the Universitat Politècnica de Catalunya (UPC). The time-series methods Standard Persistence Model (SPM), a 27-day median model (MediMod) and a Fourier Series Expansion are compared against maps for the entire year 2015. As a representative of the climatological coefficient models, the forecast performance of the Global Neustrelitz TEC model (NTCM-GL) is also investigated. Time periods of magnetic storms, identified with the Dst index, are excluded from the validation. When the TEC values are calculated from the most recent maps, the time-series methods perform slightly better than the coefficient model NTCM-GL. The benefit of NTCM-GL is its independence from observational TEC data. Amongst the time-series methods mentioned, MediMod delivers the best overall performance regarding accuracy and data gap handling. Quiet-time SWACI maps can be forecast accurately and in real time by the MediMod time-series approach.
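
    The MediMod idea, a pixel-wise median over the preceding 27 days, is simple to sketch; the gap handling and operational details below are assumptions, not the published implementation.

    ```python
    import numpy as np

    def medimod_forecast(tec_history):
        """24-h forecast of a TEC map as the pixel-wise median of the last 27 days.

        tec_history: array of shape (27, nlat, nlon), one map per day at the
        forecast epoch; NaN-aware so that data gaps are ignored.
        """
        return np.nanmedian(tec_history, axis=0)

    rng = np.random.default_rng(2)
    history = 10.0 + rng.normal(0, 1.5, size=(27, 16, 24))   # TECU, synthetic
    history[3, 5, 5] = np.nan                                 # a data gap
    forecast = medimod_forecast(history)
    print(forecast.shape, float(forecast.mean()))
    ```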

  4. MapMaker and PathTracer for tracking carbon in genome-scale metabolic models

    PubMed Central

    Tervo, Christopher J.; Reed, Jennifer L.

    2016-01-01

    Constraint-based reconstruction and analysis (COBRA) modeling results can be difficult to interpret given the large numbers of reactions in genome-scale models. While paths in metabolic networks can be found, existing methods are not easily combined with constraint-based approaches. To address this limitation, two tools (MapMaker and PathTracer) were developed to find paths (including cycles) between metabolites, where each step transfers carbon from reactant to product. MapMaker predicts carbon transfer maps (CTMs) between metabolites using only information on molecular formulae and reaction stoichiometry, effectively determining which reactants and products share carbon atoms. MapMaker correctly assigned CTMs for over 97% of the 2,251 reactions in an Escherichia coli metabolic model (iJO1366). Using CTMs as inputs, PathTracer finds paths between two metabolites. PathTracer was applied to iJO1366 to investigate the importance of using CTMs and COBRA constraints when enumerating paths, to find active and high flux paths in flux balance analysis (FBA) solutions, to identify paths for putrescine utilization, and to elucidate a potential CO2 fixation pathway in E. coli. These results illustrate how MapMaker and PathTracer can be used in combination with constraint-based models to identify feasible, active, and high flux paths between metabolites. PMID:26771089

  5. Cooperative Autonomous Observation of Coherent Atmospheric Structures using Small Unmanned Aircraft Systems

    NASA Astrophysics Data System (ADS)

    Ravela, S.

    2014-12-01

    Mapping the structure of localized atmospheric phenomena, from sea breeze and shallow cumuli to thunderstorms and hurricanes, is of scientific interest. Low-cost small unmanned aircraft systems (sUAS) open the possibility for autonomous "instruments" to map important small-scale phenomena (kilometers, hours) and serve as a testbed for much larger scales. Localized phenomena, viewed as coherent structures interacting with their large-scale environment, are difficult to map. As simple simulations show, naive Eulerian or Lagrangian strategies can fail in mapping localized phenomena, so model-based techniques are needed. Meteorological targeting, where supplementary UAS measurements additionally constrain numerical models, is promising, but may require many primary measurements to be successful. We propose a new, data-driven, field-operable, cooperative autonomous observing system (CAOS) framework. A remote observer (on a UAS) tracks tracers to identify an apparent motion model over short timescales. Motion-based predictions seed MCMC flight plans for other UAS to gather in-situ data, which is fused with the remote measurements to produce maps. The tracking and mapping cycles repeat, and maps can be assimilated into numerical models for longer-term forecasting. CAOS has been applied to study small-scale emissions. At Popocatepetl, in collaboration with CENAPRED and IPN, it is being applied to map the plume using remote IR/UV UAS and in-situ SO2 sensing, with additional plans for water vapor, the electric field and ash. The combination of sUAS with autonomy appears to be a highly promising methodology for environmental mapping. For more information, please visit http://caos.mit.edu

  6. An optimal strategy for functional mapping of dynamic trait loci.

    PubMed

    Jin, Tianbo; Li, Jiahan; Guo, Ying; Zhou, Xiaojing; Yang, Runqing; Wu, Rongling

    2010-02-01

    As an emerging powerful approach for mapping quantitative trait loci (QTLs) responsible for dynamic traits, functional mapping models the time-dependent mean vector with biologically meaningful equations and is likely to generate biologically relevant and interpretable results. Given the autocorrelated nature of a dynamic trait, functional mapping requires models for the structure of the covariance matrix. In this article, we provide a comprehensive set of approaches for modelling the covariance structure and incorporate each of these approaches into the framework of functional mapping. Bayesian information criterion (BIC) values are used as a model selection criterion to choose the optimal combination of submodels for the mean vector and covariance structure. In an example of leaf age growth from a rice molecular genetic project, the best submodel combination was found to be the Gaussian model for the correlation structure, a power equation of order 1 for the variance, and the power curve for the mean vector. Under this combination, several significant QTLs for leaf age growth trajectories were detected on different chromosomes. Our model is well suited to studying the genetic architecture of dynamic traits of agricultural value.
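
    Model selection by BIC amounts to penalising each submodel combination's log-likelihood by its parameter count. A sketch with invented fit results:

    ```python
    import numpy as np

    def bic(log_likelihood, n_params, n_obs):
        """Bayesian information criterion: lower is better."""
        return n_params * np.log(n_obs) - 2.0 * log_likelihood

    # Hypothetical fits of competing mean-curve/covariance submodel combinations,
    # e.g. (power curve mean + Gaussian correlation) vs. (logistic mean + AR(1)).
    candidates = {
        "power mean + Gaussian corr": (-512.3, 6),   # (log-likelihood, n_params)
        "logistic mean + AR(1)":      (-515.9, 5),
        "power mean + AR(1)":         (-520.4, 5),
    }
    n_obs = 200
    scores = {name: bic(ll, k, n_obs) for name, (ll, k) in candidates.items()}
    best = min(scores, key=scores.get)
    print(scores, "->", best)
    ```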

  7. Mental maps and travel behaviour: meanings and models

    NASA Astrophysics Data System (ADS)

    Hannes, Els; Kusumastuti, Diana; Espinosa, Maikel León; Janssens, Davy; Vanhoof, Koen; Wets, Geert

    2012-04-01

    In this paper, the " mental map" concept is positioned with regard to individual travel behaviour to start with. Based on Ogden and Richards' triangle of meaning (The meaning of meaning: a study of the influence of language upon thought and of the science of symbolism. International library of psychology, philosophy and scientific method. Routledge and Kegan Paul, London, 1966) distinct thoughts, referents and symbols originating from different scientific disciplines are identified and explained in order to clear up the notion's fuzziness. Next, the use of this concept in two major areas of research relevant to travel demand modelling is indicated and discussed in detail: spatial cognition and decision-making. The relevance of these constructs to understand and model individual travel behaviour is explained and current research efforts to implement these concepts in travel demand models are addressed. Furthermore, these mental map notions are specified in two types of computational models, i.e. a Bayesian Inference Network (BIN) and a Fuzzy Cognitive Map (FCM). Both models are explained, and a numerical and a real-life example are provided. Both approaches yield a detailed quantitative representation of the mental map of decision-making problems in travel behaviour.

  8. Extensible Ontological Modeling Framework for Subject Mediation

    NASA Astrophysics Data System (ADS)

    Kalinichenko, L. A.; Skvortsov, N. A.

    An approach for extensible ontological model construction in a mediation environment intended for the integration of heterogeneous information sources in various subject domains is presented. A mediator ontological language (MOL) may depend on a subject domain and is to be defined at the mediator consolidation phase. On the other hand, different information sources can use different ontological models (languages) to define their own ontologies. Reversible mapping of the source ontological models into MOL is needed for registering information sources at the mediator. An approach for such reversible mapping is demonstrated for a class of Web information sources. It is assumed that such sources apply the DAML+OIL ontological model. A subset of the hybrid object-oriented and semi-structured canonical mediator data model is used for the core of MOL. The construction of a reversible mapping of DAML+OIL into an extension of the core of MOL is presented in the paper. Such a mapping is a necessary prerequisite for contextualizing and registering information sources at the mediator. The mapping shows how an extensible MOL can be constructed. The proposed approach is oriented towards digital libraries, where retrieval is focused on information content rather than on information entities.

  9. The use of error-category mapping in pharmacokinetic model analysis of dynamic contrast-enhanced MRI data.

    PubMed

    Gill, Andrew B; Anandappa, Gayathri; Patterson, Andrew J; Priest, Andrew N; Graves, Martin J; Janowitz, Tobias; Jodrell, Duncan I; Eisen, Tim; Lomas, David J

    2015-02-01

    This study introduces the use of 'error-category mapping' in the interpretation of pharmacokinetic (PK) model parameter results derived from dynamic contrast-enhanced (DCE-) MRI data. Eleven patients with metastatic renal cell carcinoma were enrolled in a multiparametric study of the treatment effects of bevacizumab. For the purposes of the present analysis, DCE-MRI data from two identical pre-treatment examinations were analysed by application of the extended Tofts model (eTM), using in turn a model arterial input function (AIF), an individually-measured AIF and a sample-average AIF. PK model parameter maps were calculated. Errors in the signal-to-gadolinium concentration ([Gd]) conversion process and the model-fitting process itself were assigned to category codes on a voxel-by-voxel basis, thereby forming a colour-coded 'error-category map' for each imaged slice. These maps were found to be repeatable between patient visits and showed that the eTM converged adequately in the majority of voxels in all the tumours studied. However, the maps also clearly indicated sub-regions of low Gd uptake and of non-convergence of the model in nearly all tumours. The non-physical condition ve ≥ 1 was the most frequently indicated error category and appeared sensitive to the form of AIF used. This simple method for visualisation of errors in DCE-MRI could be used as a routine quality-control technique and also has the potential to reveal otherwise hidden patterns of failure in PK model applications. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Landslide susceptibility mapping using frequency ratio, logistic regression, artificial neural networks and their comparison: A case study from Kat landslides (Tokat—Turkey)

    NASA Astrophysics Data System (ADS)

    Yilmaz, Işık

    2009-06-01

    The purpose of this study is to compare the landslide susceptibility mapping performance of frequency ratio (FR), logistic regression and artificial neural network (ANN) models applied in Kat County (Tokat, Turkey). A digital elevation model (DEM) was first constructed using GIS software. Landslide-related factors such as geology, faults, drainage system, topographical elevation, slope angle, slope aspect, topographic wetness index (TWI) and stream power index (SPI) were used in the landslide susceptibility analyses. Landslide susceptibility maps were produced from the frequency ratio, logistic regression and neural network models, and were then compared by means of their validations. The accuracies of the susceptibility maps for all three models were obtained by comparing the maps with the known landslide locations. Respective area under the curve (AUC) values of 0.826, 0.842 and 0.852 for frequency ratio, logistic regression and artificial neural networks showed that the map obtained from the ANN model is the most accurate, although the accuracies of all three models can be considered relatively similar. The results obtained in this study also showed that the frequency ratio model can be used as a simple tool for assessing landslide susceptibility when a sufficient amount of data is available. Input, calculation and output are very simple and readily understood in the frequency ratio model, whereas logistic regression and neural networks require the conversion of data to ASCII or other formats, and it is also very hard to process large amounts of data in a statistical package.
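
    The frequency ratio itself is the simplest of the three models: the landslide share of a factor class divided by that class's area share. A minimal sketch with synthetic data; in a full analysis the FR values of all factor layers would be summed per cell to form the susceptibility index.

    ```python
    import numpy as np

    def frequency_ratio(factor_classes, landslide_mask):
        """FR per factor class: (landslide share in class) / (area share of class)."""
        total_cells = factor_classes.size
        total_slides = landslide_mask.sum()
        fr = {}
        for cls in np.unique(factor_classes):
            in_cls = factor_classes == cls
            slide_frac = landslide_mask[in_cls].sum() / total_slides
            area_frac = in_cls.sum() / total_cells
            fr[int(cls)] = slide_frac / area_frac   # FR > 1: class favours slides
        return fr

    rng = np.random.default_rng(3)
    slope_class = rng.integers(1, 4, size=10_000)        # e.g. 3 slope-angle bins
    slides = rng.random(10_000) < 0.02 * slope_class     # steeper -> more slides
    print(frequency_ratio(slope_class, slides))
    ```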

  11. Text Mapping Plus: Improving Comprehension through Supported Retellings

    ERIC Educational Resources Information Center

    Lapp, Diane; Fisher, Douglas; Johnson, Kelly

    2010-01-01

    Modeled in this column is the teaching of a text mapping routine that supports students in reading and remembering the salient features of a text. The authors renamed the story mapping technique "text mapping plus" because they found that as students added relational words and graphics to their maps, their retells of both fiction and nonnarrative…

  12. A Numerical Study of New Logistic Map

    NASA Astrophysics Data System (ADS)

    Khmou, Youssef

    In this paper, we propose a new logistic map based on an information entropy relation, and we study its bifurcation diagram in comparison with that of the standard logistic map. In the first part, we compare the diagram obtained by numerical simulations with that of the standard logistic map. It is found that the structures of both diagrams are similar when the range of the growth parameter is restricted to the interval [0,e]. In the second part, we present an application of the proposed map to traffic flow using a macroscopic model. It is found that the bifurcation diagram is an exact model of Greenberg’s model of traffic flow, where the growth parameter corresponds to the optimal velocity and the random sequence corresponds to the density. In the last part, we present a second possible application of the proposed map, random number generation. The analysis shows that the excluded initial values of the sequences are (0,1).
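
    For reference, the standard logistic map that the authors compare against can be iterated as below to produce bifurcation-diagram data; the proposed entropy-based map itself is not reproduced here.

    ```python
    import numpy as np

    def bifurcation_points(r_values, n_transient=500, n_keep=100):
        """Long-run iterates of the standard logistic map x -> r*x*(1-x)."""
        points = []
        for r in r_values:
            x = 0.5
            for _ in range(n_transient):        # discard transient behaviour
                x = r * x * (1.0 - x)
            for _ in range(n_keep):             # record the attractor
                x = r * x * (1.0 - x)
                points.append((r, x))
        return np.array(points)

    pts = bifurcation_points(np.linspace(2.5, 4.0, 600))
    print(pts.shape)   # scatter of (r, x) pairs traces the bifurcation diagram
    ```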

  13. Mapping flood hazards under uncertainty through probabilistic flood inundation maps

    NASA Astrophysics Data System (ADS)

    Stephens, T.; Bledsoe, B. P.; Miller, A. J.; Lee, G.

    2017-12-01

    Changing precipitation, rapid urbanization, and population growth interact to create unprecedented challenges for flood mitigation and management. Standard methods for estimating risk from flood inundation maps generally involve simulations of floodplain hydraulics for an established regulatory discharge of specified frequency. Hydraulic model results are then geospatially mapped and depicted as a discrete boundary of flood extents and a binary representation of the probability of inundation (in or out) that is assumed constant over a project's lifetime. Consequently, existing methods utilized to define flood hazards and assess risk management are hindered by deterministic approaches that assume stationarity in a nonstationary world, failing to account for the spatio-temporal variability of climate and land use as they translate to hydraulic models. This presentation outlines novel techniques for portraying flood hazards and the results of multiple flood inundation maps spanning hydroclimatic regions. Flood inundation maps generated through modeling of floodplain hydraulics are probabilistic, reflecting uncertainty quantified through Monte-Carlo analyses of model inputs and parameters under current and future scenarios. The likelihood of inundation and the range of variability in flood extents resulting from Monte-Carlo simulations are then compared with deterministic evaluations of flood hazards from current regulatory flood hazard maps. By facilitating alternative approaches to portraying flood hazards, the novel techniques described in this presentation can contribute to a shifting paradigm in flood management that acknowledges the inherent uncertainty in model estimates and the nonstationary behavior of land use and climate.
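
    The shift from a binary in/out boundary to a probability of inundation can be illustrated with a toy Monte-Carlo experiment: sample uncertain discharge and roughness, push each sample through a stand-in stage relation, and count how often each cell is wet. The stage function and all numbers below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical cell elevations along a floodplain transect (m above datum).
    ground = np.array([1.0, 1.5, 2.0, 2.6, 3.3, 4.1])

    def water_level(discharge, roughness):
        """Toy stage relation standing in for the hydraulic model."""
        return 1.2 * (roughness * discharge) ** 0.5

    # Monte-Carlo sample the uncertain inputs instead of a single design flood.
    n = 10_000
    discharge = rng.lognormal(mean=np.log(100.0), sigma=0.3, size=n)
    roughness = rng.uniform(0.03, 0.07, size=n)
    stage = water_level(discharge, roughness)

    # Probability of inundation per cell, replacing the binary in/out boundary.
    prob = (stage[:, None] >= ground[None, :]).mean(axis=0)
    print(np.round(prob, 3))
    ```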

  14. Fractional Order Spatiotemporal Chaos with Delay in Spatial Nonlinear Coupling

    NASA Astrophysics Data System (ADS)

    Zhang, Yingqian; Wang, Xingyuan; Liu, Liyan; Liu, Jia

    We investigate the spatiotemporal dynamics of a fractional-order differential logistic map with delay, with nonlinear chaotic maps providing the spatial coupling between lattices. The fractional-order differential logistic map with delay breaks the limits of the parameter range μ ∈ [3.75, 4] required for chaotic states in the classical logistic map. The Kolmogorov-Sinai entropy density, universality and bifurcation diagrams are employed to investigate the chaotic behaviors of the proposed model. The proposed model can also be applied to cryptography, which is verified in a color image encryption scheme in this paper.

  15. Hydraulic model and flood-inundation maps developed for the Pee Dee National Wildlife Refuge, North Carolina

    USGS Publications Warehouse

    Smith, Douglas G.; Wagner, Chad R.

    2016-04-08

    A series of digital flood-inundation maps were developed on the basis of the water-surface profiles produced by the model. The inundation maps, which can be accessed through the USGS Flood Inundation Mapping Program Web site at http://water.usgs.gov/osw/flood_inundation, depict estimates of the areal extent and depth of flooding corresponding to selected water levels at the USGS streamgage Pee Dee River at Pee Dee Refuge near Ansonville, N.C. These maps, when combined with real-time water-level information from USGS streamgages, provide managers with critical information to help plan flood-response activities and resource protection efforts.

  16. Quantum vertex model for reversible classical computing.

    PubMed

    Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C

    2017-05-12

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.

  17. Quantum vertex model for reversible classical computing

    NASA Astrophysics Data System (ADS)

    Chamon, C.; Mucciolo, E. R.; Ruckenstein, A. E.; Yang, Z.-C.

    2017-05-01

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without `learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.

  18. The evolution of mapping habitat for northern spotted owls (Strix occidentalis caurina): A comparison of photo-interpreted, Landsat-based, and lidar-based habitat maps

    Treesearch

    Steven H. Ackers; Raymond J. Davis; Keith A. Olsen; Katie M. Dugger

    2015-01-01

    Wildlife habitat mapping has evolved at a rapid pace over the last few decades. Beginning with simple, often subjective, hand-drawn maps, habitat mapping now involves complex species distribution models (SDMs) using mapped predictor variables derived from remotely sensed data. For species that inhabit large geographic areas, remote sensing technology is often...

  19. Color reproduction system based on color appearance model and gamut mapping

    NASA Astrophysics Data System (ADS)

    Cheng, Fang-Hsuan; Yang, Chih-Yuan

    2000-06-01

    With the progress of computing, peripherals such as color monitors and printers are often used to generate color images. However, cross-media color reproduction as perceived by humans is usually inconsistent. The main influencing factors are device calibration and characterization, viewing conditions, device gamut and human psychology. In this thesis, a color reproduction system based on a color appearance model and gamut mapping is proposed. It consists of four parts: device characterization, color management technique, color appearance model and gamut mapping.

  20. Strategic Plan: Initiating an Orthopaedic Residency at Womack Army Medical Center

    DTIC Science & Technology

    2006-06-07

    Outlines WAMC’s strategy: an analysis using Porter’s Five Forces Model; a strategic map for discovering competitive advantages and disadvantages; and the identification of a directional strategy.

  1. A case study for the integration of predictive mineral potential maps

    NASA Astrophysics Data System (ADS)

    Lee, Saro; Oh, Hyun-Joo; Heo, Chul-Ho; Park, Inhye

    2014-09-01

    This study aims to produce mineral potential maps using various models and to verify their accuracy for epithermal gold (Au)-silver (Ag) deposits in a Geographic Information System (GIS) environment, assuming that all deposits share a common genesis. Maps of potential Au-Ag deposits were produced from geological data for the Taebaeksan mineralized area, Korea. The methodological framework consists of three main steps: 1) identification of spatial relationships, 2) quantification of such relationships and 3) combination of multiple quantified relationships. A spatial database containing 46 Au-Ag deposits was constructed using GIS. The spatial associations between training deposits and 26 related factors were identified and quantified by probabilistic and statistical modelling. The mineral potential maps were generated by integrating all factors using the overlay method and were afterwards recombined using the likelihood ratio model. They were verified by comparison with test mineral deposit locations. The verification revealed that the combined mineral potential map had the greatest accuracy (83.97%), whereas accuracies were 72.24%, 65.85%, 72.23% and 71.02% for the likelihood ratio, weight of evidence, logistic regression and artificial neural network models, respectively. The mineral potential map can provide useful information for mineral resource development.

  2. Cognitive Mapping Based on Conjunctive Representations of Space and Movement

    PubMed Central

    Zeng, Taiping; Si, Bailu

    2017-01-01

    It is a challenge to build a robust simultaneous localization and mapping (SLAM) system in dynamic large-scale environments. Inspired by recent findings in the entorhinal-hippocampal neuronal circuits, we propose a cognitive mapping model that includes continuous attractor networks of head-direction cells and conjunctive grid cells to integrate velocity information through conjunctive encodings of space and movement. Visual inputs from the local view cells in the model provide feedback cues to correct drifting errors of the attractors caused by the noisy velocity inputs. We demonstrate the mapping performance of the proposed cognitive mapping model on an open-source dataset of a 66 km car journey in a 3 km × 1.6 km urban area. Experimental results show that the proposed model is robust in building a coherent semi-metric topological map of the entire urban area using a monocular camera, even though the image inputs contain various changes caused by different light conditions and terrains. The results of this study could inspire both neuroscience and robotics research to better understand the neural computational mechanisms of spatial cognition and to build robust robotic navigation systems in large-scale environments. PMID:29213234

  3. Remote Sensing Sensors and Applications in Environmental Resources Mapping and Modelling

    PubMed Central

    Melesse, Assefa M.; Weng, Qihao; S.Thenkabail, Prasad; Senay, Gabriel B.

    2007-01-01

    The history of remote sensing and the development of different sensors for environmental and natural resources mapping and data acquisition are reviewed and reported. Application examples in urban studies, hydrological modeling such as land-cover and floodplain mapping, fractional vegetation cover and impervious surface area mapping, and surface energy flux and micro-topography correlation studies are discussed. The review also discusses the use of remotely sensed rainfall and potential evapotranspiration for estimating a crop water requirement satisfaction index, which provides early warning information for growers. The review is not an exhaustive account of remote sensing applications, but rather a summary of some important applications in environmental studies and modeling. PMID:28903290

  4. Posture Affects How Robots and Infants Map Words to Objects

    PubMed Central

    Morse, Anthony F.; Benitez, Viridian L.; Belpaeme, Tony; Cangelosi, Angelo; Smith, Linda B.

    2015-01-01

    For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are again encountered. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body, and its momentary posture, may be central to these processes. The present study uses a name-object mapping task in which names are either encountered in the absence of their target (experiments 1–3, 6 & 7), or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1–5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the body's momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6–9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to the object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge, not from separating bodily information from the word-object mapping as proposed in previous models of the role of space in word-object mapping, but through the body's momentary disposition in space. PMID:25785834

  5. Surface wave tomography of North America and the Caribbean using global and regional broad-band networks: Phase velocity maps and limitations of ray theory

    USGS Publications Warehouse

    Godey, S.; Snieder, R.; Villasenor, A.; Benz, H.M.

    2003-01-01

    We present phase velocity maps of fundamental mode Rayleigh waves across the North American and Caribbean plates. Our data set consists of 1846 waveforms from 172 events recorded at 91 broad-band stations operating in North America. We compute phase velocity maps in four narrow period bands between 50 and 150 s using a non-linear waveform inversion method that solves for phase velocity perturbations relative to a reference Earth model (PREM). Our results show a strong velocity contrast between high velocities beneath the stable North American craton and lower velocities in the tectonically active western margin, in agreement with other regional and global surface wave tomography studies. We perform detailed comparisons with global model results, which display good agreement between phase velocity maps in the location and amplitude of the anomalies. However, forward modelling shows that regional maps are more accurate for predicting waveforms. In addition, at long periods, the amplitude of the velocity anomalies imaged in our regional phase velocity maps is three times larger than in global phase velocity models. This amplitude factor is necessary to explain the data accurately, showing that regional models provide a better image of velocity structures. Synthetic tests show that the raypath coverage used in this study enables one to resolve velocity features of the order of 800-1000 km. However, only larger length-scale features are observed in the phase velocity maps. The limitation in resolution of our maps can be attributed to the wave propagation theory used in the inversion. Ray theory does not account for off-great-circle ray propagation effects, such as ray bending or scattering. For wavelengths less than 1000 km, scattering effects are significant and may need to be considered.

  6. Large-extent digital soil mapping approaches for total soil depth

    NASA Astrophysics Data System (ADS)

    Mulder, Titia; Lacoste, Marine; Saby, Nicolas P. A.; Arrouays, Dominique

    2015-04-01

    Total soil depth (SDt) plays a key role in supporting various ecosystem services and properties, including plant growth, water availability and carbon stocks. Therefore, predictive mapping of SDt has been included as one of the deliverables of the GlobalSoilMap project. In this work, SDt was predicted for France following the directions of GlobalSoilMap, which requires modelling at 90 m resolution. The first method, further referred to as DM, consisted of modelling the deterministic trend in SDt using data mining, followed by a bias correction and ordinary kriging of the residuals. Considering that the total surface area of France is about 540,000 km2, the methods employed must be able to deal with large data sets. Therefore, a second method, multi-resolution kriging (MrK) for large datasets, was implemented. This method consisted of modelling the deterministic trend with a linear model, followed by interpolation of the residuals. For both methods, the general trend was assumed to be explained by the biotic and abiotic environmental conditions, as described by the soil-landscape paradigm. The mapping accuracy was evaluated by internal validation and by concordance with previous soil maps. In addition, the prediction interval for DM and the confidence interval for MrK were determined. Finally, the opportunities and limitations of both approaches were evaluated. The results showed consistency in the mapped spatial patterns and a good prediction of the mean values. DM was better at predicting extreme values thanks to the bias correction, and was more powerful in capturing the deterministic trend than the linear model of the MrK approach. However, MrK was found to be more straightforward and flexible in delivering spatially explicit uncertainty measures. The validation indicated that DM was more accurate than MrK. Improvements for DM may be expected from predicting soil depth classes. MrK shows potential for modelling beyond the country level at high resolution. Large-extent digital soil mapping approaches for SDt may be improved by (1) taking into account SDt observations that are censored and (2) using high-resolution biotic and abiotic environmental data. The latter may improve the modelling of the soil-landscape interactions influencing soil pedogenesis. In conclusion, this work provides a robust and reproducible method (DM) for high-resolution soil property modelling, in accordance with the GlobalSoilMap requirements, and an efficient alternative (MrK) for large-extent digital soil mapping.
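
    The trend-plus-residual-kriging structure of DM can be sketched as follows, assuming the pykrige package for the ordinary-kriging step and substituting a simple linear trend for the data-mining model; all data are synthetic.

    ```python
    import numpy as np
    from pykrige.ok import OrdinaryKriging

    rng = np.random.default_rng(5)

    # Hypothetical point observations: coordinates, a covariate (e.g. slope),
    # and observed total soil depth (cm).
    x, y = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
    slope = rng.uniform(0, 30, 200)
    depth = 120.0 - 2.5 * slope + rng.normal(0, 8, 200)

    # Step 1: deterministic trend (a linear model here; DM used data mining).
    coef = np.polyfit(slope, depth, deg=1)
    residuals = depth - np.polyval(coef, slope)

    # Step 2: ordinary kriging of the trend residuals.
    ok = OrdinaryKriging(x, y, residuals, variogram_model="spherical")
    gridx = gridy = np.linspace(0, 100, 25)
    res_grid, var_grid = ok.execute("grid", gridx, gridy)

    # Prediction = trend (from a covariate grid) + kriged residuals; the kriging
    # variance provides a spatially explicit uncertainty measure, as with MrK.
    print(res_grid.shape, float(var_grid.mean()))
    ```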

  7. Externalising Students' Mental Models through Concept Maps

    ERIC Educational Resources Information Center

    Chang, Shu-Nu

    2007-01-01

    The purpose of this study is to use concept maps as an "expressed model" to investigate students' mental models regarding the homeostasis of blood sugar. The difficulties in learning the concept of homeostasis and in probing mental models have been revealed in many studies. Homeostasis of blood sugar is one of the themes in junior high…

  8. Map as a Service: A Framework for Visualising and Maximising Information Return from Multi-Modal Wireless Sensor Networks

    PubMed Central

    Hammoudeh, Mohammad; Newman, Robert; Dennett, Christopher; Mount, Sarah; Aldabbas, Omar

    2015-01-01

    This paper presents a distributed information extraction and visualisation service, called the mapping service, for maximising information return from large-scale wireless sensor networks. Such a service greatly simplifies the production of higher-level, information-rich representations suitable for informing other network services and for delivering field information visualisations. The mapping service utilises a blend of inductive and deductive models to map sense data accurately using externally available knowledge. It exploits the special characteristics of the application domain to render visualisations in a map format that precisely reflect the concrete reality. The service can visualise an arbitrary number of sense modalities and can combine multiple independent types of sense data, overcoming the limitations of generating visualisations from a single sense modality. Furthermore, the mapping service responds dynamically to changes in the environmental conditions that may affect visualisation performance, continuously updating the application domain model in a distributed manner. Finally, a distributed self-adaptation function is proposed with the goal of saving more power and generating more accurate data visualisations. We conduct comprehensive experiments to evaluate the performance of our mapping service and show that it achieves low communication overhead, produces maps of high fidelity, and further minimises the mapping predictive error dynamically by integrating the application domain model into the mapping service. PMID:26378539

  9. Coupling high-resolution hydraulic and hydrologic models for flash flood forecasting and inundation mapping in urban areas - A case study for the City of Fort Worth

    NASA Astrophysics Data System (ADS)

    Nazari, B.; Seo, D.; Cannon, A.

    2013-12-01

    With many diverse features such as channels, pipes, culverts, buildings, etc., hydraulic modeling in urban areas for inundation mapping poses significant challenges. Identifying the practical extent of the details to be modeled in order to obtain sufficiently accurate results in a timely manner for effective emergency management is one of them. In this study we assess the tradeoffs between model complexity vs. information content for decision making in applying high-resolution hydrologic and hydraulic models for real-time flash flood forecasting and inundation mapping in urban areas. In a large urban area such as the Dallas-Fort Worth Metroplex (DFW), there exists very large spatial variability in imperviousness depending on the area of interest. As such, one may expect significant sensitivity of hydraulic model results to the resolution and accuracy of hydrologic models. In this work, we present the initial results from coupling of high-resolution hydrologic and hydraulic models for two 'hot spots' within the City of Fort Worth for real-time inundation mapping.

  10. How well can we test probabilistic seismic hazard maps?

    NASA Astrophysics Data System (ADS)

    Vanneste, Kris; Stein, Seth; Camelbeeck, Thierry; Vleminckx, Bart

    2017-04-01

    Recent large earthquakes that gave rise to shaking much stronger than shown in probabilistic seismic hazard (PSH) maps have stimulated discussion about how well these maps forecast future shaking. These discussions have brought home the fact that although the maps are designed to achieve certain goals, we know little about how well they actually perform. As for any other forecast, this question involves verification and validation. Verification involves assessing how well the algorithm used to produce hazard maps implements the conceptual PSH model ("have we built the model right?"). Validation asks how well the model forecasts the shaking that actually occurs ("have we built the right model?"). We explore the verification issue by simulating shaking histories for an area with an assumed uniform distribution of earthquakes, Gutenberg-Richter magnitude-frequency relation, Poisson temporal occurrence model, and ground-motion prediction equation (GMPE). We compare the maximum simulated shaking at many sites over time with that predicted by a hazard map generated for the same set of parameters. The Poisson model predicts that the fraction of sites at which shaking will exceed that of the hazard map is p = 1 - exp(-t/T), where t is the duration of observations and T is the map's return period. Exceedance is typically associated with infrequent large earthquakes, as observed in real cases. The ensemble of simulated earthquake histories yields distributions of fractional exceedance with mean equal to the predicted value. Hence, the PSH algorithm appears to be internally consistent and can be regarded as verified for this set of simulations. However, the simulated fractional exceedances show a large scatter about the mean value; this scatter decreases with increasing t/T (i.e., with increasing observation time) and with increasing Gutenberg-Richter a-value (combining intrinsic activity rate and surface area), but is independent of GMPE uncertainty. The scatter is due to the variability of earthquake recurrence, and so decreases as the largest earthquakes occur in more simulations. Our results are important for evaluating the performance of a hazard map based on misfits in fractional exceedance, and for assessing whether such misfit arises by chance or reflects a bias in the map. More specifically, for a broad range of Gutenberg-Richter a-values we determined theoretical confidence intervals on the allowed misfits in fractional exceedance and on the percentage of hazard-map bias that can thus be detected by comparison with observed shaking histories. Given that in the real world we only have one shaking history for an area, these results indicate that even if a hazard map does not fit the observations, it is very difficult to assess its veracity, especially for low-to-moderate-seismicity regions. Because our model is a simplified version of reality, any additional uncertainty or complexity will tend to widen these confidence intervals.
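
    The verification logic described here is easy to reproduce in miniature: under the Poisson occurrence model, the fraction of sites whose maximum shaking exceeds the map value after time t should scatter around p = 1 - exp(-t/T). A short numpy sketch with illustrative parameter values follows (note that it treats sites as independent, whereas in the paper's simulations nearby sites share earthquakes, which is what produces the larger scatter they report):

```python
# Monte Carlo check of the Poisson exceedance fraction p = 1 - exp(-t/T).
import numpy as np

rng = np.random.default_rng(42)
T = 475.0          # hazard-map return period in years (illustrative)
t = 50.0           # duration of observations in years (illustrative)
n_sites = 1000     # sites per simulated shaking history
n_histories = 500  # number of simulated histories

# At each site, exceedances of the map level arrive as a Poisson process with
# rate 1/T; a site "exceeds" if at least one event occurs within t years.
counts = rng.poisson(lam=t / T, size=(n_histories, n_sites))
fraction = (counts > 0).mean(axis=1)       # fractional exceedance per history

p_theory = 1.0 - np.exp(-t / T)
print(f"theory: {p_theory:.4f}")
print(f"simulated mean: {fraction.mean():.4f}  scatter (std): {fraction.std():.4f}")
```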

  11. Covariance and correlation estimation in electron-density maps.

    PubMed

    Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna

    2012-03-01

    Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, no matter the correlation between the model and target structures. The aim is as follows: to verify if the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty on measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.

  12. Evaluation of linear discriminant analysis for automated Raman histological mapping of esophageal high-grade dysplasia

    NASA Astrophysics Data System (ADS)

    Hutchings, Joanne; Kendall, Catherine; Shepherd, Neil; Barr, Hugh; Stone, Nicholas

    2010-11-01

    Rapid Raman mapping has the potential to be used for automated histopathology diagnosis, providing an adjunct technique to histology diagnosis. The aim of this work is to evaluate the feasibility of automated and objective pathology classification of Raman maps using linear discriminant analysis. Raman maps of esophageal tissue sections are acquired. Principal component (PC)-fed linear discriminant analysis (LDA) is carried out using subsets of the Raman map data (6483 spectra). An overall (validated) training classification model performance of 97.7% (sensitivity 95.0 to 100% and specificity 98.6 to 100%) is obtained. The remainder of the map spectra (131,672 spectra) are projected onto the classification model, resulting in Raman images that demonstrate good correlation with contiguous hematoxylin and eosin (HE) sections. Initial results suggest that LDA has the potential to automate pathology diagnosis of esophageal Raman images, but since the classification of test spectra is forced into existing training groups, further work is required to optimize the training model. A small pixel size is advantageous for developing the training datasets from mapping data, despite lengthy mapping times, because of the additional morphological information gained, and could facilitate differentiation of further tissue groups, such as the basal cells/lamina propria, in the future. However, larger pixel sizes (and faster mapping) may be more feasible for clinical application.
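
    The PC-fed LDA classifier described here corresponds to a standard dimensionality-reduction-plus-discriminant pipeline. A minimal sketch using scikit-learn is shown below; the "spectra" and labels are simulated placeholders rather than Raman data, and the component count is an assumption.

```python
# PCA-fed linear discriminant analysis for spectral classification (sketch).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Placeholder "spectra": 600 samples x 1024 wavenumber bins, 3 tissue classes
X = rng.normal(size=(600, 1024))
y = rng.integers(0, 3, size=600)

model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

# Remaining map pixels are "projected onto the model" by simply predicting:
model.fit(X, y)
labels_for_image = model.predict(rng.normal(size=(100, 1024)))
```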

  13. Validation of mesoscale models

    NASA Technical Reports Server (NTRS)

    Kuo, Bill; Warner, Tom; Benjamin, Stan; Koch, Steve; Staniforth, Andrew

    1993-01-01

    The topics discussed include the following: verification of cloud prediction from the PSU/NCAR mesoscale model; results from MAPS/NGM verification comparisons and MAPS observation sensitivity tests to ACARS and profiler data; systematic errors and mesoscale verification for a mesoscale model; and the COMPARE Project and the CME.

  14. Multitask saliency detection model for synthetic aperture radar (SAR) image and its application in SAR and optical image fusion

    NASA Astrophysics Data System (ADS)

    Liu, Chunhui; Zhang, Duona; Zhao, Xintao

    2018-03-01

    Saliency detection in synthetic aperture radar (SAR) images is a difficult problem. This paper proposes a multitask saliency detection (MSD) model for the saliency detection task in SAR images. We extract four features of the SAR image, namely the intensity, orientation, uniqueness, and global contrast, as the input of the MSD model. The saliency map is generated by multitask sparsity pursuit, which integrates the multiple features collaboratively. Detection of features at different scales is also taken into consideration. Subjective and objective evaluation of the MSD model verifies its effectiveness. Based on the saliency maps obtained by the MSD model, we apply the saliency map of the SAR image to SAR and color optical image fusion. The experimental results on real data show that the saliency map obtained by the MSD model helps to improve the fusion effect, and the salient areas in the SAR image can be highlighted in the fusion results.

  15. Empirical Model of Precipitating Ion Oval

    NASA Astrophysics Data System (ADS)

    Goldstein, Jerry

    2017-10-01

    In this brief technical report, published maps of ion integral flux are used to constrain an empirical model of the precipitating ion oval. The ion oval is modeled as a Gaussian function of ionospheric latitude that depends on local time and the Kp geomagnetic index. The three parameters defining this function are the centroid latitude, width, and amplitude. The local time dependences of these three parameters are approximated by Fourier series expansions whose coefficients are constrained by the published ion maps. The Kp dependence of each coefficient is modeled by a linear fit. Optimization of the number of terms in the expansion is achieved via minimization of the global standard deviation between the model and the published ion map at each Kp. The empirical model is valid near the peak flux of the auroral oval; inside its centroid region the model reproduces the published ion maps with standard deviations of less than 5% of the peak integral flux. On the subglobal scale, average local errors (measured as a fraction of the point-to-point integral flux) are below 30% in the centroid region. Outside its centroid region the model deviates significantly from the H89 integral flux maps. The model's performance is assessed by comparing it with both local and global data from a 17 April 2002 substorm event. The model can reproduce important features of the macroscale auroral region, but not its subglobal structure, nor conditions immediately following a substorm.
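
    The model structure described (a Gaussian in latitude whose centroid, width and amplitude vary with local time through Fourier series, each coefficient linear in Kp) can be written very compactly. The sketch below follows those stated assumptions; the coefficient values are made up for illustration and are not the published fit.

```python
# Gaussian precipitating-ion oval with Fourier-series parameters (sketch).
import numpy as np

def fourier_series(mlt_hours, coeffs):
    """Evaluate a truncated Fourier series in magnetic local time (24 h period)."""
    phase = 2.0 * np.pi * np.asarray(mlt_hours, dtype=float) / 24.0
    a0, terms = coeffs[0], coeffs[1:]
    result = np.full_like(phase, a0)
    for n, (a_n, b_n) in enumerate(terms, start=1):
        result += a_n * np.cos(n * phase) + b_n * np.sin(n * phase)
    return result

def ion_flux(lat_deg, mlt_hours, kp):
    # Each parameter's leading coefficient is linear in Kp; all numbers below
    # are placeholders, not the coefficients constrained by the published maps.
    centroid = fourier_series(mlt_hours, [66.0 - 1.5 * kp, (1.0, 0.5)])   # deg
    width    = fourier_series(mlt_hours, [3.0 + 0.3 * kp, (0.2, 0.1)])    # deg
    amp      = fourier_series(mlt_hours, [1.0 + 0.4 * kp, (0.3, -0.1)])
    return amp * np.exp(-0.5 * ((lat_deg - centroid) / width) ** 2)

print(ion_flux(lat_deg=65.0, mlt_hours=22.0, kp=3))
```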

  16. Low Resolution Refinement of Atomic Models Against Crystallographic Data.

    PubMed

    Nicholls, Robert A; Kovalevskiy, Oleg; Murshudov, Garib N

    2017-01-01

    This review describes some of the problems encountered during low-resolution refinement and map calculation. Refinement is considered as an application of Bayes' theorem, allowing combination of information from various sources including crystallographic experimental data and prior chemical and structural knowledge. The sources of prior knowledge relevant to macromolecules include basic chemical information such as bonds and angles, structural information from reference models of known homologs, knowledge about secondary structures, hydrogen bonding patterns, and similarity of non-crystallographically related copies of a molecule. Additionally, prior information encapsulating local conformational conservation is exploited, keeping local interatomic distances similar to those in the starting atomic model. The importance of designing an accurate likelihood function, the only link between model parameters and observed data, is emphasized. The review also reemphasizes the importance of phases, and describes how the use of raw observed amplitudes could give a better correlation between the calculated and "true" maps. It is shown that very noisy or absent observations can be replaced by calculated structure factors, weighted according to the accuracy of the atomic model. This approach helps to smooth the map. However, such replacement should be used sparingly, as the bias toward errors in the model may become too strong to avoid. It is in general recommended that, whenever a new map is calculated, map quality should be judged by inspection of the parts of the map where there is no atomic model. It is also noted that it is advisable to work with multiple blurred and sharpened maps, as different parts of a crystal may exhibit different degrees of mobility. Doing so can allow accurate building of atomic models, accounting for overall shape as well as finer structural details. Some of the results described in this review have been implemented in the programs REFMAC5, ProSMART and LORESTR, which are available as part of the CCP4 software suite.

  17. Development of Maps of Simple and Complex Cells in the Primary Visual Cortex

    PubMed Central

    Antolík, Ján; Bednar, James A.

    2011-01-01

    Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group together neurons selective to similar spatial phases, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because the mechanisms responsible for map development drive receptive fields (RFs) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layer 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching, orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first to explain how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity. PMID:21559067
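
    The developmental mechanism invoked here is a Hebbian rule with weight normalization: connections strengthen in proportion to correlated pre- and postsynaptic activity, and normalization keeps the weights bounded. A toy numpy sketch of one such update loop is below; the divisive normalization and all sizes are assumptions for illustration, not the paper's full two-sheet network.

```python
# Toy Hebbian weight update with divisive normalization (sketch).
import numpy as np

rng = np.random.default_rng(7)

n_inputs, n_units = 100, 10
W = rng.random((n_units, n_inputs))        # initially random weights
W /= W.sum(axis=1, keepdims=True)          # normalize each unit's weight vector

eta = 0.05                                 # learning rate
for _ in range(1000):
    x = rng.random(n_inputs)               # presynaptic activity pattern
    y = W @ x                              # postsynaptic responses
    W += eta * np.outer(y, x)              # Hebbian: dW proportional to post*pre
    W /= W.sum(axis=1, keepdims=True)      # renormalize to prevent runaway growth
```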

  18. Unsupervised Domain Adaptation with Multiple Acoustic Models

    DTIC Science & Technology

    2010-12-01

    Discriminative MAP Adaptation: standard ML-MAP has been extended to incorporate discriminative training criteria such as MMI and MPE [10]. Discriminative MAP ... smoothing variable $\tau^{I}$. For example, the MMI-MAP mean is given by $\hat{\mu}_{jm}^{(\mathrm{mmi\text{-}map})} = \frac{\{\theta_{jm}^{\mathrm{num}}(O) - \theta_{jm}^{\mathrm{den}}(O)\} + D_{jm}\hat{\mu}_{jm} + \tau^{I}\mu_{jm}^{(\mathrm{ml\text{-}map})}}{\{\gamma_{jm}^{\mathrm{num}} - \gamma_{jm}^{\mathrm{den}}\} + D_{jm} + \tau^{I}}$ ... MMI training, and $D_{jm}$ is the Gaussian-dependent parameter for the extended Baum-Welch (EBW) algorithm. MMI-MAP has been successfully applied in ...

  19. Mining Concept Maps to Understand University Students' Learning

    ERIC Educational Resources Information Center

    Yoo, Jin Soung; Cho, Moon-Heum

    2012-01-01

    Concept maps, visual representations of knowledge, are used in an educational context as a way to represent students' knowledge and identify the mental models of students; however, the use of concept mapping is limited by the difficulty of evaluating the resulting maps. A concept map has a complex structure which is composed of concepts and…

  20. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components, into which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases; however, while precise aerial position control is sufficient for the block configuration, precise position and attitude control is required for corridor mapping.

  1. Computational methods for constructing protein structure models from 3D electron microscopy maps.

    PubMed

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2013-10-01

    Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided.

  2. 18F-FLT uptake kinetics in head and neck squamous cell carcinoma: a PET imaging study.

    PubMed

    Liu, Dan; Chalkidou, Anastasia; Landau, David B; Marsden, Paul K; Fenwick, John D

    2014-04-01

    To analyze the kinetics of 3'-deoxy-3'-[F-18]-fluorothymidine (18F-FLT) uptake by head and neck squamous cell carcinomas and involved nodes imaged using positron emission tomography (PET). Two- and three-tissue compartment models were fitted to 12 tumor time-activity curves (TACs) obtained for 6 structures (tumors or involved nodes) imaged in ten dynamic PET studies of 1 h duration, carried out for five patients. The ability of the models to describe the data was assessed using a runs test, the Akaike information criterion (AIC) and leave-one-out cross-validation. To generate parametric maps the models were also fitted to TACs of individual voxels. Correlations between maps of different parameters were characterized using Pearson's r coefficient; in particular the phosphorylation rate-constants k3-2tiss and k5 of the two- and three-tissue models were studied alongside the flux parameters KFLT-2tiss and KFLT of these models, and standardized uptake values (SUV). A methodology based on expectation-maximization clustering and the Bayesian information criterion ("EM-BIC clustering") was used to distil the information from noisy parametric images. Fits of two-tissue models 2C3K and 2C4K and three-tissue models 3C5K and 3C6K, comprising three, four, five, and six rate-constants, respectively, pass the runs test for 4, 8, 10, and 11 of 12 tumor TACs. The three-tissue models have lower AIC and cross-validation scores for nine of the 12 tumors. Overall the 3C6K model has the lowest AIC and cross-validation scores and its fitted parameter values are of the same orders of magnitude as literature estimates. Maps of KFLT and KFLT-2tiss are strongly correlated (r = 0.85) and also correlate closely with SUV maps (r = 0.72 for KFLT-2tiss, 0.64 for KFLT). Phosphorylation rate-constant maps are moderately correlated with flux maps (r = 0.48 for k3-2tiss vs KFLT-2tiss and r = 0.68 for k5 vs KFLT); however, neither phosphorylation rate-constant correlates significantly with SUV. EM-BIC clustering reduces the parametric maps to a small number of levels: on average 5.8, 3.5, 3.4, and 1.4 for KFLT-2tiss, KFLT, k3-2tiss, and k5. This large simplification is potentially useful for radiotherapy dose-painting, but demonstrates the high noise in some maps. Statistical simulations show that voxel-level noise degrades TACs generated from the 3C6K model sufficiently that the average AIC score, parameter bias, and total uncertainty of 2C4K model fits are similar to those of 3C6K fits, whereas at the whole-tumor level the scores are lower for 3C6K fits. For the patients studied here, whole-tumor FLT uptake time-courses are represented better overall by a three-tissue than by a two-tissue model. EM-BIC clustering simplifies noisy parametric maps, providing the best description of the underlying information they contain, and is potentially useful for radiotherapy dose-painting. However, the clustering highlights the large degree of noise present in maps of the phosphorylation rate-constants k5 and k3-2tiss, which are conceptually tightly linked to cellular proliferation. Methods must be found to make these maps more robust, either by constraining other model parameters or modifying dynamic imaging protocols.
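
    For readers unfamiliar with the nomenclature, the irreversible two-tissue model with three rate constants (2C3K: K1, k2, k3) predicts a tissue curve equal to the plasma input convolved with the model's impulse response, and the flux parameter is KFLT = K1*k3/(k2+k3). The sketch below fits such a model with scipy; the input function, noise level and starting values are synthetic placeholders, not the study's data.

```python
# Irreversible two-tissue compartment model (2C3K) fitted to a TAC (sketch).
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 60, 121)                   # minutes
dt = t[1] - t[0]
cp = 10.0 * t * np.exp(-t / 2.0)              # synthetic plasma input function

def tac_2c3k(t, K1, k2, k3):
    """Tissue curve = plasma input convolved with the 2C3K impulse response."""
    beta = k2 + k3
    h = (K1 / beta) * (k3 + k2 * np.exp(-beta * t))
    return np.convolve(cp, h)[: len(t)] * dt  # discrete convolution

# Synthetic "measured" TAC with noise, then a bounded least-squares fit
true = tac_2c3k(t, K1=0.1, k2=0.2, k3=0.05)
noisy = true + np.random.default_rng(3).normal(0, 0.02, len(t))
(K1, k2, k3), _ = curve_fit(tac_2c3k, t, noisy, p0=[0.05, 0.1, 0.01],
                            bounds=(0, [1.0, 1.0, 0.5]))
print(f"K1={K1:.3f} k2={k2:.3f} k3={k3:.3f} KFLT={K1 * k3 / (k2 + k3):.4f}")
```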

  3. T2* Mapping Provides Information That Is Statistically Comparable to an Arthroscopic Evaluation of Acetabular Cartilage.

    PubMed

    Morgan, Patrick; Nissi, Mikko J; Hughes, John; Mortazavi, Shabnam; Ellerman, Jutta

    2017-07-01

    Objectives: The purpose of this study was to validate T2* mapping as an objective, noninvasive method for the prediction of acetabular cartilage damage. Methods: This is the second step in the validation of T2*. In a previous study, we established a quantitative predictive model for identifying and grading acetabular cartilage damage. In this study, the model was applied to a second cohort of 27 consecutive hips to validate the model. A clinical 3.0-T imaging protocol with T2* mapping was used. Acetabular regions of interest (ROIs) were identified on magnetic resonance and graded using the previously established model. Each ROI was then graded in a blinded fashion by arthroscopy. Accurate surgical location of ROIs was facilitated with a 2-dimensional map projection of the acetabulum. A total of 459 ROIs were studied. Results: When T2* mapping and arthroscopic assessment were compared, 82% of ROIs were within 1 Beck group (of 6 possible) and 32% of ROIs were classified identically. Disease prediction based on receiver operating characteristic curve analysis demonstrated a sensitivity of 0.713 and a specificity of 0.804. Model stability evaluation required no significant changes to the predictive model produced in the initial study. Conclusions: These results validate that T2* mapping provides statistically comparable information regarding acetabular cartilage when compared to arthroscopy. In contrast to arthroscopy, T2* mapping is quantitative, noninvasive, and can be used in follow-up. Unlike research quantitative magnetic resonance protocols, T2* takes little time and does not require a contrast agent. This may facilitate its use in the clinical sphere.

  4. The importance of explicitly mapping instructional analogies in science education

    NASA Astrophysics Data System (ADS)

    Asay, Loretta Johnson

    Analogies are ubiquitous during instruction in science classrooms, yet research about the effectiveness of using analogies has produced mixed results. An aspect seldom studied is a model of instruction when using analogies. The few existing models for instruction with analogies have not often been examined quantitatively. The Teaching With Analogies (TWA) model (Glynn, 1991) is one of the models frequently cited in the variety of research about analogies. The TWA model outlines steps for instruction, including the step of explicitly mapping the features of the source to the target. An experimental study was conducted to examine the effects of explicitly mapping the features of the source and target in an analogy during computer-based instruction about electrical circuits. Explicit mapping was compared to no mapping and to a control with no analogy. Participants were ninth- and tenth-grade biology students who were each randomly assigned to one of three conditions (no analogy module, analogy module, or explicitly mapped analogy module) for computer-based instruction. Subjects took a pre-test before the instruction, which was used to assign them to a level of previous knowledge about electrical circuits for analysis of any differential effects. After the instruction modules, students took a post-test about electrical circuits. Two weeks later, they took a delayed post-test. No advantage was found for explicitly mapping the analogy. Learning patterns were the same, regardless of the type of instruction. Those who knew the least about electrical circuits, based on the pre-test, made the most gains. After the two-week delay, this group maintained the largest amount of their gain. Implications exist for science education classrooms, as analogy use should be based on research about effective practices. Further studies are suggested to foster the building of research-based models for classroom instruction with analogies.

  5. Implementation of landslide susceptibility maps in Lower Austria as part of risk governance

    NASA Astrophysics Data System (ADS)

    Bell, Rainer; Petschko, Helene; Bauer, Christian; Glade, Thomas; Granica, Klaus; Heiss, Gerhard; Leopold, Philip; Pomaroli, Gilbert; Proske, Herwig; Schweigl, Joachim

    2013-04-01

    Landslides frequently cause damage to agricultural land and infrastructure in Lower Austria, a province of Austria; settlements and people are also threatened. To reduce landslide risks and to prevent the establishment of new settlements in highly landslide-prone areas, the project "MoNOE" (Method development for landslide susceptibility modeling in Lower Austria) was set up by the provincial government. The main aim of the project is the development of methods to model rock fall and slide susceptibility for an area of approx. 15,900 km2 and to implement the resulting susceptibility maps into the spatial planning strategies of the state. Right from the beginning of the project, a close cooperation between the involved scientists and the stakeholders from the Geological Survey of Lower Austria and the Department of Spatial Planning and Regional Policy of Lower Austria was established to ensure that method development and the final susceptibility maps met exactly the needs and demands of the stakeholders. Meeting these needs posed huge challenges, given the large study area and its complex, heterogeneous geological situation. Further limitations arose from restricted data availability (e.g. for geology or landslide inventories) in such a large study area. Rock fall susceptibility was modeled by a combined approach of determining rock fall release areas by empirical slope thresholds (dependent on geology), followed by empirical run-out modeling. Slide susceptibility was modeled based on the statistical approaches of weights of evidence (WofE) and generalized additive models (GAM) by two different research groups. Considerable effort was spent on the validation of all susceptibility models. In a later stage of the project we found that the best scientific maps are not necessarily the best maps to be implemented in spatial planning strategies. Thus, in close cooperation with the stakeholders, decisions had to be taken on the best resolution of the maps, the number of susceptibility classes, their colours and names, as well as the instructions for action attached to each susceptibility class. All susceptibility maps showed very good validation results. Both the WofE and the GAM slide susceptibility maps showed high median AUROC values of 0.9, and the geomorphological plausibility proved to be very good in both cases. Given these results, it was concluded that the stakeholders should decide which of the two slide susceptibility maps should be used. The decision was made as a blind test: the maps and their respective performance measures were provided, but colour-coded so that the stakeholders did not know which maps were produced by whom and with which method. This presentation thus focuses on a detailed description of all these aspects and discusses how this participative approach led to a high acceptance of the final landslide susceptibility maps by the stakeholders. Consequently, these maps will soon be implemented in the spatial planning strategies.

  6. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping.

    PubMed

    Baeßler, Bettina; Schaarschmidt, Frank; Treutlein, Melanie; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido; Maintz, David; Bunck, Alexander C

    2017-12-01

    To re-evaluate a recently suggested approach of quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2-values, segmental pixel standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD-parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80% and showed a similar diagnostic performance compared to LLC in receiver-operating-characteristic analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit an additional diagnostic value over LGE, with the inherent potential to overcome the current limitations of T2-mapping. • A novel quantitative approach to myocardial oedema imaging in myocarditis was re-evaluated. • The T2-mapping-derived parameters maxT2 and madSD were compared to traditional Lake Louise criteria. • Using maxT2 and madSD with dedicated cut-offs performs similarly to Lake Louise criteria. • Adding maxT2 and madSD to LGE results in further increased diagnostic performance. • This novel approach has the potential to overcome the limitations of T2-mapping.
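
    The model comparison reported here amounts to logistic regression on the mapping-derived parameters, plus simple rule-based cut-offs. A sketch of both with scikit-learn is shown below; the feature distributions are simulated placeholders, and only the cut-off values (1.8 ms for madSD, 68 ms for maxT2) come from the abstract.

```python
# Logistic-regression diagnosis from T2-mapping parameters (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)

# Simulated patients (label 1) and controls (label 0); columns: madSD, maxT2 (ms)
n_pat, n_ctl = 99, 30
X = np.vstack([
    np.column_stack([rng.normal(2.2, 0.5, n_pat), rng.normal(70, 4, n_pat)]),
    np.column_stack([rng.normal(1.5, 0.4, n_ctl), rng.normal(65, 3, n_ctl)]),
])
y = np.r_[np.ones(n_pat), np.zeros(n_ctl)]

model = LogisticRegression().fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))

# Rule-based variant with the combined cut-offs quoted in the abstract:
rule = (X[:, 0] > 1.8) & (X[:, 1] > 68.0)
print("sensitivity:", rule[y == 1].mean(), "specificity:", 1 - rule[y == 0].mean())
```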

  7. Application of a GIS-/remote sensing-based approach for predicting groundwater potential zones using a multi-criteria data mining methodology.

    PubMed

    Mogaji, Kehinde Anthony; Lim, Hwee San

    2017-07-01

    This study integrates the application of Dempster-Shafer-driven evidential belief function (DS-EBF) methodology with remote sensing and geographic information system techniques to analyze surface and subsurface data sets for the spatial prediction of groundwater potential in Perak Province, Malaysia. The study used additional data obtained from the records of the groundwater yield rate of approximately 28 bore well locations. The processed surface and subsurface data produced sets of groundwater potential conditioning factors (GPCFs), from which multiple surface hydrologic and subsurface hydrogeologic parameter thematic maps were generated. The bore well location inventories were partitioned randomly in a ratio of 70% (19 wells) for model training to 30% (9 wells) for model testing. Application of the DS-EBF relationship model algorithms to the surface- and subsurface-based GPCF thematic maps and the bore well locations produced two groundwater potential prediction (GPP) maps, based on surface hydrologic and subsurface hydrogeologic characteristics respectively, which established that more than 60% of the study area falls within the moderate-high groundwater potential zones and less than 35% within the low potential zones. Uncertainty values for the predicted potential zones, estimated in the range of 0 to 17%, were quantified using the uncertainty algorithm of the model. The validation results of the GPP maps using the relative operating characteristic curve method yielded 80 and 68% success rates and 89 and 53% prediction rates for the subsurface hydrogeologic factor (SUHF)- and surface hydrologic factor (SHF)-based GPP maps, respectively. The study results revealed that the SUHF-based GPP map delineated groundwater potential zones more accurately than the SHF-based GPP map. However, the low degree of uncertainty of the predicted potential zones established the suitability of both GPP maps for future development of groundwater resources in the area. The overall results proved the efficacy of the data mining model and of geospatial technology in groundwater potential mapping.
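
    At the core of any evidential belief function method is Dempster's rule of combination, which fuses mass assignments from independent evidence layers and keeps an explicit mass on the full frame to represent uncertainty. A generic sketch follows; the two evidence layers and their mass values are illustrative, not the study's conditioning factors.

```python
# Dempster's rule of combination for two evidence layers (sketch).
from itertools import product

def combine(m1, m2):
    """Combine two mass functions keyed by frozenset hypotheses."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                  # mass on contradictory pairs
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

P, N = frozenset({"potential"}), frozenset({"non-potential"})
FRAME = P | N                                    # mass here = uncertainty

# Illustrative masses from two conditioning-factor layers at one pixel:
layer1 = {P: 0.6, N: 0.1, FRAME: 0.3}
layer2 = {P: 0.5, N: 0.2, FRAME: 0.3}

fused = combine(layer1, layer2)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```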

  8. How Accurately Can We Map SEP Observations Using L*?

    NASA Astrophysics Data System (ADS)

    Young, S. L.; Kress, B. T.

    2016-12-01

    In a dipole field, the cutoff rigidities at a given location are inversely proportional to L^2. Smart and Shea (1967) showed that this is approximately true at low altitudes when the McIlwain L parameter (Lm) is computed in realistic magnetospheric models, and provided heuristic evidence that it also holds at high altitudes. Later models developed by Smart and Shea and others (Ogliore et al., 2001; Neal et al., 2013; Selesnick et al., 2015) also use this relationship at low altitudes. Only the Smart and Shea model (Smart and Shea, 2006) uses this relationship to extrapolate to high altitudes, introducing a correction that yields a 1 MeV proton vertical cutoff at geosynchronous orbit. Recent work mapped POES observations to the Van Allen Probes locations as a function of L* (Young et al., 2015). The comparison between mapped and observed fluxes was reasonably good, but this mapping was along L* and only attempted to account for differences in shielding between high and low latitude. No attempt was made to map across L*, so the inverse-square relationship was not tested. These previous results suggest that L* may be useful for mapping flux observations between satellites at high altitudes. In this study we calculate cutoffs and L* shells in a Tsyganenko 2005 + IGRF magnetic field model to examine how accurately L*-based mapping can be used in different regions of the magnetosphere.
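
    The dipole scaling invoked above gives a simple mapping rule: if the cutoff rigidity varies as 1/L^2, then Rc*L^2 is constant along the mapping, so a cutoff observed at one L-shell translates directly to another. A two-line sketch (the numbers are illustrative):

```python
# Map a cutoff rigidity between L-shells under the dipole 1/L^2 scaling (sketch).
def map_cutoff(rc_at_l1, l1, l2):
    """Rc * L^2 = const  =>  Rc(L2) = Rc(L1) * (L1 / L2)**2."""
    return rc_at_l1 * (l1 / l2) ** 2

# Example: a 2.0 GV cutoff at L = 4 maps to roughly 0.73 GV at L = 6.6
print(map_cutoff(2.0, 4.0, 6.6))
```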

  9. Animated axial surface mapping: The multimedia companion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hook, S.C.; Shaw, J.H.; Suppe, J.

    1995-09-01

    This newly expanded version of AAPG's first DataShare Disk brings to life the concepts and applications of a new method of structural trend analysis. Through the dynamic use of color, sound, animation, and humor, this multimedia companion to the May 1994 article on Axial Surface Mapping introduces the reader (or viewer) to the concepts of rigid-block translation, fault-bend folding, and axial surface mapping. Animated models of growing fault-bend folds allow the viewer to see in four dimensions. The axial surface map shows the horizontal plane; the folding lines show depth planes; and the animations show the structure and its two-dimensional map changing with time and increasing slip. The animations create theoretical map patterns under varying, but controlled conditions that can be compared to axial surface maps from real data. The model patterns are then used to interpret seismic data and axial surface maps from a producing gas field in offshore California and from an exploration play in Pennsylvania.

  10. A self-trained classification technique for producing 30 m percent-water maps from Landsat data

    USGS Publications Warehouse

    Rover, Jennifer R.; Wylie, Bruce K.; Ji, Lei

    2010-01-01

    Small bodies of water can be mapped with moderate-resolution satellite data using methods where water is mapped as subpixel fractions using field measurements or high-resolution images as training datasets. A new method, developed from a regression-tree technique, uses a 30 m Landsat image for training the regression tree that, in turn, is applied to the same image to map subpixel water. The self-trained method was evaluated by comparing the percent-water map with three other maps generated from established percent-water mapping methods: (1) a regression-tree model trained with a 5 m SPOT 5 image, (2) a regression-tree model based on endmembers and (3) a linear unmixing classification technique. The results suggest that subpixel water fractions can be accurately estimated when high-resolution satellite data or intensively interpreted training datasets are not available, which increases our ability to map small water bodies or small changes in lake size at a regional scale.
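
    The essence of the self-trained scheme is that the training fractions come from the image itself rather than from field data or a high-resolution companion image; a regression tree is then fitted to the band values and applied back to every pixel. A sketch with scikit-learn is below; the band values and training fractions are synthetic placeholders, and how the training fractions are derived in practice (e.g. from confidently pure water and land pixels) is an assumption here.

```python
# Self-trained regression tree for subpixel percent-water mapping (sketch).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(11)

# Synthetic 6-band "Landsat" training pixels and their water fractions (0..1)
bands_train = rng.normal(size=(500, 6))
frac_train = np.clip(0.5 + 0.4 * bands_train[:, 4] + rng.normal(0, 0.05, 500), 0, 1)

tree = DecisionTreeRegressor(max_depth=8).fit(bands_train, frac_train)

# Apply the tree back to the full image (here: 100x100 pixels, flattened)
image_pixels = rng.normal(size=(100 * 100, 6))
percent_water = tree.predict(image_pixels).reshape(100, 100) * 100.0
print(percent_water.shape, percent_water.min(), percent_water.max())
```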

  11. Flood-hazard mapping in Honduras in response to Hurricane Mitch

    USGS Publications Warehouse

    Mastin, M.C.

    2002-01-01

    The devastation in Honduras due to flooding from Hurricane Mitch in 1998 prompted the U.S. Agency for International Development, through the U.S. Geological Survey, to develop a country-wide systematic approach to flood-hazard mapping and a demonstration of the method at selected sites as part of a reconstruction effort. The design discharge chosen for flood-hazard mapping was the flood with an average return interval of 50 years; this selection was based on discussions with the U.S. Agency for International Development and the Honduran Public Works and Transportation Ministry. A regression equation for estimating the 50-year flood discharge, using drainage area and annual precipitation as the explanatory variables, was developed based on data from 34 long-term gaging sites. This equation, which has a standard error of prediction of 71.3 percent, was used in a geographic information system to estimate the 50-year flood discharge at any location on any river in the country. The flood-hazard mapping method was demonstrated at 15 selected municipalities. High-resolution digital elevation models of the floodplain were obtained using an airborne laser-terrain mapping system. Field verification showed that the digital elevation models had mean errors ranging from -0.57 to 0.14 meter in the vertical dimension. From these models, water-surface elevation cross sections were obtained and used in a numerical, one-dimensional, steady-flow step-backwater model to estimate water-surface profiles corresponding to the 50-year flood discharge. From these water-surface profiles, maps of area and depth of inundation were created at 13 of the 15 selected municipalities. At La Lima, only the area and depth of inundation at channel capacity in the city were mapped. At Santa Rosa de Aguan, no numerical model was created; the 50-year flood and the maps of area and depth of inundation are based on the estimated 50-year storm tide.
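
    Regional regression equations of the kind described are usually fitted as power laws, i.e. linearly in log space. The sketch below fits and applies such an equation with numpy; the gauge data are synthetic stand-ins (the study used 34 long-term gaging sites), and the exponents are illustrative.

```python
# Fit Q50 = a * Area^b * Precip^c by least squares in log space (sketch).
import numpy as np

rng = np.random.default_rng(13)
n = 34                                        # number of gauged sites
area = rng.uniform(50, 5000, n)               # drainage area, km^2
precip = rng.uniform(800, 3000, n)            # annual precipitation, mm
q50 = 0.5 * area**0.8 * precip**0.4 * rng.lognormal(0, 0.3, n)  # synthetic

# Linear model in logs: log Q = log a + b log A + c log P
X = np.column_stack([np.ones(n), np.log(area), np.log(precip)])
(log_a, b, c), *_ = np.linalg.lstsq(X, np.log(q50), rcond=None)

def q50_estimate(area_km2, precip_mm):
    """Apply the fitted regression at an ungauged location."""
    return np.exp(log_a) * area_km2**b * precip_mm**c

print(q50_estimate(1200.0, 1800.0))
```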

  12. Method for the visualization of landform by mapping using low altitude UAV application

    NASA Astrophysics Data System (ADS)

    Sharan Kumar, N.; Ashraf Mohamad Ismail, Mohd; Sukor, Nur Sabahiah Abdul; Cheang, William

    2018-05-01

    Unmanned Aerial Vehicles (UAVs) and digital photogrammetry are evolving rapidly as mapping technologies, and the significance of and need for digital landform mapping grow with each year. In this study, a mapping workflow is applied to obtain two different input data sets: the orthophoto and the DSM. Low Altitude Aerial Photography (LAAP) is captured using a low-altitude UAV (drone) with a fixed camera for imagery, while digital photogrammetric processing using PhotoScan is applied for cartographic information collection. Data processing through the photogrammetry and orthomosaic workflows is the main application. High image quality is essential for the effectiveness and quality of the usual mapping outputs such as the 3D model, Digital Elevation Model (DEM), Digital Surface Model (DSM) and orthoimages. The accuracy of the Ground Control Points (GCPs), the flight altitude and the resolution of the camera are essential for a good-quality DEM and orthophoto.

  13. Asbestos Model Accreditation Plan (MAP) Enforcement Response Policy

    EPA Pesticide Factsheets

    The Asbestos Model Accreditation Plan (MAP) (40 CFR 763 Subpart E Appendix C) mandates safety training for those who do asbestos removal work, and implements the additional training requirements mandated by Congress.

  14. Mapping the total electron content over Malaysia using Spherical Cap Harmonic Analysis

    NASA Astrophysics Data System (ADS)

    Bahari, S.; Abdullah, M.; Bouya, Z.; Musa, T. A.

    2017-12-01

    The ionosphere over Malaysia is unique because of the country's location close to the geomagnetic equator, in the equatorial region. In this region the magnetic field is oriented horizontally from south to north, and the resulting ExB drift in the meridional plane gives rise to equatorial ionospheric anomalies such as plasma bubbles and fountain effects. To date, no model has been developed over Malaysia to study the ionosphere. The main objective of this paper is therefore to develop a new technique for mapping the total electron content (TEC) from GPS measurements. Data from the myRTKnet network of GPS receivers over Malaysia were used in this study. A new methodology, based on modified spherical cap harmonic analysis (SCHA), was developed to estimate diurnal vertical TEC over the region using GPS observations. The SCHA model is based on a longitudinal expansion in Fourier series and fractional-degree Legendre co-latitudinal functions over a spherical cap-like region. TEC maps with a spatial resolution of 0.15° x 0.15° in latitude and longitude and a time resolution of 30 seconds are derived. TEC maps from the SCHA model were compared with the global ionospheric map and other regional models. Results show that during low solar activity the SCHA model produced better maps, with an accuracy of less than 1 TECU, compared to other regional models.

  15. A New Synthetic Global Biomass Carbon Map for the year 2010

    NASA Astrophysics Data System (ADS)

    Spawn, S.; Lark, T.; Gibbs, H.

    2017-12-01

    Satellite technologies have facilitated a recent boom in high resolution, large-scale biomass estimation and mapping. These data are the input into a wide range of global models and are becoming the gold standard for required national carbon (C) emissions reporting. Yet their geographical and/or thematic scope may exclude some or all parts of a given country or region. Most datasets tend to focus exclusively on forest biomass. Grasslands and shrublands generally store less C than forests but cover nearly twice as much global land area and may represent a significant portion of a given country's biomass C stock. To address these shortcomings, we set out to create synthetic, global above- and below-ground biomass maps that combine recently-released satellite based data of standing forest biomass with novel estimates for non-forest biomass stocks that are typically neglected. For forests we integrated existing publicly available regional, global and biome-specific biomass maps and modeled below ground biomass using empirical relationships described in the literature. For grasslands, we developed models for both above- and below-ground biomass based on NPP, mean annual temperature and precipitation to extrapolate field measurements across the globe. Shrubland biomass was extrapolated from existing regional biomass maps using environmental factors to generate the first global estimate of shrub biomass. Our new synthetic map of global biomass carbon circa 2010 represents an update to the IPCC Tier-1 Global Biomass Carbon Map for the Year 2000 (Ruesch and Gibbs, 2008) using the best data currently available. In the absence of a single seamless remotely sensed map of global biomass, our synthetic map provides the only globally-consistent source of comprehensive biomass C data and is valuable for land change analyses, carbon accounting, and emissions modeling.

  16. A new mapping function in table-mounted eye tracker

    NASA Astrophysics Data System (ADS)

    Tong, Qinqin; Hua, Xiao; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    The eye tracker is a new apparatus for human-computer interaction which has attracted much attention in recent years. Eye-tracking technology obtains the subject's current direction of visual attention (gaze) using mechanical, electronic, optical, image-processing and other means of detection. The mapping function is one of the key image-processing technologies, and it determines the accuracy of the whole eye-tracker system. In this paper, we present a new mapping model based on the relationship among the eyes, the camera and the screen that the eye gazes at. First, according to the geometrical relationship among the eyes, the camera and the screen, the framework of the mapping function between the pupil centre and the screen coordinates is constructed. Second, in order to simplify the vector inversion in the mapping function, the coordinates of the eyes, the camera and the screen were modelled in a coaxial coordinate system. A corresponding experiment was implemented to verify the mapping function, and it was also compared with the traditional quadratic polynomial function. The results show that our approach can improve the accuracy of determining the gaze point. Compared with other methods, this mapping function is simple and effective.
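
    The traditional baseline mentioned at the end, the quadratic polynomial mapping, fits screen coordinates as second-order polynomials of the pupil-centre coordinates using a set of calibration points. A sketch of that baseline with numpy least squares follows; the calibration data are synthetic placeholders.

```python
# Quadratic polynomial gaze mapping: pupil (px, py) -> screen (X, Y) (sketch).
import numpy as np

def design(px, py):
    """Second-order polynomial terms of the pupil-centre coordinates."""
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

rng = np.random.default_rng(17)
px, py = rng.uniform(-1, 1, 25), rng.uniform(-1, 1, 25)   # 25 calibration points
screen_x = 960 + 800 * px + 30 * px**2 + rng.normal(0, 2, 25)
screen_y = 540 + 450 * py + 20 * px * py + rng.normal(0, 2, 25)

A = design(px, py)
coef_x, *_ = np.linalg.lstsq(A, screen_x, rcond=None)
coef_y, *_ = np.linalg.lstsq(A, screen_y, rcond=None)

# Map a new pupil position to screen coordinates:
g = design(np.array([0.1]), np.array([-0.2]))
print(float(g @ coef_x), float(g @ coef_y))
```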

  17. Probabilistic drug connectivity mapping

    PubMed Central

    2014-01-01

    Background: The aim of connectivity mapping is to match drugs using drug-treatment gene expression profiles from multiple cell lines. This can be viewed as an information retrieval task, with the goal of finding the most relevant profiles for a given query drug. We infer the relevance for retrieval by data-driven probabilistic modeling of the drug responses, resulting in probabilistic connectivity mapping, and further consider the available cell lines as different data sources. We use a special type of probabilistic model to separate what is shared and specific between the sources, in contrast to earlier connectivity mapping methods that have intentionally aggregated all available data, neglecting information about the differences between the cell lines. Results: We show that the probabilistic multi-source connectivity mapping method is superior to alternatives in finding functionally and chemically similar drugs from the Connectivity Map data set. We also demonstrate that an extension of the method is capable of retrieving combinations of drugs that match different relevant parts of the query drug response profile. Conclusions: The probabilistic modeling-based connectivity mapping method provides a promising alternative to earlier methods. Principled integration of data from different cell lines helps to identify relevant responses for specific drug repositioning applications. PMID:24742351

  18. Utilizing soil polypedons to improve model performance for digital soil mapping

    USDA-ARS?s Scientific Manuscript database

    Most digital soil mapping approaches that use point data to develop relationships with covariate data intersect sample locations with one raster pixel regardless of pixel size. Resulting models are subject to spurious values in covariate data which may limit model performance. An alternative approac...

  19. Sampling intensity and normalizations: Exploring cost-driving factors in nationwide mapping of tree canopy cover

    Treesearch

    John Tipton; Gretchen Moisen; Paul Patterson; Thomas A. Jackson; John Coulston

    2012-01-01

    There are many factors that will determine the final cost of modeling and mapping tree canopy cover nationwide. For example, applying a normalization process to Landsat data used in the models is important in standardizing reflectance values among scenes and eliminating visual seams in the final map product. However, normalization at the national scale is expensive and...

  20. Color Reproduction System Based on Color Appearance Model and Gamut Mapping

    DTIC Science & Technology

    2000-07-01

    ... perception is usually different. Basically, the influencing factors are device calibration and characterization, viewing condition, device gamut and human ...

  1. Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data

    NASA Astrophysics Data System (ADS)

    Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej

    2016-04-01

    GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modelling. The TPS is the closed-form solution of a variational problem that minimizes both the sum of squared second derivatives of the smoothing function and the deviation between the data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolutions: 0.2x0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with the IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared to the preceding quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals of calibrated carrier-phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The post-fit residuals obtained with the UWM maps are an order of magnitude lower than those obtained with the IGS maps. The accuracy of the UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
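
    Thin plate splines of this kind can be prototyped with scipy's radial basis function interpolator, which provides a 'thin_plate' kernel; its smoothing parameter relaxes exact interpolation, mirroring the trade-off in the variational formulation above. A sketch on synthetic TEC samples (all values and the grid are illustrative, not the UWM-rt1 implementation):

```python
# Thin-plate-spline surface fit to scattered TEC samples (sketch).
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.default_rng(19)

# Synthetic TEC samples: longitude, latitude, TEC (TECU)
lon = rng.uniform(-10, 30, 300)
lat = rng.uniform(35, 60, 300)
tec = 10 + 0.2 * (lat - 35) + 2 * np.sin(np.radians(lon) * 3) + rng.normal(0, 0.3, 300)

# smooth > 0 trades fidelity to the data against surface smoothness
tps = Rbf(lon, lat, tec, function="thin_plate", smooth=1.0)

# Evaluate the spline on a 0.2 x 0.2 degree grid
glon, glat = np.meshgrid(np.arange(-10, 30, 0.2), np.arange(35, 60, 0.2))
tec_map = tps(glon, glat)
print(tec_map.shape)
```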

  2. The Energetic Neutral Atoms of the "Croissant" Heliosphere with Jets

    NASA Astrophysics Data System (ADS)

    Kornbleuth, M. Z.; Opher, M.; Michael, A.

    2017-12-01

    Opher et al. (2015) suggest the heliosphere may have two jets in the tail-ward direction, driven to the north and south. This new model, the "Croissant Heliosphere", is in contrast to the classically accepted view of a comet-like tail. We investigate the effect of the heliosphere-with-jets model on energetic neutral atom (ENA) maps. Regardless of the existence of a split tail, other models show heliosheath plasma confined by the toroidal magnetic field in a "slinky" structure, similar to astrophysical jets bent by the interstellar medium. Therefore, the confinement of the plasma should appear in the ENA maps. ENA maps from the Interstellar Boundary Explorer (IBEX) have recently shown two high-latitude lobes with excess ENA flux at higher energies in the tail of the heliosphere. These lobes could be a signature of the two-jet structure of the heliosphere, while some have argued they are caused by the fast/slow solar wind profile. Here we present the ENA maps of the "Croissant Heliosphere" using, initially, a uniform solar wind. We incorporate pick-up ions (PUIs) into our model based on the kinetic modeling of Malama et al. (2006). We include the extinction of PUIs in the heliosheath and describe a locally created PUI population resulting from this extinction process. Additionally, we include the angular dependence of the PUIs based on the work of Vasyliunas & Siscoe (1976). With our model, we find that, in the presence of a uniform solar wind, the "heliosphere with jets" model is able to qualitatively reproduce the lobe structure of the tail seen in IBEX measurements. Turbulence also manifests itself within the lobes of the simulated ENA maps, on timescales of the order of years. Finally, we will present ENA maps using a time-dependent model of the heliosphere with the inclusion of the solar cycle.

  3. Spatial land-use inventory, modeling, and projection/Denver metropolitan area, with inputs from existing maps, airphotos, and LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Tom, C.; Miller, L. D.; Christenson, J. W.

    1978-01-01

    A landscape model was constructed with 34 land-use, physiographic, socioeconomic, and transportation maps. A simple Markov land-use trend model was constructed from observed rates of change and nonchange from photointerpreted 1963 and 1970 airphotos. Seven multivariate land-use projection models predicting 1970 spatial land-use changes achieved accuracies from 42 to 57 percent. A final modeling strategy was designed which combines both the Markov trend and multivariate spatial projection processes. Landsat-1 image preprocessing included geometric rectification/resampling, spectral-band, and band/insolation ratioing operations. A new, systematic grid-sampled point training-set approach proved to be useful when tested on the four original MSS bands, ten image bands and ratios, and all 48 image and map variables (less land use). Ten-variable accuracy was raised by over 15 percentage points, from 38.4 to 53.9 percent, with the use of the 31 ancillary variables. A land-use classification map was produced with an optimal ten-channel subset of four image bands and six ancillary map variables. Point-by-point verification of 331,776 points against a 1972/1973 U.S. Geological Survey (USGS) land-use map prepared with airphotos and the same classification scheme showed average first-, second-, and third-order accuracies of 76.3, 58.4, and 33.0 percent, respectively.
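
    A Markov land-use trend model of the sort described projects future land-use proportions by repeatedly applying a transition matrix estimated from the change and nonchange rates observed between two dates. A toy sketch with three classes follows; the matrix and state values are illustrative, not the Denver estimates.

```python
# Markov land-use trend projection (sketch).
import numpy as np

# Transition probabilities between two observation dates (rows sum to 1);
# classes: urban, agriculture, open land (values illustrative)
P = np.array([
    [0.98, 0.01, 0.01],
    [0.05, 0.90, 0.05],
    [0.08, 0.02, 0.90],
])

state = np.array([0.30, 0.45, 0.25])   # land-use proportions at the base date

# Project forward four steps (each step = one inter-observation interval)
for _ in range(4):
    state = state @ P
print(state, state.sum())              # proportions still sum to 1
```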

  4. Lunar Mapping and Modeling Project

    NASA Technical Reports Server (NTRS)

    Noble, Sarah K.; French, Raymond; Nall,Mark; Muery, Kimberly

    2009-01-01

    The Lunar Mapping and Modeling Project (LMMP) has been created to manage the development of a suite of lunar mapping and modeling products that support the Constellation Program (CxP) and other lunar exploration activities, including the planning, design, development, test and operations associated with lunar sortie missions, crewed and robotic operations on the surface, and the establishment of a lunar outpost. The project draws on expertise from several NASA and non-NASA organizations (MSFC, ARC, GSFC, JPL, CRREL and USGS). LMMP will utilize data predominantly from the Lunar Reconnaissance Orbiter, but also historical and international lunar mission data (e.g. Apollo, Lunar Orbiter, Kaguya, Chandrayaan-1), as available and appropriate, to meet Constellation's data needs. LMMP will provide access to this data through a single, common, intuitive and easy-to-use NASA portal that transparently accesses appropriately sanctioned portions of the widely dispersed and distributed collections of lunar data, products and tools. LMMP will provide such products as DEMs, hazard assessment maps, lighting maps and models, gravity models, and resource maps. We are working closely with the LRO team to prevent duplication of efforts and ensure the highest quality data products. While Constellation is our primary customer, LMMP is striving to be as useful as possible to the lunar science community, the lunar education and public outreach (E/PO) community, and anyone else interested in accessing or utilizing lunar data.

  5. New Tsunami Inundation Maps for California

    NASA Astrophysics Data System (ADS)

    Barberopoulou, Aggeliki; Borrero, Jose; Uslu, Burak; Kanoglu, Utku; Synolakis, Costas

    2010-05-01

    California is the first US state to complete its tsunami inundation mapping. A new generation of tsunami inundation maps is now available for 17 coastal counties. The new maps offer improved coverage for many areas; they are based on the most recent descriptions of potential far-field and near-field tsunami sources and use the best available bathymetric and topographic data for modelling. The need for new tsunami maps for California became clear after Synolakis et al. (1998) described how inundation projections derived with models that fully calculate the wave evolution over dry land can be as much as twice the values predicted with earlier threshold models, for tsunamis originating from tectonic sources. After the 1998 Papua New Guinea tsunami, when the hazard from offshore submarine landslides became better understood (Bardet et al., 2003), the State of California funded the development of the first generation of maps, based on local tectonic and landslide sources. Most of the hazard was dominated by offshore landslides, whose return period remains unknown but is believed to be longer than 1,000 years for any given locale, at least in Southern California. The new generation of maps incorporates local and distant scenarios. The partnership between the Tsunami Research Center at USC, the California Emergency Management Agency, and the California Seismic Safety Commission led the State to be the first among all US states to complete the maps. (Exceptions include the offshore islands and Newport Beach, where higher resolution maps are under way.) The maps were produced with the lowest cost per mile of coastline, per resident, and per map of any state, because of the seamless integration of the USC and NOAA databases and the use of the MOST model. They are a significant improvement over earlier map generations. As part of continuous improvement in response, mitigation, planning, and community education, the California inundation maps can contribute to reducing tsunami risk. References: Bardet, J.P., et al. (2003), Landslide tsunamis: Recent findings and research directions, Pure and Applied Geophysics, 160(10-11), 1793-1809. Eisner, R., Borrero, J.C., Synolakis, C.E. (2001), Inundation Maps for the State of California, International Tsunami Symposium, ITS 2001 Proceedings, NHTMP Review Paper #4, 67-81. Synolakis, C.E., McCarthy, D., Titov, V.V., Borrero, J.C. (1998), Evaluating the Tsunami Risk in California, California and the World Ocean '97, 1225-1236, Proceedings ASCE, ISBN 0-7844-0297-3.

  6. Overcoming complexities for consistent, continental-scale flood mapping

    NASA Astrophysics Data System (ADS)

    Smith, Helen; Zaidman, Maxine; Davison, Charlotte

    2013-04-01

    The EU Floods Directive requires all member states to produce flood hazard maps by 2013. Although flood mapping practices are well developed in Europe, there are huge variations in the scale and resolution of the maps between individual countries. Since extreme flood events are rarely confined to a single country, this is problematic, particularly for the re/insurance industry, whose exposures often extend beyond country boundaries. Here, we discuss the challenges of large-scale hydrological and hydraulic modelling, using our experience of developing a 12-country model and set of maps to illustrate how consistent, high-resolution river flood maps across Europe can be produced. The main challenges addressed include data acquisition, manipulating the vast quantities of high-resolution data, and computational resources. Our starting point was to develop robust flood-frequency models suitable for estimating peak flows for a range of design flood return periods. We used the index flood approach, based on a statistical analysis of historic river flow data pooled on the basis of catchment characteristics. Historical flow data were therefore sourced for each country and collated into a large pan-European database. After a lengthy validation, these data were collated into 21 separate analysis zones or regions, grouping smaller river basins according to their physical and climatic characteristics. The very large continental-scale basins were each modelled separately on account of their size (e.g. Danube, Elbe, Drava and Rhine). Our methodology allows the design flood hydrograph to be predicted at any point on the river network for a range of return periods. Using JFlow+, JBA's proprietary 2D hydrodynamic model, the calculated out-of-bank flows for all watercourses with an upstream drainage area exceeding 50 km2 were routed across two different Digital Terrain Models in order to map the extent and depth of floodplain inundation. This generated modelling for a total river length of approximately 250,000 km. Such a large-scale, high-resolution modelling exercise is extremely demanding on computational resources and would have been unfeasible without the use of Graphics Processing Units on a network of standard-specification gaming computers. Our GPU grid is the world's largest flood-dedicated computer grid. The European river basins were split into approximately 100 separate hydraulic models and managed individually, although care was taken to ensure flow continuity was maintained between models. The flood hazard maps from the modelling were pieced together using GIS techniques to provide flood depth and extent information across Europe to a consistent scale and standard. After discussing the methodological challenges, we shall present our flood hazard maps and, drawing on extensive validation work, compare these against historical flow records and observed flood extents.

  7. Choosing appropriate subpopulations for modeling tree canopy cover nationwide

    Treesearch

    Gretchen G. Moisen; John W. Coulston; Barry T. Wilson; Warren B. Cohen; Mark V. Finco

    2012-01-01

    In prior national mapping efforts, the country has been divided into numerous ecologically similar mapping zones, and individual models have been constructed for each zone. Additionally, a hierarchical approach has been taken within zones to first mask out areas of nonforest, then target models of tree attributes within forested areas only. This results in many models...

  8. Mapping model behaviour using Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Gupta, H. V.; Casper, M. C.

    2009-03-01

    Hydrological model evaluation and identification essentially involves extracting and processing information from model time series. However, the type of information extracted by statistical measures has only very limited meaning because it does not relate to the hydrological context of the data. To overcome this inadequacy we exploit the diagnostic evaluation concept of Signature Indices, in which model performance is measured using theoretically relevant characteristics of system behaviour. In our study, a Self-Organizing Map (SOM) is used to process the Signatures extracted from Monte-Carlo simulations generated by the distributed conceptual watershed model NASIM. The SOM creates a hydrologically interpretable mapping of overall model behaviour, which immediately reveals deficits and trade-offs in the ability of the model to represent the different functional behaviours of the watershed. Further, it facilitates interpretation of the hydrological functions of the model parameters and provides preliminary information regarding their sensitivities. Most notably, we use this mapping to identify the set of model realizations (among the Monte-Carlo data) that most closely approximate the observed discharge time series in terms of the hydrologically relevant characteristics, and to confine the parameter space accordingly. Our results suggest that Signature Index based SOMs could potentially serve as tools for decision makers inasmuch as model realizations with specific Signature properties can be selected according to the purpose of the model application. Moreover, given that the approach helps to represent and analyze multi-dimensional distributions, it could be used to form the basis of an optimization framework that uses SOMs to characterize the model performance response surface. As such it provides a powerful and useful way to conduct model identification and model uncertainty analyses.
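
    In outline, the SOM step takes one signature vector per Monte-Carlo run and trains a small 2D grid of codebook vectors so that runs with similar signatures land on nearby map units. The numpy sketch below is a generic SOM under invented dimensions, rates, and data; it does not reproduce the authors' configuration or the NASIM simulations.

      import numpy as np

      rng = np.random.default_rng(0)
      signatures = rng.normal(size=(2000, 6))   # stand-in: 6 signature indices per run

      gx, gy, epochs = 10, 10, 20
      w = rng.normal(size=(gx, gy, 6))          # codebook vectors
      gi, gj = np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij")

      for epoch in range(epochs):
          lr = 0.5 * (1 - epoch / epochs)       # decaying learning rate
          rad = 1 + 4 * (1 - epoch / epochs)    # decaying neighbourhood radius
          for v in signatures:
              d = np.linalg.norm(w - v, axis=2)
              bi, bj = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
              h = np.exp(-((gi - bi) ** 2 + (gj - bj) ** 2) / (2 * rad ** 2))
              w += lr * h[..., None] * (v - w)  # pull neighbourhood toward the sample

      # Runs whose best-matching unit lies near that of the observed signatures
      # are the behaviorally closest realizations.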

  9. Mapping model behaviour using Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Gupta, H. V.; Casper, M. C.

    2008-12-01

    Hydrological model evaluation and identification essentially depends on the extraction of information from model time series and its processing. However, the type of information extracted by statistical measures has only very limited meaning because it does not relate to the hydrological context of the data. To overcome this inadequacy we exploit the diagnostic evaluation concept of Signature Indices, in which model performance is measured using theoretically relevant characteristics of system behaviour. In our study, a Self-Organizing Map (SOM) is used to process the Signatures extracted from Monte-Carlo simulations generated by a distributed conceptual watershed model. The SOM creates a hydrologically interpretable mapping of overall model behaviour, which immediately reveals deficits and trade-offs in the ability of the model to represent the different functional behaviours of the watershed. Further, it facilitates interpretation of the hydrological functions of the model parameters and provides preliminary information regarding their sensitivities. Most notably, we use this mapping to identify the set of model realizations (among the Monte-Carlo data) that most closely approximate the observed discharge time series in terms of the hydrologically relevant characteristics, and to confine the parameter space accordingly. Our results suggest that Signature Index based SOMs could potentially serve as tools for decision makers inasmuch as model realizations with specific Signature properties can be selected according to the purpose of the model application. Moreover, given that the approach helps to represent and analyze multi-dimensional distributions, it could be used to form the basis of an optimization framework that uses SOMs to characterize the model performance response surface. As such it provides a powerful and useful way to conduct model identification and model uncertainty analyses.

  10. Integrating GPS, GYRO, vehicle speed sensor, and digital map to provide accurate and real-time position in an intelligent navigation system

    NASA Astrophysics Data System (ADS)

    Li, Qingquan; Fang, Zhixiang; Li, Hanwu; Xiao, Hui

    2005-10-01

    The global positioning system (GPS) has become the most extensively used positioning and navigation tool in the world. Applications of GPS abound in surveying, mapping, transportation, agriculture, military planning, GIS, and the geosciences. However, the positional and elevation accuracy of any given GPS location is prone to error, due to a number of factors. GPS positioning is increasingly popular, and intelligent navigation systems that rely on GPS and dead reckoning (DR) technology are developing quickly for a large future market in China. In this paper a practical combined GPS/DR/MM positioning model is put forward, which integrates GPS, a gyroscope, a vehicle speed sensor (VSS), and digital navigation maps to provide accurate and real-time position for an intelligent navigation system. The model is designed for automotive navigation systems and uses a Kalman filter to improve position and map-matching accuracy by filtering raw GPS and DR signals; map-matching technology is then used to provide map coordinates for map display. To illustrate the validity of the model, several experiments on integrated GPS/DR positioning in an intelligent navigation system are presented, supporting the conclusion that the Kalman-filter-based GPS/DR integrated positioning approach is necessary, feasible, and efficient for intelligent navigation applications. Like other models, this combined positioning model cannot resolve every situation. Finally, some suggestions are given for further improving the integrated GPS/DR/MM application.
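
    To make the fusion step concrete, here is a minimal sketch of a GPS/DR Kalman filter: dead reckoning (speed from the VSS, heading from the gyro) drives the state prediction, and each GPS fix corrects it. The constant-velocity state model, matrices, and noise levels are illustrative assumptions, not the paper's actual filter design.

      import numpy as np

      dt = 1.0
      F = np.array([[1, 0, dt, 0],
                    [0, 1, 0, dt],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1]], dtype=float)   # constant-velocity transition
      H = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0]], dtype=float)   # GPS observes position only
      Q = np.eye(4) * 0.1                         # DR process noise (illustrative)
      R = np.eye(2) * 25.0                        # GPS noise, ~5 m standard deviation

      x = np.zeros(4)                             # state: [x, y, vx, vy]
      P = np.eye(4) * 100.0

      def step(x, P, gps_xy, speed, heading):
          # Dead reckoning supplies the predicted velocity components.
          x[2] = speed * np.cos(heading)
          x[3] = speed * np.sin(heading)
          x = F @ x                               # predict
          P = F @ P @ F.T + Q
          y = gps_xy - H @ x                      # innovation
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
          x = x + K @ y                           # update with the GPS fix
          P = (np.eye(4) - K @ H) @ P
          return x, P                             # filtered fix, ready for map matching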

  11. New Maximum Tsunami Inundation Maps for Use by Local Emergency Planners in the State of California, USA

    NASA Astrophysics Data System (ADS)

    Wilson, R. I.; Barberopoulou, A.; Miller, K. M.; Goltz, J. D.; Synolakis, C. E.

    2008-12-01

    A consortium of tsunami hydrodynamic modelers, geologic hazard mapping specialists, and emergency planning managers is producing maximum tsunami inundation maps for California, covering most residential and transient populated areas along the state's coastline. The new tsunami inundation maps will be an upgrade from the existing maps for the state, improving on the resolution, accuracy, and coverage of the maximum anticipated tsunami inundation line. Thirty-five separate map areas covering nearly one-half of California's coastline were selected for tsunami modeling using the MOST (Method of Splitting Tsunami) model. From preliminary evaluations of nearly fifty local and distant tsunami source scenarios, those with the maximum expected hazard for a particular area were input to MOST. The MOST model was run with a near-shore bathymetric grid resolution varying from three arc-seconds (90m) to one arc-second (30m), depending on availability. Maximum tsunami "flow depth" and inundation layers were created by combining all modeled scenarios for each area. A method was developed to better define the location of the maximum inland penetration line using higher resolution digital onshore topographic data from interferometric radar sources. The final inundation line for each map area was validated using a combination of digital stereo photography and fieldwork. Further verification of the final inundation line will include ongoing evaluation of tsunami sources (seismic and submarine landslide) as well as comparison to the location of recorded paleotsunami deposits. Local governmental agencies can use these new maximum tsunami inundation lines to assist in the development of their evacuation routes and emergency response plans.

  12. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Archuleta County

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Archuleta County identified from ASTER thermal data and a spatially based insolation model. The temperature was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2o ("very warm modeled surface temperature") are shown in red on the map; areas with residual temperatures between 1o and 2o ("warm modeled surface temperature") are shown in yellow. The map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways"). Note: 'o' is used in this description to represent lowercase sigma.
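
    The anomaly classification above reduces to a simple raster operation: subtract the insolation-modeled temperature from the ASTER temperature and threshold the residual at one and two standard deviations. A minimal numpy sketch, with synthetic arrays standing in for the real co-registered scenes:

      import numpy as np

      rng = np.random.default_rng(0)
      t_aster = rng.normal(290.0, 3.0, size=(100, 100))   # ASTER kinetic temperature (K)
      t_solar = rng.normal(289.0, 2.5, size=(100, 100))   # insolation-modeled temperature (K)

      residual = t_aster - t_solar                        # temperature not explained by sun
      sigma = residual.std()

      very_warm = residual > 2 * sigma                    # mapped in red
      warm = (residual > sigma) & ~very_warm              # mapped in yellow
      print(very_warm.sum(), warm.sum())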

  13. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, San Miguel County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in San Miguel County identified from ASTER thermal data and a spatially based insolation model. The temperature was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2o ("very warm modeled surface temperature") are shown in red on the map; areas with residual temperatures between 1o and 2o ("warm modeled surface temperature") are shown in yellow. The map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways"). Note: 'o' is used in this description to represent lowercase sigma.

  14. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Fremont County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Fremont County identified from ASTER thermal data and a spatially based insolation model. The temperature was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2o ("very warm modeled surface temperature") are shown in red on the map; areas with residual temperatures between 1o and 2o ("warm modeled surface temperature") are shown in yellow. The map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways"). Note: 'o' is used in this description to represent lowercase sigma.

  15. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Routt County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Routt County identified from ASTER thermal data and a spatially based insolation model. The temperature was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2o ("very warm modeled surface temperature") are shown in red on the map; areas with residual temperatures between 1o and 2o ("warm modeled surface temperature") are shown in yellow. The map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways"). Note: 'o' is used in this description to represent lowercase sigma.

  16. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Alamosa and Saguache Counties, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2o ("very warm modeled surface temperature") are shown in red on the map; areas with residual temperatures between 1o and 2o ("warm modeled surface temperature") are shown in yellow. The map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways"). Note: 'o' is used in this description to represent lowercase sigma.

  17. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Dolores County

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Dolores County identified from ASTER thermal data and a spatially based insolation model. The temperature was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2o ("very warm modeled surface temperature") are shown in red on the map; areas with residual temperatures between 1o and 2o ("warm modeled surface temperature") are shown in yellow. The map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways"). Note: 'o' is used in this description to represent lowercase sigma.

  18. Developing Land Use Land Cover Maps for the Lower Mekong Basin to Aid SWAT Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Spruce, J.; Bolten, J. D.; Srinivasan, R.

    2017-12-01

    This presentation discusses research to develop Land Use Land Cover (LULC) maps for the Lower Mekong Basin (LMB). Funded by a NASA ROSES Disasters grant, the main objective was to produce updated LULC maps to aid the Mekong River Commission's (MRC's) Soil and Water Assessment Tool (SWAT) hydrologic model. In producing the needed LULC maps, temporally processed MODIS monthly NDVI data for 2010 were used as the primary data source for classifying regionally prominent forest and agricultural types. The MODIS NDVI data were derived by processing MOD09 and MYD09 8-day reflectance data with the Time Series Product Tool, a custom software package. Circa-2010 Landsat multispectral data from the dry season were processed into top-of-atmosphere reflectance mosaics and then classified to derive certain locally common LULC types, such as urban areas and industrial forest plantations. Unsupervised ISODATA clustering was used to derive most LULC classifications. GIS techniques were used to merge the MODIS and Landsat classifications into final LULC maps for Sub-Basins (SBs) 1-8 of the LMB. The final LULC maps were produced at 250-meter resolution and delivered to the MRC for use in SWAT modeling of the LMB. A map accuracy assessment was performed for the SB 7 LULC map with 14 classes, comparing random locations for sampled LULC types to geospatial reference data such as Landsat RGBs, MODIS NDVI phenologic profiles, high resolution satellite data from Google Maps/Earth, and other reference data from the MRC (e.g., crop calendars). Accuracy assessment results for SB 7 indicated an overall agreement with the reference data of 81% at full scheme specificity; grouping the 3 deciduous forest classes into 1 class improved the overall agreement to 87%. The project yielded updated LULC maps that classify more specific rice types than the previous maps. The LULC maps from this project should improve the use of SWAT for modeling hydrology in the LMB and improve water and disaster management in a region vulnerable to flooding, droughts, and anthropogenic change (e.g., from dam building and other LULC change).

  19. Application of a simple cerebellar model to geologic surface mapping

    USGS Publications Warehouse

    Hagens, A.; Doveton, J.H.

    1991-01-01

    Neurophysiological research into the structure and function of the cerebellum has inspired computational models that simulate information processing associated with coordination and motor movement. The cerebellar model arithmetic computer (CMAC) has a design structure which makes it readily applicable as an automated mapping device that "senses" a surface, based on a sample of discrete observations of surface elevation. The model operates as an iterative learning process, where cell weights are continuously modified by feedback to improve surface representation. The storage requirements are substantially less than those of a conventional memory allocation, and the model is extended easily to mapping in multidimensional space, where the memory savings are even greater. © 1991.
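
    To illustrate the mechanism, the sketch below implements a generic 2D CMAC of the kind described: several offset coarse tilings share the surface representation, each sample location activates one cell per tiling, and the elevation error is fed back across the activated cells. Grid sizes, the learning rate, and the toy surface are invented for illustration.

      import numpy as np

      n_tilings, n_cells = 8, 16
      res = 1.0 / n_cells                      # cell width in the unit square

      weights = np.zeros((n_tilings, n_cells, n_cells))
      offsets = np.linspace(0.0, res, n_tilings, endpoint=False)  # shifted tilings

      def active_cells(x, y):
          # One active cell per tiling for a location (x, y) in [0, 1).
          ix = ((x + offsets) / res).astype(int) % n_cells
          iy = ((y + offsets) / res).astype(int) % n_cells
          return np.arange(n_tilings), ix, iy

      def predict(x, y):
          t, ix, iy = active_cells(x, y)
          return weights[t, ix, iy].sum()      # sum over the activated cells

      def train(x, y, z, lr=0.3):
          t, ix, iy = active_cells(x, y)
          weights[t, ix, iy] += lr * (z - predict(x, y)) / n_tilings

      # Iteratively "sense" a toy surface from scattered elevation samples.
      rng = np.random.default_rng(1)
      pts = rng.random((500, 2))
      elev = np.sin(3 * pts[:, 0]) + np.cos(2 * pts[:, 1])
      for _ in range(50):
          for (x, y), z in zip(pts, elev):
              train(x, y, z)
      print(predict(0.5, 0.5), np.sin(1.5) + np.cos(1.0))   # estimate vs truth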

  20. Remote-sensing data processing with the multivariate regression analysis method for iron mineral resource potential mapping: a case study in the Sarvian area, central Iran

    NASA Astrophysics Data System (ADS)

    Mansouri, Edris; Feizi, Faranak; Jafari Rad, Alireza; Arian, Mehran

    2018-03-01

    This paper uses multivariate regression analysis to create a mathematical model for iron skarn exploration in the Sarvian area, central Iran, as a mineral prospectivity mapping (MPM) method. The main target is to apply multivariate regression to map iron outcrops in the northeastern part of the study area in order to discover new iron deposits in other parts of the study area. Two types of multivariate regression models using two linear equations were employed to discover new mineral deposits. This method is a reliable approach for processing satellite images. ASTER satellite images (14 bands) were used as unique independent variables (UIVs), and iron outcrops were mapped as the dependent variable for MPM. According to the probability value (p-value), the coefficient of determination (R2), and the adjusted coefficient of determination (Radj2), the second regression model (which consisted of multiple UIVs) fitted better than the other models. The accuracy of the model was confirmed by the iron outcrop map and geological observation. Based on field observation, iron mineralization occurs at the contact of limestone and intrusive rocks (skarn type).
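
    As a rough picture of the workflow, the sketch below regresses a binary outcrop layer on stacked band values and reports R2 and adjusted R2. The synthetic arrays stand in for the ASTER bands and the mapped outcrops; nothing here reproduces the paper's actual equations or coefficients.

      import numpy as np

      rng = np.random.default_rng(0)
      n_pixels, n_bands = 5000, 14
      X = rng.normal(size=(n_pixels, n_bands))            # ASTER band values (UIVs)
      beta_true = rng.normal(size=n_bands)
      y = (X @ beta_true + rng.normal(size=n_pixels) > 0).astype(float)  # outcrop mask

      A = np.column_stack([np.ones(n_pixels), X])         # add intercept column
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)        # least-squares fit

      score = A @ coef                                    # prospectivity score per pixel
      ss_res = ((y - score) ** 2).sum()
      ss_tot = ((y - y.mean()) ** 2).sum()
      r2 = 1 - ss_res / ss_tot
      r2_adj = 1 - (1 - r2) * (n_pixels - 1) / (n_pixels - n_bands - 1)
      print(f"R2 = {r2:.3f}, adjusted R2 = {r2_adj:.3f}")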

  1. Land cover maps, BVOC emissions, and SOA burden in a global aerosol-climate model

    NASA Astrophysics Data System (ADS)

    Stanelle, Tanja; Henrot, Alexandra; Bey, Isabelle

    2015-04-01

    It has been reported that different land cover representations influence the emission of biogenic volatile organic compounds (BVOCs) (e.g. Guenther et al., 2006), and the land cover forcing used in model simulations is quite uncertain (e.g. Jung et al., 2006). As a consequence, the simulated emission of BVOCs depends on the applied land cover map. To test the sensitivity of global and regional estimates of BVOC emissions to the applied land cover map, we applied three different land cover maps in our global aerosol-climate model ECHAM6-HAM2.2. We found a high sensitivity for tropical regions. BVOCs are a very prominent precursor for the production of secondary organic aerosols (SOA); the sensitivity of BVOC emissions to land cover maps therefore impacts the SOA burden in the atmosphere. With our model system we are able to quantify that impact. References: Guenther et al. (2006), Estimates of global terrestrial isoprene emissions using MEGAN, Atmos. Chem. Phys., 6, 3181-3210, doi:10.5194/acp-6-3181-2006. Jung et al. (2006), Exploiting synergies of global land cover products for carbon cycle modeling, Rem. Sens. Environ., 101, 534-553, doi:10.1016/j.rse.2006.01.020.

  2. Comparing an Atomic Model or Structure to a Corresponding Cryo-electron Microscopy Image at the Central Axis of a Helix.

    PubMed

    Zeil, Stephanie; Kovacs, Julio; Wriggers, Willy; He, Jing

    2017-01-01

    Three-dimensional density maps of biological specimens from cryo-electron microscopy (cryo-EM) can be interpreted in the form of atomic models that are modeled into the density, or they can be compared to known atomic structures. When the central axis of a helix is detectable in a cryo-EM density map, it is possible to quantify the agreement between this central axis and a central axis calculated from the atomic model or structure. We propose a novel arc-length association method to compare the two axes reliably. This method was applied to 79 helices in simulated density maps and six case studies using cryo-EM maps at 6.4-7.7 Å resolution. The arc-length association method is then compared to three existing measures that evaluate the separation of two helical axes: a two-way distance between point sets, the length difference between two axes, and the individual amino acid detection accuracy. The results show that our proposed method sensitively distinguishes lateral and longitudinal discrepancies between the two axes, which makes the method particularly suitable for the systematic investigation of cryo-EM map-model pairs.
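
    The core idea, comparing two axes by arc length rather than by nearest points, can be sketched generically: resample both polylines at equal arc-length fractions, then read the mean point-wise separation as a lateral measure and the length difference as a longitudinal one. This is our simplified reading of the approach, not the authors' published algorithm.

      import numpy as np

      def resample(poly, n=50):
          # Resample a 3D polyline at n equal arc-length fractions.
          seg = np.linalg.norm(np.diff(poly, axis=0), axis=1)
          s = np.concatenate([[0.0], np.cumsum(seg)])
          u = np.linspace(0.0, s[-1], n)
          pts = np.column_stack([np.interp(u, s, poly[:, k]) for k in range(3)])
          return pts, s[-1]

      t = np.linspace(0, 4 * np.pi, 200)
      axis_model = np.column_stack([np.cos(t), np.sin(t), 0.3 * t])  # toy model axis
      axis_map = axis_model + 0.2                # shifted stand-in for the map axis

      a, len_a = resample(axis_model)
      b, len_b = resample(axis_map)
      lateral = np.linalg.norm(a - b, axis=1).mean()   # mean point-wise separation
      print(lateral, abs(len_a - len_b))               # lateral vs longitudinal discrepancy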

  3. ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION

    PubMed Central

    Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey

    2013-01-01

    MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, the inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes the smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image-space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not amenable to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053

  4. Spatial Statistics of the Clark County Parcel Map, Trial Geotechnical Models, and Effects on Ground Motions in Las Vegas Valley

    NASA Astrophysics Data System (ADS)

    Savran, W. H.; Louie, J. N.; Pullammanappallil, S.; Pancha, A.

    2011-12-01

    When deterministically modeling the propagation of seismic waves, shallow shear-wave velocity plays a crucial role in predicting shaking effects such as peak ground velocity (PGV). The Clark County Parcel Map provides us with a data set of geotechnical velocities in Las Vegas Valley at an unprecedented level of detail. Las Vegas Valley is a basin with geologic properties similar to some areas of Southern California. We analyze elementary spatial statistical properties of the Parcel Map, along with calculating its spatial variability. We then investigate these spatial statistics in the PGV results computed from two geotechnical models that incorporate the Parcel Map as parameters. A histogram of the Parcel Map 30-meter depth-averaged shear velocity (Vs30) values shows the data to approximately fit a bimodal normal distribution with μ1 = 400 m/s, σ1 = 76 m/s, μ2 = 790 m/s, σ2 = 149 m/s, and p = 0.49, where μ is the mean, σ is the standard deviation, and p is the probability mixing factor for the bimodal distribution. Based on plots of spatial power spectra, the Parcel Map appears to be fractal over the second and third decades, in kilometers. The spatial spectra possess the same fractal dimension in the N-S and E-W directions, indicating isotropic scale invariance. We configured finite-difference wave propagation models at 0.5 Hz with LLNL's E3D code, utilizing the Parcel Map as input parameters to compute a PGV data set from a scenario earthquake (Black Hills M6.5). The resulting PGV is fractal over the same spatial frequencies as the Vs30 data sets associated with their respective models. The fractal dimension is systematically lower in all of the PGV maps than in the Vs30 maps, showing that the PGV maps are richer in higher spatial frequencies. This is potentially caused by a lens-like focusing effect on seismic waves due to spatial heterogeneity in site conditions.
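
    The reported fit can be written down directly; the sketch below evaluates the two-component mixture density with the quoted parameters and draws synthetic Vs30 values from it. The sampling use is our illustration of how such a fit might randomize model inputs, not something the abstract describes.

      import numpy as np
      from scipy.stats import norm

      # Reported two-component fit: p * N(400, 76) + (1 - p) * N(790, 149), p = 0.49
      mu1, s1, mu2, s2, p = 400.0, 76.0, 790.0, 149.0, 0.49

      def vs30_pdf(v):
          return p * norm.pdf(v, mu1, s1) + (1 - p) * norm.pdf(v, mu2, s2)

      # Draw synthetic Vs30 values from the mixture (component choice, then draw).
      rng = np.random.default_rng(0)
      comp = rng.random(10000) < p
      v = np.where(comp, rng.normal(mu1, s1, 10000), rng.normal(mu2, s2, 10000))
      print(v.mean(), vs30_pdf(500.0))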

  5. MARs Tools for Interactive ANalysis (MARTIAN): Google Maps Tools for Visual Exploration of Geophysical Modeling on Mars

    NASA Astrophysics Data System (ADS)

    Dimitrova, L. L.; Haines, M.; Holt, W. E.; Schultz, R. A.; Richard, G.; Haines, A. J.

    2006-12-01

    Interactive maps of surface-breaking faults and stress models on Mars provide important tools to engage undergraduate students, educators, and scientists with current geological and geophysical research. We have developed a map based on the Google Maps API, an Internet-based tool combining DHTML and AJAX that allows very large maps to be viewed over the World Wide Web. Typically, small portions of the maps are downloaded as needed, rather than the entire image at once. This set-up enables relatively fast access for users with low bandwidth. Furthermore, Google Maps provides an extensible interactive interface, making it ideal for visualizing multiple data sets at the user's choice. The Google Maps API works primarily with data referenced to latitudes and longitudes, which it then maps in the Mercator projection only. We have developed utilities for general cylindrical coordinate systems that convert these coordinates into the equivalent Mercator projection before including them on the map. The MARTIAN project is available at http://rock.geo.sunysb.edu/~holt/Mars/MARTIAN/. We begin with an introduction to the Martian surface using a topography model. Faults from several datasets are classified by type (extension vs. compression) and by time epoch. Deviatoric stresses due to gravitational potential energy differences, calculated from the topography and crustal thickness, can be overlain. Several quantitative measures of the fit of the stress field to the faults are also included. We provide introductory text and exercises spanning a range of topics: how faults are identified, what stress is and how it relates to faults, what gravitational potential energy is and how variations in it produce stress, how the models are created, and how these models can be evaluated and interpreted. The MARTIAN tool is used at Stony Brook University in GEO 310: Introduction to Geophysics, a class geared towards junior and senior geosciences majors. Although this project is in its early stages, high school and college teachers, as well as researchers, have expressed interest in using and extending these tools for visualizing and interacting with data on Earth and other planetary bodies.
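
    A sketch of the kind of utility described above: take latitudes given in a simple (equirectangular) cylindrical system and convert them to the spherical Mercator ordinate so overlays line up with the base map. The function name and the degree-valued output convention are our illustration, not the project's actual code.

      import numpy as np

      def mercator_y(lat_deg):
          # Spherical Mercator ordinate, expressed in degree-equivalent units;
          # latitudes are clipped at the usual Mercator cutoff near the poles.
          lat = np.radians(np.clip(lat_deg, -85.05, 85.05))
          return np.degrees(np.log(np.tan(np.pi / 4 + lat / 2)))

      # Longitudes pass through unchanged between the two cylindrical projections.
      lats = np.array([-60.0, 0.0, 45.0, 80.0])
      print(mercator_y(lats))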

  6. Conformal mapping in optical biosensor applications.

    PubMed

    Zumbrum, Matthew E; Edwards, David A

    2015-09-01

    Optical biosensors are devices used to investigate surface-volume reaction kinetics. Current mathematical models for reaction dynamics rely on the assumption of unidirectional flow within these devices. However, new devices, such as the Flexchip, include a geometry that introduces two-dimensional flow, complicating the depletion of the volume reactant. To account for this, a previous mathematical model is extended to include two-dimensional flow, and the Schwarz-Christoffel mapping is used to relate the physical device geometry to that for a device with unidirectional flow. Mappings for several Flexchip dimensions are considered, and the ligand depletion effect is investigated for one of these mappings. Estimated rate constants are produced for simulated data to quantify the inclusion of two-dimensional flow in the mathematical model.

  7. Generating Multi-Destination Maps.

    PubMed

    Zhang, Junsong; Fan, Jiepeng; Luo, Zhenshan

    2017-08-01

    Multi-destination maps are a kind of navigation map intended to guide visitors to multiple destinations within a region, which can be of great help to urban visitors. However, they have not been developed in current online map services. To address this, we introduce a novel layout model designed especially for generating multi-destination maps, which considers both the global and local layout of a multi-destination map. We model the layout problem as a graph drawing that satisfies a set of hard and soft constraints. In the global layout phase, we balance the scale factors between ROIs. In the local layout phase, we make all edges clearly visible and optimize the map layout to preserve the relative lengths and angles of roads. We also propose a perturbation-based optimization method to find an optimal layout in the complex solution space. The multi-destination maps generated by our system are potentially feasible on modern mobile devices, and our results can show an overview and a detail view of the whole map at the same time. In addition, we performed a user study to evaluate the effectiveness of our method, and the results show that the multi-destination maps achieve our goals well.

  8. A Tool for Modelling the Probability of Landslides Impacting Road Networks

    NASA Astrophysics Data System (ADS)

    Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.; Guzzetti, Fausto

    2014-05-01

    Triggers such as earthquakes or heavy rainfall can result in hundreds to thousands of landslides occurring across a region within a short space of time. These landslides can in turn result in blockages across the road network, impacting how people move about a region. Here, we show the development and application of a semi-stochastic model to simulate how landslides intersect with road networks during a triggered landslide event. This was performed by creating 'synthetic' triggered landslide inventory maps and overlaying these with a road network map to identify where road blockages occur. Our landslide-road model has been applied to two regions: (i) the Collazzone basin (79 km2) in Central Italy, where 422 landslides were triggered by rapid snowmelt in January 1997, and (ii) the Oat Mountain quadrangle (155 km2) in California, USA, where 1,350 landslides were triggered by the Northridge Earthquake (M = 6.7) in January 1994. For both regions, detailed landslide inventory maps for the triggered events were available, in addition to maps of landslide susceptibility and road networks of primary, secondary, and tertiary roads. To create 'synthetic' landslide inventory maps, landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL. The number of landslide areas selected was based on the observed density of landslides (number of landslides km-2) in the triggered event inventories. Landslide shapes were approximated as ellipses, where the ratio of the major and minor axes varies with AL. Landslides were then dropped over the region semi-stochastically, conditioned by a landslide susceptibility map, resulting in a synthetic landslide inventory map. The originally available landslide susceptibility maps did not take into account susceptibility changes in the immediate vicinity of roads, so our landslide susceptibility map was adjusted to further reduce the susceptibility near each road based on the road level (primary, secondary, tertiary). For each model run, we superimposed the spatial locations of the landslide drops on the road network and recorded the number, size, and location of road blockages, along with landslides within 50 and 100 m of the different road levels. Network analysis tools available in GRASS GIS were also applied to measure the impact upon the road network in terms of connectivity. The model was run 100 times in a Monte-Carlo simulation for each region. Initial results show reasonable agreement between model output and the observed landslide inventories in terms of the number of road blockages. In Collazzone (length of road network = 153 km, landslide density = 5.2 landslides km-2), the median number of modelled road blockages over 100 model runs was 5 (±2.5 standard deviation), compared to 5 road blockages observed in the mapped inventory. In Northridge (length of road network = 780 km, landslide density = 8.7 landslides km-2), the median number of modelled road blockages over 100 model runs was 108 (±17.2 standard deviation), compared to 48 road blockages observed in the mapped inventory. As we progress with model development, we believe this semi-stochastic modelling approach will help civil protection agencies explore scenarios of potential road network damage resulting from landslide-triggering events of different magnitudes.
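
    The area-sampling step translates directly into code: draw synthetic landslide areas AL from a three-parameter inverse-gamma distribution whose shape gives the quoted power-law tail of about -2.4 (the decay exponent is -(shape + 1), so shape = 1.4). The scale and location values below are illustrative, and the drop-and-intersect step is only indicated in comments.

      import numpy as np
      from scipy.stats import invgamma

      rho, scale_a, loc_s = 1.4, 1.28e-3, -1.32e-4   # shape, scale, location (km^2, illustrative)
      dist = invgamma(rho, loc=loc_s, scale=scale_a)

      # Number of drops follows the observed density, e.g. 5.2 landslides/km^2
      # over the 79 km^2 Collazzone basin.
      n = int(5.2 * 79)
      areas = dist.rvs(size=n, random_state=0)

      # Each area would then be turned into an ellipse and dropped over the
      # terrain, conditioned by the susceptibility map (not shown here).
      print(n, areas.min(), np.median(areas), areas.max())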

  9. Cognitive Style Mapping at Mt. Hood Community College.

    ERIC Educational Resources Information Center

    Keyser, John S.

    1980-01-01

    Describes Mount Hood Community College's experiences using the Modified Hill Model for Cognitive Style Mapping (CSM). Enumerates the nine dimensions of cognitive style assessed by the model. Discusses the value and limitations of CSM, five major checks on the validity of the model, and Mount Hood faculty's involvement with CSM. (AYC)

  10. NCEP SST Analysis

    Science.gov Websites

    Real-time, global sea surface temperature (RTG_SST_HR) analysis from the NCEP Environmental Modeling Center (EMC). For a regional map, click the desired area in the global SST analysis and anomaly maps.

  11. Climate-host mapping of Phytophthora ramorum, causal agent of sudden oak death

    Treesearch

    Glenn Fowler; Roger Magarey; Manuel Colunga

    2006-01-01

    Phytophthora ramorum infection was modeled using the NAPPFAST system for the conterminous United States. Parameters used to model P. ramorum infection were: leaf wetness, minimum temperature, optimum temperature and maximum temperature over a specified number of accumulated days. The model was used to create maps showing the...

  12. Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Gleason, John M.

    1993-01-01

    This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)

  13. Utilization of UARS Data in Validation of Photochemical and Dynamical Mechanism in Stratospheric Models

    NASA Technical Reports Server (NTRS)

    Rodriquez, Jose M.; Hu, Wenjie; Ko, Malcolm K. W.

    1995-01-01

    We proposed model-data intercomparison studies for UARS data. In the past three months, we have been constructing analysis tools to diagnose the UARS data. The 'trajectory mapping' technique, developed by Morris (1994), can be adapted to generate synoptic maps of trace gas data from asynoptic observations. An in-house trajectory model (using kinematic methods following Merrill et al., 1986, and Pickering et al., 1994) has been developed at AER under contract with NASA/ACMAP, and the trajectory mapping tool has been applied to analyze UARS measurements.

  14. Hybrid discrete-time neural networks.

    PubMed

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.
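
    As a concrete miniature of such a hybrid discrete-time network, the sketch below couples two Rulkov-type map neurons with an FTM-style synapse: the synaptic current switches on only while the presynaptic partner's fast variable is above threshold, so the update rule changes with the network state. The map choice and all parameter values are illustrative assumptions, not the specific models reviewed in the paper.

      import numpy as np

      alpha, mu, sigma = 4.1, 0.001, -0.3     # slow-fast Rulkov map parameters
      g, theta, v_rev = 0.2, 0.0, 1.0         # coupling gain, FTM threshold, reversal

      x = np.array([-1.0, -1.2])              # fast (voltage-like) variables
      y = np.array([-3.0, -3.0])              # slow variables
      trace = []
      for n in range(5000):
          firing = x > theta                               # which neurons spike now
          inj = g * firing[::-1] * (v_rev - x)             # FTM: gated by the partner's state
          # Rulkov update: x' = alpha/(1 + x^2) + y + I,  y' = y - mu (x - sigma)
          x, y = alpha / (1.0 + x * x) + y + inj, y - mu * (x - sigma)
          trace.append(x.copy())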

  15. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China

    PubMed Central

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-01-01

    In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides. PMID:27187430

  16. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China.

    PubMed

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-05-11

    In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%-19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides.
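
    As an illustration of the tuning loop inside each GWR-derived prediction region, the sketch below runs a small particle swarm over (log10 C, log10 gamma) for an SVM, scoring each particle by cross-validated accuracy. Swarm coefficients, bounds, and the synthetic data are assumptions for illustration; the paper's factor set and regional segmentation are not reproduced.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=300, n_features=8, random_state=0)

      def fitness(pos):
          c, g = 10.0 ** pos                  # decode particle to (C, gamma)
          return cross_val_score(SVC(C=c, gamma=g), X, y, cv=3).mean()

      rng = np.random.default_rng(0)
      n_particles, n_iters = 12, 15
      lo, hi = np.array([-1.0, -4.0]), np.array([3.0, 1.0])
      pos = rng.uniform(lo, hi, size=(n_particles, 2))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[pbest_val.argmax()].copy()

      for _ in range(n_iters):
          r1, r2 = rng.random((2, n_particles, 1))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lo, hi)
          vals = np.array([fitness(p) for p in pos])
          improved = vals > pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[pbest_val.argmax()].copy()

      print("best (C, gamma):", 10.0 ** gbest, "CV accuracy:", pbest_val.max())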

  17. Background controlled QTL mapping in pure-line genetic populations derived from four-way crosses

    PubMed Central

    Zhang, S; Meng, L; Wang, J; Zhang, L

    2017-01-01

    Pure lines derived from multiple parents are becoming more important because of the increased genetic diversity, the possibility to conduct replicated phenotyping trials in multiple environments and potentially high mapping resolution of quantitative trait loci (QTL). In this study, we proposed a new mapping method for QTL detection in pure-line populations derived from four-way crosses, which is able to control the background genetic variation through a two-stage mapping strategy. First, orthogonal variables were created for each marker and used in an inclusive linear model, so as to completely absorb the genetic variation in the mapping population. Second, inclusive composite interval mapping approach was implemented for one-dimensional scanning, during which the inclusive linear model was employed to control the background variation. Simulation studies using different genetic models demonstrated that the new method is efficient when considering high detection power, low false discovery rate and high accuracy in estimating quantitative trait loci locations and effects. For illustration, the proposed method was applied in a reported wheat four-way recombinant inbred line population. PMID:28722705

  18. Background controlled QTL mapping in pure-line genetic populations derived from four-way crosses.

    PubMed

    Zhang, S; Meng, L; Wang, J; Zhang, L

    2017-10-01

    Pure lines derived from multiple parents are becoming more important because of the increased genetic diversity, the possibility to conduct replicated phenotyping trials in multiple environments and potentially high mapping resolution of quantitative trait loci (QTL). In this study, we proposed a new mapping method for QTL detection in pure-line populations derived from four-way crosses, which is able to control the background genetic variation through a two-stage mapping strategy. First, orthogonal variables were created for each marker and used in an inclusive linear model, so as to completely absorb the genetic variation in the mapping population. Second, inclusive composite interval mapping approach was implemented for one-dimensional scanning, during which the inclusive linear model was employed to control the background variation. Simulation studies using different genetic models demonstrated that the new method is efficient when considering high detection power, low false discovery rate and high accuracy in estimating quantitative trait loci locations and effects. For illustration, the proposed method was applied in a reported wheat four-way recombinant inbred line population.

  19. Risk maps for navigation in liver surgery

    NASA Astrophysics Data System (ADS)

    Hansen, C.; Zidowitz, S.; Schenk, A.; Oldhafer, K.-J.; Lang, H.; Peitgen, H.-O.

    2010-02-01

    The optimal transfer of preoperative planning data and risk evaluations to the operative site is challenging. A common practice is to use preoperative 3D planning models as a printout or as a presentation on a display. One important aspect is that these models were not developed to provide information in complex workspaces like the operating room. Our aim is to reduce the visual complexity of 3D planning models by mapping surgically relevant information onto a risk map. Therefore, we present methods for the identification and classification of critical anatomical structures in the proximity of a preoperatively planned resection surface. Shadow-like distance indicators are introduced to encode the distance from the resection surface to these critical structures on the risk map. In addition, contour lines are used to accentuate shape and spatial depth. The resulting visualization is clear and intuitive, allowing for a fast mental mapping of the current resection surface to the risk map. Preliminary evaluations by liver surgeons indicate that damage to risk structures may be prevented and patient safety may be enhanced using the proposed methods.
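
    One plausible reading of the distance-indicator computation is a nearest-neighbour query from the resection surface to each critical structure, binned into risk zones that drive the shadow-like shading. The sketch below uses scipy's cKDTree on random stand-in point clouds; the thresholds and data are invented, not the authors' pipeline.

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      resection_pts = rng.random((2000, 3)) * 100.0   # mm, resection surface samples
      vessel_pts = rng.random((5000, 3)) * 100.0      # mm, critical structure samples

      # Distance from every resection-surface point to the nearest vessel point.
      d, _ = cKDTree(vessel_pts).query(resection_pts)

      # Bin into indicator zones (thresholds in mm are illustrative); these would
      # set the color/intensity of the shadow-like indicators on the risk map.
      risk_class = np.digitize(d, bins=[5.0, 10.0, 15.0])
      print(np.bincount(risk_class, minlength=4))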

  20. Multibeam Sonar Mapping and Modeling of a Submerged Bryophyte Mat in Crater Lake, Oregon

    USGS Publications Warehouse

    Dartnell, Peter; Collier, Robert; Buktenica, Mark; Jessup, Steven; Girdner, Scott; Triezenberg, Peter

    2008-01-01

    Traditionally, multibeam data have been used to map sea floor or lake floor morphology as well as the distribution of surficial facies in order to characterize the geologic component of benthic habitats. In addition to using multibeam data for geologic studies, we want to determine if these data can also be used directly to map the distribution of biota. Multibeam bathymetry and acoustic backscatter data collected in Crater Lake, Oregon, in 2000 are used to map the distribution of a deep-water bryophyte mat, which will be extremely useful for understanding the overall ecology of the lake. To map the bryophyte's distribution, depth range, acoustic backscatter intensity, and derived bathymetric index grids are used as inputs into a hierarchical decision-tree classification model. Observations of the bryophyte mat from over 23 line kilometers of lake-floor video collected in the summer of 2006 are used as controls for the model. The resulting map matches well with ground-truth information and shows that the bryophyte mat covers most of the platform surrounding Wizard Island as well as on outcrops around the caldera wall.
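
    The classification step can be pictured as follows: per-cell features (depth, backscatter intensity, a derived bathymetric index) are stacked, a decision tree is fitted on the cells labeled from the lake-floor video, and the tree is then applied lake-wide. The sketch below uses synthetic grids and a toy labeling rule in place of the real Crater Lake data.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      n_cells = 20000
      depth = rng.uniform(-600, -20, n_cells)          # m
      backscatter = rng.normal(-30, 6, n_cells)        # dB
      rugosity = rng.gamma(2.0, 0.05, n_cells)         # derived bathymetric index
      X = np.column_stack([depth, backscatter, rugosity])

      # Toy rule standing in for the video ground truth: a mat on the shallower,
      # low-backscatter platform.
      y = ((depth > -120) & (backscatter < -28)).astype(int)

      idx = rng.choice(n_cells, 1500, replace=False)   # cells seen on video lines
      clf = DecisionTreeClassifier(max_depth=4).fit(X[idx], y[idx])
      mat_map = clf.predict(X)                         # predicted distribution grid
      print(mat_map.mean())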

  1. Index-based groundwater vulnerability mapping models using hydrogeological settings: A critical evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Prashant, E-mail: prashantkumar@csio.res.in; Academy of Scientific and Innovative Research—CSIO, Chandigarh 160030; Bansod, Baban K.S.

    2015-02-15

    Groundwater vulnerability maps are useful for decision making in land use planning and water resource management. This paper reviews the various groundwater vulnerability assessment models developed across the world. Each model has been evaluated in terms of its pros and cons and the environmental conditions of its application. The paper further discusses the validation techniques used for the vulnerability maps generated by the various models. Implicit challenges associated with the development of the groundwater vulnerability assessment models have also been identified, with scientific consideration of the parameter relations and their selection. - Highlights: • Various index-based groundwater vulnerability assessment models have been discussed. • A comparative analysis of the models and their applicability in different hydrogeological settings has been discussed. • Research problems of the underlying vulnerability assessment models are also reported in this review paper.

  2. Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.

    PubMed

    Song, Ci; Dai, Yifan; Peng, Xiaoqiang

    2010-07-01

    Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the input being a surface error map and an influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two steps in a single optimization through a constrained nonlinear optimization model, which takes both the two-norm of the surface residual error and the dwell-time gradient as the objective function. This enables machine dynamics limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing times. Simulations are presented to demonstrate the feasibility of the model, and the velocity map reinterpreted from the dwell time meets the velocity requirements and the acceleration or deceleration limits. The model and algorithm can also be applied to other computer-controlled subaperture methods.
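
    A minimal sketch of the combined optimization idea, assuming a linear removal operator and a simple squared-gradient penalty standing in for the dwell-time-gradient term; the array names and the weight lam are illustrative, not the published algorithm.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical 1-D setup: e is the surface error profile, A maps a
        # dwell-time vector t to material removed (convolution with the
        # influence function), lam weights the dwell-time-gradient term.
        A = np.load("removal_matrix.npy")   # shape (n_points, n_dwell)
        e = np.load("surface_error.npy")    # shape (n_points,)
        lam = 1e-2                          # assumed trade-off weight

        def objective(t):
            residual = A @ t - e                 # surface residual error
            gradient = np.diff(t)                # proxy for accel/decel demand
            return residual @ residual + lam * (gradient @ gradient)

        t0 = np.zeros(A.shape[1])
        res = minimize(objective, t0, bounds=[(0, None)] * A.shape[1],
                       method="L-BFGS-B")
        dwell_time = res.x                       # nonnegative, smoothed map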

  3. Predictive landslide susceptibility mapping using spatial information in the Pechabun area of Thailand

    NASA Astrophysics Data System (ADS)

    Oh, Hyun-Joo; Lee, Saro; Chotikasathien, Wisut; Kim, Chang Hwan; Kwon, Ju Hyoung

    2009-04-01

    For predictive landslide susceptibility mapping, this study applied and verified a probability model (the frequency ratio) and a statistical model (logistic regression) at Pechabun, Thailand, using a geographic information system (GIS) and remote sensing. Landslide locations were identified in the study area from interpretation of aerial photographs and field surveys, and maps of the topography, geology and land cover were constructed as a spatial database. The factors that influence landslide occurrence, such as slope gradient, slope aspect, curvature of topography and distance from drainage, were calculated from the topographic database. Lithology and distance from fault were extracted and calculated from the geology database. Land cover was classified from a Landsat TM satellite image. The frequency ratio and logistic regression coefficients were overlaid as each factor’s ratings for landslide susceptibility mapping. The landslide susceptibility maps were then verified and compared using the existing landslide locations. In the verification, the frequency ratio model showed 76.39% prediction accuracy and the logistic regression model showed 70.42%. The method can be used to reduce hazards associated with landslides and to plan land use.
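
    The frequency ratio itself is simple to compute: for each class of a causal factor, it is the share of landslide pixels falling in that class divided by the share of all pixels in that class, and the susceptibility index sums these ratings over factors. A hedged sketch with hypothetical grids:

        import numpy as np

        def frequency_ratio(factor_class, landslide_mask):
            """Frequency ratio per class of one causal factor.
            factor_class: integer class grid; landslide_mask: boolean grid."""
            fr = {}
            n_pix = factor_class.size
            n_slide = landslide_mask.sum()
            for c in np.unique(factor_class):
                in_class = factor_class == c
                pct_slides = landslide_mask[in_class].sum() / n_slide
                pct_area = in_class.sum() / n_pix
                fr[c] = pct_slides / pct_area if pct_area > 0 else 0.0
            return fr

        # Susceptibility index: sum the per-factor FR ratings at every pixel.
        # factors = [slope_cls, aspect_cls, ...]          # hypothetical grids
        # ratings = [frequency_ratio(f, slides) for f in factors]
        # lsi = sum(np.vectorize(r.get)(f) for f, r in zip(factors, ratings))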

  4. Simulating Electron Cyclotron Maser Emission for Low Mass Stars

    NASA Astrophysics Data System (ADS)

    Llama, Joe; Jardine, Moira

    2018-01-01

    Zeeman-Doppler Imaging (ZDI) is a powerful technique that enables us to map the large-scale magnetic fields of stars spanning the pre- and main-sequence. Coupling these magnetic maps with field extrapolation methods allows us to investigate the topology of the closed, X-ray bright corona, and the cooler, open stellar wind. Using ZDI maps of young M dwarfs with simultaneous radio light curves obtained from the VLA, we present the results of modeling the Electron-Cyclotron Maser (ECM) emission from these systems. We determine the X-ray luminosity and ECM emission that is produced using the ZDI maps and our field extrapolation model. We compare these findings with the observed radio light curves of these stars. This allows us to predict the relative phasing and amplitude of the stellar X-ray and radio light curves. This benchmarking of our model using these systems allows us to predict the ECM emission for all stars that have a ZDI map and an observed X-ray luminosity. Our model allows us to understand the origin of transient radio emission observations and is crucial for disentangling stellar and exoplanetary radio signals.

  5. EMAP and EMAGE: a framework for understanding spatially organized data.

    PubMed

    Baldock, Richard A; Bard, Jonathan B L; Burger, Albert; Burton, Nicolas; Christiansen, Jeff; Feng, Guanjie; Hill, Bill; Houghton, Derek; Kaufman, Matthew; Rao, Jianguo; Sharpe, James; Ross, Allyson; Stevenson, Peter; Venkataraman, Shanmugasundaram; Waterhouse, Andrew; Yang, Yiya; Davidson, Duncan R

    2003-01-01

    The Edinburgh Mouse Atlas Project (EMAP) is a time-series of mouse-embryo volumetric models. The models provide a context-free spatial framework onto which structural interpretations and experimental data can be mapped. This enables collation, comparison, and query of complex spatial patterns with respect to each other and with respect to known or hypothesized structure. The atlas also includes a time-dependent anatomical ontology and mapping between the ontology and the spatial models in the form of delineated anatomical regions or tissues. The models provide a natural, graphical context for browsing and visualizing complex data. The Edinburgh Mouse Atlas Gene-Expression Database (EMAGE) is one of the first applications of the EMAP framework and provides a spatially mapped gene-expression database with associated tools for data mapping, submission, and query. In this article, we describe the underlying principles of the Atlas and the gene-expression database, and provide a practical introduction to the use of the EMAP and EMAGE tools, including use of new techniques for whole body gene-expression data capture and mapping.

  6. Cognitive Processes in Orienteering: A Review.

    ERIC Educational Resources Information Center

    Seiler, Roland

    1996-01-01

    Reviews recent research on information processing and decision making in orienteering. The main cognitive demands investigated were selection of relevant map information for route choice, comparison between map and terrain in map reading and in relocation, and quick awareness of mistakes. Presents a model of map reading based on results. Contains…

  7. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    NASA Astrophysics Data System (ADS)

    Kolkman, M. J.; Kok, M.; van der Veen, A.

    The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties on a fundamental cognitive level, which can reveal experiences, perceptions, assumptions, knowledge and subjective beliefs of stakeholders, experts and other actors, and can stimulate communication and learning. This article presents the theoretical framework from which the use of mental model mapping to analyse this type of problem emerges as a promising approach. The framework consists of the problem solving or policy design cycle, the knowledge production or modelling cycle, and the (computer) model as interface between the cycles. Literature attributes difficulties in the decision-making process to communication gaps between decision makers, stakeholders and scientists, and to the construction of knowledge within different paradigm groups that leads to different interpretations of the problem situation. Analysis of the decision-making process literature indicates that choices, which are made in all steps of the problem solving cycle, are based on an individual decision maker’s frame of perception. This frame, in turn, depends on the mental model residing in the mind of the individual. Thus we identify three levels of awareness on which the decision process can be analysed. This research focuses on the third level. Mental models can be elicited using mapping techniques. In this way, analysing an individual’s mental model can shed light on decision-making problems. The steps of the knowledge production cycle are, in the same manner, ultimately driven by the mental models of the scientist in a specific discipline. Remnants of this mental model can be found in the resulting computer model. The characteristics of unstructured problems (complexity, uncertainty and disagreement) can be positioned in the framework, as can the communities of knowledge construction and valuation involved in the solution of these problems (core science, applied science, professional consultancy, and “post-normal” science). Mental model maps, this research hypothesises, are suitable to analyse the above aspects of the problem. This hypothesis is tested for the case of the Zwolle storm surge barrier. Analysis can aid integration between disciplines and participation of public stakeholders, and can stimulate learning processes. Mental model mapping is recommended to visualise the use of knowledge, to analyse difficulties in the problem solving process, and to aid information transfer and communication. Mental model mapping helps scientists to shape their new, post-normal responsibilities in a manner that complies with integrity when dealing with unstructured problems in complex, multifunctional systems.

  8. A River Runs Under It: Modeling the Distribution of Streams and Stream Burial in Large River Basins

    NASA Astrophysics Data System (ADS)

    Elmore, A. J.; Julian, J.; Guinn, S.; Weitzell, R.; Fitzpatrick, M.

    2011-12-01

    Stream network density exerts a strong control on hydrologic processes in watersheds. Over land and through soil and bedrock substrate, water moves slowly and is subject to chemical transformations unique to conditions of continuous contact with geologic materials. In contrast, once water enters stream channels it is efficiently transported out of watersheds, reducing the amount of time for biological uptake and stream nutrient processing. Therefore, stream network density dictates both the relative importance of terrestrial and aquatic influences to stream chemistry and the residence time of water in watersheds, and is critical to modeling and empirical studies aimed at understanding the impact of land use on stream water quantity and quality. Stream network density is largely a function of the number and length of the smallest streams. Methods for mapping and measuring these headwater streams range from simple measurement of stream length from existing maps, to detailed field mapping efforts, which are difficult to implement over large areas. Confounding the simplest approaches, many headwater stream reaches are not included in hydrographical maps, such as the U.S. National Hydrography Dataset (NHD), either because they were buried during the course of urban development or because they were seen as smaller than the minimum mapping size at the time of map generation. These "missing streams" severely limit the effective analyses of stream network density based on the NHD, constituting a major problem for many efforts to understand land-use impacts on streams. Here we report on research that predicts stream presence and absence by coupling field observations of headwater stream channels with maximum entropy models (MaxEnt) commonly implemented in biogeographical studies to model species distributions. The model utilizes terrain variables that are continuously accumulated along hydrologic flowpaths derived from a 10-m digital elevation model. In validation, the model correctly predicts the presence of 91% of all 10-m stream segments, and rarely miscalculates tributary numbers. We apply this model to the entire Potomac River Basin (37,800 km2) and several adjacent basins to map stream channel density and compare our results with NHD flowline data. We find that NHD underestimates stream channel density by a factor of two in most subwatersheds, and this effect is strongest in the densely urbanized cities of Washington, DC and Baltimore, MD. We then apply a second predictive model based on impervious surface area data to map the extent of stream burial. Results demonstrate that the extent of stream burial increases with decreasing stream catchment area. When applied at four time steps (1975, 1990, 2001, and 2006), we find that although stream burial rates have slowed in the recent decade, streams that are not mapped in NHD flowline data continue to be buried during development. This work is the most ambitious attempt yet to map stream network density over a large region and will have lasting implications for modeling and conservation efforts.
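
    MaxEnt is normally run through dedicated software; purely as an illustration of the presence/background workflow the record describes, the sketch below substitutes a logistic-regression classifier on flowpath-accumulated terrain variables. All file names are hypothetical and the substitution is ours, not the authors'.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical inputs: terrain features at field-mapped channel
        # points (label 1) and at background points (label 0).
        X = np.load("terrain_features.npy")   # slope, contributing area, ...
        y = np.load("channel_presence.npy")   # 1 = channel, 0 = background

        clf = LogisticRegression(max_iter=1000).fit(X, y)

        # Predict channel probability over the full 10-m grid.
        p_stream = clf.predict_proba(np.load("grid_features.npy"))[:, 1]
        stream_map = p_stream > 0.5           # predicted channel presence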

  9. Virtual environment navigation with look-around mode to explore new real spaces by people who are blind.

    PubMed

    Lahav, Orly; Gedalevitz, Hadas; Battersby, Steven; Brown, David; Evett, Lindsay; Merritt, Patrick

    2018-05-01

    This paper examines the ability of people who are blind to construct a mental map and perform orientation tasks in real space by using Nintendo Wii technologies to explore virtual environments. The participant explores new spaces through haptic and auditory feedback triggered by pointing or walking in the virtual environments and later constructs a mental map, which can be used to navigate in real space. The study included 10 participants who were congenitally or adventitiously blind, divided into experimental and control groups. The research was implemented using virtual environment exploration and orientation tasks in real spaces, with both qualitative and quantitative methods in its methodology. The results show that the mode of exploration afforded to the experimental group is radically new in orientation and mobility training; as a result, 60% of the experimental participants constructed mental maps that were based on the map model, compared with only 30% of the control group participants. Using technology that enabled them to explore and to collect spatial information in a way that does not exist in real space influenced the ability of the experimental group to construct a mental map based on the map model. Implications for rehabilitation: The virtual cane system for the first time enables people who are blind to explore and collect spatial information via the look-around mode in addition to the walk-around mode. People who are blind prefer to use the look-around mode to explore new spaces, as opposed to the walk-around mode. Although the look-around mode requires users to establish a complex collecting and processing procedure for the spatial data, people who are blind using this mode are able to construct a mental map as a map model. For people who are blind (as for the sighted), construction of a mental map based on the map model offers more flexibility in choosing a walking path in a real space, accounting for changes that occur in the space.

  10. Riparian Wetlands: Mapping

    EPA Science Inventory

    Riparian wetlands are critical systems that perform functions and provide services disproportionate to their extent in the landscape. Mapping wetlands allows for better planning, management, and modeling, but riparian wetlands present several challenges to effective mapping due t...

  11. A simple methodology to produce flood risk maps consistent with FEMA's base flood elevation maps: Implementation and validation over the entire contiguous United States

    NASA Astrophysics Data System (ADS)

    Goteti, G.; Kaheil, Y. H.; Katz, B. G.; Li, S.; Lohmann, D.

    2011-12-01

    In the United States, government agencies as well as the National Flood Insurance Program (NFIP) use flood inundation maps associated with the 100-year return period (base flood elevation, BFE), produced by the Federal Emergency Management Agency (FEMA), as the basis for flood insurance. A credibility check of the flood risk hydraulic models, often employed by insurance companies, is their ability to reasonably reproduce FEMA's BFE maps. We present results from the implementation of a flood modeling methodology aimed towards reproducing FEMA's BFE maps at a very fine spatial resolution using a computationally parsimonious, yet robust, hydraulic model. The hydraulic model used in this study has two components: one for simulating flooding of the river channel and adjacent floodplain, and the other for simulating flooding in the remainder of the catchment. The first component is based on a 1-D wave propagation model, while the second component is based on a 2-D diffusive wave model. The 1-D component captures the flooding from large-scale river transport (including upstream effects), while the 2-D component captures the flooding from local rainfall. The study domain consists of the contiguous United States, hydrologically subdivided into catchments averaging about 500 km2 in area, at a spatial resolution of 30 meters. Using historical daily precipitation data from the Climate Prediction Center (CPC), the precipitation associated with the 100-year return period event was computed for each catchment and was input to the hydraulic model. Flood extent from the FEMA BFE maps is reasonably replicated by the 1-D component of the model (riverine flooding). FEMA's BFE maps only represent the riverine flooding component and are unavailable for many regions of the USA. However, this modeling methodology (1-D and 2-D components together) covers the entire contiguous USA. This study is part of a larger modeling effort from Risk Management Solutions (RMS) to estimate flood risk associated with extreme precipitation events in the USA. Towards this greater objective, state-of-the-art models of flood hazard and stochastic precipitation are being implemented over the contiguous United States. Results from the successful implementation of the modeling methodology will be presented.
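
    One concrete sub-step the record mentions, estimating the 100-year precipitation for a catchment from a daily record, can be sketched with an annual-maximum series and a Gumbel fit; the file name and the choice of the Gumbel distribution are assumptions, not necessarily the methodology used in the study:

        import numpy as np
        from scipy.stats import gumbel_r

        # Hypothetical catchment series: one annual maximum of daily
        # precipitation per year, e.g. extracted from the CPC daily record.
        annual_max = np.load("catchment_annual_max_precip.npy")  # (n_years,)

        loc, scale = gumbel_r.fit(annual_max)
        # The T-year event is the (1 - 1/T) quantile of the fitted law.
        p100 = gumbel_r.ppf(1 - 1.0 / 100.0, loc=loc, scale=scale)
        print(f"100-year daily precipitation: {p100:.1f} mm")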

  12. Usage of Data-Encoded Web Maps with Client Side Color Rendering for Combined Data Access, Visualization and Modeling Purposes

    NASA Technical Reports Server (NTRS)

    Pliutau, Denis; Prasad, Narashimha S.

    2013-01-01

    Current approaches to satellite observation data storage and distribution implement separate visualization and data access methodologies, which often leads to time-consuming data ordering and coding for applications requiring both visual representation and data handling and modeling capabilities. We describe an approach we implemented for a data-encoded web map service based on storing numerical data within server map tiles and subsequent client-side data manipulation and map color rendering. The approach relies on storing data using the lossless-compression Portable Network Graphics (PNG) image data format, which is natively supported by web browsers, allowing on-the-fly browser rendering and modification of the map tiles. The method is easy to implement using existing software libraries and has the advantage of easy client-side map color modifications, as well as spatial subsetting with physical parameter range filtering. This method is demonstrated for the ASTER-GDEM elevation model and selected MODIS data products and represents an alternative to the currently used storage and data access methods. One additional benefit is that multiple levels of averaging are provided, since map tiles must be generated at varying resolutions for the various map magnification levels. We suggest that such a merged data and mapping approach may be a viable alternative to existing static storage and data access methods for a wide array of combined simulation, data access and visualization purposes.
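
    A minimal sketch of the encoding idea: pack a numeric grid into the byte channels of a lossless PNG tile so the client can recover the values exactly. The offset/scale convention and file names are assumptions; in the record's service the decoding would happen in the browser, mirrored here in Python.

        import numpy as np
        from PIL import Image

        # Hypothetical tile: pack elevation (m) into the R and G bytes of an
        # RGB PNG so the lossless tile doubles as a data carrier.
        elev = np.load("tile_elevation.npy")          # float32, e.g. 256x256
        offset, scale = -500.0, 0.1                   # assumed convention
        counts = np.round((elev - offset) / scale).astype(np.uint16)

        rgb = np.zeros(elev.shape + (3,), dtype=np.uint8)
        rgb[..., 0] = counts >> 8                     # high byte
        rgb[..., 1] = counts & 0xFF                   # low byte
        Image.fromarray(rgb, "RGB").save("tile.png")  # PNG is lossless

        # Client-side decoding, mirrored in Python for illustration.
        back = np.asarray(Image.open("tile.png"), dtype=np.uint16)
        decoded = ((back[..., 0] << 8) | back[..., 1]) * scale + offset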

  13. Topography- and nightlight-based national flood risk assessment in Canada

    NASA Astrophysics Data System (ADS)

    Elshorbagy, Amin; Bharath, Raja; Lakhanpal, Anchit; Ceola, Serena; Montanari, Alberto; Lindenschmidt, Karl-Erich

    2017-04-01

    In Canada, flood analysis and water resource management, in general, are tasks conducted at the provincial level; therefore, unified national-scale approaches to water-related problems are uncommon. In this study, a national-scale flood risk assessment approach is proposed and developed. The study focuses on using global and national datasets available with various resolutions to create flood risk maps. First, a flood hazard map of Canada is developed using topography-based parameters derived from digital elevation models, namely, elevation above nearest drainage (EAND) and distance from nearest drainage (DFND). This flood hazard mapping method is tested on a smaller area around the city of Calgary, Alberta, against a flood inundation map produced by the city using hydraulic modelling. Second, a flood exposure map of Canada is developed using a land-use map and the satellite-based nightlight luminosity data as two exposure parameters. Third, an economic flood risk map is produced, and subsequently overlaid with population density information to produce a socioeconomic flood risk map for Canada. All three maps of hazard, exposure, and risk are classified into five classes, ranging from very low to severe. A simple way to include flood protection measures in hazard estimation is also demonstrated using the example of the city of Winnipeg, Manitoba. This could be done for the entire country if information on flood protection across Canada were available. The evaluation of the flood hazard map shows that the topography-based method adopted in this study is both practical and reliable for large-scale analysis. Sensitivity analysis regarding the resolution of the digital elevation model is needed to identify the resolution that is fine enough for reliable hazard mapping, but coarse enough for computational tractability. The nightlight data are found to be useful for exposure and risk mapping in Canada; however, uncertainty analysis should be conducted to investigate the effect of the overglow phenomenon on flood risk mapping.
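
    The two topographic parameters are straightforward to derive from a DEM and a channel mask; a hedged sketch with hypothetical inputs, using an assumed quantile-based five-class split rather than the paper's exact class boundaries:

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        dem = np.load("dem.npy")              # hypothetical elevation grid (m)
        drainage = np.load("drainage.npy")    # boolean grid, True on channels

        # Distance From Nearest Drainage (DFND) plus the index of that cell.
        dfnd, nearest = distance_transform_edt(~drainage, return_indices=True)

        # Elevation Above Nearest Drainage (EAND): height over the channel cell.
        eand = dem - dem[nearest[0], nearest[1]]

        # Five hazard classes by EAND quantiles; class 0 (lowest EAND) is
        # the most hazardous, class 4 the least.
        edges = np.quantile(eand, [0.2, 0.4, 0.6, 0.8])
        hazard_class = np.digitize(eand, edges)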

  14. Multisensor Modeling Underwater with Uncertain Information

    DTIC Science & Technology

    1988-09-01

    [No abstract recovered; this record excerpt contains only acknowledgement and list-of-figures fragments. Recoverable details: the data used for stochastic modeling were supplied by NECOR at the University of Rhode Island, courtesy of Dr. Dave Gallo; the figures listed include Sea MARC I intensity maps of the Clipperton area (one after Kastens et al.) and a Sea Beam contour map of the Clipperton area.]

  15. How Art Works: The National Endowment for the Arts' Five-Year Research Agenda, with a System Map and Measurement Model. Appendix A & B

    ERIC Educational Resources Information Center

    National Endowment for the Arts, 2012

    2012-01-01

    This paper presents two appendices supporting the "How Art Works: The National Endowment for the Arts' Five-Year Research Agenda, with a System Map and Measurement Model" report. In Appendix A, brief descriptions of relevant studies and datasets for each node in the "How Art Works" system map are presented. This appendix is meant to supply…

  16. Hungarian contribution to the Global Soil Organic Carbon Map (GSOC17) using advanced machine learning algorithms and geostatistics

    NASA Astrophysics Data System (ADS)

    Szatmári, Gábor; Laborczi, Annamária; Takács, Katalin; Pásztor, László

    2017-04-01

    The knowledge about soil organic carbon (SOC) baselines and changes, and the detection of vulnerable hot spots for SOC losses and gains under climate change and changed land management, is still fairly limited. Thus the Global Soil Partnership (GSP) has been requested to develop a global SOC mapping campaign by 2017. GSP's concept builds on official national data sets; therefore, a bottom-up (country-driven) approach is pursued. The elaborated Hungarian methodology suits the general specifications of GSOC17 provided by GSP. The input data for the GSOC17@HU mapping approach involved legacy soil data bases, as well as environmental covariates related to the main soil forming factors, such as climate, organisms, relief and parent material. Nowadays, digital soil mapping (DSM) highly relies on the assumption that soil properties of interest can be modelled as a sum of a deterministic and a stochastic component, which can be treated and modelled separately. We also adopted this assumption in our methodology. In practice, multiple regression techniques are commonly used to model the deterministic part. However, these global (and usually linear) models commonly oversimplify the often complex and non-linear relationship, which has a crucial effect on the resulting soil maps. Thus, we integrated machine learning algorithms (namely random forest and quantile regression forest) in the elaborated methodology, supposing them to be more suitable for the problem in hand. This approach has enabled us to model the GSOC17 soil properties in forms as complex and non-linear as the soil itself. Furthermore, it has enabled us to model and assess the uncertainty of the results, which is highly relevant in decision making. The applied methodology used a geostatistical approach to model the stochastic part of the spatial variability of the soil properties of interest. We created the GSOC17@HU map with 1 km grid resolution according to the GSP's specifications. The map contributes to the GSP's GSOC17 proposals, as well as to the development of a global soil information system under GSP Pillar 4 on soil data and information. We elaborated our adherent code (created in the R software environment) in such a way that it can be improved, specified and applied for further uses. Hence, it opens the door to creating countrywide maps with higher grid resolution for SOC (or other soil related properties) using the advanced methodology, as well as to contributing to and supporting SOC (or other soil) related country level decision making. Our paper will present the soil mapping methodology itself, the resulting GSOC17@HU map, and some of our conclusions drawn from the experience and their effects on further uses. Acknowledgement: Our work was supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
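
    A hedged sketch of the deterministic step: a random forest fit to legacy SOC observations, with the spread of per-tree predictions used as a crude stand-in for quantile regression forests. File names are hypothetical (the authors worked in R), and the residual kriging that models the stochastic part is only indicated in a comment.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Hypothetical training table: SOC observations and environmental
        # covariates (climate, organisms, relief, parent material) at the
        # legacy soil-profile sites.
        X = np.load("covariates.npy")      # (n_sites, n_covariates)
        y = np.load("soc_obs.npy")         # observed SOC

        rf = RandomForestRegressor(n_estimators=500, random_state=0)
        rf.fit(X, y)

        # Deterministic component on the 1-km prediction grid.
        X_grid = np.load("grid_covariates.npy")
        soc_det = rf.predict(X_grid)

        # Crude uncertainty proxy: quantiles of the per-tree predictions.
        per_tree = np.stack([t.predict(X_grid) for t in rf.estimators_])
        soc_q05, soc_q95 = np.quantile(per_tree, [0.05, 0.95], axis=0)

        # The stochastic component (kriging of the residuals) would then be
        # added with a geostatistics package, e.g. gstat in R or pykrige.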

  17. NoiseMap and AEDT Gap Analysis

    DOT National Transportation Integrated Search

    2017-09-30

    NoiseMap and the Aviation Environmental Design Tool (AEDT) both use an integrated modeling approach to calculate aircraft noise in and around an airfield. Both models also employ the same general overall approach by using airfield operational data, s...

  18. Subsite mapping of enzymes. Depolymerase computer modelling.

    PubMed Central

    Allen, J D; Thoma, J A

    1976-01-01

    We have developed a depolymerase computer model that uses a minimization routine. The model is designed so that, given experimental bond-cleavage frequencies for oligomeric substrates and experimental Michaelis parameters as a function of substrate chain length, the optimum subsite map is generated. The minimized sum of the weighted-squared residuals of the experimental and calculated data is used as a criterion of the goodness-of-fit for the optimized subsite map. The application of the minimization procedure to subsite mapping is explored through the use of simulated data. A procedure is developed whereby the minimization model can be used to determine the number of subsites in the enzymic binding region and to locate the position of the catalytic amino acids among these subsites. The degree of propagation of experimental variance into the subsite-binding energies is estimated. The question of whether hydrolytic rate coefficients are constant or a function of the number of filled subsites is examined. PMID:999629

  19. Incorporating Yearly Derived Winter Wheat Maps Into Winter Wheat Yield Forecasting Model

    NASA Technical Reports Server (NTRS)

    Skakun, S.; Franch, B.; Roger, J.-C.; Vermote, E.; Becker-Reshef, I.; Justice, C.; Santamaría-Artigas, A.

    2016-01-01

    Wheat is one of the most important cereal crops in the world. Timely and accurate forecasts of wheat yield and production at global scale are vital in implementing food security policy. Becker-Reshef et al. (2010) developed a generalized empirical model for forecasting winter wheat production using remote sensing data and official statistics. This model was implemented using static wheat maps. In this paper, we analyze the impact of incorporating yearly wheat masks into the forecasting model. We propose a new approach for producing in-season winter wheat maps exploiting satellite data and official statistics on crop area only. Validation on independent data showed that the proposed approach reached omission errors of 6% to 23% and commission errors of 10% to 16% when mapping winter wheat 2-3 months before harvest. In general, we found a limited impact of using yearly winter wheat masks over a static mask for the study regions.
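
    For reference, omission and commission errors follow directly from the validation cross-tabulation; a small sketch with hypothetical arrays:

        import numpy as np

        # Binary wheat map checked against independent validation labels.
        truth = np.load("validation_wheat.npy").astype(bool)
        mapped = np.load("predicted_wheat.npy").astype(bool)

        omission = (truth & ~mapped).sum() / truth.sum()      # missed wheat
        commission = (mapped & ~truth).sum() / mapped.sum()   # false wheat
        print(f"omission {omission:.1%}, commission {commission:.1%}")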

  20. 2D discontinuous piecewise linear map: Emergence of fashion cycles.

    PubMed

    Gardini, L; Sushko, I; Matsuyama, K

    2018-05-01

    We consider a discrete-time version of the continuous-time fashion cycle model introduced in Matsuyama, 1992. Its dynamics are defined by a 2D discontinuous piecewise linear map depending on three parameters. In the parameter space of the map, periodicity regions associated with attracting cycles of different periods are organized in the period adding and period incrementing bifurcation structures. The boundaries of all the periodicity regions related to border collision bifurcations are obtained analytically in explicit form. We show the existence of several partially overlapping period incrementing structures, which is a novelty for the considered class of maps. Moreover, we show that if the time delay in the discrete-time formulation of the model shrinks to zero, the number of period incrementing structures tends to infinity and the dynamics of the discrete-time fashion cycle model converge to those of the continuous-time fashion cycle model.
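
    To make the object concrete, the sketch below iterates a generic 2D discontinuous piecewise linear map and checks the tail of the orbit for an attracting cycle; the matrices and border value are illustrative placeholders, not the fashion-cycle model's coefficients.

        import numpy as np

        # Generic map: x' = A_L x + b_L if x[0] < border, else A_R x + b_R.
        A_L = np.array([[0.5, -0.8], [1.0, 0.0]])
        b_L = np.array([1.0, 0.0])
        A_R = np.array([[0.5, 0.8], [1.0, 0.0]])
        b_R = np.array([-1.0, 0.0])
        border = 0.0

        def step(x):
            return (A_L @ x + b_L) if x[0] < border else (A_R @ x + b_R)

        x = np.array([0.1, 0.1])
        orbit = []
        for _ in range(10_000):
            x = step(x)
            orbit.append(x.copy())
        orbit = np.array(orbit)

        # Naive period detection on the orbit tail.
        tail = orbit[-200:]
        for p in range(1, 100):
            if np.allclose(tail[-1], tail[-1 - p], atol=1e-9):
                print("attracting cycle of period", p)
                break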

  1. An application of a hydraulic model simulator in flood risk assessment under changing climatic conditions

    NASA Astrophysics Data System (ADS)

    Doroszkiewicz, J. M.; Romanowicz, R. J.

    2016-12-01

    The standard procedure of climate change impact assessment on future hydrological extremes consists of a chain of consecutive actions, starting from the choice of a GCM driven by an assumed CO2 scenario, through downscaling of climatic forcing to a catchment scale, estimation of hydrological extreme indices using hydrological modelling tools, and subsequent derivation of flood risk maps with the help of a hydraulic model. Among many possible sources of uncertainty, the main ones are the uncertainties related to future climate scenarios, climate models, downscaling techniques, and hydrological and hydraulic models. Unfortunately, we cannot directly assess the impact of these different sources of uncertainty on future flood risk due to the lack of observations of future climate realizations. The aim of this study is an assessment of the relative impact of different sources of uncertainty on the uncertainty of flood risk maps. Due to the complexity of the processes involved, an assessment of the total uncertainty of maps of inundation probability might be very computer time consuming. As a way forward we present an application of a hydraulic model simulator based on a nonlinear transfer function model for the chosen locations along the river reach. The transfer function model parameters are estimated based on the simulations of the hydraulic model at each of the model cross-sections. The study shows that the application of a simulator substantially reduces the computer requirements related to the derivation of flood risk maps under future climatic conditions. The Biala Tarnowska catchment, situated in southern Poland, is used as a case study. Future discharges at the input to the hydraulic model are obtained using the HBV model and climate projections obtained from the EURO-CORDEX project. The study describes a cascade of uncertainty related to the different stages of the process of derivation of flood risk maps under changing climate conditions. In this context it takes into account the uncertainty of future climate projections, the uncertainty of the flow routing model, the propagation of that uncertainty through the hydraulic model, and, finally, the uncertainty related to the derivation of flood risk maps.

  2. Effective Integration of Earth Observation Data and Flood Modeling for Rapid Disaster Response: The Texas 2015 Case

    NASA Astrophysics Data System (ADS)

    Schumann, G.

    2016-12-01

    Routinely obtaining real-time 2-D inundation patterns of a flood event at a meaningful spatial resolution and over large scales is at the moment only feasible with either operational aircraft flights or satellite imagery. Of course, having model simulations of floodplain inundation available to complement the remote sensing data is highly desirable, for both event re-analysis and forecasting event inundation. Using the Texas 2015 flood disaster, we demonstrate the value of multi-scale EO data for large-scale 2-D floodplain inundation modeling and forecasting. A dynamic re-analysis of the Texas 2015 flood disaster was run using a 2-D flood model developed for accurate large-scale simulations. We simulated the major rivers entering the Gulf of Mexico and used flood maps produced from both optical and SAR satellite imagery to examine regional model sensitivities and assess associated performance. It was demonstrated that satellite flood maps can complement model simulations and add value, although this is largely dependent on a number of important factors, such as image availability, regional landscape topology, and model uncertainty. In the preferred case where model uncertainty is high, landscape topology is complex (e.g., an urbanized coastal area), and satellite flood maps are available (in the case of SAR, for instance), satellite data can significantly reduce model uncertainty by identifying the "best possible" model parameter set. However, the more common situation is one where model uncertainty is low and spatially contiguous flooding can be mapped from satellites easily enough, such as in rural large inland river floodplains. Consequently, not much value from satellites can be added. Nevertheless, where a large number of flood maps are available, model credibility can be increased substantially. In the case presented here this was true for at least 60% of the many thousands of kilometers of river flow length simulated, where satellite flood maps existed. The next step of this project is to employ a technique termed the "targeted observation" approach, an assimilation-based procedure that quantifies the impact observations have on model predictions at the local scale and along the entire river system, when assimilated with the model at specific "overpass" locations.

  3. Saliency Detection for Stereoscopic 3D Images in the Quaternion Frequency Domain

    NASA Astrophysics Data System (ADS)

    Cai, Xingyu; Zhou, Wujie; Cen, Gang; Qiu, Weiwei

    2018-06-01

    Recent studies have shown that a remarkable distinction exists between human binocular and monocular viewing behaviors. Compared with two-dimensional (2D) saliency detection models, stereoscopic three-dimensional (S3D) image saliency detection is a more challenging task. In this paper, we propose a saliency detection model for S3D images. The final saliency map of this model is constructed from the local quaternion Fourier transform (QFT) sparse feature and global QFT log-Gabor feature. More specifically, the local QFT feature measures the saliency map of an S3D image by analyzing the location of a similar patch. The similar patch is chosen using a sparse representation method. The global saliency map is generated by applying the wake edge-enhanced gradient QFT map through a band-pass filter. The results of experiments on two public datasets show that the proposed model outperforms existing computational saliency models for estimating S3D image saliency.

  4. A Statistical Examination of Magnetic Field Model Accuracy for Mapping Geosynchronous Solar Energetic Particle Observations to Lower Earth Orbits

    NASA Astrophysics Data System (ADS)

    Young, S. L.; Kress, B. T.; Rodriguez, J. V.; McCollough, J. P.

    2013-12-01

    Operational specifications of space environmental hazards can be an important input used by decision makers. Ideally the specification would come from on-board sensors, but for satellites where that capability is not available another option is to map data from remote observations to the location of the satellite. This requires a model of the physical environment and an understanding of its accuracy for mapping applications. We present a statistical comparison between magnetic field model mappings of solar energetic particle observations made by NOAA's Geostationary Operational Environmental Satellites (GOES) to the location of the Combined Release and Radiation Effects Satellite (CRRES). Because CRRES followed a geosynchronous transfer orbit that precessed in local time, we are able to examine the model accuracy between LEO and GEO orbits across a range of local times. We examine the accuracy of multiple magnetic field models using a variety of statistics and examine their utility for operational purposes.

  5. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    NASA Astrophysics Data System (ADS)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; it carries the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which might solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method and can overcome the SMR problem when there is no observed bladder cancer in an area.
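
    The SMR itself is elementary, and its weakness with zero counts is easy to see; the sketch below computes it for hypothetical districts and adds a crude log-scale shrinkage purely to illustrate the smoothing idea (the paper's actual log-normal model is fitted in WinBUGS, and the equal weights here are an assumption).

        import numpy as np

        # SMR per district: observed cases over expected cases, where the
        # expectation applies the overall rate to each district's population.
        observed = np.array([3, 0, 12, 7])              # hypothetical cases
        population = np.array([50e3, 20e3, 180e3, 90e3])

        rate = observed.sum() / population.sum()        # overall incidence
        expected = rate * population
        smr = observed / expected                       # zero when observed == 0

        # Illustrative log-scale shrinkage toward the global mean; the 0.5
        # count offset avoids log(0) and the 50/50 weights are assumed.
        log_rr = np.log((observed + 0.5) / expected)
        rr_smooth = np.exp(0.5 * log_rr + 0.5 * log_rr.mean())
        print(smr, rr_smooth)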

  6. A componential model of human interaction with graphs: 1. Linear regression modeling

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert

    1994-01-01

    Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes: searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and the user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications of the MA-P model, alternative models, and design implications of the MA-P model.
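
    The linearity prediction amounts to a one-parameter regression of response time on step count; a tiny sketch with made-up data (the numbers are invented, not the experiments' measurements):

        import numpy as np

        steps = np.array([3, 4, 4, 5, 6, 7, 8])   # steps from task analysis
        rt_ms = np.array([820, 990, 1010, 1180, 1350, 1490, 1700])

        slope, intercept = np.polyfit(steps, rt_ms, 1)
        pred = slope * steps + intercept
        r2 = 1 - np.sum((rt_ms - pred) ** 2) / np.sum((rt_ms - rt_ms.mean()) ** 2)
        print(f"RT = {slope:.0f} ms/step + {intercept:.0f} ms, R2 = {r2:.2f}")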

  7. The Open Flux Problem

    NASA Astrophysics Data System (ADS)

    Linker, J. A.; Caplan, R. M.; Downs, C.; Riley, P.; Mikic, Z.; Lionello, R.; Henney, C. J.; Arge, C. N.; Liu, Y.; Derosa, M. L.; Yeates, A.; Owens, M. J.

    2017-10-01

    The heliospheric magnetic field is of pivotal importance in solar and space physics. The field is rooted in the Sun’s photosphere, where it has been observed for many years. Global maps of the solar magnetic field based on full-disk magnetograms are commonly used as boundary conditions for coronal and solar wind models. Two primary observational constraints on the models are (1) the open field regions in the model should approximately correspond to coronal holes (CHs) observed in emission and (2) the magnitude of the open magnetic flux in the model should match that inferred from in situ spacecraft measurements. In this study, we calculate both magnetohydrodynamic and potential field source surface solutions using 14 different magnetic maps produced from five different types of observatory magnetograms, for the time period surrounding 2010 July. We have found that for all of the model/map combinations, models that have CH areas close to observations underestimate the interplanetary magnetic flux, or, conversely, for models to match the interplanetary flux, the modeled open field regions are larger than CHs observed in EUV emission. In an alternative approach, we estimate the open magnetic flux entirely from solar observations by combining automatically detected CHs for Carrington rotation 2098 with observatory synoptic magnetic maps. This approach also underestimates the interplanetary magnetic flux. Our results imply that either typical observatory maps underestimate the Sun’s magnetic flux, or a significant portion of the open magnetic flux is not rooted in regions that are obviously dark in EUV and X-ray emission.

  8. A Biome map for Modelling Global Mid-Pliocene Climate Change

    NASA Astrophysics Data System (ADS)

    Salzmann, U.; Haywood, A. M.

    2006-12-01

    The importance of vegetation-climate feedbacks was highlighted by several paleo-climate modelling exercises but their role as a boundary condition in Tertiary modelling has not been fully recognised or explored. Several paleo-vegetation datasets and maps have been produced for specific time slabs or regions for the Tertiary, but the vegetation classifications that have been used differ, thus making meaningful comparisons difficult. In order to facilitate further investigations into Tertiary climate and environmental change we are presently implementing the comprehensive GIS database TEVIS (Tertiary Environment and Vegetation Information System). TEVIS integrates marine and terrestrial vegetation data, taken from fossil pollen, leaf or wood, into an internally consistent classification scheme to produce for different time slabs global Tertiary Biome and Mega- Biome maps (Harrison & Prentice, 2003). In the frame of our ongoing 5-year programme we present a first global vegetation map for the mid-Pliocene time slab, a period of sustained global warmth. Data were synthesised from the PRISM data set (Thompson and Fleming 1996) after translating them to the Biome classification scheme and from new literature. The outcomes of the Biome map are compared with modelling results using an advanced numerical general circulation model (HadAM3) and the BIOME 4 vegetation model. Our combined proxy data and modelling approach will provide new palaeoclimate datasets to test models that are used to predict future climate change, and provide a more rigorous picture of climate and environmental changes during the Neogene.

  9. GIS-based groundwater potential mapping using boosted regression tree, classification and regression tree, and random forest machine learning models in Iran.

    PubMed

    Naghibi, Seyed Amir; Pourghasemi, Hamid Reza; Dixon, Barnali

    2016-01-01

    Groundwater is considered one of the most valuable fresh water resources. The main objective of this study was to produce groundwater spring potential maps in the Koohrang Watershed, Chaharmahal-e-Bakhtiari Province, Iran, using three machine learning models: boosted regression tree (BRT), classification and regression tree (CART), and random forest (RF). Thirteen hydrological-geological-physiographical (HGP) factors that influence locations of springs were considered in this research. These factors include slope degree, slope aspect, altitude, topographic wetness index (TWI), slope length (LS), plan curvature, profile curvature, distance to rivers, distance to faults, lithology, land use, drainage density, and fault density. Subsequently, groundwater spring potential was modeled and mapped using CART, RF, and BRT algorithms. The predicted results from the three models were validated using the receiver operating characteristics curve (ROC). From 864 springs identified, 605 (≈70 %) locations were used for the spring potential mapping, while the remaining 259 (≈30 %) springs were used for the model validation. The area under the curve (AUC) for the BRT model was calculated as 0.8103 and for CART and RF the AUC were 0.7870 and 0.7119, respectively. Therefore, it was concluded that the BRT model produced the best prediction results while predicting locations of springs followed by CART and RF models, respectively. Geospatially integrated BRT, CART, and RF methods proved to be useful in generating the spring potential map (SPM) with reasonable accuracy.
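
    The validation design (a roughly 70/30 split of spring locations and a threshold-independent AUC) can be sketched as follows; the arrays and the scikit-learn random forest are stand-ins, not the authors' exact implementation:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Hypothetical table: spring and non-spring locations described by
        # the 13 hydrological-geological-physiographical factors.
        X = np.load("hgp_factors.npy")
        y = np.load("spring_labels.npy")   # 1 = spring present, 0 = absent

        # ~70 % of springs for training, ~30 % held out for validation.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  stratify=y, random_state=0)
        model = RandomForestClassifier(n_estimators=500, random_state=0)
        model.fit(X_tr, y_tr)

        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"validation AUC = {auc:.4f}")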

  10. The Open Flux Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linker, J. A.; Caplan, R. M.; Downs, C.

    The heliospheric magnetic field is of pivotal importance in solar and space physics. The field is rooted in the Sun’s photosphere, where it has been observed for many years. Global maps of the solar magnetic field based on full-disk magnetograms are commonly used as boundary conditions for coronal and solar wind models. Two primary observational constraints on the models are (1) the open field regions in the model should approximately correspond to coronal holes (CHs) observed in emission and (2) the magnitude of the open magnetic flux in the model should match that inferred from in situ spacecraft measurements. In this study, we calculate both magnetohydrodynamic and potential field source surface solutions using 14 different magnetic maps produced from five different types of observatory magnetograms, for the time period surrounding 2010 July. We have found that for all of the model/map combinations, models that have CH areas close to observations underestimate the interplanetary magnetic flux, or, conversely, for models to match the interplanetary flux, the modeled open field regions are larger than CHs observed in EUV emission. In an alternative approach, we estimate the open magnetic flux entirely from solar observations by combining automatically detected CHs for Carrington rotation 2098 with observatory synoptic magnetic maps. This approach also underestimates the interplanetary magnetic flux. Our results imply that either typical observatory maps underestimate the Sun’s magnetic flux, or a significant portion of the open magnetic flux is not rooted in regions that are obviously dark in EUV and X-ray emission.

  11. Integrating remote sensing with species distribution models; Mapping tamarisk invasions using the Software for Assisted Habitat Modeling (SAHM)

    USGS Publications Warehouse

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Young, Nicholas E.; Stohlgren, Thomas J.; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan

    2016-01-01

    Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common, however conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled capped transformations. All five models identified current tamarisk distribution on the landscape successfully based on threshold independent and threshold dependent evaluation metrics with independent location data. To account for model specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.

  12. Integrating Remote Sensing with Species Distribution Models; Mapping Tamarisk Invasions Using the Software for Assisted Habitat Modeling (SAHM).

    PubMed

    West, Amanda M; Evangelista, Paul H; Jarnevich, Catherine S; Young, Nicholas E; Stohlgren, Thomas J; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan

    2016-10-11

    Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common, however conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled capped transformations. All five models identified current tamarisk distribution on the landscape successfully based on threshold independent and threshold dependent evaluation metrics with independent location data. To account for model specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.

  13. Soil-geographical regionalization as a basis for digital soil mapping: Karelia case study

    NASA Astrophysics Data System (ADS)

    Krasilnikov, P.; Sidorova, V.; Dubrovina, I.

    2010-12-01

    Recent development of digital soil mapping (DSM) has allowed significant improvement of the quality of soil maps. We set out to build a set of empirical models for the territory of Karelia, a republic in the north-east of the European territory of the Russian Federation. This territory was selected for the DSM pilot study for two reasons. First, the soils of the region are mainly monogenetic; thus, the effect of the paleogeographic environment on recent soils is reduced. Second, the territory was poorly mapped because of low agricultural development: only 1.8% of the total area of the republic is used for agriculture and has large-scale soil maps. The rest of the territory has only small-scale soil maps, compiled based on general geographic concepts rather than on field surveys. Thus, the only solution for soil inventory was predictive digital mapping. The absence of large-scale soil maps did not allow data mining from previous soil surveys, and only empirical models could be applied. For regionalization purposes, we accepted the division into Northern and Southern Karelia proposed in the general scheme of soil regionalization of Russia; boundaries between the regions were somewhat modified. Within each region, we specified from 15 (Northern Karelia) to 32 (Southern Karelia) individual soilscapes and proposed soil-topographic and soil-lithological relationships for every soilscape. Further field verification is needed to adjust the models.

  14. Simultaneous epicardial and noncontact endocardial mapping of the canine right atrium: simulation and experiment.

    PubMed

    Sabouri, Sepideh; Matene, Elhacene; Vinet, Alain; Richer, Louis-Philippe; Cardinal, René; Armour, J Andrew; Pagé, Pierre; Kus, Teresa; Jacquemet, Vincent

    2014-01-01

    Epicardial high-density electrical mapping is a well-established experimental instrument to monitor in vivo the activity of the atria in response to modulations of the autonomic nervous system in sinus rhythm. In regions that are not accessible by epicardial mapping, noncontact endocardial mapping performed through a balloon catheter may provide a more comprehensive description of atrial activity. We developed a computer model of the canine right atrium to compare epicardial and noncontact endocardial mapping. The model was derived from an experiment in which electroanatomical reconstruction, epicardial mapping (103 electrodes), noncontact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon catheter), and direct-contact endocardial catheter recordings were simultaneously performed in a dog. The recording system was simulated in the computer model. For simulations and experiments (after atrio-ventricular node suppression), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of repolarization gradients. Results showed epicardial-endocardial correlation coefficients of 0.80 and 0.63 (two dog experiments) and 0.96 (simulation) between activation times, and correlation coefficients of 0.57 and 0.46 (two dog experiments) and 0.92 (simulation) between ATa values. Despite distance (balloon-atrial wall) and dimension reduction (64 electrodes), some information about atrial repolarization remained present in noncontact signals.

  15. Simultaneous Epicardial and Noncontact Endocardial Mapping of the Canine Right Atrium: Simulation and Experiment

    PubMed Central

    Sabouri, Sepideh; Matene, Elhacene; Vinet, Alain; Richer, Louis-Philippe; Cardinal, René; Armour, J. Andrew; Pagé, Pierre; Kus, Teresa; Jacquemet, Vincent

    2014-01-01

    Epicardial high-density electrical mapping is a well-established experimental instrument to monitor in vivo the activity of the atria in response to modulations of the autonomic nervous system in sinus rhythm. In regions that are not accessible by epicardial mapping, noncontact endocardial mapping performed through a balloon catheter may provide a more comprehensive description of atrial activity. We developed a computer model of the canine right atrium to compare epicardial and noncontact endocardial mapping. The model was derived from an experiment in which electroanatomical reconstruction, epicardial mapping (103 electrodes), noncontact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon catheter), and direct-contact endocardial catheter recordings were simultaneously performed in a dog. The recording system was simulated in the computer model. For simulations and experiments (after atrio-ventricular node suppression), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of repolarization gradients. Results showed epicardial-endocardial correlation coefficients of 0.80 and 0.63 (two dog experiments) and 0.96 (simulation) between activation times, and correlation coefficients of 0.57 and 0.46 (two dog experiments) and 0.92 (simulation) between ATa values. Despite distance (balloon-atrial wall) and dimension reduction (64 electrodes), some information about atrial repolarization remained present in noncontact signals. PMID:24598778
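
    The epicardial-endocardial agreement reported in the two records above reduces to Pearson correlations over paired sites; a minimal sketch with hypothetical arrays aligned by site:

        import numpy as np

        # Activation times (or ATa values) at paired epicardial and
        # endocardial sites.
        epi = np.load("epicardial_activation_ms.npy")
        endo = np.load("endocardial_activation_ms.npy")

        r = np.corrcoef(epi, endo)[0, 1]
        print(f"epicardial-endocardial correlation: r = {r:.2f}")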

  16. Updating flood maps efficiently using existing hydraulic models, very-high-accuracy elevation data, and a geographic information system; a pilot study on the Nisqually River, Washington

    USGS Publications Warehouse

    Jones, Joseph L.; Haluska, Tana L.; Kresch, David L.

    2001-01-01

    A method of updating flood inundation maps at a fraction of the expense of using traditional methods was piloted in Washington State as part of the U.S. Geological Survey Urban Geologic and Hydrologic Hazards Initiative. Large savings in expense may be achieved by building upon previous Flood Insurance Studies and automating the process of flood delineation with a Geographic Information System (GIS); increases in accuracy and detail result from the use of very-high-accuracy elevation data and automated delineation; and the resulting digital data sets contain valuable ancillary information such as flood depth, as well as greatly facilitating map storage and utility. The method consists of creating stage-discharge relations from the archived output of the existing hydraulic model, using these relations to create updated flood stages for recalculated flood discharges, and using a GIS to automate the map generation process. Many of the effective flood maps were created in the late 1970s and early 1980s, and suffer from a number of well recognized deficiencies such as out-of-date or inaccurate estimates of discharges for selected recurrence intervals, changes in basin characteristics, and relatively low quality elevation data used for flood delineation. FEMA estimates that 45 percent of effective maps are over 10 years old (FEMA, 1997). Consequently, Congress has mandated the updating and periodic review of existing maps, which have cost the Nation almost 3 billion (1997) dollars. The need to update maps and the cost of doing so were the primary motivations for piloting a more cost-effective and efficient updating method. New technologies such as Geographic Information Systems and LIDAR (Light Detection and Ranging) elevation mapping are key to improving the efficiency of flood map updating, but they also improve the accuracy, detail, and usefulness of the resulting digital flood maps. GISs produce digital maps without manual estimation of inundated areas between cross sections, and can generate working maps across a broad range of scales, for any selected area, and overlaid with easily updated cultural features. Local governments are aggressively collecting very-high-accuracy elevation data for numerous reasons; this not only lowers the cost and increases accuracy of flood maps, but also inherently boosts the level of community involvement in the mapping process. These elevation data are also ideal for hydraulic modeling, should an existing model be judged inadequate.
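
    The updating method is numerically simple at its core: rebuild a stage-discharge relation from archived model output, evaluate it at a recalculated discharge, and intersect the resulting water surface with the high-accuracy elevation grid. A minimal sketch with hypothetical values:

        import numpy as np

        # Archived hydraulic-model output at one cross section:
        discharge = np.array([100., 250., 500., 1000., 2000.])   # cfs
        stage     = np.array([1.2,  2.0,  2.9,  4.1,   5.6])     # ft above datum

        def updated_stage(q_new):
            """Interpolate the archived stage-discharge relation at a new discharge."""
            return np.interp(q_new, discharge, stage)

        # Delineate inundation by comparing the water surface to a DEM grid (ft).
        dem = np.array([[3.0, 3.4, 4.2],
                        [2.8, 3.1, 3.9],
                        [2.5, 2.9, 3.6]])
        water_surface = updated_stage(1500.)       # recalculated flood discharge
        inundated = dem <= water_surface           # boolean flood-extent mask
        print(water_surface, int(inundated.sum()), "cells flooded")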

  17. Topographic map of the western region of Dao Vallis in Hellas Planitia, Mars; MTM 500k -40/082E OMKT

    USGS Publications Warehouse

    Rosiek, Mark R.; Redding, Bonnie L.; Galuszka, Donna M.

    2006-01-01

    This map, compiled photogrammetrically from Viking Orbiter stereo image pairs, is part of a series of topographic maps of areas of special scientific interest on Mars. Contours were derived from a digital terrain model (DTM) compiled on a digital photogrammetric workstation using Viking Orbiter stereo image pairs with orientation parameters derived from an analytic aerotriangulation. The image base for this map employs Viking Orbiter images from orbits 406 and 363. An orthophotomosaic was created on the digital photogrammetric workstation using the DTM compiled from stereo models.

  18. Scoping of Flood Hazard Mapping Needs for Belknap County, New Hampshire

    DTIC Science & Technology

    2006-01-01

    DEM Digital Elevation Model DFIRM Digital Flood Insurance Rate Map DOQ Digital Orthophoto Quadrangle DOQQ Digital Ortho Quarter Quadrangle DTM...Agriculture Imagery Program (NAIP) color Digital Orthophoto Quadrangles (DOQs)). Remote sensing, base map information, GIS data (for example, contour data...found on USGS topographic maps. More recently developed data were derived from digital orthophotos providing improved base map accuracy. NH GRANIT is

  19. Scoping of Flood Hazard Mapping Needs for Coos County, New Hampshire

    DTIC Science & Technology

    2006-01-01

    Technical Partner DEM Digital Elevation Model DFIRM Digital Flood Insurance Rate Map DOQ Digital Orthophoto Quadrangle DOQQ Digital Ortho Quarter Quadrangle...color Digital Orthophoto Quadrangles (DOQs)). Remote sensing, base map information, GIS data (for example, contour data, E911 data, Digital Elevation...the feature types found on USGS topographic maps. More recently developed data were derived from digital orthophotos providing improved base map

  20. exocartographer: Constraining surface maps and orbital parameters of exoplanets

    NASA Astrophysics Data System (ADS)

    Farr, Ben; Farr, Will M.; Cowan, Nicolas B.; Haggard, Hal M.; Robinson, Tyler

    2018-05-01

    exocartographer solves the exo-cartography inverse problem. This flexible forward-modeling framework, written in Python, retrieves the albedo map and spin geometry of a planet based on time-resolved photometry; it uses a Markov chain Monte Carlo method to extract albedo maps and planet spin and their uncertainties. Gaussian processes are used to fit the characteristic length scale of the map from the data and to enforce smooth maps.
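
    Conceptually, the retrieval forward-models time-resolved photometry from a trial albedo map and compares it with the observations inside an MCMC loop. A toy sketch of those pieces; the visibility kernel, noise level, and proposal scale are hypothetical stand-ins, not exocartographer's API:

        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.random((24, 6)); W /= W.sum(axis=1, keepdims=True)  # toy visibility kernel
        true_map = rng.random(6)                                    # 6-pixel albedo map
        obs = W @ true_map + rng.normal(0, 0.01, 24)                # time-resolved photometry

        def log_like(albedo):
            """Gaussian log-likelihood of the lightcurve given a trial map."""
            resid = obs - W @ albedo
            return -0.5 * np.sum((resid / 0.01) ** 2)

        # One Metropolis step over the map pixels (a real run also samples the
        # spin geometry and takes many thousands of steps).
        current = np.full(6, 0.5)
        proposal = np.clip(current + rng.normal(0, 0.05, 6), 0.0, 1.0)
        if np.log(rng.random()) < log_like(proposal) - log_like(current):
            current = proposal
        print(log_like(current))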

  1. An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling

    USGS Publications Warehouse

    Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.

    2009-01-01

    We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained, to varying degrees, by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically-derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT: (1) simple objective ranking of country vulnerability to earthquakes, and (2) the influence of time-of-day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time-of-day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global hazard models, calculating the spatial area of the existing hazard maps exceeded by the combined ShakeMap ground motions. In general, these analyses suggest that existing global, and regional, hazard maps tend to overestimate hazard. Both the Atlas of ShakeMaps and EXPO-CAT have many potential uses for examining earthquake risk and epidemiology. All of the datasets discussed herein are available for download on the PAGER Web page ( http://earthquake.usgs.gov/eqcenter/pager/prodandref/ ). © 2009 Springer Science+Business Media B.V.
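
    The exposure computation itself is a grid overlay: sum the population in each cell of a co-registered grid per shaking-intensity bin. A minimal sketch with hypothetical grids:

        import numpy as np

        mmi = np.array([[4.2, 5.1, 6.3],
                        [5.8, 7.2, 8.4],
                        [6.1, 7.9, 9.0]])          # ShakeMap intensity grid
        pop = np.array([[1200,  300, 4500],
                        [ 800, 9000, 2500],
                        [ 600, 3000, 1500]])       # co-registered population grid

        bins = np.arange(4, 11)                    # MMI IV ... X
        idx = np.digitize(mmi.ravel(), bins)       # intensity bin per cell
        exposure = np.bincount(idx, weights=pop.ravel(), minlength=len(bins) + 1)
        for b, e in zip(bins, exposure[1:]):
            print(f"MMI {b}: {int(e)} people exposed")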

  2. Accurate Mobile Urban Mapping via Digital Map-Based SLAM †

    PubMed Central

    Roh, Hyunchul; Jeong, Jinyong; Cho, Younggun; Kim, Ayoung

    2016-01-01

    This paper presents accurate urban map generation using digital map-based Simultaneous Localization and Mapping (SLAM). Throughout this work, our main objective is generating a 3D and lane map aiming for sub-meter accuracy. In conventional mapping approaches, extremely high accuracy was achieved by either (i) exploiting costly airborne sensors or (ii) surveying with a static mapping system on a stationary platform. Mobile scanning systems have recently gained popularity but are mostly limited by the availability of the Global Positioning System (GPS). We focus on the fact that the availability of GPS and urban structures are both sporadic but complementary. By modeling both GPS and digital map data as measurements and integrating them with other sensor measurements, we leverage SLAM for an accurate mobile mapping system. Our proposed algorithm builds an efficient graph SLAM framework that runs in real time and targets sub-meter accuracy on a mobile platform. Integrated with the SLAM framework, we implement a motion-adaptive model for the Inverse Perspective Mapping (IPM). Using motion estimation derived from SLAM, the experimental results show that the proposed approaches provide stable bird’s-eye view images, even with significant motion during the drive. Our real-time map generation framework is validated via a long-distance urban test and evaluated at randomly sampled points using Real-Time Kinematic (RTK)-GPS. PMID:27548175

  3. Quantifying aggregated uncertainty in Plasmodium falciparum malaria prevalence and populations at risk via efficient space-time geostatistical joint simulation.

    PubMed

    Gething, Peter W; Patil, Anand P; Hay, Simon I

    2010-04-01

    Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhance their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions, such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty (the fidelity of predictions at each mapped pixel) but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate, for the first time, the necessary joint simulation of prevalence values across the very large prediction spaces needed for global-scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum, allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
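
    The computational crux is joint (rather than pixel-by-pixel) simulation: drawing correlated realizations of the prevalence surface so that any regional aggregate inherits a full uncertainty distribution. A minimal latent-Gaussian sketch, ignoring the link function and data conditioning of a real MBG model (all parameters hypothetical):

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(0, 100, 50)                    # 1-D strip of map pixels
        d = np.abs(x[:, None] - x[None, :])
        cov = 0.02 * np.exp(-d / 20.0)                 # exponential spatial covariance
        mean = np.full(50, 0.15)                       # posterior mean prevalence

        # Joint realizations: each row is one simulated prevalence surface.
        sims = rng.multivariate_normal(mean, cov, size=2000)
        regional_mean = sims.mean(axis=1)              # aggregate per realization
        lo, hi = np.percentile(regional_mean, [2.5, 97.5])
        print(f"mean prevalence {regional_mean.mean():.3f}, 95% CI ({lo:.3f}, {hi:.3f})")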

  4. Geomorphologic Map of Titan's Polar Terrains

    NASA Astrophysics Data System (ADS)

    Birch, S. P. D.; Hayes, A. G.; Malaska, M. J.; Lopes, R. M. C.; Schoenfeld, A.; Williams, D. A.

    2016-06-01

    Titan's lakes and seas contain vast amounts of information regarding the history and evolution of Saturn's largest moon. To understand this landscape, we created a geomorphologic map, and then used our map to develop an evolutionary model.

  5. Construction of cosmic string induced temperature anisotropy maps with CMBFAST and statistical analysis

    NASA Astrophysics Data System (ADS)

    Simatos, N.; Perivolaropoulos, L.

    2001-01-01

    We use the publicly available code CMBFAST, as modified by Pogosian and Vachaspati, to simulate the effects of wiggly cosmic strings on the cosmic microwave background (CMB). Using the modified CMBFAST code, which takes into account vector modes and models wiggly cosmic strings by the one-scale model, we go beyond the angular power spectrum to construct CMB temperature maps with a resolution of a few degrees. The statistics of these maps are then studied using conventional and recently proposed statistical tests optimized for the detection of hidden temperature discontinuities induced by the Gott-Kaiser-Stebbins effect. We show, however, that these realistic maps cannot be distinguished in a statistically significant way from purely Gaussian maps with an identical power spectrum.

  6. Perceived usefulness, perceived ease of use, and perceived enjoyment as drivers for the user acceptance of interactive mobile maps

    NASA Astrophysics Data System (ADS)

    Hussain, Azham; Mkpojiogu, Emmanuel O. C.; Yusof, Muhammad Mat

    2016-08-01

    This study examines the user perception of usefulness, ease of use and enjoyment as drivers for the users' complex interaction with maps on mobile devices. The TAM model was used to evaluate users' intention to use and their acceptance of interactive mobile maps using the above three beliefs as antecedents. Quantitative research (survey) methodology was employed, and the analysis and findings showed that all three explanatory variables used in this study explain the variability in the user acceptance of interactive mobile map technology. Perceived usefulness, perceived ease of use, and perceived enjoyment each have a significant positive influence on user acceptance of interactive mobile maps. This study further validates the TAM model.

  7. Genetic mapping in the presence of genotyping errors.

    PubMed

    Cartwright, Dustin A; Troggio, Michela; Velasco, Riccardo; Gutin, Alexander

    2007-08-01

    Genetic maps are built using the genotypes of many related individuals. Genotyping errors in these data sets can distort genetic maps, especially by inflating the distances. We have extended the traditional likelihood model used for genetic mapping to include the possibility of genotyping errors. Each individual marker is assigned an error rate, which is inferred from the data, just as the genetic distances are. We have developed a software package, called TMAP, which uses this model to find maximum-likelihood maps for phase-known pedigrees. We have tested our methods using a data set in Vitis and on simulated data and confirmed that our method dramatically reduces the inflationary effect caused by increasing the number of markers and leads to more accurate orders.
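
    The error-model extension can be illustrated on a single marker: each observed call matches the true genotype with probability 1 − e and is mis-called with probability e, and e is inferred from the data alongside the genetic distances. A minimal sketch (genotype codes and data hypothetical):

        import numpy as np

        def marker_log_lik(observed, true, e):
            """Log-likelihood of observed genotype calls given true genotypes
            and a per-marker genotyping error rate e."""
            observed, true = np.asarray(observed), np.asarray(true)
            match = observed == true
            return np.sum(np.where(match, np.log(1 - e), np.log(e)))

        obs  = np.array([0, 1, 1, 0, 1, 0, 0, 1])
        true = np.array([0, 1, 0, 0, 1, 0, 0, 1])     # one miscall at index 2
        # Maximum-likelihood error rate by grid search, mimicking how the
        # rate is inferred from the data rather than fixed in advance.
        grid = np.linspace(0.001, 0.3, 300)
        e_hat = grid[np.argmax([marker_log_lik(obs, true, e) for e in grid])]
        print(e_hat)   # near the empirical miscall rate of 1/8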

  9. A financial market model with two discontinuities: Bifurcation structures in the chaotic domain

    NASA Astrophysics Data System (ADS)

    Panchuk, Anastasiia; Sushko, Iryna; Westerhoff, Frank

    2018-05-01

    We continue the investigation of a one-dimensional piecewise linear map with two discontinuity points. Such a map may arise from a simple asset-pricing model with heterogeneous speculators, which can help us to explain the intricate bull and bear behavior of financial markets. Our focus is on bifurcation structures observed in the chaotic domain of the map's parameter space, which is associated with robust multiband chaotic attractors. Such structures, related to the map with two discontinuities, have not been studied before. We show that besides the standard bandcount adding and bandcount incrementing bifurcation structures, associated with two partitions, there exist peculiar bandcount adding and bandcount incrementing structures involving all three partitions. Moreover, the map's three partitions may generate intriguing bistability phenomena.
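
    The band structure such a map generates can be explored numerically: iterate a 1-D piecewise linear map with two discontinuity points (three partitions) and count the occupied bands of the attractor. A minimal sketch; the slopes, offsets, and breakpoints below are hypothetical, not the paper's parameterization:

        import numpy as np

        D1, D2 = -0.4, 0.4                       # the two discontinuity points

        def f(x):
            if x < D1:   return 1.2 * x + 0.7    # left partition
            elif x < D2: return 1.2 * x          # middle partition
            else:        return 1.2 * x - 0.7    # right partition

        x, orbit = 0.1, []
        for n in range(20000):
            x = f(x)
            if n > 1000:                         # discard the transient
                orbit.append(x)

        # Crude bandcount: runs of occupied bins in a fine histogram of the attractor.
        hist, _ = np.histogram(orbit, bins=400, range=(-0.6, 0.6))
        occupied = (hist > 0).astype(int)
        bands = int(occupied[0] + np.sum(np.diff(occupied) == 1))
        print("approximate number of bands:", bands)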

  10. Improving national-scale invasion maps: Tamarisk in the western United States

    USGS Publications Warehouse

    Jarnevich, C.S.; Evangelista, P.; Stohlgren, T.J.; Morisette, J.

    2011-01-01

    New invasions, better field data, and novel spatial-modeling techniques often drive the need to revisit previous maps and models of invasive species. Such is the case with the at least 10 species of Tamarix, which are invading riparian systems in the western United States and expanding their range throughout North America. In 2006, we developed a National Tamarisk Map by using a compilation of presence and absence locations with remotely sensed data and statistical modeling techniques. Since the publication of that work, our database of Tamarix distributions has grown significantly. Using the updated database of species occurrence, new predictor variables, and the maximum entropy (Maxent) model, we have revised our potential Tamarix distribution map for the western United States. Distance-to-water was the strongest predictor in the model (58.1%), while mean temperature of the warmest quarter was the second best predictor (18.4%). Model validation, averaged from 25 model iterations, indicated that our analysis had strong predictive performance (AUC = 0.93) and that the extent of Tamarix distributions is much greater than previously thought. The southwestern United States had the greatest suitable habitat, and this result differed from the 2006 model. Our work highlights the utility of iterative modeling for invasive species habitat modeling as new information becomes available. © 2011.

  11. A methodology for eliciting, representing, and analysing stakeholder knowledge for decision making on complex socio-ecological systems: from cognitive maps to agent-based models.

    PubMed

    Elsawah, Sondoss; Guillaume, Joseph H A; Filatova, Tatiana; Rook, Josefine; Jakeman, Anthony J

    2015-03-15

    This paper aims to contribute to developing better ways for incorporating essential human elements in decision making processes for modelling of complex socio-ecological systems. It presents a step-wise methodology for integrating perceptions of stakeholders (qualitative) into formal simulation models (quantitative) with the ultimate goal of improving understanding and communication about decision making in complex socio-ecological systems. The methodology integrates cognitive mapping and agent-based modelling. It cascades through a sequence of qualitative/soft and numerical methods comprising: (1) interviews to elicit mental models; (2) cognitive maps to represent and analyse individual and group mental models; (3) time-sequence diagrams to chronologically structure the decision making process; (4) an all-encompassing conceptual model of decision making; and (5) a computational (in this case agent-based) model. We apply the proposed methodology (labelled ICTAM) in a case study of viticulture irrigation in South Australia. Finally, we use strengths-weaknesses-opportunities-threats (SWOT) analysis to reflect on the methodology. Results show that the methodology leverages the use of cognitive mapping to capture the richness of decision making and mental models, and provides a combination of divergent and convergent analysis methods leading to the construction of an agent-based model. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Toward a Graded Psycholexical Space Mapping Model: Sublexical and Lexical Representations in Chinese Character Reading Development.

    PubMed

    Tong, Xiuli; McBride, Catherine

    2017-07-01

    Following a review of contemporary models of word-level processing for reading and their limitations, we propose a new hypothetical model of Chinese character reading, namely, the graded lexical space mapping model that characterizes how sublexical radicals and lexical information are involved in Chinese character reading development. The underlying assumption of this model is that Chinese character recognition is a process of competitive mappings of phonology, semantics, and orthography in both lexical and sublexical systems, operating as functions of statistical properties of print input based on the individual's specific level of reading. This model leads to several testable predictions concerning how the quasiregularity and continuity of Chinese-specific radicals are organized in memory for both child and adult readers at different developmental stages of reading.

  13. Mapping the Structure of Knowledge for Teaching Nominal Categorical Data Analysis

    ERIC Educational Resources Information Center

    Groth, Randall E.; Bergner, Jennifer A.

    2013-01-01

    This report describes a model for mapping cognitive structures related to content knowledge for teaching. The model consists of knowledge elements pertinent to teaching a content domain, the nature of the connections among them, and a means for representing the elements and connections visually. The model is illustrated through empirical data…

  14. Comparing five modelling techniques for predicting forest characteristics

    Treesearch

    Gretchen G. Moisen; Tracey S. Frescino

    2002-01-01

    Broad-scale maps of forest characteristics are needed throughout the United States for a wide variety of forest land management applications. Inexpensive maps can be produced by modelling forest class and structure variables collected in nationwide forest inventories as functions of satellite-based information. But little work has been directed at comparing modelling...

  15. Comparing Mapped Plot Estimators

    Treesearch

    Paul C. Van Deusen

    2006-01-01

    Two alternative derivations of estimators for mean and variance from mapped plots are compared by considering the models that support the estimators and by simulation. It turns out that both models lead to the same estimator for the mean but lead to very different variance estimators. The variance estimators based on the least valid model assumptions are shown to...

  16. Computer Games versus Maps before Reading Stories: Priming Readers' Spatial Situation Models

    ERIC Educational Resources Information Center

    Smith, Glenn Gordon; Majchrzak, Dan; Hayes, Shelley; Drobisz, Jack

    2011-01-01

    The current study investigated how computer games and maps compare as preparation for readers to comprehend and retain spatial relations in text narratives. Readers create situation models of five dimensions: spatial, temporal, causal, goal, and protagonist (Zwaan, Langston, & Graesser 1995). Of these five, readers mentally model the spatial…

  17. Chapter 5. Using Habitat Models for Habitat Mapping and Monitoring

    Treesearch

    Samuel A. Cushman; Timothy J. Mersmann; Gretchen G. Moisen; Kevin S. McKelvey; Christina D. Vojta

    2013-01-01

    This chapter provides guidance for applying existing habitat models to map and monitor wildlife habitat. Chapter 2 addresses the use of conceptual models to create a solid foundation for selecting habitat attributes to monitor and to translate these attributes into quantifiable and reportable monitoring measures. Most wildlife species, however, require a complex suite...

  18. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models by digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  19. Peeking Beneath the Caldera: Communicating Subsurface Knowledge of Newberry Volcano

    NASA Astrophysics Data System (ADS)

    Mark-Moser, M.; Rose, K.; Schultz, J.; Cameron, E.

    2016-12-01

    "Imaging the Subsurface: Enhanced Geothermal Systems and Exploring Beneath Newberry Volcano" is an interactive website that presents a three-dimensional subsurface model of Newberry Volcano developed at National Energy Technology Laboratory (NETL). Created using the Story Maps application by ArcGIS Online, this format's dynamic capabilities provide the user the opportunity for multimedia engagement with the datasets and information used to build the subsurface model. This website allows for an interactive experience that the user dictates, including interactive maps, instructive videos and video capture of the subsurface model, and linked information throughout the text. This Story Map offers a general background on the technology of enhanced geothermal systems and the geologic and development history of Newberry Volcano before presenting NETL's modeling efforts that support the installation of enhanced geothermal systems. The model is driven by multiple geologic and geophysical datasets to compare and contrast results which allow for the targeting of potential EGS sites and the reduction of subsurface uncertainty. This Story Map aims to communicate to a broad audience, and provides a platform to effectively introduce the model to researchers and stakeholders.

  20. Mapping forest functional type in a forest-shrubland ecotone using SPOT imagery and predictive habitat distribution modelling

    USGS Publications Warehouse

    Assal, Timothy J.; Anderson, Patrick J.; Sibold, Jason

    2015-01-01

    The availability of land cover data at local scales is an important component in forest management and monitoring efforts. Regional land cover data seldom provide detailed information needed to support local management needs. Here we present a transferable framework to model forest cover by major plant functional type using aerial photos, multi-date Système Pour l’Observation de la Terre (SPOT) imagery, and topographic variables. We developed probability of occurrence models for deciduous broad-leaved forest and needle-leaved evergreen forest using logistic regression in the southern portion of the Wyoming Basin Ecoregion. The model outputs were combined into a synthesis map depicting deciduous and coniferous forest cover type. We evaluated the models and synthesis map using a field-validated, independent data source. Results showed strong relationships between forest cover and model variables, and the synthesis map was accurate with an overall correct classification rate of 0.87 and Cohen’s kappa value of 0.81. The results suggest our method adequately captures the functional type, size, and distribution pattern of forest cover in a spatially heterogeneous landscape.

  1. Uncertainty in geological linework: communicating the expert's tacit model to the data user(s) by expert elicitation.

    NASA Astrophysics Data System (ADS)

    Lawley, Russell; Barron, Mark; Lee, Katy

    2014-05-01

    The boundaries mapped in traditional field geological survey are subject to a wide range of inherent uncertainties. A map at a survey-scale of 1:10,000 is created by a combination of terrain interpretation, direct observations from boreholes and exposures (often sparsely distributed), and indirect interpretation of proxy variables such as soil properties, vegetation and remotely sensed images. A critical factor influencing the quality of the final map is the skill and experience of the surveyor to bring this information together in a coherent conceptual model. The users of geological data comprising or based on mapped boundaries are increasingly aware of these uncertainties, and want to know how to manage them. The growth of 3D modelling, which takes 2D surveys as a starting point, adds urgency to the need for a better understanding of survey uncertainties; particularly where 2D mapping of variable vintage has been compiled into a national coverage. Previous attempts to apply confidence on the basis of metrics such as data density, survey age or survey techniques have proved useful for isolating single, critical, factors but do not generally succeed in evaluating geological mapping 'in the round', because they cannot account for the 'conceptual' skill set of the surveyor. The British Geological Survey (BGS) is using expert elicitation methods to gain a better understanding of uncertainties within the national geological map of Great Britain. The expert elicitation approach starts with the assumption that experienced surveyors have an intuitive sense of the uncertainty of the boundaries that they map, based on a tacit model of geology and its complexity and the nature of the surveying process. The objective of elicitation is to extract this model in a useable, quantitative, form by a robust and transparent procedure. At BGS expert elicitation is being used to evaluate the uncertainty of mapped boundaries in different common mapping scenarios, with a view to building a 'collective' understanding of the challenges each scenario presents. For example, a 'sharp contact (at surface) between highly contrasting sedimentary rocks' represents one level of survey challenge that should be accurately met by all surveyors, even novices. In contrast, a 'transitional boundary defined by localised facies-variation' may require much more experience to resolve (without recourse to significantly more sampling). We will describe the initial phase of this exercise in which uncertainty models were elicited for mapped boundaries in six contrasting scenarios. Each scenario was presented to a panel of experts with varied expertise and career history. In five cases it was possible to arrive at a consensus model, in a sixth case experts with different experience took different views of the nature of the mapping problem. We will discuss our experience of the use of elicitation methodology and the implications of our results for further work at the BGS to quantify uncertainty in map products. In particular we will consider the value of elicitation as a means to capture the expertise of individuals as they retire, and as the composition of the organization's staff changes in response to the management and policy decisions.

  2. Moon Trek: NASA's New Online Portal for Lunar Mapping and Modeling

    NASA Astrophysics Data System (ADS)

    Day, B. H.; Law, E. S.

    2016-11-01

    This presentation introduces Moon Trek, a new name for a major new release of NASA's Lunar Mapping and Modeling Portal (LMMP). The new Trek interface provides greatly improved navigation, 3D visualization, performance, and reliability.

  3. Space mapping method for the design of passive shields

    NASA Astrophysics Data System (ADS)

    Sergeant, Peter; Dupré, Luc; Melkebeek, Jan

    2006-04-01

    The aim of the paper is to find the optimal geometry of a passive shield for the reduction of the magnetic stray field of an axisymmetric induction heater. For the optimization, a space mapping algorithm is used that requires two models. The first is an accurate model with a high computational effort as it contains finite element models. The second is less accurate, but it has a low computational effort as it uses an analytical model: the shield is replaced by a number of mutually coupled coils. The currents in the shield are found by solving an electrical circuit. Space mapping combines both models to obtain the optimal passive shield fast and accurately. The presented optimization technique is compared with gradient, simplex, and genetic algorithms.

  4. Observed and forecast flood-inundation mapping application-A pilot study of an eleven-mile reach of the White River, Indianapolis, Indiana

    USGS Publications Warehouse

    Kim, Moon H.; Morlock, Scott E.; Arihood, Leslie D.; Kiesler, James L.

    2011-01-01

    Near-real-time and forecast flood-inundation mapping products resulted from a pilot study for an 11-mile reach of the White River in Indianapolis. The study was done by the U.S. Geological Survey (USGS), Indiana Silver Jackets hazard mitigation taskforce members, the National Weather Service (NWS), the Polis Center, and Indiana University, in cooperation with the City of Indianapolis, the Indianapolis Museum of Art, the Indiana Department of Homeland Security, and the Indiana Department of Natural Resources, Division of Water. The pilot project showed that it is technically feasible to create a flood-inundation map library by means of a two-dimensional hydraulic model, use a map from the library to quickly complete a moderately detailed local flood-loss estimate, and automatically run the hydraulic model during a flood event to provide the maps and flood-damage information through a Web graphical user interface. A library of static digital flood-inundation maps was created by means of a calibrated two-dimensional hydraulic model. Estimated water-surface elevations were developed for a range of river stages referenced to a USGS streamgage and NWS flood forecast point colocated within the study reach. These maps were made available through the Internet in several formats, including geographic information system, Keyhole Markup Language, and Portable Document Format. A flood-loss estimate was completed for part of the study reach by using one of the flood-inundation maps from the static library. The Federal Emergency Management Agency natural disaster-loss estimation program HAZUS-MH, in conjunction with local building information, was used to complete a level 2 analysis of flood-loss estimation. A Service-Oriented Architecture-based dynamic flood-inundation application was developed and was designed to start automatically during a flood, obtain near real-time and forecast data (from the colocated USGS streamgage and NWS flood forecast point within the study reach), run the two-dimensional hydraulic model, and produce flood-inundation maps. The application used local building data and depth-damage curves to estimate flood losses based on the maps, and it served inundation maps and flood-loss estimates through a Web-based graphical user interface.

  5. Multiresolution saliency map based object segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Wang, Xin; Dai, ZhenYou

    2015-11-01

    Salient object detection and segmentation have gained increasing research interest in recent years. A saliency map can be obtained from different models presented in previous studies. Based on this saliency map, the most salient region (MSR) in an image can be extracted. This MSR, generally a rectangle, can be used as the initial parameters for object segmentation algorithms. However, to our knowledge, all of those saliency maps are represented in a unitary resolution, although some models have even introduced multiscale principles in the calculation process. Furthermore, some segmentation methods, such as the well-known GrabCut algorithm, need more iteration time or additional interactions to get more precise results without predefined pixel types. A concept of a multiresolution saliency map is introduced. This saliency map is provided in a multiresolution format, which naturally follows the principle of the human visual mechanism. Moreover, the points in this map can be utilized to initialize parameters for GrabCut segmentation by labeling the feature pixels automatically. Both the computing speed and segmentation precision are evaluated. The results imply that this multiresolution saliency map-based object segmentation method is simple and efficient.
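
    The idea of keeping saliency per resolution level, rather than collapsing it into one map, can be sketched as a pyramid of center-surround differences; the most salient point at each level could then seed segmentation labels. A minimal sketch (scales and the toy image are hypothetical):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def saliency_pyramid(img, levels=3):
            """Return a list of saliency maps, one per resolution level."""
            pyramid, cur = [], img.astype(float)
            for _ in range(levels):
                center   = gaussian_filter(cur, sigma=1)
                surround = gaussian_filter(cur, sigma=4)
                pyramid.append(np.abs(center - surround))   # center-surround contrast
                cur = cur[::2, ::2]                         # downsample for next level
            return pyramid

        img = np.zeros((64, 64)); img[20:30, 35:50] = 1.0   # toy bright object
        for level, s in enumerate(saliency_pyramid(img)):
            i, j = np.unravel_index(np.argmax(s), s.shape)
            print(f"level {level}: most salient pixel at ({i}, {j})")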

  6. Fuel models and fire potential from satellite and surface observations

    USGS Publications Warehouse

    Burgan, R.E.; Klaver, R.W.; Klarer, J.M.

    1998-01-01

    A national 1-km resolution fire danger fuel model map was derived through use of previously mapped land cover classes and ecoregions, and extensive ground sample data, then refined through review by fire managers familiar with various portions of the U.S. The fuel model map will be used in the next generation fire danger rating system for the U.S., but it also made possible immediate development of a satellite and ground based fire potential index map. The inputs and algorithm of the fire potential index are presented, along with a case study of the correlation between the fire potential index and fire occurrence in California and Nevada. Application of the fire potential index in the Mediterranean ecosystems of Spain, Chile, and Mexico will be tested.

  7. A unified theoretical framework for mapping models for the multi-state Hamiltonian.

    PubMed

    Liu, Jian

    2016-11-28

    We propose a new unified theoretical framework to construct equivalent representations of the multi-state Hamiltonian operator and present several approaches for the mapping onto the Cartesian phase space. After mapping an F-dimensional Hamiltonian onto an F+1 dimensional space, creation and annihilation operators are defined such that the F+1 dimensional space is complete for any combined excitation. Commutation and anti-commutation relations are then naturally derived, which show that the underlying degrees of freedom are neither bosons nor fermions. This sets the scene for developing equivalent expressions of the Hamiltonian operator in quantum mechanics and their classical/semiclassical counterparts. Six mapping models are presented as examples. The framework also offers a novel way to derive mapping models such as the well-known Meyer-Miller model.
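
    For context, the Meyer-Miller mapping represents an F-state Hamiltonian matrix H_nm by the classical function H(x, p) = sum_nm [ (x_n x_m + p_n p_m)/2 - gamma delta_nm ] H_nm with gamma = 1/2. A minimal sketch evaluating it for a hypothetical two-state model:

        import numpy as np

        def meyer_miller_energy(H, x, p, gamma=0.5):
            """Classical Meyer-Miller energy for mapping variables (x, p)."""
            quad = 0.5 * (np.outer(x, x) + np.outer(p, p)) - gamma * np.eye(len(x))
            return np.sum(quad * H)

        H = np.array([[0.0, 0.1],
                      [0.1, 0.2]])           # hypothetical 2-state diabatic matrix
        x = np.array([np.sqrt(3.0), 0.0])    # populations n_k = (x_k**2 + p_k**2)/2 - gamma,
        p = np.array([0.0, 1.0])             #   so here n_1 = 1 and n_2 = 0
        print(meyer_miller_energy(H, x, p))  # equals H[0, 0] at this phase-space point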

  8. Is the recall of verbal-spatial information from working memory affected by symptoms of ADHD?

    PubMed

    Caterino, Linda C; Verdi, Michael P

    2012-10-01

    OBJECTIVE: The Kulhavy model for text learning using organized spatial displays proposes that learning will be increased when participants view visual images prior to related text. In contrast to previous studies, this study also included students who exhibited symptoms of ADHD. Participants were presented with either a map-text or text-map condition. The map-text condition led to a significantly higher performance than the text-map condition, overall. However, students who endorsed more symptoms of inattention and hyperactivity-impulsivity scored more poorly when asked to recall text facts, text features, and map features and were less able to correctly place map features on a reconstructed map than were students who endorsed fewer symptoms. The results of the study support the Kulhavy model for typical students; however, the benefit of viewing a display prior to text was not seen for students with ADHD symptoms, thus supporting previous studies that have demonstrated that ADHD appears to negatively affect operations that occur in working memory.

  9. Self-Consistent Chaotic Transport in a High-Dimensional Mean-Field Hamiltonian Map Model

    DOE PAGES

    Martínez-del-Río, D.; del-Castillo-Negrete, D.; Olvera, A.; ...

    2015-10-30

    We studied the self-consistent chaotic transport in a Hamiltonian mean-field model. This model provides a simplified description of transport in marginally stable systems including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean-field that couples all the degrees-of-freedom. The model is formulated as a large set of N coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant like in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Furthermore, numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system that appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of the onset of global transport. A turnstile-type transport mechanism that allows transport across instantaneous KAM invariant circles in non-autonomous systems is discussed. As a first step to understand transport, we study a special type of orbits referred to as sequential periodic orbits. Using symmetry properties we show that, through replication, high-dimensional sequential periodic orbits can be generated starting from low-dimensional periodic orbits. We show that sequential periodic orbits in the self-consistent map can be continued from trivial (uncoupled) periodic orbits of standard-like maps using numerical and asymptotic methods. Normal forms are used to describe these orbits and to find the values of the map parameters that guarantee their existence. Numerical simulations are used to verify the prediction from the asymptotic methods.
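
    The model's skeleton is easy to sketch: N standard-like twist maps share a kick whose amplitude and phase are recomputed each step from the mean field, rather than being fixed parameters. A minimal sketch (coupling constant and initial conditions hypothetical):

        import numpy as np

        rng = np.random.default_rng(2)
        N = 500
        theta = rng.uniform(0, 2 * np.pi, N)        # angle of each map
        J     = rng.uniform(-0.5, 0.5, N)           # action of each map
        eps   = 0.02                                # coupling strength

        for step in range(200):
            z = np.mean(np.exp(1j * theta))         # mean field over all maps
            k, phi = eps * np.abs(z), np.angle(z)   # dynamical amplitude and phase
            J     = J + k * np.sin(theta - phi)     # standard-map kick, shared k and phi
            theta = (theta + J) % (2 * np.pi)       # twist update

        print("final mean-field amplitude:", np.abs(np.mean(np.exp(1j * theta))))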

  10. Heritability estimates for Mycobacterium avium subspecies paratuberculosis status of German Holstein cows tested by fecal culture.

    PubMed

    Küpper, J; Brandt, H; Donat, K; Erhardt, G

    2012-05-01

    The objective of this study was to estimate the genetic manifestation of Mycobacterium avium ssp. paratuberculosis (MAP) infection in German Holstein cows. Incorporated into this study were 11,285 German Holstein herd book cows classified as MAP-positive and MAP-negative animals using fecal culture results and originating from 15 farms in Thuringia, Germany, involved in a voluntary paratuberculosis control program from 2008 to 2009. The frequency of MAP-positive animals per farm ranged from 2.7 to 67.6%. The fixed effects of farm and lactation number had a highly significant effect on MAP status. An increase in the frequency of positive animals from the first to the third lactation could be observed. Threshold animal and sire models with sire relationship were used as statistical models to estimate genetic parameters. Heritability estimates of fecal culture varied from 0.157 to 0.228. To analyze the effect of prevalence on genetic parameter estimates, the total data set was divided into 2 subsets of farms with prevalence rates below 10% and those above 10%. The data set with prevalence above 10% shows higher heritability estimates in both models compared with the data set with prevalence below 10%. For all data sets, the sire model shows higher heritabilities than the equivalent animal model. This study demonstrates that genetic variation exists in dairy cattle for paratuberculosis infection susceptibility and, furthermore, leads to the conclusion that MAP detection by fecal culture shows a higher genetic background than ELISA test results. In conclusion, fecal culture seems to be a better trait to control the disease, as well as an appropriate feature for further genomic analyses to detect MAP-associated chromosome regions. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  12. Use of landsat ETM+ SLC-off segment-based gap-filled imagery for crop type mapping

    USGS Publications Warehouse

    Maxwell, S.K.; Craig, M.E.

    2008-01-01

    Failure of the Scan Line Corrector (SLC) on the Landsat ETM+ sensor has had a major impact on many applications that rely on continuous medium resolution imagery to meet their objectives. The United States Department of Agriculture (USDA) Cropland Data Layer (CDL) program uses Landsat imagery as the primary source of data to produce crop-specific maps for 20 states in the USA. A new method has been developed to fill the image gaps resulting from the SLC failure to support the needs of Landsat users who require coincident spectral data, such as for crop type mapping and monitoring. We tested the new gap-filled method for a CDL crop type mapping project in eastern Nebraska. Scan line gaps were simulated on two Landsat 5 images (spring and late summer 2003) and then gap-filled using landscape boundary models, or segment models, that were derived from 1992 and 2002 Landsat images (used in the gap-fill process). Various date combinations of original and gap-filled images were used to derive crop maps using a supervised classification process. Overall kappa values were slightly higher for crop maps derived from SLC-off gap-filled images compared to crop maps derived from the original imagery (0.3–1.3% higher). Although the age of the segment model used to derive the SLC-off gap-filled product did not negatively impact the overall agreement, differences in individual cover type agreement did increase (−0.8%–1.6% using the 2002 segment model to −5.0–5.1% using the 1992 segment model). Classification agreement also decreased for most of the classes as the size of the segment used in the gap-fill process increased.
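
    The segment-based fill can be sketched as replacing masked stripe pixels with a statistic (here the mean) of unmasked pixels in the same landscape segment, keeping filled values spectrally consistent with field boundaries. A minimal sketch with hypothetical rasters; the operational method derives its segment models from other Landsat dates:

        import numpy as np

        band = np.array([[10., 11., 30., 31.],
                         [10., 12., 30., 32.],
                         [11., 10., 31., 30.]])       # one spectral band
        segments = np.array([[0, 0, 1, 1],
                             [0, 0, 1, 1],
                             [0, 0, 1, 1]])           # segment-model labels
        gap = np.zeros_like(segments, dtype=bool)
        gap[1, 1] = gap[1, 2] = True                  # simulated SLC-off stripe

        filled = band.copy()
        for s in np.unique(segments):
            inside = segments == s
            good = inside & ~gap                      # unmasked pixels in this segment
            filled[inside & gap] = band[good].mean()  # fill with same-segment mean
        print(filled)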

  13. GPU-accelerated depth map generation for X-ray simulations of complex CAD geometries

    NASA Astrophysics Data System (ADS)

    Grandin, Robert J.; Young, Gavin; Holland, Stephen D.; Krishnamurthy, Adarsh

    2018-04-01

    Interactive x-ray simulations of complex computer-aided design (CAD) models can provide valuable insights for better interpretation of defect signatures such as porosity in x-ray CT images. Generating the depth map along a particular direction for the given CAD geometry is the most compute-intensive step in x-ray simulations. We have developed a GPU-accelerated method for real-time generation of depth maps of complex CAD geometries. We preprocess complex components designed using commercial CAD systems using a custom CAD module and convert them into a fine user-defined surface tessellation. Our CAD module can be used by different simulators as well as handle complex geometries, including those that arise from complex castings and composite structures. We then make use of a parallel algorithm that runs on a graphics processing unit (GPU) to convert the finely-tessellated CAD model to a voxelized representation. The voxelized representation can enable heterogeneous modeling of the volume enclosed by the CAD model by assigning heterogeneous material properties in specific regions. The depth maps are generated from this voxelized representation with the help of a GPU-accelerated ray-casting algorithm. The GPU-accelerated ray-casting method enables interactive (>60 frames per second) generation of the depth maps of complex CAD geometries. This enables arbitrary rotation and slicing of the CAD model, leading to better interpretation of the x-ray images by the user. In addition, the depth maps can be used to aid directly in CT reconstruction algorithms.
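
    The depth-map step can be sketched on the CPU as finding, for each image pixel, the distance to the first occupied voxel along the viewing ray; the GPU version parallelizes exactly this per-ray search. A minimal sketch for axis-aligned rays (geometry hypothetical):

        import numpy as np

        vox = np.zeros((32, 32, 32), dtype=bool)
        vox[10:20, 8:24, 12:18] = True                # voxelized part

        def depth_map(vox, dz=1.0):
            """First-hit depth along +z for every (x, y) pixel.
            Axis-aligned rays let NumPy vectorize the march."""
            depth = np.full(vox.shape[:2], np.inf)
            first_hit = vox.argmax(axis=2)            # index of first True along z
            has_hit = vox.any(axis=2)                 # rays that intersect the part
            depth[has_hit] = first_hit[has_hit] * dz
            return depth

        d = depth_map(vox)
        print(d[15, 15], d[0, 0])                     # 12.0 inside the part, inf outside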

  14. Metadata mapping and reuse in caBIG.

    PubMed

    Kunz, Isaac; Lin, Ming-Chin; Frey, Lewis

    2009-02-05

    This paper proposes that interoperability across biomedical databases can be improved by utilizing a repository of Common Data Elements (CDEs), UML model class-attributes, and simple lexical algorithms to facilitate the building of domain models. This is examined in the context of an existing system, the National Cancer Institute (NCI)'s cancer Biomedical Informatics Grid (caBIG). The goal is to demonstrate the deployment of open source tools that can be used to effectively map models and enable the reuse of existing information objects and CDEs in the development of new models for translational research applications. This effort is intended to help developers reuse appropriate CDEs to enable interoperability of their systems when developing within the caBIG framework or other frameworks that use metadata repositories. The Dice (di-grams) and Dynamic algorithms are compared, and both algorithms perform similarly in matching UML model class-attributes to CDE class object-property pairs. With the algorithms used, the baselines for automatically finding the matches are reasonable for the data models examined. This suggests that automatic mapping of UML models and CDEs is feasible within the caBIG framework and potentially any framework that uses a metadata repository. This work opens up the possibility of using mapping algorithms to reduce the cost and time required to map local data models to a reference data model such as those used within caBIG. This effort contributes to facilitating the development of interoperable systems within caBIG as well as other metadata frameworks. Such efforts are critical to address the need to develop systems to handle enormous amounts of diverse data that can be leveraged from new biomedical methodologies.
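
    The Dice (di-gram) comparison is simple to sketch: decompose each name into character bigrams and score 2|A∩B|/(|A|+|B|), then rank candidate CDEs for a UML class-attribute. A minimal sketch (all names hypothetical):

        def bigrams(s):
            s = s.lower().replace("_", " ")
            return {s[i:i + 2] for i in range(len(s) - 1)}

        def dice(a, b):
            """Dice coefficient on character-bigram sets of two names."""
            A, B = bigrams(a), bigrams(b)
            return 2 * len(A & B) / (len(A) + len(B))

        uml_attr = "patientBirthDate"
        cdes = ["Patient Birth Date", "Patient Gender Code", "Specimen Collection Date"]
        for cde in sorted(cdes, key=lambda c: dice(uml_attr, c), reverse=True):
            print(f"{dice(uml_attr, cde):.2f}  {cde}")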

  15. High Resolution IRAS Maps and IR Emission of M31 -- II. Diffuse Component and Interstellar Dust

    NASA Technical Reports Server (NTRS)

    Xu, C.; Helou, G.

    1995-01-01

    Large-scale dust heating and cooling in the diffuse medium of M31 is studied using the high resolution (HiRes) IRAS maps in conjunction with UV, optical (UBV), and HI maps. A dust heating/cooling model is developed based on a radiative transfer model which assumes a 'Sandwich' configuration of dust and stars and takes account of the effect of dust grain scattering.

  16. Finite T spectral function of a single carrier injected into an Ising chain: a comparison of 3 different models

    NASA Astrophysics Data System (ADS)

    Moeller, Mirko; Berciu, Mona

    2015-03-01

    When studying the properties of complex, magnetic materials it is often necessary to work with effective Hamiltonians. In many cases the effective Hamiltonian is obtained by mapping the full, multiband Hamiltonian onto a simpler, single-band model. A prominent example is the use of Zhang-Rice singlets to map the multiband Emery model for cuprates onto the single-band t-J model. Such mappings are usually done at zero temperature (T) and it is implicitly assumed that they are justified at finite T, as well. We present results on 3 different models of a single charge carrier (electron or hole) injected into a ferromagnetic Ising chain. Model I is a two-band, two-sublattice model, Model II is a two-band, single-sublattice model, and Model III is a single-band model, the so-called t-Jz model. Due to the absence of spin-flip terms, a numerically exact solution of all 3 models is possible, even at finite T. At zero T a mapping between all 3 models results in the same low energy physics. However, this is no longer true at finite T. Here the low energy behavior of Model III is significantly different from that of Models I and II. The reasons for this discrepancy and its implications for more realistic models (higher dimension, inclusion of spin-flip terms) are discussed. This work was supported by NSERC, QMI and the UBC 4YF (M.M.).

  17. High resolution linkage maps of the model organism Petunia reveal substantial synteny decay with the related genome of tomato.

    PubMed

    Bossolini, Eligio; Klahre, Ulrich; Brandenburg, Anna; Reinhardt, Didier; Kuhlemeier, Cris

    2011-04-01

    Two linkage maps were constructed for the model plant Petunia. Mapping populations were obtained by crossing the wild species Petunia axillaris subsp. axillaris with Petunia inflata, and Petunia axillaris subsp. parodii with Petunia exserta. Both maps cover the seven chromosomes of Petunia, and span 970 centimorgans (cM) and 700 cM of the genomes, respectively. In total, 207 markers were mapped. Of these, 28 are multilocus amplified fragment length polymorphism (AFLP) markers and 179 are gene-derived markers. For the first time we report on the development and mapping of 83 Petunia microsatellites. The two maps retain the same marker order, but display significant differences of recombination frequencies at orthologous mapping intervals. A complex pattern of genomic rearrangements was detected with the related genome of tomato (Solanum lycopersicum), indicating that synteny between Petunia and other Solanaceae crops has been considerably disrupted. The newly developed markers will facilitate the genetic characterization of mutants and ecological studies on genetic diversity and speciation within the genus Petunia. The maps will provide a powerful tool to link genetic and genomic information and will be useful to support sequence assembly of the Petunia genome.

  18. Comprehensive Evaluation and Analysis of China's Mainstream Online Map Service Websites

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Jiang, J.; Huang, W.; Wang, Q.; Gu, X.

    2012-08-01

    With the flourishing development of China's Internet market, user demand for all kinds of map services is rising continually, and the market carries tremendous commercial interests. Many Internet giants have become involved in the field of online map services and have defined them as important strategic products. The main purpose of this research is to evaluate these online map service websites comprehensively with a model, and to analyse the problems according to the evaluation results. Some corresponding solving measures are then proposed, which provides theoretical and practical guidance for the future development of the fiercely competitive online map websites. The research consists of three stages: (a) the mainstream online map service websites in China are introduced and their present situation is analysed through visits, investigation, consultation, analysis and research; (b) a comprehensive evaluation index system for online map service websites is constructed, covering functions, layout, interaction design, color and position, combined with data indexes such as time efficiency, accuracy, objectivity and authority; (c) a comprehensive evaluation of these online map service websites is carried out based on a fuzzy mathematical evaluation model, solving the difficulty of measuring map websites quantitatively.
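
    A fuzzy comprehensive evaluation of the kind described combines an indicator weight vector W with a membership matrix R into a grade vector B = W · R, and the website's rating is read off the largest component. A minimal sketch (weights, grades, and memberships hypothetical):

        import numpy as np

        # Indicator weights: functions, layout, interaction, timeliness, accuracy.
        W = np.array([0.30, 0.15, 0.20, 0.15, 0.20])

        # Membership of each indicator in the grades (excellent, good, fair, poor),
        # e.g. from expert scoring of one map website.
        R = np.array([[0.5, 0.3, 0.2, 0.0],
                      [0.2, 0.5, 0.2, 0.1],
                      [0.4, 0.4, 0.1, 0.1],
                      [0.3, 0.4, 0.2, 0.1],
                      [0.6, 0.3, 0.1, 0.0]])

        B = W @ R                                   # weighted-average fuzzy operator
        grades = ["excellent", "good", "fair", "poor"]
        print(dict(zip(grades, np.round(B, 3))), "->", grades[int(np.argmax(B))])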

  19. Deep Learning the Universe

    NASA Astrophysics Data System (ADS)

    Singh, Shiwangi; Bard, Deborah

    2017-01-01

    Weak gravitational lensing is an effective tool to map the structure of matter in the universe, and has been used for more than ten years as a probe of the nature of dark energy. Beyond the well-established two-point summary statistics, attention is now turning to methods that use the full statistical information available in the lensing observables, through analysis of the reconstructed shear field. This offers an opportunity to take advantage of powerful deep learning methods for image analysis. We present two early studies that demonstrate that deep learning can be used to characterise features in weak lensing convergence maps, and to identify the underlying cosmological model that produced them. We developed an unsupervised Denoising Convolutional Autoencoder model in order to learn an abstract representation directly from our data. This model uses a convolution-deconvolution architecture, which is fed with input data (corrupted with binomial noise to prevent over-fitting). Our model effectively trains itself to minimize the mean-squared error between the input and the output using gradient descent, resulting in a model which, theoretically, is broad enough to tackle other similarly structured problems. Using this model we were able to successfully reconstruct simulated convergence maps and identify the structures in them. We also determined which structures had the highest “importance”, i.e., which structures were most typical of the data. We note that the structures that had the highest importance in our reconstruction were around high mass concentrations, but were highly non-Gaussian. We also developed a supervised Convolutional Neural Network (CNN) for classification of weak lensing convergence maps from two different simulated theoretical models. The CNN uses a softmax classifier which minimizes a binary cross-entropy loss between the estimated distribution and true distribution. In other words, given an unseen convergence map the trained CNN determines probabilistically which theoretical model fits the data best. This preliminary work demonstrates that we can classify the cosmological model that produced the convergence maps with 80% accuracy.
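
    A minimal PyTorch sketch of the supervised CNN described above; the architecture, map size and training details are assumptions, since the paper's exact network is not given here. nn.CrossEntropyLoss combines the softmax classifier with the log-likelihood loss the abstract describes.

        import torch
        import torch.nn as nn

        # Minimal two-class CNN for (1, 64, 64) convergence-map patches;
        # the real maps and architecture were likely different.
        class ConvMapClassifier(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Linear(32 * 16 * 16, 2)  # two cosmologies

            def forward(self, x):
                return self.head(self.features(x).flatten(1))

        model = ConvMapClassifier()
        loss_fn = nn.CrossEntropyLoss()  # softmax + log-likelihood in one op
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        x = torch.randn(8, 1, 64, 64)    # stand-in for convergence maps
        y = torch.randint(0, 2, (8,))    # true simulated model labels
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        print(float(loss))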

  20. Documenting AUTOGEN and APGEN Model Files

    NASA Technical Reports Server (NTRS)

    Gladden, Roy E.; Khanampompan, Teerapat; Fisher, Forest W.; DelGuericio, Chris C.

    2008-01-01

    A computer program called "autogen hypertext map generator" satisfies a need for documenting and assisting in visualization of, and navigation through, model files used in the AUTOGEN and APGEN software mentioned in the two immediately preceding articles. This program parses autogen script files, autogen model files, PERL scripts, and apgen activity-definition files and produces a hypertext map of the files to aid in the navigation of the model. This program also provides a facility for adding notes and descriptions, beyond what is in the source model represented by the hypertext map. Further, this program provides access to a summary of the model through variable, function, subroutine, activity and resource declarations as well as providing full access to the source model and source code. The use of the tool enables easy access to the declarations and the ability to traverse routines and calls while analyzing the model.
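
    A toy sketch of the parse-and-link idea, assuming a hypothetical declaration syntax and hypothetical file names; the real autogen/APGEN grammars are richer than this single regular expression.

        import re, html, pathlib

        # Hypothetical declaration pattern -- the real grammars are richer;
        # this only illustrates parsing declarations and emitting links.
        DECL = re.compile(r'^\s*(activity|resource|sub|function)\s+(\w+)', re.M)

        def hypertext_map(paths, out="model_map.html"):
            rows = []
            for p in map(pathlib.Path, paths):
                text = p.read_text(errors="replace")
                for kind, name in DECL.findall(text):
                    rows.append(f'<li>{kind}: <a href="{html.escape(p.name)}">'
                                f'{html.escape(name)}</a></li>')
            pathlib.Path(out).write_text(
                "<html><body><ul>\n" + "\n".join(rows) + "\n</ul></body></html>")

        # hypertext_map(["plan.apgen", "model.ag"])  # hypothetical file names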

  1. A comparative analysis of chaotic particle swarm optimizations for detecting single nucleotide polymorphism barcodes.

    PubMed

    Chuang, Li-Yeh; Moi, Sin-Hua; Lin, Yu-Da; Yang, Cheng-Hong

    2016-10-01

    Evolutionary algorithms can overcome the computational limitations of statistical evaluation of large datasets for high-order single nucleotide polymorphism (SNP) barcodes. Previous studies have proposed several chaotic particle swarm optimization (CPSO) methods to detect SNP barcodes for disease analysis (e.g., for breast cancer and chronic diseases). This work evaluated additional chaotic maps combined with the particle swarm optimization (PSO) method to detect SNP barcodes using a high-dimensional dataset. Nine chaotic maps were used to improve the PSO results, and the search ability of all CPSO methods was compared. The XOR and ZZ disease models were used to compare all chaotic maps combined with the PSO method. Efficacy evaluations of the CPSO methods were based on statistical values from the chi-square test (χ²). The results showed that chaotic maps can improve the search ability of the PSO method when the population is trapped in a local optimum. The minor allele frequency (MAF) analysis indicated that, amongst all CPSO methods, the Sinai chaotic map combined with the PSO method performed best across the numbers of SNPs, sample sizes, and highest χ² values in all datasets. We used simple linear regression on the gbest values over all generations to compare the methods. The Sinai chaotic map combined with the PSO method provided the highest β values (β≥0.32 in the XOR disease model and β≥0.04 in the ZZ disease model) and significant p-values (p<0.001 in both the XOR and ZZ disease models). The Sinai chaotic map was found to effectively enhance the fitness values (χ²) of the PSO method, indicating that it is more effective at detecting potential SNP barcodes in both the XOR and ZZ disease models. Copyright © 2016 Elsevier B.V. All rights reserved.
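
    A minimal sketch of a chaotic PSO, using the logistic map (one of the classic chaotic maps; the study's best performer was the two-dimensional Sinai map) to modulate the inertia weight. The fitness function, bounds and coefficients are illustrative stand-ins for the χ²-based SNP-barcode objective.

        import numpy as np

        rng = np.random.default_rng(0)

        def sphere(x):                   # stand-in fitness; the study used chi2
            return (x ** 2).sum(axis=1)

        def chaotic_pso(f, dim=10, n=30, iters=200):
            pos = rng.uniform(-5, 5, (n, dim))
            vel = np.zeros((n, dim))
            pbest, pval = pos.copy(), f(pos)
            g = pbest[pval.argmin()].copy()
            z = 0.7                      # logistic-map state for the inertia
            for _ in range(iters):
                z = 4.0 * z * (1.0 - z)  # logistic chaotic map in (0, 1)
                w = 0.4 + 0.5 * z        # chaos-modulated inertia weight
                r1, r2 = rng.random((2, n, dim))
                vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (g - pos)
                pos = pos + vel
                val = f(pos)
                better = val < pval
                pbest[better], pval[better] = pos[better], val[better]
                g = pbest[pval.argmin()].copy()
            return g, pval.min()

        print(chaotic_pso(sphere)[1])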

  2. Converting Parkinson-Specific Scores into Health State Utilities to Assess Cost-Utility Analysis.

    PubMed

    Chen, Gang; Garcia-Gordillo, Miguel A; Collado-Mateo, Daniel; Del Pozo-Cruz, Borja; Adsuar, José C; Cordero-Ferrera, José Manuel; Abellán-Perpiñán, José María; Sánchez-Martínez, Fernando Ignacio

    2018-06-07

    The aim of this study was to compare the Parkinson's Disease Questionnaire-8 (PDQ-8) with three multi-attribute utility (MAU) instruments (EQ-5D-3L, EQ-5D-5L, and 15D) and to develop mapping algorithms that could be used to transform PDQ-8 scores into MAU scores. A cross-sectional study was conducted. A final sample of 228 evaluable patients was included in the analyses. Sociodemographic and clinical data were also collected. The two EQ-5D questionnaires were scored using Spanish tariffs. Two model specifications were estimated with three statistical techniques in the direct-mapping framework for all three MAU instruments: the widely used ordinary least squares (OLS), the robust MM-estimator, and the generalized linear model (GLM). For both EQ-5D-3L and EQ-5D-5L, indirect response mapping based on an ordered logit model was also conducted. Three goodness-of-fit measures were employed to compare the models: the mean absolute error (MAE), the root-mean-square error (RMSE), and the intra-class correlation coefficient (ICC) between the predicted and observed utilities. Health state utility scores ranged from 0.61 (EQ-5D-3L) to 0.74 (15D). The mean PDQ-8 score was 27.51. The correlation between overall PDQ-8 score and each MAU instrument ranged from −0.729 (EQ-5D-5L) to −0.752 (EQ-5D-3L). A mapping algorithm based on PDQ-8 items had better performance than one using the overall score. For the two EQ-5D questionnaires, in general, the indirect mapping approach had comparable or even better performance than direct mapping based on MAE. Mapping algorithms developed in this study enable the estimation of utility values from the PDQ-8. The indirect mapping equations reported for the two EQ-5D questionnaires will further facilitate the calculation of EQ-5D utility scores using other country-specific tariffs.
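
    A minimal sketch of the direct-mapping approach with OLS, using synthetic stand-in data in place of the 228 patients; it regresses utility on the eight PDQ-8 item scores and reports MAE and RMSE, two of the goodness-of-fit measures used in the study.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic stand-ins: 228 patients, eight PDQ-8 items scored 0-4,
        # and an EQ-5D-like utility that worsens with symptom burden.
        X = rng.integers(0, 5, size=(228, 8)).astype(float)
        utility = 0.95 - 0.03 * X.sum(axis=1) + rng.normal(0, 0.05, 228)

        # OLS direct mapping: utility ~ intercept + PDQ-8 items.
        A = np.column_stack([np.ones(len(X)), X])
        beta, *_ = np.linalg.lstsq(A, utility, rcond=None)
        pred = A @ beta

        mae = np.abs(pred - utility).mean()
        rmse = np.sqrt(((pred - utility) ** 2).mean())
        print(f"MAE={mae:.4f}  RMSE={rmse:.4f}")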

  3. Temporal similarity perfusion mapping: A standardized and model-free method for detecting perfusion deficits in stroke

    PubMed Central

    Song, Sunbin; Luby, Marie; Edwardson, Matthew A.; Brown, Tyler; Shah, Shreyansh; Cox, Robert W.; Saad, Ziad S.; Reynolds, Richard C.; Glen, Daniel R.; Cohen, Leonardo G.; Latour, Lawrence L.

    2017-01-01

    Introduction: Interpretation of the extent of perfusion deficits in stroke MRI is highly dependent on the method used for analyzing the perfusion-weighted signal intensity time-series after gadolinium injection. In this study, we introduce a new model-free standardized method of temporal similarity perfusion (TSP) mapping for perfusion deficit detection and test its ability and reliability in acute ischemia. Materials and methods: Forty patients with an ischemic stroke or transient ischemic attack were included. Two blinded readers compared real-time generated interactive maps and automatically generated TSP maps to traditional TTP/MTT maps for the presence of perfusion deficits. Lesion volumes were compared for volumetric inter-rater reliability, spatial concordance between perfusion deficits and healthy tissue, and contrast-to-noise ratio (CNR). Results: Perfusion deficits were correctly detected in all patients with acute ischemia. Inter-rater reliability was higher for TSP than for TTP/MTT maps, and the Pearson correlation between lesion volumes depicted on TSP and TTP/MTT was high (r(18) = 0.73, p<0.0003). The effective CNR was greater for TSP compared to TTP (352.3 vs 283.5, t(19) = 2.6, p<0.03) and MTT (228.3, t(19) = 2.8, p<0.03). Discussion: TSP maps provide a reliable and robust model-free method for accurate perfusion deficit detection and improve lesion delineation compared to traditional methods. This simple method is also computationally faster and more easily automated than model-based methods. It can potentially improve the speed and accuracy of perfusion deficit detection for acute stroke treatment and clinical trial inclusion decision-making. PMID:28973000
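
    One hedged reading of a temporal-similarity map (the authors' exact algorithm is not spelled out in the abstract) is the Pearson correlation of each voxel's signal-time curve with a reference curve, so that delayed or dispersed bolus passage shows up as low similarity. A toy version:

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy perfusion series: 32 x 32 voxels, 40 time points after injection.
        t = np.linspace(0, 60, 40)
        bolus = np.exp(-0.5 * ((t - 20) / 6) ** 2)           # reference passage
        vox = np.tile(bolus, (32, 32, 1)) + rng.normal(0, 0.05, (32, 32, 40))
        vox[8:16, 8:16] = np.roll(vox[8:16, 8:16], 8, axis=-1)  # delayed region

        # Temporal-similarity map: Pearson r of each voxel curve against the
        # global mean curve; low r suggests delayed/dispersed perfusion.
        ref = vox.mean(axis=(0, 1))
        v = vox - vox.mean(axis=-1, keepdims=True)
        r = ref - ref.mean()
        tsp = (v * r).sum(-1) / (np.linalg.norm(v, axis=-1) * np.linalg.norm(r))
        print("median r:", np.median(tsp).round(2),
              "deficit region r:", np.median(tsp[8:16, 8:16]).round(2))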

  4. Mapping Venus: Modeling the Magellan Mission.

    ERIC Educational Resources Information Center

    Richardson, Doug

    1997-01-01

    Provides details of an activity designed to help students understand the relationship between astronomy and geology. Applies concepts of space research and map-making technology to the construction of a topographic map of a simulated section of Venus. (DDR)

  5. Binational digital soils map of the Ambos Nogales watershed, southern Arizona and northern Sonora, Mexico

    USGS Publications Warehouse

    Norman, Laura

    2004-01-01

    We have prepared a digital map of soil parameters for the international Ambos Nogales watershed to use as input for selected soils-erosion models. The Ambos Nogales watershed in southern Arizona and northern Sonora, Mexico, contains the Nogales wash, a tributary of the Upper Santa Cruz River. The watershed covers an area of 235 km², just under half of which is in Mexico. Preliminary investigations of potential erosion revealed a discrepancy in soils data and mapping across the United States-Mexican border due to issues including different mapping resolutions, incompatible formatting, and varying nomenclature and classification systems. To prepare a digital soils map appropriate for input to a soils-erosion model, the historical analog soils maps for Nogales, Ariz., were scanned and merged with the larger-scale digital soils data available for Nogales, Sonora, Mexico, using a geographic information system.

  6. Delineation and segmentation of cerebral tumors by mapping blood-brain barrier disruption with dynamic contrast-enhanced CT and tracer kinetics modeling-a feasibility study.

    PubMed

    Bisdas, S; Yang, X; Lim, C C T; Vogl, T J; Koh, T S

    2008-01-01

    Dynamic contrast-enhanced (DCE) imaging is a promising approach for in vivo assessment of tissue microcirculation. Twenty patients with clinical and routine computed tomography (CT) evidence of intracerebral neoplasm were examined with DCE-CT imaging. Using a distributed-parameter model for tracer kinetics modeling of DCE-CT data, voxel-level maps of cerebral blood flow (F), intravascular blood volume (vi) and intravascular mean transit time (t1) were generated. Permeability-surface area product (PS), extravascular extracellular blood volume (ve) and extraction ratio (E) maps were also calculated to reveal pathologic locations of tracer extravasation, which are indicative of disruptions in the blood-brain barrier (BBB). All maps were visually assessed for quality of tumor delineation and measurement of tumor extent by two radiologists. Kappa (κ) coefficients and their 95% confidence intervals (CI) were calculated to determine the interobserver agreement for each DCE-CT map. There was substantial agreement on tumor delineation quality in the F, ve and t1 maps. The agreement on the quality of tumor delineation was excellent for the vi, PS and E maps. Concerning the measurement of tumor extent, excellent and nearly excellent agreement was achieved only for the E and PS maps, respectively. Based on these results, we performed a segmentation of the cerebral tumors on the basis of the E maps. The interobserver agreement for tumor extent quantification based on manual segmentation of tumors in the E maps vs. computer-assisted segmentation was excellent (κ = 0.96, CI: 0.93-0.99). The interobserver agreement for tumor extent quantification based on computer segmentation in the mean images and the E maps was substantial (κ = 0.52, CI: 0.42-0.59). This study illustrates the diagnostic usefulness of parametric maps associated with BBB disruption on a physiology-based approach and highlights the feasibility of automatic segmentation of cerebral tumors.
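
    The interobserver agreement reported above can be reproduced in form with a small Cohen's kappa computation; the two readers' scores below are fabricated toy data.

        import numpy as np

        def cohens_kappa(r1, r2):
            """Cohen's kappa for two raters' categorical scores."""
            cats = np.unique(np.concatenate([r1, r2]))
            n = len(r1)
            table = np.array([[np.sum((r1 == a) & (r2 == b)) for b in cats]
                              for a in cats], dtype=float)
            po = np.trace(table) / n                     # observed agreement
            pe = (table.sum(1) @ table.sum(0)) / n ** 2  # chance agreement
            return (po - pe) / (1 - pe)

        # Two readers scoring delineation quality on 20 maps (toy data).
        reader1 = np.array([3, 3, 2, 3, 1, 2, 3, 3, 2, 3,
                            3, 2, 1, 3, 3, 2, 3, 3, 2, 3])
        reader2 = np.array([3, 3, 2, 3, 1, 2, 3, 2, 2, 3,
                            3, 2, 1, 3, 3, 2, 3, 3, 3, 3])
        print(round(cohens_kappa(reader1, reader2), 2))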

  7. Recent development in preparation of European soil hydraulic maps

    NASA Astrophysics Data System (ADS)

    Toth, B.; Weynants, M.; Pasztor, L.; Hengl, T.

    2017-12-01

    Reliable quantitative information on soil hydraulic properties is crucial for modelling the hydrological, meteorological, ecological and biological processes of the Critical Zone. Most Earth system models need information on soil moisture retention capacity and hydraulic conductivity over the full matric potential range. These soil hydraulic properties can be quantified, but their measurement is expensive and time consuming; therefore measurement-based catchment-scale mapping of these soil properties is not feasible. The increasing availability of soil information, together with methods describing relationships between simple soil characteristics and soil hydraulic properties, makes it possible to derive soil hydraulic maps from spatial soil datasets and pedotransfer functions (PTFs). Over the last decade there has been significant progress in the preparation of soil hydraulic maps. Spatial datasets of model parameters describing soil hydraulic processes have become available for countries, continents and even the whole globe. Our aim is to present European soil hydraulic maps, show their performance, highlight their advantages and drawbacks, and propose possible ways to further improve their performance.
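
    A purely illustrative pedotransfer function, to make the PTF idea concrete; the linear form and coefficients are made up, whereas the operational PTFs behind such maps are calibrated regressions or tree-based models fitted to laboratory data.

        import numpy as np

        # Illustrative linear PTF: predict saturated water content theta_s
        # from sand, clay and organic-matter percentages. Coefficients are
        # invented for demonstration only.
        def theta_s(sand_pct, clay_pct, om_pct):
            return 0.30 + 0.0012 * clay_pct - 0.0008 * sand_pct + 0.004 * om_pct

        # Applying the PTF cell-by-cell to texture rasters yields the map.
        sand = np.array([[80.0, 40.0], [20.0, 55.0]])
        clay = np.array([[5.0, 25.0], [45.0, 20.0]])
        om = np.array([[1.0, 2.5], [4.0, 1.5]])
        print(theta_s(sand, clay, om).round(3))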

  8. A method to estimate the effect of deformable image registration uncertainties on daily dose mapping

    PubMed Central

    Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin

    2012-01-01

    Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
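
    A minimal numpy sketch of the sampling procedure as described: PCA-decompose observed DVF error maps, sample the decorrelated mode coefficients independently, and reconstruct synthetic, spatially correlated error maps. Sizes and the toy error model are invented.

        import numpy as np

        rng = np.random.default_rng(3)

        # Stand-in: 25 observed DVF error maps, each flattened to one row,
        # with built-in spatial correlation from shared hidden patterns.
        n_maps, n_vox = 25, 500
        base = rng.normal(size=(5, n_vox))        # 5 smooth hidden patterns
        errors = rng.normal(size=(n_maps, 5)) @ base

        # PCA via SVD of the mean-centred error matrix.
        mean = errors.mean(0)
        U, s, Vt = np.linalg.svd(errors - mean, full_matrices=False)
        coeff_std = s / np.sqrt(n_maps - 1)       # per-mode coefficient spread

        # Sample each decorrelated mode independently, then reconstruct a
        # synthetic, spatially-correlated error map.
        synthetic = mean + (rng.normal(size=s.size) * coeff_std) @ Vt
        print(synthetic.shape, synthetic.std().round(3), errors.std().round(3))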

  9. Land cover map for map zones 8 and 9 developed from SAGEMAP, GNN, and SWReGAP: a pilot for NWGAP

    Treesearch

    James S. Kagan; Janet L. Ohmann; Matthew Gregory; Claudine Tobalske

    2008-01-01

    As part of the Northwest Gap Analysis Project, land cover maps were generated for most of eastern Washington and eastern Oregon. The maps were derived from regional SAGEMAP and SWReGAP data sets using decision tree classifiers for nonforest areas, and Gradient Nearest Neighbor imputation modeling for forests and woodlands. The maps integrate data from regional...

  10. A Probabilistic Strategy for Understanding Action Selection

    PubMed Central

    Kim, Byounghoon; Basso, Michele A.

    2010-01-01

    Brain regions involved in transforming sensory signals into movement commands are the likely sites where decisions are formed. Once formed, a decision must be read out from the activity of populations of neurons to produce a choice of action. How this occurs remains unresolved. We recorded from four superior colliculus (SC) neurons simultaneously while monkeys performed a target selection task. We implemented three models to gain insight into the computational principles underlying population coding of action selection. We compared the population vector average (PVA), winner-takes-all (WTA) and a Bayesian model, the maximum a posteriori estimate (MAP), to determine which predicted choices most often. The probabilistic model predicted more trials correctly than both the WTA and the PVA. The MAP model predicted 81.88% of trials correctly, whereas WTA predicted 71.11%, and the PVA and OLE predicted the fewest trials correctly (55.71% and 69.47%). Recovering MAP estimates using simulated, non-uniform priors that correlated with the monkeys' choice performance improved the accuracy of the model by 2.88%. A dynamic analysis revealed that the MAP estimate evolved over time and the posterior probability of the saccade choice reached a maximum at the time of the saccade. MAP estimates also scaled with choice performance accuracy. Although there was overlap in the prediction abilities of all the models, we conclude that movement choice from populations of neurons may be best understood by considering frameworks based on probability. PMID:20147560
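
    A toy comparison of the three read-outs on a small cosine-tuned population; the tuning curves, Poisson noise model and flat prior (which makes MAP coincide with maximum likelihood) are assumptions for illustration, not the study's fitted models.

        import numpy as np

        rng = np.random.default_rng(4)

        targets = np.array([0.0, 90.0, 180.0, 270.0])    # choice directions
        pref = targets.copy()                            # 4 SC-like neurons

        def rates(theta):
            """Cosine-tuned mean rates plus Poisson noise."""
            mean = 10 + 8 * np.cos(np.deg2rad(theta - pref))
            return rng.poisson(mean)

        theta_true = 90.0
        r = rates(theta_true)

        # Winner-takes-all: preferred direction of the most active neuron.
        wta = pref[r.argmax()]

        # Population vector average: rate-weighted sum of preferred vectors.
        vec = (r[:, None] * np.column_stack([np.cos(np.deg2rad(pref)),
                                             np.sin(np.deg2rad(pref))])).sum(0)
        pva = np.rad2deg(np.arctan2(vec[1], vec[0])) % 360

        # MAP: maximise the Poisson log-likelihood over the four targets
        # (flat prior, so MAP == maximum likelihood here).
        mu = 10 + 8 * np.cos(np.deg2rad(targets[:, None] - pref[None, :]))
        loglik = (r * np.log(mu) - mu).sum(axis=1)
        map_est = targets[loglik.argmax()]
        print(wta, round(pva, 1), map_est)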

  11. High resolution global flood hazard map from physically-based hydrologic and hydraulic models.

    NASA Astrophysics Data System (ADS)

    Begnudelli, L.; Kaheil, Y.; McCollum, J.

    2017-12-01

    The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and a 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D finite-volume shallow-water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30-m SRTM DEM is now available worldwide, along with higher-accuracy and/or higher-resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free-surface elevation inside cells and for computing inter-cell fluxes. This approach achieves nearly the computational speed typical of coarse grids while preserving, to a significant extent, the accuracy offered by the much higher resolution of the available DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak corresponds to the return period of the hazard map being produced (e.g. 100 years, 500 years). Each numerical simulation models one river reach, except for the longest reaches, which are split into smaller parts. Here we show results for selected river basins worldwide.
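
    The up-scaling idea can be sketched as follows, under the assumption (consistent with the description above) that each coarse cell keeps a stage-volume relationship tabulated from the full-resolution DEM; the tile size and elevations are invented.

        import numpy as np

        rng = np.random.default_rng(5)

        # Fine-resolution DEM tile (e.g. 30 m cells) to be merged into one
        # coarse hydraulic cell; elevations in metres.
        dem = rng.normal(100.0, 2.0, size=(10, 10))
        cell_area = 30.0 * 30.0

        def volume_at_stage(eta):
            """Stored water volume in the coarse cell at free-surface level
            eta, computed from the full-resolution topography."""
            depth = np.clip(eta - dem, 0.0, None)
            return depth.sum() * cell_area

        # Tabulate the stage-volume curve once; the solver can then invert
        # it (volume -> free-surface elevation) during the simulation.
        stages = np.linspace(dem.min(), dem.max() + 2.0, 50)
        volumes = np.array([volume_at_stage(s) for s in stages])
        eta_of_volume = lambda V: np.interp(V, volumes, stages)
        print(eta_of_volume(volumes[25]).round(2), stages[25].round(2))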

  12. Route-choice modeling using GPS-based travel surveys.

    DOT National Transportation Integrated Search

    2013-06-01

    The advent of GPS-based travel surveys offers an opportunity to develop empirically rich route-choice models. However, the GPS traces must first be mapped to the roadway network (map-matching) to identify the network links actually traversed. For thi...
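
    A deliberately naive map-matching sketch, snapping each GPS fix to the nearest straight-line link by perpendicular distance; production map-matching typically adds heading, topology and HMM-style sequence constraints. The links and fix below are invented.

        import numpy as np

        # Each link: (x1, y1, x2, y2) endpoints of a straight road segment.
        links = np.array([[0, 0, 100, 0],      # link 0: east-west street
                          [50, -50, 50, 50]])  # link 1: north-south street

        def match(point, links):
            """Snap a GPS fix to the nearest link (perpendicular distance)."""
            p = np.asarray(point, float)
            a, b = links[:, :2], links[:, 2:]
            ab = b - a
            t = np.clip(((p - a) * ab).sum(1) / (ab ** 2).sum(1), 0.0, 1.0)
            foot = a + t[:, None] * ab        # closest point on each link
            d = np.linalg.norm(foot - p, axis=1)
            return int(d.argmin()), float(d.min())

        print(match((48.0, 3.0), links))      # noisy fix near both streets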

  13. A method for creating a three-dimensional model from published geologic maps and cross sections

    USGS Publications Warehouse

    Walsh, Gregory J.

    2009-01-01

    This brief report presents a relatively inexpensive and rapid method for creating a 3D model of geology from published quadrangle-scale maps and cross sections using Google Earth and Google SketchUp software. An example from the Green Mountains of Vermont, USA, is used to illustrate the step-by-step methods used to create such a model. A second example is provided from the Jebel Saghro region of the Anti-Atlas Mountains of Morocco. The report was published to help enhance the public's ability to use and visualize geologic map data.

  14. Exact mapping of the 2+1 Dirac oscillator onto the Jaynes-Cummings model: Ion-trap experimental proposal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bermudez, A.; Martin-Delgado, M. A.; Solano, E.

    2007-10-15

    We study the dynamics of the 2+1 Dirac oscillator exactly and find spin oscillations due to a Zitterbewegung of purely relativistic origin. We find an exact mapping of this quantum-relativistic system onto a Jaynes-Cummings model, describing the interaction of a two-level atom with a quantized single-mode field. This equivalence allows us to map a series of quantum optical phenomena onto the relativistic oscillator and vice versa. We make a realistic experimental proposal, within reach of current technology, for studying the equivalence of both models using a single trapped ion.
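
    Schematically, and up to conventions and exact prefactors that should be taken from the paper itself, the identification reads:

        H_{\mathrm{DO}} \;=\; c\,\boldsymbol{\sigma}\cdot\bigl(\mathbf{p} - i\,m\omega\,\sigma_z\,\mathbf{r}\bigr) + \sigma_z\, m c^2
        \;\;\longmapsto\;\;
        H_{\mathrm{JC}} \;=\; \hbar g\,\bigl(\sigma^{+} a + \sigma^{-} a^{\dagger}\bigr) + m c^2\,\sigma_z ,
        \qquad \hbar g \,\sim\, c\,\sqrt{2\,\hbar\, m\,\omega} .

    On this reading the rest energy plays the role of the detuning and the orbital quanta play the role of the photon mode, so the spin oscillations of the Zitterbewegung can be viewed as Rabi-like oscillations of the Jaynes-Cummings dynamics.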

  15. Utilization of geoinformation tools for the development of forest fire hazard mapping system: example of Pekan fire, Malaysia

    NASA Astrophysics Data System (ADS)

    Mahmud, Ahmad Rodzi; Setiawan, Iwan; Mansor, Shattri; Shariff, Abdul Rashid Mohamed; Pradhan, Biswajeet; Nuruddin, Ahmed

    2009-12-01

    Modeling fire hazard assessment is essential to establishing an effective forest fire management system, especially for controlling and preventing peat fires. In this paper, we have used a geographic information system (GIS), in combination with other geoinformation technologies such as remote sensing and computer modeling, for all aspects of wildland fire management. Identifying areas that have a high probability of burning is an important component of fire management planning. The development of spatially explicit GIS models has greatly facilitated this process by allowing managers to map and analyze variables contributing to fire occurrence across large, unique geographic units. Using the model and its associated software engine, the fire hazard map was produced. Extensive Avenue programming scripts were written to provide additional capabilities in the development of these interfaces, to meet the full complement of operational software requirements of various users. The system developed not only provides user-friendly step-by-step operations to deliver the fire vulnerability mapping but also allows authorized users to edit, add or modify parameters whenever necessary. Results from the model can support fire hazard mapping in the forest, enhance the alert system function by simulating and visualizing forest fires, and help contingency planning.

  16. Concept mapping as an approach for expert-guided model building: The example of health literacy.

    PubMed

    Soellner, Renate; Lenartz, Norbert; Rudinger, Georg

    2017-02-01

    Concept mapping served as the starting point for capturing the comprehensive structure of the construct of 'health literacy.' Ideas about health literacy were generated by 99 experts and resulted in 105 statements that were subsequently organized by 27 experts in an unstructured card sorting. Multidimensional scaling was applied to the sorting data, and two- and three-dimensional solutions were computed. The three-dimensional solution was used in a subsequent cluster analysis and resulted in a concept map of nine "clusters": (1) self-regulation, (2) self-perception, (3) proactive approach to health, (4) basic literacy and numeracy skills, (5) information appraisal, (6) information search, (7) health care system knowledge and acting, (8) communication and cooperation, and (9) beneficial personality traits. Subsequently, this concept map served as a starting point for developing a "qualitative" structural model of health literacy and a questionnaire for the measurement of health literacy. On the basis of questionnaire data, a "quantitative" structural model was created by first applying exploratory factor analysis (EFA) and then cross-validating the model with confirmatory factor analysis (CFA). Concept mapping proved to be a highly valuable tool for the process of model building, up to translational research in the "real world". Copyright © 2016 Elsevier Ltd. All rights reserved.
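
    A compact sketch of the analysis pipeline described above (card-sort co-occurrence, dissimilarities, three-dimensional MDS, then clustering); scikit-learn's MDS and k-means stand in for whatever software the authors used, and the sort data are random stand-ins.

        import numpy as np
        from sklearn.manifold import MDS
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(6)

        # Toy card-sort: 27 sorters assign 105 statements to free piles.
        n_sorters, n_items = 27, 105
        sorts = rng.integers(0, 9, size=(n_sorters, n_items))

        # Co-occurrence: fraction of sorters who put items i and j together;
        # dissimilarity = 1 - co-occurrence (diagonal forced to zero).
        co = (sorts[:, :, None] == sorts[:, None, :]).mean(axis=0)
        diss = 1.0 - co
        np.fill_diagonal(diss, 0.0)

        # Three-dimensional MDS embedding, then k-means into nine clusters,
        # mirroring the procedure described in the abstract.
        xyz = MDS(n_components=3, dissimilarity="precomputed",
                  random_state=0).fit_transform(diss)
        labels = KMeans(n_clusters=9, n_init=10, random_state=0).fit_predict(xyz)
        print(np.bincount(labels))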

  17. GIS-based realization of international standards for digital geological mapping - developments in planetary mapping

    NASA Astrophysics Data System (ADS)

    Nass, Andrea; van Gasselt, Stephan; Jaumann, Ralf

    2010-05-01

    The Helmholtz Alliance and the European Planetary Network are research communities with different main topics. One of the main research topics shared by these communities is the geomorphological evolution of planetary surfaces, as well as the geological context of life. This research addresses questions such as "Is there volcanic activity on a planet?" or "Where are possible landing sites?". To help answer such questions, analyses of surface features and morphometric measurements need to be performed. This ultimately leads to the generation of thematic maps (e.g. geological and geomorphological maps) as a basis for further studies. By using modern GIS techniques, the comparative work and generalisation during the mapping process result in new information. These insights are crucial for subsequent investigations; therefore, the aim is to make these results available to the research community as a secondary data basis. In order to obtain a common and interoperable data collection, the results of different mapping projects have to follow a standardised data infrastructure, metadata definition and map layout. We are therefore currently focussing on the generation of a database model arranging all data and processes in a uniform mapping schema. With the help of such a schema, the mapper will be able to utilise a predefined (but customisable) GIS environment with individual tool items as well as a standardised symbolisation and a metadata environment. This environment is based on a data model which is currently at a conceptual level and provides the layout of the data infrastructure, including relations and topologies. One of the first tasks towards this data model is the definition of a consistent basis of symbolisation standards developed for planetary mapping. The mapper/geologist will be able to access the pre-built signatures and utilise them in scale dependence within the mapping project. The symbolisation will be related to the data model in the next step. As a second task, we designed a concept for the description of the digital mapping result. For this we are creating a metadata template based on existing standards, tailored to the individual needs of the planetary sciences. This template is subdivided into (meta)data about the general map content (e.g. the data on which the mapping result is based) and metadata for each individual mapping element/layer, comprising information such as minimum mapping scale, interpretation hints, etc. The assignment of such a metadata description, in combination with the usage of a predefined mapping schema, facilitates the efficient and traceable storage of data information on a network server and enables a subsequent representation, e.g. as a mapserver data structure. Acknowledgement: This work is partly supported by DLR and the Helmholtz Alliance "Planetary Evolution and Life".

  18. Disease mapping based on stochastic SIR-SI model for Dengue and Chikungunya in Malaysia

    NASA Astrophysics Data System (ADS)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2014-12-01

    This paper describes and demonstrates a method for relative risk estimation based on the stochastic SIR-SI vector-borne infectious disease transmission model, specifically for Dengue and Chikungunya in Malaysia. Firstly, the common compartmental model for vector-borne infectious disease transmission, the SIR-SI model (susceptible-infective-recovered for human populations; susceptible-infective for vector populations), is presented. This is followed by an explanation of the stochastic SIR-SI model, which involves a Bayesian description. This stochastic model is then used in the relative risk formulation in order to obtain the posterior relative risk estimation. The approach is demonstrated using Dengue and Chikungunya data for Malaysia. The viruses of these diseases are transmitted by the same female vector mosquitoes, Aedes aegypti and Aedes albopictus. Finally, the findings of the relative risk estimation analysis for both Dengue and Chikungunya are presented, compared and displayed in graphs and maps. The risk maps show the high- and low-risk areas of Dengue and Chikungunya occurrence and can be used as a tool for prevention and control strategies for both diseases.
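
    A deterministic, discrete-time skeleton of the SIR-SI compartments may help fix ideas; the study's model is stochastic and embedded in a Bayesian relative-risk formulation, and all parameter values below are illustrative only.

        import numpy as np

        # Discrete-time SIR-SI skeleton: humans (S_h, I_h, R_h) and
        # vectors (S_v, I_v). Parameter values are invented.
        days = 200
        N_h, N_v = 10_000.0, 50_000.0
        beta_hv, beta_vh = 0.30, 0.10   # vector->human, human->vector
        gamma, mu_v = 0.10, 0.07        # human recovery, vector turnover

        S_h, I_h, R_h = N_h - 10, 10.0, 0.0
        S_v, I_v = N_v, 0.0
        series = []
        for _ in range(days):
            new_h = beta_hv * S_h * I_v / N_v   # human infections from vectors
            new_v = beta_vh * S_v * I_h / N_h   # vector infections from humans
            rec = gamma * I_h
            S_h -= new_h
            I_h += new_h - rec
            R_h += rec
            S_v += mu_v * (N_v - S_v) - new_v   # births replace dead vectors
            I_v += new_v - mu_v * I_v
            series.append(I_h)
        print(f"peak human infectives: {max(series):.0f}")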

  20. Forgotten forests--issues and prospects in biome mapping using Seasonally Dry Tropical Forests as a case study.

    PubMed

    Särkinen, Tiina; Iganci, João R V; Linares-Palomino, Reynaldo; Simon, Marcelo F; Prado, Darién E

    2011-11-24

    South America is one of the most species diverse continents in the world. Within South America diversity is not distributed evenly at both local and continental scales and this has led to the recognition of various areas with unique species assemblages. Several schemes currently exist which divide the continental-level diversity into large species assemblages referred to as biomes. Here we review five currently available biome maps for South America, including the WWF Ecoregions, the Americas basemap, the Land Cover Map of South America, Morrone's Biogeographic regions of Latin America, and the Ecological Systems Map. The comparison is performed through a case study on the Seasonally Dry Tropical Forest (SDTF) biome using herbarium data of habitat specialist species. Current biome maps of South America perform poorly in depicting SDTF distribution. The poor performance of the maps can be attributed to two main factors: (1) poor spatial resolution, and (2) poor biome delimitation. Poor spatial resolution strongly limits the use of some of the maps in GIS applications, especially for areas with heterogeneous landscape such as the Andes. Whilst the Land Cover Map did not suffer from poor spatial resolution, it showed poor delimitation of biomes. The results highlight that delimiting structurally heterogeneous vegetation is difficult based on remotely sensed data alone. A new refined working map of the South American SDTF biome is proposed, derived using the Biome Distribution Modelling (BDM) approach where georeferenced herbarium data are used in conjunction with bioclimatic data. Georeferenced specimen data potentially play an important role in biome mapping. Our study shows that herbarium data could be used as a way of ground-truthing biome maps in silico. The results also illustrate that herbarium data can be used to model vegetation maps through predictive modelling. The BDM approach is a promising new method in biome mapping, and could be particularly useful for mapping poorly known, fragmented, or degraded vegetation. We wish to highlight that biome delimitation is not an exact science, and that transparency is needed on how biomes are used as study units in macroevolutionary and ecological research.

  2. On integrability of the Yang-Baxter σ-model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimcik, Ctirad

    2009-04-15

    We prove that the recently introduced Yang-Baxter σ-model can be considered an integrable deformation of the principal chiral model. We also find an explicit one-to-one map transforming every solution of the principal chiral model into a solution of the deformed model. With the help of this map, the standard procedure of dressing principal chiral solutions can be transferred directly into the deformed Yang-Baxter context.
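
    For orientation, the Yang-Baxter deformation is often written in the following form (normalisations differ between references, so treat this as a sketch rather than the paper's exact conventions):

        S_\eta(g) \;=\; \tfrac{1}{2}\int d\tau\, d\sigma\;
        \Bigl\langle\, g^{-1}\partial_+ g \,,\; \bigl(1-\eta R\bigr)^{-1}\, g^{-1}\partial_- g \,\Bigr\rangle ,

    where g is group valued, R is a skew-symmetric operator solving the modified classical Yang-Baxter equation, and setting η = 0 recovers the principal chiral model.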

  3. Optimizing spectral wave estimates with adjoint-based sensitivity maps

    NASA Astrophysics Data System (ADS)

    Orzech, Mark; Veeramony, Jay; Flampouris, Stylianos

    2014-04-01

    A discrete numerical adjoint has recently been developed for the stochastic wave model SWAN. In the present study, this adjoint code is used to construct spectral sensitivity maps for two nearshore domains. The maps display the correlations of spectral energy levels throughout the domain with the observed energy levels at a selected location or region of interest (LOI/ROI), providing a full spectrum of values at all locations in the domain. We investigate the effectiveness of sensitivity maps based on significant wave height (Hs) in determining alternate offshore instrument deployment sites when a chosen nearshore location or region is inaccessible. Wave and bathymetry datasets are employed from one shallower, small-scale domain (Duck, NC) and one deeper, larger-scale domain (San Diego, CA). The effects of seasonal changes in wave climate, errors in bathymetry, and multiple assimilation points on sensitivity map shapes and model performance are investigated. Model accuracy is evaluated by comparing spectral statistics as well as with an RMS skill score, which estimates a mean model-data error across all spectral bins. Results indicate that data assimilation from identified high-sensitivity alternate locations consistently improves model performance at nearshore LOIs, while assimilation from low-sensitivity locations results in lesser or no improvement. Use of sub-sampled or alongshore-averaged bathymetry has a domain-specific effect on model performance when assimilating from a high-sensitivity alternate location. When multiple alternate assimilation locations are used from areas of lower sensitivity, model performance may be worse than with a single, high-sensitivity assimilation point.
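
    An adjoint-free analogue may clarify what such a sensitivity map looks like: across an ensemble of runs, correlate Hs everywhere with Hs at the location of interest. The paper derives its maps from the SWAN adjoint, not from an ensemble; this invented sketch only mimics the product.

        import numpy as np

        rng = np.random.default_rng(12)

        # Ensemble stand-in: Hs fields from 60 runs on a 20 x 30 grid, with
        # a shared signal injected around one offshore cell.
        n_runs, ny, nx = 60, 20, 30
        hs = rng.normal(size=(n_runs, ny, nx))
        hs += 0.8 * hs[:, 10:11, 15:16]      # inject shared signal
        loi = hs[:, 10, 15]                  # location of interest

        # Correlation of every grid node with the LOI across the ensemble.
        a = hs - hs.mean(0)
        b = loi - loi.mean()
        corr = (a * b[:, None, None]).sum(0) / (
            np.linalg.norm(a, axis=0) * np.linalg.norm(b))
        print(corr.shape, corr[10, 15].round(2))  # map; exactly 1.0 at LOI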

  4. Advances and Limitations of Disease Biogeography Using Ecological Niche Modeling

    PubMed Central

    Escobar, Luis E.; Craft, Meggan E.

    2016-01-01

    Mapping disease transmission risk is crucial in public and animal health for evidence based decision-making. Ecology and epidemiology are highly related disciplines that may contribute to improvements in mapping disease, which can be used to answer health related questions. Ecological niche modeling is increasingly used for understanding the biogeography of diseases in plants, animals, and humans. However, epidemiological applications of niche modeling approaches for disease mapping can fail to generate robust study designs, producing incomplete or incorrect inferences. This manuscript is an overview of the history and conceptual bases behind ecological niche modeling, specifically as applied to epidemiology and public health; it does not pretend to be an exhaustive and detailed description of ecological niche modeling literature and methods. Instead, this review includes selected state-of-the-science approaches and tools, providing a short guide to designing studies incorporating information on the type and quality of the input data (i.e., occurrences and environmental variables), identification and justification of the extent of the study area, and encourages users to explore and test diverse algorithms for more informed conclusions. We provide a friendly introduction to the field of disease biogeography presenting an updated guide for researchers looking to use ecological niche modeling for disease mapping. We anticipate that ecological niche modeling will soon be a critical tool for epidemiologists aiming to map disease transmission risk, forecast disease distribution under climate change scenarios, and identify landscape factors triggering outbreaks. PMID:27547199

  6. Application of decision tree model for the ground subsidence hazard mapping near abandoned underground coal mines.

    PubMed

    Lee, Saro; Park, Inhye

    2013-09-30

    Subsidence of ground caused by underground mines poses hazards to human life and property. This study analyzed ground-subsidence hazard using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. The frequency ratio model was also applied to the GSH mapping for comparison with a probabilistic model. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision trees. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for the prediction of various spatial events. Copyright © 2013. Published by Elsevier Ltd.
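
    CHAID and QUEST are not available in common Python libraries, so the sketch below substitutes scikit-learn's CART-style DecisionTreeClassifier while keeping the study's 50/50 split and AUC validation; the factor data are synthetic.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)

        # Synthetic stand-ins for per-cell factors: depth to mine workings,
        # slope, groundwater level, distance to known cavities.
        n = 4000
        X = rng.normal(size=(n, 4))
        p = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 3] + 0.5 * X[:, 1])))
        y = rng.random(n) < p              # True = subsidence occurred

        # 50/50 split for training and validation, as in the study.
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5,
                                              random_state=0)
        tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(Xtr, ytr)

        # Validate with the area under the ROC curve (AUC).
        auc = roc_auc_score(yte, tree.predict_proba(Xte)[:, 1])
        print(f"validation AUC: {auc:.3f}")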

  7. Fusion of Terrestrial and Airborne Laser Data for 3D modeling Applications

    NASA Astrophysics Data System (ADS)

    Mohammed, Hani Mahmoud

    This thesis deals with the 3D modeling phase of large as-built BIM projects. Among several means of BIM data capture, such as photogrammetric or range tools, laser scanners have long been one of the most efficient and practical tools. They can generate high-resolution point clouds for 3D models that meet today's market demands. Current 3D modeling projects of as-built BIMs mainly focus on using one type of laser scanner data, either airborne or terrestrial. According to the literature, few efforts have been made towards the fusion of heterogeneous laser scanner data despite its importance. The importance of fusing heterogeneous data arises from the fact that no single type of laser data can provide all the information about a BIM, especially for large BIM projects that extend over a large area, such as university buildings or heritage sites. Terrestrial laser scanners are able to map facades of buildings and other terrestrial objects; however, they lack the ability to map roofs or higher parts of the BIM project. Airborne laser scanners, on the other hand, can map the roofs of buildings efficiently but only a small part of the facades. Short-range laser scanners can map the interiors of BIM projects, while long-range scanners are used for mapping wide exterior areas. In this thesis, long-range laser scanner data obtained in a stop-and-go mapping mode, short-range laser scanner data obtained in a fully static mapping mode, and airborne laser data are all fused together to provide a complete, effective solution for a large BIM project. Working towards the 3D modeling of BIM projects, the thesis framework starts with the registration of the data, for which a new fast automatic registration algorithm was developed. The next step is to recognize the different objects in the BIM project (classification) and obtain 3D models of the buildings. The last step is the development of an occlusion-removal algorithm to efficiently recover parts of the buildings occluded by surrounding objects such as trees, vehicles, or street poles.
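
    The registration step can be illustrated with a minimal point-to-point ICP iteration (nearest-neighbour matching plus a Kabsch/SVD rigid transform); this is a generic textbook sketch, not the thesis's own algorithm.

        import numpy as np

        rng = np.random.default_rng(10)

        def icp_step(src, dst):
            """One point-to-point ICP iteration: match nearest neighbours,
            then solve the best rigid transform via SVD (Kabsch)."""
            d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
            nn = dst[d2.argmin(axis=1)]             # nearest dst point
            mu_s, mu_d = src.mean(0), nn.mean(0)
            H = (src - mu_s).T @ (nn - mu_d)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_d - R @ mu_s
            return src @ R.T + t

        # Toy clouds: dst is src rotated by 10 degrees and shifted.
        src = rng.uniform(0, 1, (200, 3))
        th = np.deg2rad(10)
        Rz = np.array([[np.cos(th), -np.sin(th), 0],
                       [np.sin(th),  np.cos(th), 0],
                       [0, 0, 1]])
        dst = src @ Rz.T + np.array([0.2, -0.1, 0.05])

        cur = src
        for _ in range(20):
            cur = icp_step(cur, dst)
        print("rms:", np.sqrt(((cur - dst) ** 2).sum(1)).mean().round(4))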

  8. Qualitative landslide susceptibility assessment by multicriteria analysis: A case study from San Antonio del Sur, Guantánamo, Cuba

    NASA Astrophysics Data System (ADS)

    Castellanos Abella, Enrique A.; Van Westen, Cees J.

    Geomorphological information can be combined with decision-support tools to assess landslide hazard and risk. A heuristic model was applied to a rural municipality in eastern Cuba. The study is based on a terrain mapping units (TMU) map, generated at 1:50,000 scale by interpretation of aerial photos, satellite images and field data. Information describing 603 terrain units was collected in a database. Landslide areas were mapped in detail to classify the different failure types and parts. Three major landslide regions are recognized in the study area: coastal hills with rockfalls, shallow debris flows and old rotational rockslides; denudational slopes in limestone, with very large deep-seated rockslides related to tectonic activity; and the Sierra de Caujerí scarp, with large rockslides. The Caujerí scarp presents the highest hazard, with recent landslides and various signs of active processes. The different landforms and the causative factors for landslides were analyzed and used to develop the heuristic model. The model is based on weights assigned by expert judgment and organized in a number of components such as slope angle, internal relief, slope shape, geological formation, active faults, distance to drainage, distance to springs, geomorphological subunits and existing landslide zones. From these variables a hierarchical heuristic model was applied in which three levels of weights were designed for classes, variables, and criteria, as sketched below. The model combines all weights into a single hazard value for each pixel of the landslide hazard map. The hazard map was then classified at two levels of detail: one with three classes for disaster managers and one with 10 detailed hazard classes for technical staff. The range of weight values and the number of existing landslides are registered for each class. The increasing landslide density with higher hazard classes indicates that the output map is reliable. The landslide hazard map was used in combination with existing information on buildings and infrastructure to prepare a qualitative risk map. The complete lack of historical landslide information and geotechnical data precludes the development of quantitative deterministic or probabilistic models.
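
    A toy version of the hierarchical weighting referenced above, with invented class and variable weights on a synthetic raster; a real model would add the remaining variables and a third, criterion-level combination.

        import numpy as np

        rng = np.random.default_rng(8)

        # Per-pixel factor classes for a toy 100 x 100 raster (indices into
        # the class-weight tables below); everything here is illustrative.
        shape = (100, 100)
        slope_cls = rng.integers(0, 3, shape)   # gentle / moderate / steep
        litho_cls = rng.integers(0, 2, shape)   # limestone / alluvium
        fault_cls = rng.integers(0, 2, shape)   # far from / near active fault

        # Level 1: expert class weights within each variable.
        w_slope = np.array([0.1, 0.5, 1.0])[slope_cls]
        w_litho = np.array([0.8, 0.3])[litho_cls]
        w_fault = np.array([0.2, 0.9])[fault_cls]

        # Level 2: variable weights combining into one hazard value per pixel.
        hazard = 0.5 * w_slope + 0.3 * w_litho + 0.2 * w_fault

        # Reclassify into three classes for disaster managers.
        classes = np.digitize(hazard, np.quantile(hazard, [0.5, 0.85]))
        print(np.bincount(classes.ravel()))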

  9. Mapping urban geology of the city of Girona, Catalonia

    NASA Astrophysics Data System (ADS)

    Vilà, Miquel; Torrades, Pau; Pi, Roser; Monleon, Ona

    2016-04-01

    A detailed and systematic geological characterization of the urban area of Girona has been conducted under the project '1:5000 scale Urban geological map of Catalonia' of the Catalan Geological Survey (Institut Cartogràfic i Geològic de Catalunya). The results of this characterization are organized into: i) a geological information system that includes all the information acquired; ii) a stratigraphic model focused on identification, characterization and correlation of the geological materials and structures present in the area; and iii) a detailed geological map that represents a synthesis of all the collected information. The mapping project integrates in a GIS environment pre-existing cartographic documentation (geological and topographical), core data from compiled boreholes, descriptions of geological outcrops within the urban network and neighbouring areas, physico-chemical characterisation of representative samples of geological materials, detailed geological mapping of Quaternary sediments, subsurface bedrock and artificial deposits, and 3D modelling of the main geological surfaces. The stratigraphic model is organized as a system of geological units that, from a chronostratigraphic point of view, are grouped into Palaeozoic, Paleogene, Neogene, Quaternary and Anthropocene. The description of the geological units is guided by a systematic procedure. It includes the main lithological and structural features of the units that constitute the geological substratum and represents the conceptual base of the 1:5000 urban geological map of the Girona metropolitan area, which is organized into 6 map sheets. These map sheets are composed of a principal map, geological cross sections and several complementary maps, charts and tables. In addition to the geological map units, the principal map also represents the main artificial deposits, features related to geohistorical processes, contours of outcrop areas, information obtained in stations, borehole data, and contour lines of the top of the pre-Quaternary basement surface. The most representative complementary maps are the Quaternary map, the subsurface bedrock map and the isopach map of the thickness of superficial deposits (Quaternary and anthropogenic). The map sheets also include charts and tables of relevant physico-chemical parameters of the geological materials, harmonized downhole lithological columns from selected boreholes, stratigraphic columns, and photographs and figures illustrating the geology of the mapped area and how urbanization has changed the natural environment. Systematic urban geological mapping projects such as this one for Girona provide valuable resources for targeted studies related to urban planning, geoengineering works, soil pollution and other important environmental issues that society will have to deal with in the future.

  10. Implications of allometric model selection for county-level biomass mapping.

    PubMed

    Duncanson, Laura; Huang, Wenli; Johnson, Kristofer; Swatantran, Anu; McRoberts, Ronald E; Dubayah, Ralph

    2017-10-18

    Carbon accounting in forests remains a large area of uncertainty in the global carbon cycle. Forest aboveground biomass is therefore an attribute of great interest for the forest management community, but the accuracy of aboveground biomass maps depends on the accuracy of the underlying field estimates used to calibrate models. These field estimates depend on the application of allometric models, which often have unknown and unreported uncertainties outside of the size class or environment in which they were developed. Here we test three popular allometric approaches to field biomass estimation and explore the implications of allometric model selection for county-level biomass mapping in Sonoma County, California. We test three allometric models: Jenkins et al. (For Sci 49(1): 12-35, 2003), Chojnacky et al. (Forestry 87(1): 129-151, 2014) and the US Forest Service's Component Ratio Method (CRM). We found that the Jenkins and Chojnacky models perform comparably, but that at both the field plot level and the total county level there was a ~20% difference between these estimates and the CRM estimates. Further, we show that discrepancies are greater in high biomass areas with high canopy covers and relatively moderate heights (25-45 m). The CRM models, although on average ~20% lower than Jenkins and Chojnacky, produce higher estimates in the tallest forest samples (>60 m), while Jenkins generally produces higher estimates of biomass in forests <50 m tall. Discrepancies do not continually increase with increasing forest height, suggesting that the inclusion of height in allometric models is not primarily driving discrepancies. Models developed using all three allometric models underestimate high biomass and overestimate low biomass, as expected with random forest biomass modeling. However, these deviations were generally larger using the Jenkins and Chojnacky allometries, suggesting that the CRM approach may be more appropriate for biomass mapping with lidar. These results confirm that allometric model selection considerably impacts biomass maps and estimates, and that allometric model errors remain poorly understood. Our findings that allometric model discrepancies are not explained by lidar heights suggest that allometric model form does not drive these discrepancies. A better understanding of the sources of allometric model errors, particularly in high biomass systems, is essential for improved forest biomass mapping.
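
    The Jenkins-type equation form can be shown concretely; the coefficients and the flat component ratio below are illustrative placeholders, not the published parameters of any of the three models compared in the study.

        import numpy as np

        # Jenkins-type national allometry:
        #   biomass_kg = exp(b0 + b1 * ln(dbh_cm)).
        # The coefficient values are illustrative placeholders.
        def jenkins_form(dbh_cm, b0=-2.48, b1=2.48):
            return np.exp(b0 + b1 * np.log(dbh_cm))

        # A hypothetical CRM-style alternative scaled from the same
        # prediction, standing in for the component-ratio adjustment.
        def crm_like(dbh_cm, ratio=0.8):
            return ratio * jenkins_form(dbh_cm)

        dbh = np.array([10.0, 30.0, 60.0, 90.0])
        j, c = jenkins_form(dbh), crm_like(dbh)
        print(np.round((j - c) / j * 100, 1), "% relative discrepancy")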

  11. Mapping landslide susceptibility using data-driven methods.

    PubMed

    Zêzere, J L; Pereira, S; Melo, R; Oliveira, S C; Garcia, R A C

    2017-07-01

    Most epistemic uncertainty within data-driven landslide susceptibility assessment results from errors in landslide inventories, difficulty in identifying and mapping landslide causes, and decisions related to the modelling procedure. In this work we evaluate and discuss differences observed in landslide susceptibility maps resulting from: (i) the selection of the statistical method; (ii) the selection of the terrain mapping unit; and (iii) the selection of the feature type used to represent landslides in the model (polygon versus point). The work is performed in a single study area (Silveira Basin - 18.2 km² - Lisbon Region, Portugal) using a unique database of geo-environmental landslide predisposing factors and an inventory of 82 shallow translational slides. Logistic regression, discriminant analysis and two versions of the information value method were used, and we conclude that multivariate statistical methods perform better when computed over heterogeneous terrain units and should be selected to assess landslide susceptibility based on slope terrain units, geo-hydrological terrain units or census terrain units. However, evidence was found that the chosen terrain mapping unit can produce greater differences in the final susceptibility results than the chosen statistical method. Landslide susceptibility should be assessed over grid-cell terrain units whenever the spatial accuracy of the landslide inventory is good. In addition, a single point per landslide proved sufficient to generate accurate landslide susceptibility maps, provided the landslides are small, which minimizes possible heterogeneities of predisposing factors within the landslide boundary. Although in recent years ROC curves have been preferred for evaluating susceptibility model performance, evidence was found that the model with the highest AUC ROC is not necessarily the best landslide susceptibility model, particularly when terrain mapping units are heterogeneous in size and reduced in number. Copyright © 2017 Elsevier B.V. All rights reserved.
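
    As a rough illustration of the data-driven workflow, one of the statistical methods named above (logistic regression) can be fitted to terrain-unit predictors and scored with AUC-ROC in a few lines. The variable names and synthetic data below are assumptions for demonstration, not the study's database.

      # Logistic-regression susceptibility sketch on synthetic terrain units,
      # validated with AUC-ROC.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 2000
      slope     = rng.uniform(0, 45, n)       # degrees (assumed predictor)
      curvature = rng.normal(0, 1, n)         # assumed predictor
      wetness   = rng.uniform(3, 15, n)       # assumed predictor
      # Synthetic landslide occurrence loosely driven by slope and wetness
      p = 1 / (1 + np.exp(-(0.08 * slope + 0.2 * wetness - 5)))
      y = rng.random(n) < p

      X = np.column_stack([slope, curvature, wetness])
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      susceptibility = model.predict_proba(X_te)[:, 1]  # per-unit score
      print("AUC-ROC:", round(roc_auc_score(y_te, susceptibility), 3))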

  12. Development of a high-resolution binational vegetation map of the Santa Cruz River riparian corridor and surrounding watershed, southern Arizona and northern Sonora, Mexico

    USGS Publications Warehouse

    Wallace, Cynthia S.A.; Villarreal, Miguel L.; Norman, Laura M.

    2011-01-01

    This report summarizes the development of a binational vegetation map of the Santa Cruz Watershed, which straddles the southern border of Arizona and the northern border of Sonora, Mexico. The map was created as an environmental input to the Santa Cruz Watershed Ecosystem Portfolio Model (SCWEPM) being created by the U.S. Geological Survey for the watershed. The SCWEPM is a map-based multicriteria evaluation tool that allows stakeholders to explore tradeoffs between valued ecosystem services at multiple scales within a participatory decision-making process. Maps of vegetation type are needed for modeling wildlife habitat and other ecosystem services. Although detailed vegetation maps existed for the U.S. side of the border, there was a lack of consistent data for the Santa Cruz Watershed in Mexico. We produced a binational vegetation classification of the Santa Cruz River riparian habitat and watershed vegetation based on NatureServe Terrestrial Ecological Systems (TES) units using Classification And Regression Tree (CART) modeling. Environmental layers used as predictor data were derived from a seasonal set of Landsat Thematic Mapper (TM) images (spring, summer, and fall) and from a 30-meter digital-elevation-model (DEM) grid. Because both sources of environmental data are seamless across the international border, they are particularly suited to this binational modeling effort. Training data were compiled from existing field data for the riparian corridor and data collected by the NM-GAP (New Mexico Gap Analysis Project) team for the original Southwest Regional Gap Analysis Project (SWReGAP) modeling effort. Additional training data were collected from core areas of the SWReGAP classification itself, allowing the extrapolation of the SWReGAP mapping into the Mexican portion of the watershed without collecting additional training data.
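
    A minimal sketch of the CART modeling step follows, assuming synthetic seasonal spectral indices and elevation as predictors; the report's actual predictor stack (seasonal TM bands plus DEM derivatives) is richer.

      # CART classification of vegetation types from seasonal NDVI and
      # elevation; features, labels, and data are illustrative assumptions.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n = 1500
      spring_ndvi = rng.uniform(0.0, 0.9, n)
      summer_ndvi = rng.uniform(0.0, 0.9, n)
      fall_ndvi   = rng.uniform(0.0, 0.9, n)
      elevation_m = rng.uniform(600, 2500, n)

      # Synthetic labels: 0=desert scrub, 1=grassland, 2=riparian woodland
      labels = np.where(summer_ndvi > 0.6, 2,
                        np.where(spring_ndvi > 0.3, 1, 0))

      X = np.column_stack([spring_ndvi, summer_ndvi, fall_ndvi, elevation_m])
      cart = DecisionTreeClassifier(max_depth=6, random_state=0)
      print("CV accuracy:", cross_val_score(cart, X, labels, cv=5).mean().round(3))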

  13. Comparing Computer-Supported Dynamic Modeling and "Paper & Pencil" Concept Mapping Technique in Students' Collaborative Activity

    ERIC Educational Resources Information Center

    Komis, Vassilis; Ergazaki, Marida; Zogza, Vassiliki

    2007-01-01

    This study aims at highlighting the collaborative activity of two high school students (age 14) in the cases of modeling the complex biological process of plant growth with two different tools: the "paper & pencil" concept mapping technique and the computer-supported educational environment "ModelsCreator". Students' shared activity in both cases…

  14. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while suppressing existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in all optical model validation processes involving large space optical surfaces.

  15. Evolutionary Maps: A New Model for the Analysis of Conceptual Development, with Application to the Diurnal Cycle

    ERIC Educational Resources Information Center

    Navarro, Manuel

    2014-01-01

    This paper presents a model of how children generate concrete concepts from perception through processes of differentiation and integration. The model informs the design of a novel methodology ("evolutionary maps" or "emaps"), whose implementation on certain domains unfolds the web of itineraries that children may follow in the…

  16. A comparative assessment of GIS-based data mining models and a novel ensemble model in groundwater well potential mapping

    NASA Astrophysics Data System (ADS)

    Naghibi, Seyed Amir; Moghaddam, Davood Davoodi; Kalantar, Bahareh; Pradhan, Biswajeet; Kisi, Ozgur

    2017-05-01

    In recent years, the application of ensemble models has increased tremendously in various types of natural hazard assessment, such as landslides and floods. However, the application of such robust models to groundwater potential mapping is relatively new. This study applied four data mining algorithms, including AdaBoost, Bagging, the generalized additive model (GAM), and Naive Bayes (NB), to map groundwater potential. Then, a novel frequency ratio data mining ensemble model (FREM) was introduced and evaluated. For this purpose, eleven groundwater conditioning factors (GCFs), including altitude, slope aspect, slope angle, plan curvature, stream power index (SPI), river density, distance from rivers, topographic wetness index (TWI), land use, normalized difference vegetation index (NDVI), and lithology, were mapped. A total of 281 well locations with high potential were selected. Wells were randomly partitioned into two classes for training the models (70%, or 197) and validating them (30%, or 84). The AdaBoost, Bagging, GAM, and NB algorithms were employed to produce groundwater potential maps (GPMs). The GPMs were categorized into potential classes using the natural breaks classification method. In the next stage, frequency ratio (FR) values were calculated for the outputs of the four aforementioned models and summed, and finally a GPM was produced using FREM. For validating the models, the area under the receiver operating characteristics (ROC) curve was calculated. The AUC for the prediction dataset was 94.8, 93.5, 92.6, 92.0, and 84.4% for the FREM, Bagging, AdaBoost, GAM, and NB models, respectively. The results indicated that FREM had the best performance among all the models. The better performance of the FREM model could be related to a reduction of overfitting and possible errors. The other models (AdaBoost, Bagging, GAM, and NB) also produced acceptable performance in groundwater modelling. The GPMs produced in the current study may facilitate groundwater exploitation by delineating high and very high groundwater potential zones.
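
    The ensemble step lends itself to a short sketch. The Python code below, under the assumption of synthetic potential maps and well locations, recodes each model output into frequency-ratio values per class (share of known wells in a class divided by the share of map area in that class) and sums the recoded maps, mirroring the FREM construction described above.

      # Frequency-ratio (FR) ensemble sketch over synthetic model outputs.
      import numpy as np

      rng = np.random.default_rng(2)
      n_cells = 10_000
      wells = rng.random(n_cells) < 0.03          # known well locations (synthetic)

      def fr_recode(potential_map, wells, n_classes=5):
          """Recode a continuous potential map into per-class FR values."""
          edges = np.quantile(potential_map,
                              np.linspace(0, 1, n_classes + 1)[1:-1])
          classes = np.digitize(potential_map, edges)
          fr = np.zeros(n_classes)
          for c in range(n_classes):
              in_class = classes == c
              area_share = in_class.mean()
              well_share = wells[in_class].sum() / max(wells.sum(), 1)
              fr[c] = well_share / area_share if area_share > 0 else 0.0
          return fr[classes]

      # Stand-ins for the four model outputs (AdaBoost, Bagging, GAM, NB)
      model_maps = [rng.random(n_cells) for _ in range(4)]
      ensemble = sum(fr_recode(m, wells) for m in model_maps)  # FREM-style map
      print("ensemble range:", ensemble.min().round(2), "-",
            ensemble.max().round(2))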

  17. ReMatch: a web-based tool to construct, store and share stoichiometric metabolic models with carbon maps for metabolic flux analysis.

    PubMed

    Pitkänen, Esa; Akerlund, Arto; Rantanen, Ari; Jouhten, Paula; Ukkonen, Esko

    2008-08-25

    ReMatch is a web-based, user-friendly tool that constructs stoichiometric network models for metabolic flux analysis, integrating user-developed models into a database collected from several comprehensive metabolic data resources, including KEGG, MetaCyc and ChEBI. In particular, ReMatch augments the metabolic reactions of the model with carbon mappings to facilitate (13)C metabolic flux analysis. The construction of a network model consisting of biochemical reactions is the first step in most metabolic modelling tasks. This model construction can be a tedious task, as the required information is usually scattered across many separate databases whose interoperability is suboptimal, owing to the heterogeneous naming conventions for metabolites in different databases. Another, particularly severe, data integration problem is faced in (13)C metabolic flux analysis, where the mappings of carbon atoms from substrates into products in the model are required. ReMatch has been developed to solve the above data integration problems. First, ReMatch matches the imported user-developed model against the internal ReMatch database while considering a comprehensive metabolite name thesaurus. This, together with wild card support, allows the user to specify the model quickly without having to look the names up manually. Second, ReMatch is able to augment reactions of the model with carbon mappings, obtained either from the internal database or given by the user with an easy-to-use tool. The constructed models can be exported into 13C-FLUX and SBML file formats. Further, a stoichiometric matrix and visualizations of the network model can be generated. The constructed models of metabolic networks can optionally be made available to the other users of ReMatch. Thus, ReMatch provides a common repository of metabolic network models with carbon mappings for the needs of the metabolic flux analysis community. ReMatch is freely available for academic use at http://www.cs.helsinki.fi/group/sysfys/software/rematch/.

  18. Simulation of boreal Summer Monsoon Rainfall using CFSV2_SSiB model: sensitivity to Land Use Land Cover (LULC)

    NASA Astrophysics Data System (ADS)

    Chilukoti, N.; Xue, Y.

    2016-12-01

    The land surface plays a vital role in determining the surface energy budget, so an accurate representation of land use and land cover (LULC) is necessary to improve forecasts. In this study, we investigated the influence of surface vegetation maps with different LULC on the simulation of boreal summer monsoon rainfall. Using the National Centers for Environmental Prediction (NCEP) Coupled Forecast System version 2 (CFSv2) model coupled with the Simplified Simple Biosphere (SSiB) model, two experiments were conducted: one with an old vegetation map and one with a new vegetation map. The significant differences between the new and old vegetation maps were in semi-arid and arid areas. For example, the old map classified the Tibetan Plateau as desert, which is not appropriate, while the new map classified it as grasslands or shrubs with bare soil. The old map classified the Sahara desert as bare soil and shrubs with bare soil, whereas the new map classified it as bare ground. In addition to central Asia and the Sahara desert, in the new vegetation map Europe had more cropped area and India's vegetation cover was changed from crops and forests to wooded grassland with small areas of grassland and shrubs. The simulated surface air temperature with the new map shows a significant improvement of some 1 to 2°C over Asia, South Africa, and northern America, and 2 to 3°C over northeast China, consistent with the reduced rainfall biases over Africa, near the Somali coast, northeast India, Bangladesh, the East China Sea, the eastern Pacific and the northern USA. The dry rainfall anomalies over the Indian continent and the Bay of Bengal, the only area showing a large dry rainfall bias, were however unchanged in the new-map simulation. Overall, the CFSv2 model coupled with SSiB and the new vegetation map shows promising results in improving the monsoon forecast by improving land-atmosphere interactions. To compare with the LULC forcing, an experiment was conducted using Global Forecast System (GFS) simulations forced with different observed Sea Surface Temperatures (SST) for the same period: one from the NCEP reanalysis and one from the Hadley Centre. They have substantial differences in the Indian Ocean. Preliminary analysis shows that these two SST datasets have no significant impact on Indian summer monsoon rainfall.

  19. VMF3/GPT3: refined discrete and empirical troposphere mapping functions

    NASA Astrophysics Data System (ADS)

    Landskron, Daniel; Böhm, Johannes

    2018-04-01

    Incorrect modeling of troposphere delays is one of the major error sources for space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). Over the years, many approaches have been devised which aim at mapping the delay of radio waves from the zenith direction down to the observed elevation angle, so-called mapping functions. This paper presents a new approach intended to refine the currently most important discrete mapping function, the Vienna Mapping Functions 1 (VMF1); the refined version is referred to as the Vienna Mapping Functions 3 (VMF3). It is designed in such a way as to eliminate shortcomings in the empirical coefficients b and c and in the tuning for the specific elevation angle of 3°. Ray-traced delays from the ray-tracer RADIATE serve as the basis for the calculation of the new mapping function coefficients. Comparisons of modeled slant delays demonstrate the ability of VMF3 to approximate the underlying ray-traced delays more accurately than VMF1 does, in particular at low elevation angles. In other words, when the highest precision is required, VMF3 is preferable to VMF1. Aside from revising the discrete form of mapping functions, we also present a new empirical model named Global Pressure and Temperature 3 (GPT3) on a 5° × 5° as well as a 1° × 1° global grid, which is generally based on the same data. Its main components are hydrostatic and wet empirical mapping function coefficients derived from special averaging techniques applied to the respective (discrete) VMF3 data. In addition, GPT3 also contains a set of meteorological quantities which are adopted as they stand from their predecessor, Global Pressure and Temperature 2 wet. Thus, GPT3 represents a very comprehensive troposphere model which can be used for a series of geodetic as well as meteorological and climatological purposes and is fully consistent with VMF3.
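
    The continued-fraction form shared by mapping functions of this family is compact enough to show directly. In the sketch below (a Herring-type normalized continued fraction, the form underlying VMF1/VMF3), the a, b, c coefficients are arbitrary illustrative values, not actual VMF3 output.

      # Continued-fraction mapping function: mf(90 deg) is normalized to 1.
      import math

      def mapping_function(elev_deg, a, b, c):
          """Scale factor from zenith delay to a given elevation angle."""
          s = math.sin(math.radians(elev_deg))
          numer = 1 + a / (1 + b / (1 + c))   # normalization term
          denom = s + a / (s + b / (s + c))
          return numer / denom

      a, b, c = 1.2e-3, 2.9e-3, 62.5e-3       # illustrative hydrostatic-like values
      for e in (90, 30, 10, 5, 3):
          print(f"elevation {e:2d} deg -> mf = {mapping_function(e, a, b, c):7.3f}")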

  20. Time series evapotranspiration maps at a regional scale: A methodology, evaluation, and their use in water resources management

    NASA Astrophysics Data System (ADS)

    Gowda, P. H.

    2016-12-01

    Evapotranspiration (ET) is an important process in ecosystem water budgets and is closely linked to productivity. Therefore, regional-scale daily time series ET maps developed at high and medium resolutions have large utility in studying the carbon-energy-water nexus and managing water resources. There are efforts to develop such datasets at regional to global scales, but they are often faced with the limitation of spatial-temporal resolution tradeoffs in satellite remote sensing technology. In this study, we developed frameworks for generating high and medium resolution daily ET maps from Landsat and MODIS (Moderate Resolution Imaging Spectroradiometer) data, respectively. For developing high-resolution (30-m) daily time series ET maps from Landsat TM data, the series version of the Two Source Energy Balance (TSEB) model was used to compute sensible and latent heat fluxes of soil and canopy separately. Landsat 5 (2000-2011) and Landsat 8 (2013-2014) imagery for path/rows 28/35 and 27/36 covering central Oklahoma was used. MODIS data (2001-2014) covering Oklahoma and the Texas Panhandle were used to develop medium-resolution (250-m) time series daily ET maps with the SEBS (Surface Energy Balance System) model. An extensive network of weather stations managed by the Texas High Plains ET Network and the Oklahoma Mesonet was used to generate spatially interpolated inputs of air temperature, relative humidity, wind speed, solar radiation, pressure, and reference ET. A linear interpolation sub-model was used to estimate daily ET between the image acquisition days. Accuracy assessment of the daily ET maps was done against eddy covariance data from two grassland sites at El Reno, OK. Statistical results indicated good performance by the modeling frameworks developed for deriving the time series ET maps. The results indicate that the proposed ET mapping framework is suitable for deriving daily time series ET maps at regional scale with Landsat and MODIS data.
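
    The linear interpolation sub-model is the simplest piece to illustrate. This hedged sketch interpolates the ET fraction (ET divided by reference ET) between two overpass days and scales it by daily reference ET; all numbers are synthetic placeholders, not values from the study.

      # Between-overpass interpolation of the ET fraction, scaled by daily
      # reference ET.
      import numpy as np

      image_days   = np.array([10, 26])       # day-of-year of two overpasses
      etrf_at_days = np.array([0.55, 0.70])   # modeled ET fraction on those days
      days         = np.arange(10, 27)
      ref_et       = np.full(days.size, 6.0)  # daily reference ET (mm/day), assumed

      etrf = np.interp(days, image_days, etrf_at_days)  # linear sub-model
      daily_et = etrf * ref_et                          # mm/day on every day
      print(dict(zip(days.tolist(), np.round(daily_et, 2).tolist())))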

  1. Meteorological Effects of Land Cover Changes in Hungary during the 20th Century

    NASA Astrophysics Data System (ADS)

    Drüszler, Á.; Vig, P.; Csirmaz, K.

    2012-04-01

    Geological, paleontological and geomorphologic studies show that the Earth's climate has always been changing since it came into existence. Climate change itself is self-evident. The far more serious question is therefore how much mankind strengthens or weakens these changes beyond the natural fluctuations of climate. The aim of the present study was to reconstruct the historical land cover changes and to simulate the meteorological consequences of these changes. Two different land cover maps for Hungary were created in vector data format using GIS technology. The land cover map for 1900 was reconstructed based on statistical data and two different historical maps: the derived map of the 3rd Military Mapping Survey of the Austro-Hungarian Empire and the Synoptic Forestry Map of the Kingdom of Hungary. The land cover map for 2000 was derived from the CORINE land cover database. According to the examination of these maps and statistical databases, significant land cover changes took place in Hungary during the 20th century. The MM5 non-hydrostatic dynamic model was used to evaluate the meteorological effects of these changes. The lower boundary conditions for this mesoscale model were generated for two selected time periods (1900 and 2000) based on the reconstructed maps. The dynamic model was run with the same detailed meteorological conditions of selected days from 2006 and 2007, but with modified lower boundary conditions. The set of 26 selected initial conditions represents the whole set of macrosynoptic situations for Hungary. In this way, 2×26 "forecasts" were made with 48 hours of integration. The effects of land cover changes under different weather situations were then weighted by the long-term (1961-1990) mean frequency of the corresponding macrosynoptic types, to estimate the climatic effects from these stratified averages. The detailed evaluation of the model results was made for three different meteorological variables (temperature, dew point and precipitation).

  2. The visual attention saliency map for movie retrospection

    NASA Astrophysics Data System (ADS)

    Rogalska, Anna; Napieralski, Piotr

    2018-04-01

    The visual saliency map is becoming important and challenging for many scientific disciplines (robotic systems, psychophysics, cognitive neuroscience and computer science). The map created by the model indicates possible salient regions by taking into consideration face presence and motion, which are essential in motion pictures. By combining these cues, we can obtain a credible saliency map at a low computational cost.

  3. An imputed forest composition map for New England screened by species range boundaries

    Treesearch

    Matthew J. Duveneck; Jonathan R. Thompson; B. Tyler Wilson

    2015-01-01

    Initializing forest landscape models (FLMs) to simulate changes in tree species composition requires accurate fine-scale forest attribute information mapped continuously over large areas. Nearest-neighbor imputation maps, maps developed from multivariate imputation of field plots, have high potential for use as the initial condition within FLMs, but the tendency for...

  4. Predictive Mapping of Forest Attributes on the Fishlake National Forest

    Treesearch

    Tracey S. Frescino; Gretchen G. Moisen

    2005-01-01

    Forest land managers increasingly need maps of forest characteristics to aid in planning and management. A set of 30-m resolution maps was prepared for the Fishlake National Forest by modeling FIA plot variables as nonparametric functions of ancillary digital data. The set includes maps of volume, biomass, growth, stand age, size, crown cover, and various aspen...

  5. Mapping fuels at multiple scales: landscape application of the fuel characteristic classification system.

    Treesearch

    D. McKenzie; C.L. Raymond; L.-K.B. Kellogg; R.A. Norheim; A.G. Andreu; A.C. Bayard; K.E. Kopper; E. Elman

    2007-01-01

    Fuel mapping is a complex and often multidisciplinary process, involving remote sensing, ground-based validation, statistical modeling, and knowledge-based systems. The scale and resolution of fuel mapping depend both on objectives and availability of spatial data layers. We demonstrate use of the Fuel Characteristic Classification System (FCCS) for fuel mapping at two...

  6. Geologic Map and GIS Data for the Tuscarora Geothermal Area

    DOE Data Explorer

    Faulds, James E.

    2013-12-31

    Tuscarora—ESRI Geodatabase (ArcGeology v1.3): - Contains all the geologic map data, including faults, contacts, folds, unit polygons, and attitudes of strata and faults. - List of stratigraphic units and stratigraphic correlation diagram. - Detailed unit descriptions of stratigraphic units. - Five cross‐sections. - Locations of production, injection, and monitor wells. - 3D model constructed with EarthVision using geologic map data, cross‐sections, drill‐hole data, and geophysics (model not in the ESRI geodatabase).

  7. A Time-Aware Routing Map for Indoor Evacuation †

    PubMed Central

    Zhao, Haifeng; Winter, Stephan

    2016-01-01

    Knowledge of dynamic environments expires over time. Thus, using static maps of the environment for decision making is problematic, especially in emergency situations, such as evacuations. This paper suggests a fading memory model for mapping dynamic environments: a mechanism to put less trust on older knowledge in decision making. The model has been assessed by simulating indoor evacuations, adopting and comparing various strategies in decision making. Results suggest that fading memory generally improves this decision making. PMID:26797610
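
    A fading-memory weighting of the kind described can be sketched in a few lines. Here trust decays exponentially with the age of an observation; the half-life is an illustrative parameter, and the paper's actual decay model may differ.

      # Exponential fading of trust in aging map knowledge.
      import math

      def trust(age_seconds, half_life_seconds=300.0):
          """Weight in (0, 1] given to knowledge observed age_seconds ago."""
          return math.exp(-math.log(2) * age_seconds / half_life_seconds)

      for age in (0, 60, 300, 900):
          print(f"age {age:4d} s -> trust {trust(age):.3f}")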

  8. High-Resolution Underwater Mapping Using Side-Scan Sonar

    PubMed Central

    2016-01-01

    The goal of this study is to generate high-resolution sea-floor maps using a Side-Scan Sonar (SSS). This is achieved by explicitly taking into account the SSS operation as follows. First, the raw sensor data is corrected by means of a physics-based SSS model. Second, the data is projected onto the sea-floor. The errors involved in this projection are thoroughly analysed. Third, a probabilistic SSS model is defined and used to estimate the probability of each sea-floor region being observed. This probabilistic information is then used to weight the contribution of each SSS measurement to the map. Thanks to these models, arbitrary map resolutions can be achieved, even beyond the sensor resolution. Finally, a geometric map building method is presented and combined with the probabilistic approach. The resulting map is composed of two layers. The echo intensity layer holds the most likely echo intensities at each point on the sea-floor. The probabilistic layer contains information about how confident the user or the higher control layers can be about the echo intensity layer data. Experiments have been conducted in a large subsea region. PMID:26821379

  9. An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.

    PubMed

    Liu, Zhiyuan; Wang, Changhui

    2015-10-23

    In this paper, a new method is developed for compensating mass air flow (MAF) sensor error due to installation and aging in a diesel engine, and for updating the error map (or lookup table) online. Since the MAF sensor error depends on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. The 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between a regression vector and a parameter vector using a membership function. Combining the 2D map regression model with the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. The results demonstrate that the operating-point-dependent error of the MAF sensor can be approximated acceptably by the 2D map obtained with the proposed method.
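
    The map-as-dot-product idea can be made concrete. In this sketch the regression vector holds the four bilinear membership weights of the grid cell enclosing the operating point, so the map output is simply phi @ theta; the grid breakpoints and node values are assumptions for illustration.

      # Piecewise bilinear 2D map written as a dot product phi(x)^T * theta.
      import numpy as np

      fuel_grid  = np.array([5.0, 15.0, 30.0])        # mg/stroke breakpoints (assumed)
      speed_grid = np.array([800.0, 1600.0, 2400.0])  # rpm breakpoints (assumed)
      theta = np.arange(9, dtype=float)               # map values at 3x3 grid nodes

      def regressor(fuel, speed):
          """Membership (regression) vector phi with exactly four nonzeros."""
          i = np.clip(np.searchsorted(fuel_grid, fuel) - 1, 0, len(fuel_grid) - 2)
          j = np.clip(np.searchsorted(speed_grid, speed) - 1, 0, len(speed_grid) - 2)
          tx = (fuel - fuel_grid[i]) / (fuel_grid[i + 1] - fuel_grid[i])
          ty = (speed - speed_grid[j]) / (speed_grid[j + 1] - speed_grid[j])
          phi = np.zeros(fuel_grid.size * speed_grid.size)
          for di, dj, w in [(0, 0, (1 - tx) * (1 - ty)), (1, 0, tx * (1 - ty)),
                            (0, 1, (1 - tx) * ty),       (1, 1, tx * ty)]:
              phi[(i + di) * speed_grid.size + (j + dj)] = w
          return phi

      phi = regressor(12.0, 1400.0)
      print("map output:", phi @ theta)   # error estimate at the operating point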

  10. A novel false color mapping model-based fusion method of visual and infrared images

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu

    2013-12-01

    A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. First, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after color mapping operations, differences between infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. Then a novel nonlinear color mapping model is given by introducing the geometric mean of the input visual and infrared gray values together with a weighted-average term. To determine the control parameters in the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new method achieves a near-natural appearance of the fused image, and enhances color contrasts and highlights bright infrared objects when compared with the traditional TNO algorithm. Moreover, it has low complexity and readily supports real-time processing, so it is quite suitable for nighttime imaging apparatus.
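
    A hedged sketch of a nonlinear mapping of this kind follows: each output color channel mixes a weighted average of the visual (V) and infrared (IR) gray levels with their geometric mean. The channel weights are illustrative, not the paper's fitted parameters.

      # Nonlinear false-color fusion of co-registered visual and IR bands.
      import numpy as np

      def fuse_channel(V, IR, w_v, w_ir, w_g):
          """One output color channel from V and IR gray images in [0, 1]."""
          geo = np.sqrt(V * IR)               # geometric-mean term
          out = w_v * V + w_ir * IR + w_g * geo
          return np.clip(out, 0.0, 1.0)

      rng = np.random.default_rng(3)
      V  = rng.random((4, 4))                 # visual band (synthetic)
      IR = rng.random((4, 4))                 # infrared band (synthetic)

      R = fuse_channel(V, IR, 0.7, 0.2, 0.1)  # illustrative channel weights
      G = fuse_channel(V, IR, 0.6, 0.1, 0.3)
      B = fuse_channel(V, IR, 0.4, 0.4, 0.2)
      rgb = np.stack([R, G, B], axis=-1)
      print(rgb.shape)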

  11. Identifying Greater Sage-Grouse source and sink habitats for conservation planning in an energy development landscape.

    PubMed

    Kirol, Christopher P; Beck, Jeffrey L; Huzurbazar, Snehalata V; Holloran, Matthew J; Miller, Scott N

    2015-06-01

    Conserving a declining species that is facing many threats, including overlap of its habitats with energy extraction activities, depends upon identifying and prioritizing the value of the habitats that remain. In addition, habitat quality is often compromised when source habitats are lost or fragmented due to anthropogenic development. Our objective was to build an ecological model to classify and map habitat quality in terms of source or sink dynamics for Greater Sage-Grouse (Centrocercus urophasianus) in the Atlantic Rim Project Area (ARPA), a developing coalbed natural gas field in south-central Wyoming, USA. We used occurrence and survival modeling to evaluate relationships between environmental and anthropogenic variables at multiple spatial scales and for all female summer life stages, including nesting, brood-rearing, and non-brooding females. For each life stage, we created resource selection functions (RSFs). We weighted the RSFs and combined them to form a female summer occurrence map. We also modeled survival as a function of spatial variables for nest, brood, and adult female summer survival. Our survival models were mapped individually as survival probability functions and then combined with fixed vital rates in a fitness metric model that, when mapped, predicted habitat productivity (productivity map). Our results demonstrate a suite of environmental and anthropogenic variables at multiple scales that were predictive of occurrence and survival. We created a source-sink map by overlaying our female summer occurrence map and productivity map to predict habitats contributing to population surpluses (source habitats) or deficits (sink habitats) and low-occurrence habitats on the landscape. The source-sink map predicted that of the Sage-Grouse habitat within the ARPA, 30% was primary source, 29% was secondary source, 4% was primary sink, 6% was secondary sink, and 31% was low occurrence. Our results provide evidence that energy development and avoidance of energy infrastructure were probably reducing the amount of source habitat within the ARPA landscape. Our source-sink map provides managers with a means of prioritizing habitats for conservation planning based on source and sink dynamics. The spatial identification of high-value (i.e., primary source) as well as suboptimal (i.e., primary sink) habitats allows for informed energy development that minimizes effects on local wildlife populations.

  12. Statistical-mechanical analysis of self-organization and pattern formation during the development of visual maps

    NASA Astrophysics Data System (ADS)

    Obermayer, K.; Blasdel, G. G.; Schulten, K.

    1992-05-01

    We report a detailed analytical and numerical model study of pattern formation during the development of visual maps, namely, the formation of topographic maps and orientation and ocular dominance columns in the striate cortex. Pattern formation is described by a stimulus-driven Markovian process, the self-organizing feature map. This algorithm generates topologically correct maps between a space of (visual) input signals and an array of formal ``neurons,'' which in our model represents the cortex. We define order parameters that are a function of the set of visual stimuli an animal perceives, and we demonstrate that the formation of orientation and ocular dominance columns is the result of a global instability of the retinotopic projection above a critical value of these order parameters. We characterize the spatial structure of the emerging patterns by power spectra, correlation functions, and Gabor transforms, and we compare model predictions with experimental data obtained by optical imaging from the striate cortex of the macaque monkey. Above the critical value of the order parameters the model predicts a lateral segregation of the striate cortex into (i) binocular regions with linear changes in orientation preference, where iso-orientation slabs run perpendicular to the ocular dominance bands, and (ii) monocular regions with low orientation specificity, which contain the singularities of the orientation map.
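
    The self-organizing feature map at the heart of the model is easy to state in code. This minimal one-dimensional Kohonen sketch learns a topographic map of two-dimensional stimuli; the array sizes and learning schedules are illustrative and far smaller than a cortex model.

      # 1-D self-organizing feature map (Kohonen) learning a 2-D stimulus space.
      import numpy as np

      rng = np.random.default_rng(4)
      n_units = 50
      weights = rng.random((n_units, 2))      # receptive-field centers

      for t in range(5000):
          lr    = 0.5 * (0.01 / 0.5) ** (t / 5000)   # decaying learning rate
          sigma = 10.0 * (1.0 / 10.0) ** (t / 5000)  # shrinking neighborhood
          x = rng.random(2)                          # random stimulus
          winner = np.argmin(((weights - x) ** 2).sum(axis=1))
          dist = np.abs(np.arange(n_units) - winner)
          h = np.exp(-dist ** 2 / (2 * sigma ** 2))  # neighborhood function
          weights += lr * h[:, None] * (x - weights)

      # Topographic order: neighboring units end up with nearby centers.
      print(np.round(weights[:5], 2))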

  13. XRF map identification problems based on a PDE electrodeposition model

    NASA Astrophysics Data System (ADS)

    Sgura, Ivonne; Bozzini, Benedetto

    2017-04-01

    In this paper we focus on the following map identification problem (MIP): given a morphochemical reaction-diffusion (RD) PDE system modeling an electrodeposition process, we look for a time t*, belonging to the transient dynamics, and a set of parameters p, such that the PDE solution for the morphology h(x, y, t*; p) and for the chemistry θ(x, y, t*; p) approximates a given experimental map M*. Towards this aim, we introduce a numerical algorithm using singular value decomposition (SVD) and the Frobenius norm to give a measure of the error distance between experimental maps for h and θ and simulated solutions of the RD-PDE system on a fixed time integration interval. The proposed technique allows quantitative use of microspectroscopy images, such as XRF maps. Specifically, in this work we have modelled the morphology and manganese distributions of nanostructured components of innovative batteries and have followed their changes resulting from ageing under operating conditions. The availability of quantitative information on the space-time evolution of active materials in terms of model parameters will allow dramatic improvements in knowledge-based optimization of battery fabrication and operation.
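
    The error-distance idea, reducing both maps with SVD before taking a Frobenius norm, can be sketched as follows; the maps here are synthetic stand-ins for the experimental map M* and a simulated solution, and the rank cutoff is an assumed parameter.

      # SVD-truncated Frobenius distance between two maps.
      import numpy as np

      def rank_k(M, k):
          """Best rank-k approximation of M via truncated SVD."""
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return (U[:, :k] * s[:k]) @ Vt[:k, :]

      def map_distance(M_exp, M_sim, k=5):
          """Frobenius distance between rank-k reductions (damps pixel noise)."""
          return np.linalg.norm(rank_k(M_exp, k) - rank_k(M_sim, k), ord="fro")

      rng = np.random.default_rng(5)
      M_exp = rng.random((64, 64))                           # stand-in for M*
      M_sim = M_exp + 0.05 * rng.standard_normal((64, 64))   # perturbed solution
      print("distance:", round(map_distance(M_exp, M_sim), 3))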

  14. Mapping rice ecosystem dynamics and greenhouse gas emissions using multiscale imagery and biogeochemical models

    NASA Astrophysics Data System (ADS)

    Salas, W.; Torbick, N.

    2017-12-01

    Rice greenhouse gas (GHG) emissions in production hot spots have been mapped using multiscale satellite imagery and a process-based biogeochemical model. The multiscale Synthetic Aperture Radar (SAR) and optical imagery were co-processed and fed into a machine learning framework to map paddy attributes that are tuned using field observations and surveys. Geospatial maps of rice extent, crop calendar, hydroperiod, and cropping intensity were then used to parameterize the DeNitrification-DeComposition (DNDC) model to estimate emissions. Results for the Red River Delta, for example, show total methane emissions of 345.4 million kg CH4-C, equivalent to 11.5 million tonnes CO2e (carbon dioxide equivalent). We further assessed the role of Alternate Wetting and Drying and its impact on GHG emissions and yield across production hot spots, with uncertainty estimates. The approach described in this research provides a framework for using SAR to derive maps of rice and landscape characteristics to drive process models like DNDC. These types of tools and approaches will support the next generation of Monitoring, Reporting, and Verification (MRV) to combat climate change and support ecosystem service markets.

  15. Testing the PV-Theta Mapping Technique in a 3-D CTM Model Simulation

    NASA Technical Reports Server (NTRS)

    Frith, Stacey M.

    2004-01-01

    Mapping lower stratospheric ozone into potential vorticity (PV)- potential temperature (Theta) coordinates is a common technique employed to analyze sparse data sets. Ozone transformed into a flow-following dynamical coordinate system is insensitive to meteorological variations. Therefore data from a wide range of times/locations can be compared, so long as the measurements were made in the same airmass (as defined by PV). Moreover, once a relationship between ozone and PV/Theta is established, a full 3D ozone field can be estimated from this relationship and the 3D analyzed PV field. However, ozone data mapped in this fashion can be hampered by noisy PV fields, or "mis-matches" in the resolution and/or exact location of the ozone and PV measurements. In this study, we investigate the PV-ozone relationship using output from a recent 50-year run of the Goddard 3D chemical transport model (CTM). Model constituents are transported using off-line dynamics from the finite volume general circulation model (FVGCM). By using the internally consistent model PV and ozone fields, we minimize noise due to mis-matching and resolution issues. We calculate correlations between model ozone and PV throughout the stratosphere, and test the sensitivity of the technique to initial data resolution. To do this we degrade the model data to that of various satellite instruments, then compare the mapped fields derived from the sub-sampled data to the full resolution model data. With these studies we can determine appropriate limits for the PV-theta mapping technique in latitude, altitude, and as a function of original data resolution.

  16. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

    The objective of this paper was to design a human bio-signal data prediction system that decreases prediction error, using a two-states-mapping-based time series neural network back-propagation (BP) model. Neural network models trained in a supervised manner with the error back-propagation algorithm have been widely applied in industry for time series prediction, but such models still leave a residual error between the real value and the prediction result. We therefore designed a two-states neural network model that compensates this residual error, which could be used in the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. Most of the simulation cases were satisfied by the two-states-mapping-based time series prediction model. In particular, predictions for small-sample time series were more accurate than those of the standard MLP model.
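
    A two-stage residual-compensation scheme in the spirit of the paper can be sketched with scikit-learn: a second regressor is trained on the first regressor's residuals, and the final prediction is their sum. The data, lag structure, and network sizes are illustrative assumptions.

      # Two-stage residual compensation for time series prediction.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(6)
      t = np.arange(400)
      signal = np.sin(0.1 * t) + 0.1 * rng.standard_normal(t.size)

      def lagged(series, n_lags=5):
          """Build (lag-vector, next-value) training pairs."""
          X = np.column_stack([series[i:-(n_lags - i)] for i in range(n_lags)])
          return X, series[n_lags:]

      X, y = lagged(signal)
      split = 300
      stage1 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
      stage1.fit(X[:split], y[:split])
      residuals = y[:split] - stage1.predict(X[:split])

      stage2 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=1)
      stage2.fit(X[:split], residuals)        # second stage models the residual error

      pred = stage1.predict(X[split:]) + stage2.predict(X[split:])
      print("test RMSE:", round(np.sqrt(np.mean((pred - y[split:]) ** 2)), 4))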

  17. Lidar-revised geologic map of the Wildcat Lake 7.5' quadrangle, Kitsap and Mason Counties, Washington

    USGS Publications Warehouse

    Tabor, Rowland W.; Haugerud, Ralph A.; Haeussler, Peter J.; Clark, Kenneth P.

    2011-01-01

    This map is an interpretation of a 6-ft-resolution (2-m-resolution) lidar (light detection and ranging) digital elevation model combined with the geology depicted on the Geologic Map of the Wildcat Lake 7.5' quadrangle, Kitsap and Mason Counties, Washington (Haeussler and Clark, 2000). Haeussler and Clark described, interpreted, and located the geology on the 1:24,000-scale topographic map of the Wildcat Lake 7.5' quadrangle. This map, derived from 1951 aerial photographs, has 20-ft contours, nominal horizontal resolution of approximately 40 ft (12 m), and nominal mean vertical accuracy of approximately 10 ft (3 m). Similar to many geologic maps, much of the geology in the Haeussler and Clark (2000) map-especially the distribution of surficial deposits-was interpreted from landforms portrayed on the topographic map. In 2001, the Puget Sound lidar Consortium obtained a lidar-derived digital elevation model (DEM) for Kitsap Peninsula including all of the Wildcat Lake 7.5' quadrangle. This new DEM has a horizontal resolution of 6 ft (2 m) and a mean vertical accuracy of about 1 ft (0.3 m). The greater resolution and accuracy of the lidar DEM compared to topography constructed from air photo stereo models have much improved the interpretation of geology in this heavily vegetated landscape, especially the distribution and relative age of some surficial deposits. Many contacts of surficial deposits are adapted unmodified or slightly modified from Haugerud (2009).

  18. Generative Topographic Mapping of Conformational Space.

    PubMed

    Horvath, Dragos; Baskin, Igor; Marcou, Gilles; Varnek, Alexandre

    2017-10-01

    Herein, Generative Topographic Mapping (GTM) was challenged to produce planar projections of the high-dimensional conformational space of complex molecules (the 1LE1 peptide). GTM is a probability-based mapping strategy, and its capacity to support property prediction models serves to objectively assess map quality (in terms of regression statistics). The properties to predict were total, non-bonded and contact energies, surface area and fingerprint darkness. Map building and selection were controlled by a previously introduced evolutionary strategy allowed to choose the best-suited conformational descriptors, the options including classical terms and novel atom-centric autocorrelograms. The latter condense interatomic distance patterns into descriptors of rather low dimensionality, yet precise enough to differentiate between close favorable contacts and atom clashes. A subset of 20 K conformers of the 1LE1 peptide, randomly selected from a pool of 2 M geometries (generated by the S4MPLE tool), was employed for map building and cross-validation of property regression models. The GTM build-up challenge reached robust three-fold cross-validated determination coefficients of Q² = 0.7-0.8 for all modeled properties. Mapping of the full 2 M conformer set produced intuitive and information-rich property landscapes. Functional and folding subspaces appear as well-separated zones, even though RMSD with respect to the PDB structure was never used as a selection criterion for the maps. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Inferring the most probable maps of underground utilities using Bayesian mapping model

    NASA Astrophysics Data System (ADS)

    Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony

    2018-03-01

    Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps and their visualization is challenging and requires the implementation of robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps was developed as a Bayesian mapping model, integrating the knowledge extracted from the sensors' raw data with the available statutory records. Statutory records were combined with the hypotheses from the sensors to give an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.

  20. Lidar-revised geologic map of the Des Moines 7.5' quadrangle, King County, Washington

    USGS Publications Warehouse

    Tabor, Rowland W.; Booth, Derek B.

    2017-11-06

    This map is an interpretation of a modern lidar digital elevation model combined with the geology depicted on the Geologic Map of the Des Moines 7.5' Quadrangle, King County, Washington (Booth and Waldron, 2004). Booth and Waldron described, interpreted, and located the geology on the 1:24,000-scale topographic map of the Des Moines 7.5' quadrangle. The base map that they used was originally compiled in 1943 and revised using 1990 aerial photographs; it has 25-ft contours, nominal horizontal resolution of about 40 ft (12 m), and nominal mean vertical accuracy of about 10 ft (3 m). Similar to many geologic maps, much of the geology in the Booth and Waldron (2004) map was interpreted from landforms portrayed on the topographic map. In 2001, the Puget Sound Lidar Consortium obtained a lidar-derived digital elevation model (DEM) for much of the Puget Sound area, including the entire Des Moines 7.5' quadrangle. This new DEM has a horizontal resolution of about 6 ft (2 m) and a mean vertical accuracy of about 1 ft (0.3 m). The greater resolution and accuracy of the lidar DEM compared to topography constructed from air-photo stereo models have much improved the interpretation of geology, even in this heavily developed area, especially the distribution and relative age of some surficial deposits. For a brief description of the light detection and ranging (lidar) remote sensing method and this data acquisition program, see Haugerud and others (2003). 

  1. High-Dimensional Modeling for Cytometry: Building Rock Solid Models Using GemStone™ and Verity Cen-se'™ High-Definition t-SNE Mapping.

    PubMed

    Bruce Bagwell, C

    2018-01-01

    This chapter outlines how to approach the complex tasks associated with designing models for high-dimensional cytometry data. Unlike gating approaches, modeling lends itself to automation and accounts for measurement overlap among cellular populations. Designing these models is now easier because of a new technique called high-definition t-SNE mapping. Nontrivial examples are provided that serve as a guide to create models that are consistent with data.

  2. {sup 18}F-FLT uptake kinetics in head and neck squamous cell carcinoma: A PET imaging study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Dan, E-mail: dan.liu@oncology.ox.ac.uk; Fenwick, John D.; Chalkidou, Anastasia

    2014-04-15

    Purpose: To analyze the kinetics of 3′-deoxy-3′-[18F]-fluorothymidine (18F-FLT) uptake by head and neck squamous cell carcinomas and involved nodes imaged using positron emission tomography (PET). Methods: Two- and three-tissue compartment models were fitted to 12 tumor time-activity curves (TACs) obtained for 6 structures (tumors or involved nodes) imaged in ten dynamic PET studies of 1 h duration, carried out for five patients. The ability of the models to describe the data was assessed using a runs test, the Akaike information criterion (AIC) and leave-one-out cross-validation. To generate parametric maps the models were also fitted to TACs of individual voxels. Correlations between maps of different parameters were characterized using Pearson's r coefficient; in particular the phosphorylation rate-constants k3-2tiss and k5 of the two- and three-tissue models were studied alongside the flux parameters KFLT-2tiss and KFLT of these models, and standardized uptake values (SUV). A methodology based on expectation-maximization clustering and the Bayesian information criterion ("EM-BIC clustering") was used to distil the information from noisy parametric images. Results: Fits of two-tissue models 2C3K and 2C4K and three-tissue models 3C5K and 3C6K, comprising three, four, five, and six rate-constants, respectively, pass the runs test for 4, 8, 10, and 11 of 12 tumor TACs. The three-tissue models have lower AIC and cross-validation scores for nine of the 12 tumors. Overall the 3C6K model has the lowest AIC and cross-validation scores and its fitted parameter values are of the same orders of magnitude as literature estimates. Maps of KFLT and KFLT-2tiss are strongly correlated (r = 0.85) and also correlate closely with SUV maps (r = 0.72 for KFLT-2tiss, 0.64 for KFLT). Phosphorylation rate-constant maps are moderately correlated with flux maps (r = 0.48 for k3-2tiss vs KFLT-2tiss and r = 0.68 for k5 vs KFLT); however, neither phosphorylation rate-constant correlates significantly with SUV. EM-BIC clustering reduces the parametric maps to a small number of levels: on average 5.8, 3.5, 3.4, and 1.4 for KFLT-2tiss, KFLT, k3-2tiss, and k5. This large simplification is potentially useful for radiotherapy dose-painting, but demonstrates the high noise in some maps. Statistical simulations show that voxel-level noise degrades TACs generated from the 3C6K model sufficiently that the average AIC score, parameter bias, and total uncertainty of 2C4K model fits are similar to those of 3C6K fits, whereas at the whole-tumor level the scores are lower for 3C6K fits. Conclusions: For the patients studied here, whole-tumor FLT uptake time-courses are represented better overall by a three-tissue than by a two-tissue model. EM-BIC clustering simplifies noisy parametric maps, providing the best description of the underlying information they contain, and is potentially useful for radiotherapy dose-painting. However, the clustering highlights the large degree of noise present in maps of the phosphorylation rate-constants k5 and k3-2tiss, which are conceptually tightly linked to cellular proliferation. Methods must be found to make these maps more robust, either by constraining other model parameters or by modifying dynamic imaging protocols.
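
    The model-selection arithmetic is worth making explicit. For least-squares fits, AIC can be computed as n*ln(RSS/n) + 2k for n time points and k fitted rate constants; the RSS values below are synthetic placeholders, not the study's fits.

      # AIC comparison of compartment models from residual sums of squares.
      import math

      def aic(rss, n, k):
          """Akaike information criterion for a least-squares fit."""
          return n * math.log(rss / n) + 2 * k

      n_frames = 30                           # time points in a dynamic scan (assumed)
      fits = {"2C3K": (0.84, 3), "2C4K": (0.61, 4),
              "3C5K": (0.43, 5), "3C6K": (0.38, 6)}  # (RSS, rate constants), synthetic

      scores = {m: aic(rss, n_frames, k) for m, (rss, k) in fits.items()}
      for m, s in sorted(scores.items(), key=lambda kv: kv[1]):
          print(f"{m}: AIC = {s:6.2f}")
      print("preferred model:", min(scores, key=scores.get))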

  3. Incorporation of satellite remote sensing pan-sharpened imagery into digital soil prediction and mapping models to characterize soil property variability in small agricultural fields

    NASA Astrophysics Data System (ADS)

    Xu, Yiming; Smith, Scot E.; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P.

    2017-01-01

    Soil prediction models based on spectral indices from multispectral images are often too coarse to characterize the spatial pattern of soil properties in small, heterogeneous agricultural lands, and image pan-sharpening has seldom been utilized in Digital Soil Mapping research. This research aimed to analyze the effects of pan-sharpened (PAN) remote sensing spectral indices on soil prediction models in smallholder farm settings. We fused the panchromatic band and multispectral (MS) bands of WorldView-2, GeoEye-1, and Landsat 8 images of a village in Southern India using the Brovey, Gram-Schmidt and Intensity-Hue-Saturation methods. Random Forest was utilized to develop soil total nitrogen (TN) and soil exchangeable potassium (Kex) prediction models incorporating multiple spectral indices from the PAN and MS images. Overall, our results showed that PAN spectral indices have spectral characteristics with respect to soil TN and Kex similar to those of MS spectral indices. No soil prediction model incorporating a particular type of pan-sharpened spectral index consistently had the strongest prediction capability for soil TN and Kex. The incorporation of pan-sharpened remote sensing data not only increased the spatial resolution of the soil prediction maps, but also enhanced the prediction accuracy of the soil prediction models. Small farms with limited footprints, fragmented ownership and diverse crop cycles should benefit greatly from pan-sharpened high-spatial-resolution imagery for soil property mapping. Our results show that multiple high and medium resolution images can be used to map soil properties, suggesting the possibility of an improvement in the maps' update frequency. Additionally, the results should benefit the larger agricultural community through the reduction of routine soil sampling costs and improved prediction accuracy.

  4. Multi-Scale Mapping of Vegetation Biomass

    NASA Astrophysics Data System (ADS)

    Hudak, A. T.; Fekety, P.; Falkowski, M. J.; Kennedy, R. E.; Crookston, N.; Smith, A. M.; Mahoney, P.; Glenn, N. F.; Dong, J.; Kane, V. R.; Woodall, C. W.

    2016-12-01

    Vegetation biomass mapping at multiple scales is important for carbon inventory and monitoring, reporting, and verification (MRV). Project-level lidar collections allow biomass estimation with high confidence where associated with field plot measurements. Predictive models developed from such datasets are customarily used to generate landscape-scale biomass maps. We tested the feasibility of predicting biomass in landscapes surveyed with lidar but without field plots, by withholding plot datasets from a reduced model applied to the landscapes, and found support for a generalized model in the northern Idaho ecoregion. We are also upscaling a generalized model to all forested lands in Idaho. Our regional modeling approach is to sample the 30-m biomass predictions from the landscape-scale maps and use them to train a regional biomass model, using Landsat time series, topographic derivatives, and climate variables as predictors. Our regional map validation approach is to aggregate the regional, annual biomass predictions to the county level and compare them to annual county-level biomass summarized independently from systematic, field-based, annual inventories conducted by the US Forest Inventory and Analysis (FIA) Program nationally. A national-scale forest cover map generated independently from 2010 PALSAR data at 25-m resolution is being used to mask non-forest pixels from the aggregations. Effects of climate change on future regional biomass stores are also being explored, using biomass estimates projected from stand-level inventory data collected in the National Forests and comparing them to FIA plot data collected independently on public and private lands, projected under the same climate change scenarios, with disturbance trends extracted from the Landsat time series. Our ultimate goal is to demonstrate, focusing on the ecologically diverse Northwest region of the USA, a carbon monitoring system (CMS) that is accurate, objective, repeatable, and transparent.

  5. Isoscapes of tree-ring carbon-13 perform like meteorological networks in predicting regional precipitation patterns

    NASA Astrophysics Data System (ADS)

    del Castillo, Jorge; Aguilera, Mònica; Voltas, Jordi; Ferrio, Juan Pedro

    2013-03-01

    Stable isotopes in tree rings provide climatic information with annual resolution dating back centuries or even millennia. However, deriving spatially explicit climate models from isotope networks remains challenging. Here we propose a methodology to model regional precipitation from carbon isotope discrimination (Δ13C) in tree rings by (1) building regional spatial models of Δ13C (isoscapes) and (2) deriving precipitation maps from Δ13C-isoscapes, taking advantage of the response of Δ13C to precipitation in seasonally dry climates. As a case study, we modeled the spatial distribution of mean annual precipitation (MAP) in the northeastern Iberian Peninsula, a region with complex topography and climate (MAP = 303-1086 mm). We compiled wood Δ13C data for two Mediterranean species that exhibit complementary responses to seasonal precipitation (Pinus halepensis Mill., N = 38; Quercus ilex L., N = 44; pooling period: 1975-2008). By combining multiple regression and geostatistical interpolation, we generated one Δ13C-isoscape for each species. A spatial model of MAP was then built as the sum of two complementary maps of seasonal precipitation, each derived from the corresponding Δ13C-isoscape (September-November from Q. ilex; December-August from P. halepensis). Our approach showed a predictive power for MAP (RMSE = 84 mm) nearly identical to that obtained by interpolating data directly from a similarly dense network of meteorological stations (RMSE = 80-83 mm, N = 65), being outperformed only when a much denser meteorological network was used (RMSE = 56-57 mm, N = 340). This method offers new avenues for modeling the spatial variability of past precipitation, exploiting the large amount of information currently available from tree-ring networks.
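
    The two-step construction, per-species linear responses of Δ13C inverted into seasonal precipitation components and summed into MAP, can be sketched as follows; the slopes, intercepts and Δ13C ranges are illustrative assumptions, not the fitted values.

      # MAP as the sum of two seasonal components, each linearly predicted
      # from a species-specific tree-ring D13C isoscape (synthetic values).
      import numpy as np

      rng = np.random.default_rng(7)
      d13c_quercus = rng.uniform(17, 21, 40)  # autumn-sensitive species (assumed)
      d13c_pinus   = rng.uniform(15, 20, 40)  # winter/summer-sensitive (assumed)

      sep_nov = 30.0 * (d13c_quercus - 16.0)  # mm, illustrative linear response
      dec_aug = 55.0 * (d13c_pinus - 13.0)    # mm, illustrative linear response
      map_mm = sep_nov + dec_aug              # MAP as the sum of seasonal parts
      print("modeled MAP range (mm):", round(map_mm.min()), "-",
            round(map_mm.max()))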

  6. Geospatial Predictive Modelling for Climate Mapping of Selected Severe Weather Phenomena Over Poland: A Methodological Approach

    NASA Astrophysics Data System (ADS)

    Walawender, Ewelina; Walawender, Jakub P.; Ustrnul, Zbigniew

    2017-02-01

    The main purpose of the study is to introduce methods for mapping the spatial distribution of the occurrence of selected atmospheric phenomena (thunderstorms, fog, glaze and rime) over Poland from 1966 to 2010 (45 years). Limited in situ observations, as well as the discontinuous and location-dependent nature of these phenomena, make traditional interpolation inappropriate. Spatially continuous maps were created with the use of geospatial predictive modelling techniques. For each given phenomenon, an algorithm identifying its favourable meteorological and environmental conditions was created on the basis of observations recorded at 61 weather stations in Poland. Annual frequency maps presenting the probability of a day with a thunderstorm, fog, glaze or rime were created from a modelled, gridded dataset by implementing the predefined algorithms. Relevant explanatory variables were derived from the NCEP/NCAR reanalysis and downscaled with the use of a Regional Climate Model. The resulting maps of favourable meteorological conditions were found to be valuable and representative at the country scale, but with different correlation (r) strengths against in situ data (from r = 0.84 for thunderstorms to r = 0.15 for fog). The weak correlation between gridded estimates of fog occurrence and observational data indicates the very local nature of this phenomenon. For this reason, additional environmental predictors of fog occurrence were also examined. Topographic parameters derived from the SRTM elevation model and reclassified CORINE Land Cover data were used as external explanatory variables for the multiple linear regression kriging used to obtain the final map. The regression model explained 89% of the variability in the annual frequency of fog in the study area. Regression residuals were interpolated via simple kriging.

  7. Tools for Model Building and Optimization into Near-Atomic Resolution Electron Cryo-Microscopy Density Maps.

    PubMed

    DiMaio, F; Chiu, W

    2016-01-01

    Electron cryo-microscopy (cryoEM) has advanced dramatically to become a viable tool for high-resolution structural biology research. The ultimate outcome of a cryoEM study is an atomic model of a macromolecule or its complex with interacting partners. This chapter describes a variety of algorithms and software to build a de novo model based on the cryoEM 3D density map, to optimize the model with the best stereochemistry restraints, and finally to validate the model with proper protocols. The full process of atomic structure determination from a cryoEM map is described. The tools outlined in this chapter should prove extremely valuable in revealing atomic interactions guided by cryoEM data.

  8. Object detection system based on multimodel saliency maps

    NASA Astrophysics Data System (ADS)

    Guo, Ya'nan; Luo, Chongfan; Ma, Yide

    2017-03-01

    Detection of visually salient image regions is extensively applied in computer vision and computer graphics, for tasks such as object detection, adaptive compression, and object recognition. However, any single model has limitations when applied to diverse images, so in this work we establish an object detection method based on multimodel saliency maps, which intelligently absorbs the merits of various individual saliency detection models to achieve promising results. The method can be roughly divided into three steps: first, we propose a decision-making system that evaluates saliency maps obtained by seven competitive methods and selects only the three most valuable ones; second, we introduce a heterogeneous PCNN algorithm to obtain three prime foregrounds, and a self-designed nonlinear fusion method is then proposed to merge these saliency maps; finally, an adaptive, improved and simplified PCNN (SPCNN) model is used to detect the object. Our proposed method constitutes an object detection system for different occasions that requires no training, is simple, and is highly efficient. The proposed saliency fusion technique shows good performance over a broad range of images and enriches the applicability range by fusing different individual saliency models. Moreover, the adaptive improved SPCNN model stems from Eckhorn's neuron model, which is well suited to image segmentation because of its biological background, and all of its parameters adapt to image information. We extensively evaluate our algorithm on a classical salient object detection database; the experimental results demonstrate that the aggregation of saliency maps outperforms the best individual saliency model in all cases, yielding the highest precision (89.90%), recall (98.20%), and F-measure (91.20%), and the lowest mean absolute error (0.057); the proposed saliency evaluation metric EHA reaches 215.287. We expect our method to be applicable to diverse applications in the future.
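
    The select-then-fuse structure can be illustrated with a small sketch. The quality score and the score-weighted geometric-mean fusion below are generic stand-ins; the paper's decision-making system, PCNN foreground extraction, and EHA evaluation metric are not reproduced here.

    ```python
    # Generic sketch of saliency-map selection and nonlinear fusion; the
    # scoring heuristic and fusion rule are illustrative assumptions.
    import numpy as np

    def score(saliency, border=10):
        """Heuristic quality score: saliency concentrated away from borders."""
        inner = saliency[border:-border, border:-border].mean()
        return inner / (saliency.mean() + 1e-9)

    rng = np.random.default_rng(2)
    candidates = [rng.random((240, 320)) for _ in range(7)]  # 7 candidate maps

    # Keep the three highest-scoring maps
    top3 = sorted(candidates, key=score, reverse=True)[:3]

    # Nonlinear fusion: normalized score-weighted geometric mean
    w = np.array([score(s) for s in top3])
    w = w / w.sum()
    fused = np.exp(sum(wi * np.log(s + 1e-9) for wi, s in zip(w, top3)))
    fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-9)
    print(fused.shape, fused.min(), fused.max())
    ```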

  9. Improving snow density estimation for mapping SWE with Lidar snow depth: assessment of uncertainty in modeled density and field sampling strategies in NASA SnowEx

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Smyth, E.; Small, E. E.

    2017-12-01

    The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in the Lidar measurement of snow depth. Available observations tend to be biased toward particular physiographic regimes (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent the dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies for snow density and to quantify and reduce uncertainty in modeled snow density. Here we present initial field data analyses and modeling results over the Colorado SnowEx domain from the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically based models to systematically assess the importance of specific process representations to snow density estimates. We show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.
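
    The arithmetic underlying the uncertainty argument is simple: SWE equals snow depth times bulk density divided by the density of water, so when Lidar depth is accurate the uncertainty in SWE is controlled by the uncertainty in modeled density. A minimal sketch with an assumed three-member density ensemble:

    ```python
    # SWE from Lidar depth and modeled density, with the between-model density
    # spread propagated to SWE. The ensemble values are assumed placeholders.
    import numpy as np

    rng = np.random.default_rng(3)
    depth = rng.uniform(0.5, 2.5, size=(100, 100))        # Lidar snow depth (m)

    # Multi-model snow density estimates (kg m^-3): three hypothetical models
    rho_models = np.stack([rng.normal(mu, 5, size=(100, 100))
                           for mu in (250.0, 300.0, 350.0)])
    rho_mean = rho_models.mean(axis=0)
    rho_sd = rho_models.std(axis=0)                       # between-model spread

    RHO_WATER = 1000.0
    swe = depth * rho_mean / RHO_WATER                    # SWE (m of water)

    # First-order propagation: SWE uncertainty scales with density spread
    swe_sd = depth * rho_sd / RHO_WATER
    print(f"mean SWE {swe.mean():.2f} m, mean SWE sigma {swe_sd.mean():.3f} m")
    ```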

  10. CryoSat-2 altimetry derived Arctic bathymetry map: first results and validation

    NASA Astrophysics Data System (ADS)

    Andersen, O. B.; Abulaitijiang, A.; Cancet, M.; Knudsen, P.

    2017-12-01

    The Technical University of Denmark (DTU), DTU Space, has been developing high-quality, high-resolution gravity fields that include the highly accurate CryoSat-2 radar altimetry data, which extend the global coverage of altimetry up to latitude 88°. With its Synthetic Aperture Radar (SAR) mode operating throughout the Arctic Ocean, returns from leads, i.e., openings in the sea ice that expose the ocean surface, are used to retrieve sea surface height with centimeter-level range precision. Combined with the long repeat cycle (369 days) and hence dense cross-track coverage, high-resolution Arctic marine gravity can be modelled using CryoSat-2 altimetry. Furthermore, the polar gap can be filled with the available ArcGP product, yielding complete coverage for an Arctic bathymetry map. In this presentation, we make use of the most recent DTU17 marine gravity model to derive an Arctic bathymetry map using inversion based on the best available hydrographic maps. With the support of ESA, existing hydrographic models of Arctic Ocean bathymetry (RTopo, GEBCO, IBCAO, etc.) were recently evaluated; various inconsistencies were identified, and steps to rectify them were taken prior to performing the inversion using altimetry. At the same time, DTU Space has devoted great effort to Arctic data screening, filtering, and de-noising using various altimetry retracking solutions and classifications. All of this pre-processing contributed to the fine-scale modelling of the Arctic gravity field. Thereafter, the Arctic marine gravity grids will be translated (via a downward continuation operation) into a new altimetry-enhanced Arctic bathymetry map using appropriate band-pass filtering.
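
    The band-pass filtering mentioned at the end is a standard wavenumber-domain operation: the bathymetric signal is sought in an intermediate band, because very long wavelengths are masked by isostatic compensation and very short wavelengths attenuate with water depth. The sketch below is generic; the grid, wavelength cutoffs, and the admittance note are illustrative assumptions, not DTU's processing chain.

    ```python
    # Generic 2D band-pass filter of a gravity anomaly grid in the
    # wavenumber domain; cutoffs and grid spacing are assumed values.
    import numpy as np

    rng = np.random.default_rng(4)
    n, dx = 256, 2.0                                  # grid size, spacing (km)
    gravity = rng.normal(0, 10, size=(n, n))          # free-air anomaly (mGal)

    kx = np.fft.fftfreq(n, d=dx)
    ky = np.fft.fftfreq(n, d=dx)
    k = np.sqrt(kx[None, :]**2 + ky[:, None]**2)      # radial wavenumber (1/km)

    lo, hi = 1.0 / 160.0, 1.0 / 16.0                  # pass band: 16-160 km
    band = (k >= lo) & (k <= hi)

    G = np.fft.fft2(gravity)
    gravity_bp = np.real(np.fft.ifft2(G * band))

    # An admittance factor (mGal per metre of topography) would then convert
    # band-passed gravity to predicted bathymetry; in practice it is
    # calibrated against ship soundings, since it depends on depth and density.
    print(gravity_bp.std())
    ```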

  11. Mapping soil textural fractions across a large watershed in north-east Florida.

    PubMed

    Lamsal, S; Mishra, U

    2010-08-01

    Assessment of regional-scale soil spatial variation and mapping of its distribution are constrained by sparse data, which are collected using field surveys that are labor intensive and cost prohibitive. We explored geostatistical (ordinary kriging, OK), regression (regression tree, RT), and hybrid methods (RT plus residual sequential Gaussian simulation, SGS) to map soil textural fractions across the Santa Fe River Watershed (3585 km²) in north-east Florida. Soil samples collected from four depths (L1: 0-30 cm, L2: 30-60 cm, L3: 60-120 cm, and L4: 120-180 cm) at 141 locations were analyzed for soil textural fractions (sand, silt and clay contents), and combined with textural data (15 profiles) assembled under the Florida Soil Characterization program. Textural fractions in L1 and L2 were autocorrelated, and were spatially mapped across the watershed. OK performance was poor, which may be attributed to the sparse sampling. RT model structure varied among textural fractions, and the variation explained by the models ranged from 25% for L1 silt to 61% for L2 clay content. Regression residuals were simulated using SGS, and the average of the simulated residuals was used to approximate the regression residual map, which was added to the regression trend map. Independent validation of the prediction maps showed that regression models performed slightly better than OK, and that regression combined with the average of simulated regression residuals improved predictions beyond the regression model alone. Sand content >90% in both the 0-30 and 30-60 cm layers covered 80.6% of the watershed area.

  12. Similarity-transformed Dyson mapping and sdg-interacting boson Hamiltonian

    NASA Astrophysics Data System (ADS)

    Navrátil, P.; Dobeš, J.

    1991-10-01

    The sdg-interacting boson Hamiltonian is constructed from the fermion shell-model input. The seniority boson mapping as given by the similarity-transformed Dyson boson mapping is used. The s, d, and g collective boson amplitudes are determined consistently from the mapped Hamiltonian. The influence of the starting shell-model parameters is discussed. Calculations for the Sm isotopic chain and for the 148Sm, 150Nd, and 196Pt nuclei are presented. Calculated energy levels as well as E2 and E4 properties agree rather well with experimental ones. To obtain such agreement, the input shell-model parameters cannot be fixed at a constant set for several nuclei but have to be varied somewhat, especially in the deformed region. Possible reasons for this variation are discussed. Effects of explicit g-boson consideration are shown.

  13. Meltwater channel scars and the extent of Mid-Pleistocene glaciation in central Pennsylvania

    NASA Astrophysics Data System (ADS)

    Marsh, Ben

    2017-10-01

    High-resolution digital topographic data permit morphological analyses of glacial processes in detail that was previously infeasible. High-level glaciofluvial erosional scars in central Pennsylvania, identified and delimited using LiDAR data, define the approximate ice depth during a pre-Wisconsin advance, > 770,000 BP, on a landscape unaffected by Wisconsin glaciation. Distinctive scars on the prows of anticlinal ridges at 175-350 m above the valley floor locate the levels of subice meltwater channels. A two-component planar GIS model of the ice surface is derived using these features and intersected with a digital model of contemporary topography to create a glacial limit map. The map is compared to published maps, demonstrating the limits of conventional sediment-based mapping. Additional distinctive meltwater features that were cut during deglaciation are modeled in a similar fashion.
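
    The geometric core of the method, fitting a planar ice surface to scar elevations and intersecting it with the DEM, is straightforward to sketch. The example below uses a single least-squares plane and synthetic data, whereas the study fits a two-component planar model to LiDAR-derived features.

    ```python
    # Fit a plane to hypothetical meltwater-scar elevations, then flag DEM
    # cells lying below the modeled ice surface. All data are synthetic.
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical scar locations (x, y in km) and elevations (m)
    scars_xy = rng.uniform(0, 50, size=(12, 2))
    scar_z = 600.0 - 3.0 * scars_xy[:, 0] + rng.normal(0, 10, 12)

    # Least-squares plane z = a*x + b*y + c through the scar elevations
    A = np.c_[scars_xy, np.ones(len(scar_z))]
    (a, b, c), *_ = np.linalg.lstsq(A, scar_z, rcond=None)

    # Synthetic DEM of contemporary topography on the same grid
    gx, gy = np.meshgrid(np.linspace(0, 50, 200), np.linspace(0, 50, 200))
    dem = 400.0 + 150.0 * np.sin(gx / 8.0) * np.cos(gy / 11.0)

    # Glacial limit: cells where the modeled ice surface clears the ground
    ice_surface = a * gx + b * gy + c
    glaciated = ice_surface > dem
    print(f"modeled ice covered {100 * glaciated.mean():.1f}% of the area")
    ```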

  14. Hyperspectral remote sensing of vegetation

    USGS Publications Warehouse

    Thenkabail, Prasad S.; Lyon, John G.; Huete, Alfredo

    2011-01-01

    Hyperspectral narrow-band (or imaging spectroscopy) spectral data are fast emerging as practical solutions in modeling and mapping vegetation. Recent research has demonstrated the advances in and merit of hyperspectral data in a range of applications including quantifying agricultural crops, modeling forest canopy biochemical properties, detecting crop stress and disease, mapping leaf chlorophyll content as it influences crop production, identifying plants affected by contaminants such as arsenic, demonstrating sensitivity to plant nitrogen content, classifying vegetation species and type, characterizing wetlands, and mapping invasive species. The need for significant improvements in quantifying, modeling, and mapping plant chemical, physical, and water properties is more critical than ever before to reduce uncertainties in our understanding of the Earth and to better sustain it. There is also a need for a synthesis of the vast knowledge spread throughout the literature from more than 40 years of research.

  15. Application of GIS Rapid Mapping Technology in Disaster Monitoring

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Tu, J.; Liu, G.; Zhao, Q.

    2018-04-01

    GIS and RS technology have developed rapidly, and in recent years GIS software functionality has matured and been considerably enhanced. The parallel development of mathematical and statistical tools for spatial modeling and simulation has promoted the widespread application of quantitative methods in the field of geology. Based on field disaster investigation and the construction of a spatial database, this paper uses remote sensing imagery, a DEM, and GIS technology to obtain the data needed for disaster vulnerability analysis, and applies an information model to carry out disaster risk assessment mapping. Using ArcGIS software and its spatial data modeling methods, the basic data for the disaster risk mapping process were acquired and processed, and a spatial data simulation tool was used to map the disaster rapidly.

  16. SBKF Modeling and Analysis Plan: Buckling Analysis of Compression-Loaded Orthogrid and Isogrid Cylinders

    NASA Technical Reports Server (NTRS)

    Lovejoy, Andrew E.; Hilburger, Mark W.

    2013-01-01

    This document outlines a Modeling and Analysis Plan (MAP) to be followed by the SBKF analysts. It includes instructions on modeling and analysis formulation and execution, model verification and validation, identifying sources of error and uncertainty, and documentation. The goal of this MAP is to provide a standardized procedure that ensures uniformity and quality of the results produced by the project and corresponding documentation.

  17. Soil property maps of Africa at 250 m resolution

    NASA Astrophysics Data System (ADS)

    Kempen, Bas; Hengl, Tomislav; Heuvelink, Gerard B. M.; Leenaars, Johan G. B.; Walsh, Markus G.; MacMillan, Robert A.; Mendes de Jesus, Jorge S.; Shepherd, Keith; Sila, Andrew; Desta, Lulseged T.; Tondoh, Jérôme E.

    2015-04-01

    Vast areas of arable land in sub-Saharan Africa suffer from low soil fertility and physical soil constraints, and significant amounts of nutrients are lost yearly due to unsustainable soil management practices. At the same time, agriculture in Africa is expected to intensify to meet the growing demand for food and fiber in the coming decades. Protection and sustainable management of Africa's soil resources are crucial to achieving this. In this context, comprehensive, accurate and up-to-date soil information is an essential input to any agricultural or environmental management or policy and decision-making model. In Africa, detailed soil information has been fragmented and limited to specific zones of interest for decades. To help bridge the soil information gap in Africa, the Africa Soil Information Service (AfSIS) project was established in 2008. AfSIS builds on recent advances in digital soil mapping, infrared spectroscopy, remote sensing, (geo)statistics, and integrated soil fertility management to improve the way soils are evaluated, mapped, and monitored. Over the period 2008-2014, the AfSIS project compiled two soil profile data sets (about 28,000 unique locations): the Africa Soil Profiles (legacy) database and the AfSIS Sentinel Site (new soil samples) database -- together the most comprehensive soil sample database of the African continent to date. In addition, a large set of high-resolution environmental data layers (covariates) was assembled. The point data were used in the AfSIS project to generate a set of maps of key soil properties for the African continent at 250 m spatial resolution: sand, silt and clay fractions, bulk density, organic carbon, total nitrogen, pH, cation-exchange capacity, exchangeable bases (Ca, K, Mg, Na), exchangeable acidity, and Al content. These properties were mapped for six depth intervals down to 2 m: 0-5 cm, 5-15 cm, 15-30 cm, 30-60 cm, 60-100 cm, and 100-200 cm. Random forests modelling was used to relate the soil profile observations to a set of covariates, which included global soil class and property maps, MODIS imagery and a DEM, in a 3D mapping framework. The model residuals were interpolated by 3D kriging, after which the kriging predictions were added to the random forests predictions to obtain the soil property predictions. The model predictions were validated with 5-fold cross-validation. The random forests models explained between 37% (exchangeable Na) and 85% (Al content) of the variation in the data. Results also show that globally predicted soil classes help improve continental-scale mapping of soil nutrients and are often among the most important predictors. We conclude that these first mapping results look promising. We used an automated modelling framework that enables re-computing the maps as new data arrive, thereby gradually improving the maps. We showed that global maps of soil classes and properties produced with models calibrated predominantly on areas with plentiful observations can be used to improve the accuracy of predictions in regions with less plentiful data, such as Africa.
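
    The regression backbone of this pipeline, random forests related to covariates and scored by cross-validation, can be sketched briefly. All data, covariates, and sample sizes below are synthetic stand-ins, and the 3D residual kriging step is only indicated in a comment.

    ```python
    # Random forest regression with 5-fold cross-validated R^2, the
    # "variation explained" figure quoted above. Data are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    n = 500                                     # hypothetical profile count

    X = np.c_[rng.uniform(0, 3000, n),          # DEM elevation (m), assumed
              rng.uniform(0, 1, n),             # MODIS-derived index, assumed
              rng.normal(15, 5, n)]             # global soil-map value, assumed
    y = 0.005 * X[:, 0] + 10 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 2, n)

    rf = RandomForestRegressor(n_estimators=200, random_state=0)

    # 5-fold cross-validated R^2
    r2 = cross_val_score(rf, X, y, cv=5, scoring="r2")
    print(f"cross-validated R^2: {r2.mean():.2f} +/- {r2.std():.2f}")

    # Final map = RF prediction on grid covariates + residuals interpolated
    # in 3D (x, y, depth) by kriging and added back, as in the AfSIS workflow.
    rf.fit(X, y)
    residuals = y - rf.predict(X)
    ```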

  18. 3D Model Visualization Enhancements in Real-Time Game Engines

    NASA Astrophysics Data System (ADS)

    Merlo, A.; Sánchez Belenguer, C.; Vendrell Vidal, E.; Fantini, F.; Aliperta, A.

    2013-02-01

    This paper describes two procedures used to disseminate tangible cultural heritage through real-time 3D simulations providing accurate, scientific representations. The main idea is to create simple geometries (with a low poly count) and apply two different texture maps to them: a normal map and a displacement map. There are two ways to achieve models that fit with normal or displacement maps: with the former (normal maps), the number of polygons in the reality-based model may be dramatically reduced by decimation algorithms, and normals may then be calculated by render-to-texture solutions (baking). With the latter, a LOD model is needed; its topology has to be quad-dominant for it to be converted to a good-quality subdivision surface (with consistent tangency and curvature all over). The subdivision surface is constructed using methodologies for the construction of assets borrowed from character animation; these techniques have recently been implemented in many entertainment applications and are known as "retopology". The normal map is used as usual, in order to shade the surface of the model in a realistic way. The displacement map is used to refine, in real time, the flat faces of the object by adding the geometric detail missing from the low-poly models. The accuracy of the resulting geometry is progressively refined based on the distance from the viewing point, so the result is like a continuous level of detail, the only difference being that there is no need to create different 3D models for one and the same object. All geometric detail is calculated in real time according to the displacement map. This approach can be used in Unity, a real-time 3D engine originally designed for developing computer games. It provides a powerful rendering engine, fully integrated with a complete set of intuitive tools and rapid workflows that allow users to easily create interactive 3D content. With the release of Unity 4.0, new rendering features were added, including DirectX 11 support. Real-time tessellation is a technique that can be applied using such technology. Since the displacement and the resulting geometry are calculated by the GPU, the time-based execution cost of this technique is very low.
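
    The complementary role of the two texture maps can be illustrated by the standard derivation of a tangent-space normal map from a displacement (height) map using finite differences. The snippet below is a generic sketch, not the authors' pipeline; the height field and strength factor are placeholders.

    ```python
    # Derive a tangent-space normal map from a height (displacement) map by
    # finite differences; the height field here is a synthetic placeholder.
    import numpy as np

    rng = np.random.default_rng(7)
    height = rng.random((256, 256))                 # displacement map in [0, 1]
    strength = 2.0                                  # how pronounced the detail

    # Gradients of the height field (np.gradient returns d/drow, d/dcol)
    dh_dy, dh_dx = np.gradient(height)

    # Per-pixel normal: (-dh/dx, -dh/dy, 1), normalized
    nx, ny, nz = -strength * dh_dx, -strength * dh_dy, np.ones_like(height)
    norm = np.sqrt(nx**2 + ny**2 + nz**2)
    normals = np.stack([nx / norm, ny / norm, nz / norm], axis=-1)

    # Encode into the usual 8-bit RGB normal-map convention ([-1,1] -> [0,255])
    normal_map = ((normals * 0.5 + 0.5) * 255).astype(np.uint8)
    print(normal_map.shape, normal_map.dtype)
    ```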

  19. A Bayesian-Based Novel Methodology to Generate Reliable Site Response Mapping Sensitive to Data Uncertainties

    NASA Astrophysics Data System (ADS)

    Chakraborty, A.; Goto, H.

    2017-12-01

    The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas far inland because of site amplification. The Furukawa district in Miyagi Prefecture, Japan, recorded significant spatial differences in ground motion even at sub-kilometer scales, and the site responses in the damage zone far exceeded the levels in the hazard maps. One reason for this mismatch is that such maps follow only the mean value at the measurement locations, with no regard to data uncertainties, and thus are not always reliable. Our research objective is to develop a methodology that incorporates data uncertainties into mapping and to propose a reliable map. The methodology is based on hierarchical Bayesian modeling of normally distributed site responses in space, where the mean (μ), site-specific variance (σ²) and between-site variance (s²) parameters are treated as unknowns with a prior distribution. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially autocorrelated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. Inferences on the unknown parameters are made from the posterior distribution using Markov chain Monte Carlo methods. The goal is to find reliable estimates of μ that are sensitive to uncertainties. During initial trials, we observed that the τ (= 1/s²) parameter of the CAR prior controls the μ estimation. Using the constraint s = 1/(k×σ), five spatial models with varying k-values were created. We define reliability as measured by the model likelihood and propose the maximum-likelihood model as highly reliable. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum-likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model mean at higher uncertainties (Fig. 1). This result is significant, as it successfully incorporates the effect of data uncertainties in mapping, and the approach can be applied to any research field that uses mapping techniques. The methodology is now being applied to real records from a very dense seismic network in the Furukawa district, Miyagi Prefecture, Japan, to generate a reliable map of the site responses.
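
    The reported shrinkage behaviour, following the site mean where data are precise and converging to the model mean where they are noisy, falls out of a basic hierarchical normal model. The sketch below uses Gibbs sampling with known site-level variances and a fixed between-site variance; the CAR spatial prior and the s = 1/(k×σ) constraint of the actual study are omitted for brevity.

    ```python
    # Gibbs sampler for a simplified hierarchical normal model:
    # y_ij ~ N(mu_i, sigma_i^2), mu_i ~ N(mu0, s^2). All values synthetic.
    import numpy as np

    rng = np.random.default_rng(8)
    m, n_events = 50, 150
    true_mu = rng.normal(0.0, 1.0, m)               # true site responses
    sigma = rng.uniform(0.2, 3.0, m)                # site-specific noise s.d.
    y = true_mu[None, :] + rng.normal(0, 1, (n_events, m)) * sigma

    ybar, s2 = y.mean(axis=0), 0.5**2               # s2: between-site variance
    mu, mu0 = ybar.copy(), 0.0
    draws = []
    for it in range(2000):
        # mu_i | mu0: precision-weighted blend of data mean and prior mean
        prec = n_events / sigma**2 + 1.0 / s2
        mean = (n_events * ybar / sigma**2 + mu0 / s2) / prec
        mu = rng.normal(mean, 1.0 / np.sqrt(prec))
        # mu0 | mu: flat prior on the model-wide mean
        mu0 = rng.normal(mu.mean(), np.sqrt(s2 / m))
        if it >= 500:
            draws.append(mu.copy())

    post = np.mean(draws, axis=0)
    # Lowest-sigma site stays near its data mean; highest-sigma site shrinks
    print(np.c_[sigma, ybar, post][np.argsort(sigma)][[0, -1]].round(2))
    ```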

  20. Using a detailed uncertainty analysis to adjust mapped rates of forest disturbance derived from Landsat time series data (Invited)

    NASA Astrophysics Data System (ADS)

    Cohen, W. B.; Yang, Z.; Stehman, S.; Huang, C.; Healey, S. P.

    2013-12-01

    Forest ecosystem process models require spatially and temporally detailed disturbance data to accurately predict fluxes of carbon or changes in biodiversity over time. A variety of new mapping algorithms using dense Landsat time series show great promise for providing disturbance characterizations at an annual time step. These algorithms provide unprecedented detail with respect to the timing, magnitude, and duration of individual disturbance events, as well as causal agent. But all maps have error, and disturbance maps in particular can have significant omission error because many disturbances are relatively subtle. Because disturbance, although ubiquitous, can be a spatially rare event in any given year, omission errors can have a great impact on mapped rates. Using a high-quality reference disturbance dataset, it is possible not only to characterize map errors but also to adjust mapped disturbance rates to provide unbiased rate estimates with confidence intervals. We present results from a national-level disturbance mapping project (the North American Forest Dynamics project, NAFD) based on the Vegetation Change Tracker (VCT) with annual Landsat time series, together with uncertainty analyses that consist of three basic components: response design, statistical design, and analyses. The response design describes the reference data collection in terms of the tool used (TimeSync), a formal description of interpretations, and the approach for data collection. The statistical design defines the selection of plot samples to be interpreted, whether stratification is used, and the sample size. Analyses involve derivation of standard agreement matrices between the map and the reference data, and the use of inclusion probabilities and post-stratification to adjust mapped disturbance rates. Because NAFD uses annual time series, both mapped and adjusted rates are provided at an annual time step from ~1985 to present. Preliminary evaluations indicate that VCT captures most of the higher-intensity disturbances, but that many of the lower-intensity disturbances (thinnings, stress related to insects and disease, etc.) are missed. Because lower-intensity disturbances are a large proportion of the total set of disturbances, adjusting mapped disturbance rates to include them can be important for ecosystem process models. The described statistical disturbance rate adjustments are aspatial in nature, such that the basic underlying map is unchanged. For spatially explicit ecosystem modeling, such adjustments, although important, can be difficult to incorporate directly. One approach to improving the basic underlying map is an ensemble modeling approach that uses several complementary maps, each derived from a different algorithm and having its own strengths and weaknesses relative to disturbance magnitude and causal agent. We will present results from a pilot study associated with the Landscape Change Monitoring System (LCMS), an emerging national-level program that builds upon NAFD and the well-established Monitoring Trends in Burn Severity (MTBS) program.
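
    The rate-adjustment step follows standard stratified estimation from accuracy-assessment practice: each map stratum's reference-derived disturbance proportion is weighted by the stratum's area share, which corrects for omission and commission errors and yields a standard error for the adjusted rate. The numbers below are invented for illustration, not NAFD results.

    ```python
    # Post-stratified estimate of annual disturbance rate from a reference
    # sample; strata weights, sample sizes, and proportions are hypothetical.
    import numpy as np

    # Map strata: mapped "disturbed" and mapped "undisturbed" for one year
    W = np.array([0.04, 0.96])    # stratum weights = mapped area proportions
    n = np.array([300, 700])      # reference samples interpreted per stratum
    # Fraction of reference samples in each stratum that are truly disturbed
    # (commission error shows up in stratum 0, omission error in stratum 1)
    p = np.array([0.85, 0.02])

    # Unbiased rate estimate and its standard error (stratified estimator)
    p_hat = np.sum(W * p)
    se = np.sqrt(np.sum(W**2 * p * (1 - p) / (n - 1)))

    print(f"mapped rate: {W[0]:.3f}")
    print(f"adjusted rate: {p_hat:.3f} +/- {1.96 * se:.3f} (95% CI)")
    ```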
