Sample records for samplex automatic mapping

  1. Solar cycle dynamics of solar, magnetospheric, and heliospheric particles, and long-term atmospheric coupling: SAMPEX

    NASA Technical Reports Server (NTRS)

    Mason, G. M. (Principal Investigator); Hamilton, D. C.; Blake, J. B.; Mewaldt, R. A.; Stone, E. C.; Baker, D. N.; von Rosenvinge, T. T.; Callis, L. B.; Klecker, B.; Hovestadt, D.

    1996-01-01

    This report summarizes science analysis activities by the SAMPEX mission science team during the period July 1, 1995 through July 1, 1996. Bibliographic entries for 1995 and 1996 to date (July 1996) are included. The SAMPEX science team was extremely active, with 20 articles published or submitted to refereed journals, 18 papers published in their entirety in Conference Proceedings, and 53 contributed papers, seminars, and miscellaneous presentations. The bibliography at the end of this report constitutes the primary description of the research activity. Science highlights are given under the major activity headings of anomalous cosmic rays, solar energetic particles, magnetospheric precipitating electrons, trapped H and He isotopes, and data analysis activities.

  2. Automatically Generated Vegetation Density Maps with LiDAR Survey for Orienteering Purpose

    NASA Astrophysics Data System (ADS)

    Petrovič, Dušan

    2018-05-01

    The focus of our research was to automatically generate the most adequate vegetation density maps for orienteering purposes. The Karttapullautin application, which requires LiDAR data as input, was used for the automated generation of vegetation density maps. A part of the orienteering map in the area of Kazlje-Tomaj was used to compare the graphical display of vegetation density. With different parameter settings in the Karttapullautin application we changed how the vegetation density of the automatically generated map was presented, and tried to match it as closely as possible to the orienteering map of Kazlje-Tomaj. By comparing the maps created with different settings, we also propose the most suitable parameter settings for automatically generating maps of other areas.
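
    The density raster at the core of such tools can be pictured in a few lines. The following is a minimal sketch, assuming (hypothetically) that ground elevation is available as a callable and that returns between 0.5 m and 5 m above ground count as vegetation; Karttapullautin's actual rules and thresholds differ.

      import numpy as np

      def vegetation_density_grid(points, ground_z, cell=5.0):
          """points: (N, 3) LiDAR returns; ground_z: callable (x, y) -> ground
          elevation (assumed given, e.g. from a DTM); cell size in metres."""
          heights = points[:, 2] - ground_z(points[:, 0], points[:, 1])
          veg = points[(heights > 0.5) & (heights < 5.0)]   # assumed vegetation band
          x0, y0 = points[:, 0].min(), points[:, 1].min()
          ix = ((veg[:, 0] - x0) // cell).astype(int)
          iy = ((veg[:, 1] - y0) // cell).astype(int)
          grid = np.zeros((ix.max() + 1, iy.max() + 1))
          np.add.at(grid, (ix, iy), 1.0)                    # returns per cell
          return grid / max(grid.max(), 1.0)                # normalised 0..1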

  3. Automatic metro map layout using multicriteria optimization.

    PubMed

    Stott, Jonathan; Rodgers, Peter; Martínez-Ovando, Juan Carlos; Walker, Stephen G

    2011-01-01

    This paper describes an automatic mechanism for drawing metro maps. We apply multicriteria optimization to find effective placement of stations with a good line layout and to label the map unambiguously. A number of metrics are defined, which are used in a weighted sum to find a fitness value for a layout of the map. A hill climbing optimizer is used to reduce the fitness value and find improved map layouts. To avoid local minima, we apply clustering techniques to the map: the hill climber moves both stations and clusters when finding improved layouts. We show the method applied to a number of metro maps, and describe an empirical study that provides some quantitative evidence that automatically drawn metro maps can help users find routes more efficiently than either published maps or undistorted maps. Moreover, we found that, in these cases, study subjects indicate a preference for automatically drawn maps over the alternatives.
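
    The optimization loop described here condenses to a weighted-sum hill climber. This is a minimal sketch with user-supplied metric and neighbour functions (hypothetical names, not the paper's code); the paper additionally moves whole clusters of stations to escape local minima.

      import random

      def fitness(layout, metrics, weights):
          # weighted sum of criteria; lower values mean a better layout
          return sum(w * m(layout) for m, w in zip(metrics, weights))

      def hill_climb(layout, neighbours, metrics, weights, iters=10000):
          best, best_f = layout, fitness(layout, metrics, weights)
          for _ in range(iters):
              cand = random.choice(neighbours(best))  # e.g. move a station or a cluster
              f = fitness(cand, metrics, weights)
              if f < best_f:                          # keep only improvements
                  best, best_f = cand, f
          return best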

  4. Functional-to-form mapping for assembly design automation

    NASA Astrophysics Data System (ADS)

    Xu, Z. G.; Liu, W. M.; Shen, W. D.; Yang, D. Y.; Liu, T. T.

    2017-11-01

    Assembly-level function-to-form mapping is the most effective procedure towards design automation. The research work mainly includes the assembly-level function definitions, the product network model, and the two-step mapping mechanism. The function-to-form mapping is divided into two steps: the first-step mapping from function to behavior, and the second-step mapping from behavior to structure. After the first-step mapping, the three-dimensional transmission chain (or 3D sketch) is studied, and feasible design computing tools are developed. The mapping procedure is relatively easy to implement interactively, but quite difficult to complete automatically, so manual, semi-automatic, automatic, and interactive modification of the mapping model are studied. A function-to-form mapping process for a mechanical hand is illustrated to verify the design methodologies.

  5. Automatic crown cover mapping to improve forest inventory

    Treesearch

    Claude Vidal; Jean-Guy Boureau; Nicolas Robert; Nicolas Py; Josiane Zerubia; Xavier Descombes; Guillaume Perrin

    2009-01-01

    To automatically analyze near infrared aerial photographs, the French National Institute for Research in Computer Science and Control developed together with the French National Forest Inventory (NFI) a method for automatic crown cover mapping. This method uses a Reversible Jump Markov Chain Monte Carlo algorithm to locate the crowns and describe them using ellipses or...

  6. Semi-automatic mapping of cultural heritage from airborne laser scanning using deep learning

    NASA Astrophysics Data System (ADS)

    Trier, Øivind Due; Salberg, Arnt-Børre; Pilø, Lars Holger; Tonning, Christer; Johansen, Hans Marius; Aarsten, Dagrun

    2016-04-01

    This paper proposes to use deep learning to improve semi-automatic mapping of cultural heritage from airborne laser scanning (ALS) data. Automatic detection methods, based on traditional pattern recognition, have been applied in a number of cultural heritage mapping projects in Norway for the past five years. Automatic detection of pits and heaps has been combined with visual interpretation of the ALS data for the mapping of deer hunting systems, iron production sites, grave mounds and charcoal kilns. However, the performance of the automatic detection methods varies substantially between ALS datasets. For the mapping of deer hunting systems on flat gravel and sand sediment deposits, the automatic detection results were almost perfect. However, some false detections appeared in the terrain outside of the sediment deposits. These could be explained by other pit-like landscape features, such as parts of river courses, spaces between boulders, and modern terrain modifications. These were easy to spot during visual interpretation, and the number of missed individual pitfall traps was still low. For the mapping of grave mounds, the automatic method produced a large number of false detections, reducing the usefulness of the semi-automatic approach. The mound structure is a very common natural terrain feature, and the grave mounds are less distinct in shape than the pitfall traps. Still, applying automatic mound detection to an entire municipality did lead to the discovery of a new Iron Age grave field with more than 15 individual mounds. Automatic mound detection also proved to be useful for a detailed re-mapping of Norway's largest Iron Age graveyard, which contains almost 1000 individual graves. Combined pit and mound detection has been applied to the mapping of more than 1000 charcoal kilns that were used by an ironworks 350-200 years ago. The majority of charcoal kilns were indirectly detected as either pits on the circumference, a central mound, or both. However, kilns with a flat interior and a shallow ditch along the circumference were often missed by the automatic detection method. The success of automatic detection seems to depend on two factors: (1) the density of ALS ground hits on the cultural heritage structures being sought, and (2) to what extent these structures stand out from natural terrain structures. The first factor may, to some extent, be improved by using a higher number of ALS pulses per square meter. The second factor is difficult to change, and also highlights another challenge: how to make a general automatic method that is applicable in all types of terrain within a country. The mixed experience with traditional pattern recognition for semi-automatic mapping of cultural heritage led us to consider deep learning as an alternative approach. The main principle is that a general feature detector is trained on a large image database and then tailored to the specific task using a modest number of images of true and false examples of the features being sought. Results of using deep learning are compared with previous results obtained using traditional pattern recognition.

  7. Initial Experience With Ultra High-Density Mapping of Human Right Atria.

    PubMed

    Bollmann, Andreas; Hilbert, Sebastian; John, Silke; Kosiuk, Jedrzej; Hindricks, Gerhard

    2016-02-01

    Recently, an automatic, high-resolution mapping system has been presented to accurately and quickly identify right atrial geometry and activation patterns in animals, but human data are lacking. This study aims to assess the clinical feasibility and accuracy of high-density electroanatomical mapping of various right atrial (RA) arrhythmias. Electroanatomical maps of the RA (35 partial and 24 complete) were created in 23 patients using a novel mini-basket catheter with 64 electrodes and automatic electrogram annotation. Median acquisition time was 6:43 minutes (0:39-23:05 minutes), with shorter times for partial (4.03 ± 4.13 minutes) than for complete maps (9.41 ± 4.92 minutes). During mapping, 3,236 (710-16,306) data points were automatically annotated without manual correction. Maps obtained during sinus rhythm created geometry consistent with CT imaging and demonstrated activation originating at the middle to superior crista terminalis, while maps during coronary sinus (CS) pacing showed right atrial activation beginning at the infero-septal region. Activation patterns were consistent with cavotricuspid isthmus-dependent atrial flutter (n = 4), complex reentry tachycardia (n = 1), or ectopic atrial tachycardia (n = 2). His bundle and fractionated potentials in the slow pathway region were automatically detected in all patients. Ablation of the cavotricuspid isthmus (n = 9), the atrio-ventricular node (n = 2), atrial ectopy (n = 2), and the slow pathway (n = 3) was successfully and safely performed. RA mapping with this automatic high-density mapping system is fast, feasible, and safe. It is possible to reproducibly identify propagation of atrial activation during sinus rhythm, various tachycardias, and also complex reentrant arrhythmias.

  8. Mapping dominant runoff processes: an evaluation of different approaches using similarity measures and synthetic runoff simulations

    NASA Astrophysics Data System (ADS)

    Antonetti, Manuel; Buss, Rahel; Scherrer, Simon; Margreth, Michael; Zappa, Massimiliano

    2016-07-01

    The identification of landscapes with similar hydrological behaviour is useful for runoff and flood predictions in small ungauged catchments. An established method for landscape classification is based on the concept of the dominant runoff process (DRP). The various DRP-mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexities of automatic DRP-mapping approaches affect hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison, and a deviation map were derived. The automatically derived DRP maps were used in synthetic runoff simulations with an adapted version of the PREVAH hydrological model, and the simulation results were compared with those from simulations using the reference maps. The DRP maps derived with the automatic approach with the highest complexity and data requirements were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both the extent and the distribution of the DRPs. The runoff simulations derived from the simpler DRP maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. Furthermore, we argue that expert knowledge should be used not only for model building and constraining, but also in the landscape classification phase.

  9. Assessing the impact of graphical quality on automatic text recognition in digital maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang

    2016-08-01

    Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With the advancement in computational power and algorithm design, map processing systems have been considerably improved over the last decade. However, the fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results would not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system accordingly. These issues could slow down the further advancement of map processing techniques, as such unsuccessful attempts create a discouraged user community, and less sophisticated tools would be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (that can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.

  10. Tools for model-building with cryo-EM maps

    DOE PAGES

    Terwilliger, Thomas Charles

    2018-01-01

    There are new tools available to you in Phenix for interpreting cryo-EM maps. You can automatically sharpen (or blur) a map with phenix.auto_sharpen and you can segment a map with phenix.segment_and_split_map. If you have overlapping partial models for a map, you can merge them with phenix.combine_models. If you have a protein-RNA complex and protein chains have been accidentally built in the RNA region, you can try to remove them with phenix.remove_poor_fragments. You can put these together and automatically sharpen, segment and build a map with phenix.map_to_model.
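
    The abstract names the individual tools; a pipeline over them might be driven as below. This is a sketch only: the file names are placeholders, and the exact options of each Phenix command should be taken from the Phenix documentation.

      import subprocess

      # Placeholders: "map.mrc", "map_sharpened.mrc" and "sequence.fasta" are
      # assumed file names, not outputs documented by the abstract.
      subprocess.run(["phenix.auto_sharpen", "map.mrc"], check=True)
      subprocess.run(["phenix.segment_and_split_map", "map_sharpened.mrc"], check=True)
      subprocess.run(["phenix.map_to_model", "map_sharpened.mrc", "sequence.fasta"], check=True)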

  12. Automatic and Controlled Response Inhibition: Associative Learning in the Go/No-Go and Stop-Signal Paradigms

    PubMed Central

    Verbruggen, Frederick; Logan, Gordon D.

    2008-01-01

    In five experiments, the authors examined the development of automatic response inhibition in the go/no-go paradigm and a modified version of the stop-signal paradigm. They hypothesized that automatic response inhibition may develop over practice when stimuli are consistently associated with stopping. All five experiments consisted of a training phase and a test phase in which the stimulus mapping was reversed for a subset of the stimuli. Consistent with the automatic-inhibition hypothesis, the authors found that responding in the test phase was slowed when the stimulus had been consistently associated with stopping in the training phase. In addition, they found that response inhibition benefited from consistent stimulus-stop associations. These findings suggest that response inhibition may rely on the retrieval of stimulus-stop associations after practice with consistent stimulus-stop mappings. Stimulus-stop mapping is typically consistent in the go/no-go paradigm, so automatic inhibition is likely to occur. However, stimulus-stop mapping is typically inconsistent in the stop-signal paradigm, so automatic inhibition is unlikely to occur. Thus, the results suggest that the two paradigms are not equivalent because they allow different kinds of response inhibition. PMID:18999358

  13. Automatic Scaffolding and Measurement of Concept Mapping for EFL Students to Write Summaries

    ERIC Educational Resources Information Center

    Yang, Yu-Fen

    2015-01-01

    An incorrect concept map may obstruct a student's comprehension when writing summaries if they are unable to grasp key concepts when reading texts. The purpose of this study was to investigate the effects of automatic scaffolding and measurement of three-layer concept maps on improving university students' writing summaries. The automatic…

  14. Automatic photointerpretation for land use management in Minnesota

    NASA Technical Reports Server (NTRS)

    Swanlund, G. D. (Principal Investigator); Kirvida, L.; Cheung, M.; Pile, D.; Zirkle, R.

    1974-01-01

    The author has identified the following significant results. Automatic photointerpretation techniques were utilized to evaluate the feasibility of data for land use management. It was shown that ERTS-1 MSS data can produce thematic maps of adequate resolution and accuracy to update land use maps. In particular, five typical land use areas were mapped with classification accuracies ranging from 77% to over 90%.

  15. Challenges and Opportunities: One Stop Processing of Automatic Large-Scale Base Map Production Using Airborne LIDAR Data Within GIS Environment. Case Study: Makassar City, Indonesia

    NASA Astrophysics Data System (ADS)

    Widyaningrum, E.; Gorte, B. G. H.

    2017-05-01

    LiDAR data acquisition is recognized as one of the fastest solutions for providing basis data for large-scale topographic base maps worldwide. Automatic LiDAR processing is believed to be one possible scheme to accelerate large-scale topographic base map provision by the Geospatial Information Agency in Indonesia. As a progressive advanced technology, Geographic Information Systems (GIS) open possibilities for automatic geospatial data processing and analysis. Considering further needs for spatial data sharing and integration, one-stop processing of LiDAR data in a GIS environment is considered a powerful and efficient approach for base map provision. The quality of the automated topographic base map is assessed and analysed based on its completeness, correctness, quality, and the confusion matrix.

  16. Automatic categorization of land-water cover types of the Green Swamp, Florida, using Skylab multispectral scanner (S-192) data

    NASA Technical Reports Server (NTRS)

    Coker, A. E.; Higer, A. L.; Rogers, R. H.; Shah, N. J.; Reed, L. E.; Walker, S.

    1975-01-01

    The techniques used and the results achieved in the successful application of Skylab Multispectral Scanner (EREP S-192) high-density digital tape data for the automatic categorizing and mapping of land-water cover types in the Green Swamp of Florida were summarized. Data was provided from Skylab pass number 10 on 13 June 1973. Significant results achieved included the automatic mapping of a nine-category and a three-category land-water cover map of the Green Swamp. The land-water cover map was used to make interpretations of a hydrologic condition in the Green Swamp. This type of use marks a significant breakthrough in the processing and utilization of EREP S-192 data.

  17. Automatic spatiotemporal matching of detected pleural thickenings

    NASA Astrophysics Data System (ADS)

    Chaisaowong, Kraisorn; Keller, Simon Kai; Kraus, Thomas

    2014-01-01

    Pleural thickenings can be found in the lungs of asbestos-exposed patients. Non-invasive diagnosis including CT imaging can detect aggressive malignant pleural mesothelioma in its early stage. In order to create a quantitative documentation of automatically detected pleural thickenings over time, the differences in volume and thickness of the detected thickenings have to be calculated. Physicians usually estimate the change of each thickening via visual comparison, which provides neither quantitative nor qualitative measures. In this work, techniques for automatic spatiotemporal matching of the detected pleural thickenings at two points in time, based on semi-automatic registration, have been developed, implemented, and tested, so that the same thickening can be compared fully automatically. As a result, the mapping technique using principal component analysis turns out to be more advantageous than the feature-based mapping using the centroid and mean Hounsfield units of each thickening, since the sensitivity was improved from 42.19% to 98.46%, while the accuracy of feature-based mapping is only slightly higher (84.38% versus 76.19%).
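
    The idea of PCA-based matching can be sketched as follows: describe each segmented thickening by its centroid and the eigenvalues of its coordinate covariance, then match descriptors across the two time points. This is an illustrative sketch, not the authors' implementation; the registration step is assumed to have already aligned the two scans.

      import numpy as np

      def pca_descriptor(voxels):
          """voxels: (N, 3) coordinates of one thickening (post-registration)."""
          c = voxels.mean(axis=0)
          evals = np.linalg.eigvalsh(np.cov((voxels - c).T))
          return np.concatenate([c, np.sort(evals)[::-1]])  # centroid + shape

      def match_thickenings(set_t0, set_t1):
          """Greedy nearest-descriptor matching between two time points."""
          d0 = [pca_descriptor(v) for v in set_t0]
          d1 = [pca_descriptor(v) for v in set_t1]
          return [(i, int(np.argmin([np.linalg.norm(a - b) for b in d1])))
                  for i, a in enumerate(d0)]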

  18. Accelerometer-based automatic voice onset detection in speech mapping with navigated repetitive transcranial magnetic stimulation.

    PubMed

    Vitikainen, Anne-Mari; Mäkelä, Elina; Lioumis, Pantelis; Jousmäki, Veikko; Mäkelä, Jyrki P

    2015-09-30

    The use of navigated repetitive transcranial magnetic stimulation (rTMS) in mapping of speech-related brain areas has recently been shown to be useful in the preoperative workup of epilepsy and tumor patients. However, substantial inter- and intraobserver variability and non-optimal replicability of the rTMS results have been reported, and a need for additional development of the methodology is recognized. In TMS motor cortex mappings the evoked responses can be quantitatively monitored by electromyographic recordings; however, no such easily available setup exists for speech mappings. We present an accelerometer-based setup for detection of vocalization-related larynx vibrations, combined with an automatic routine for voice onset detection, for rTMS speech mapping using naming. The results produced by the automatic routine were compared with manually reviewed video-recordings. The new method was applied in routine navigated rTMS speech mapping for 12 consecutive patients during preoperative workup for epilepsy or tumor surgery. The automatic routine correctly detected 96% of the voice onsets, resulting in 96% sensitivity and 71% specificity. The majority (63%) of the misdetections were related to visible throat movements, extra voices before the response, or delayed naming of the previous stimuli. The no-response errors were correctly detected in 88% of events. The proposed setup for automatic detection of voice onsets provides quantitative additional data for analysis of the rTMS-induced speech response modifications. The objectively defined speech response latencies increase the repeatability, reliability and stratification of the rTMS results.
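
    A voice onset detector of this kind reduces to finding where the short-time energy of the accelerometer signal first exceeds a noise-derived threshold. The sketch below makes assumed choices (10 ms windows, a median-based baseline, a factor of 5); the published routine's parameters are not given here.

      import numpy as np

      def voice_onset(signal, fs, win_s=0.01, k=5.0):
          """Return the onset time in seconds, or None if no onset is found."""
          win = max(1, int(win_s * fs))
          energy = np.convolve(signal.astype(float) ** 2,
                               np.ones(win) / win, mode="same")
          baseline = np.median(energy)                # robust noise estimate
          above = np.nonzero(energy > k * baseline)[0]
          return above[0] / fs if above.size else None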

  19. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

    We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, S_AB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.

  20. Automatic transfer function design for medical visualization using visibility distributions and projective color mapping.

    PubMed

    Cai, Lile; Tay, Wei-Liang; Nguyen, Binh P; Chui, Chee-Kong; Ong, Sim-Heng

    2013-01-01

    Transfer functions play a key role in volume rendering of medical data, but transfer function manipulation is unintuitive and can be time-consuming; achieving an optimal visualization of patient anatomy or pathology is difficult. To overcome this problem, we present a system for automatic transfer function design based on visibility distribution and projective color mapping. Instead of assigning opacity directly based on voxel intensity and gradient magnitude, the opacity transfer function is automatically derived by matching the observed visibility distribution to a target visibility distribution. An automatic color assignment scheme based on projective mapping is proposed to assign colors that allow for the visual discrimination of different structures, while also reflecting the degree of similarity between them. When our method was tested on several medical volumetric datasets, the key structures within the volume were clearly visualized with minimal user intervention.
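
    The visibility-matching step can be pictured as an iterative update of a binned opacity function until the rendered visibility histogram approaches the target. A minimal sketch, assuming the renderer exposes a function that returns the observed visibility distribution for a given opacity table (hypothetical interface):

      import numpy as np

      def fit_opacity(target_vis, render_visibility, bins=64, iters=50, lr=0.5):
          """target_vis: length-`bins` distribution summing to 1;
          render_visibility(opacity) -> observed distribution (assumed given)."""
          opacity = np.full(bins, 0.1)
          for _ in range(iters):
              observed = render_visibility(opacity)
              # raise opacity where structures are under-visible, lower it otherwise
              opacity *= (target_vis / np.maximum(observed, 1e-6)) ** lr
              opacity = np.clip(opacity, 0.0, 1.0)
          return opacity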

  1. Taxonomic classification of soils using digital information from LANDSAT data. Huayllamarca and Eucaliptus areas. M.S. Thesis - Bolivia Univ.

    NASA Technical Reports Server (NTRS)

    Quiroga, S. Q.

    1977-01-01

    The applicability of LANDSAT digital information to soil mapping is described. A compilation of all cartographic information and bibliography of the study area is made. LANDSAT MSS images on a scale of 1:250,000 are interpreted and a physiographic map with legend is prepared. The study area is inspected and a selection of the sample areas is made. A digital map of the different soil units is produced and the computer mapping units are checked against the soil units encountered in the field. The soil boundaries obtained by automatic mapping were not substantially changed by field work. The accuracy of the automatic mapping is rather high.

  2. Architecture for Cyber Defense Simulator in Military Applications

    DTIC Science & Technology

    2013-06-01

    [18] Goodall, J.R.; D'Amico, A.; Kopylec, J.K., "Camus: Automatically mapping Cyber Assets to Missions and Users," Military... ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5977648&isnumber=5977431

  3. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models from digital photographs with software packages such as Maxon Cinema 4D, Autodesk 3ds Max or Maya still requires a complex and time-consuming workflow. Procedures for automatic texture mapping of 3D models are therefore in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and is developed in the programming language C++. The studies show that visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results for automatic texture mapping. To overcome the visibility problem, the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  4. A State-of-the-Art Assessment of Automatic Name Placement.

    DTIC Science & Technology

    1986-08-01

    develop an automatic name placement system. [11] Balodis, M., "Positioning of typography on maps," Proc. ACSM Fall Convention, Salt Lake City, Utah, Sept. 1983, pp. 28-44. This article deals with the selection of typography for maps. It describes psycho-visual experiments with groups of individuals to... Polytechnic Institute, Troy, NY 12181, May 1984. (Also available as Tech. Rept. IPL-TR-063.)

  5. Automatic Collision Avoidance Technology (ACAT)

    NASA Technical Reports Server (NTRS)

    Swihart, Donald E.; Skoog, Mark A.

    2007-01-01

    This document presents two views of the Automatic Collision Avoidance Technology (ACAT). One viewgraph presentation reviews the development and system design of ACAT. Two types of ACAT exist: Automatic Ground Collision Avoidance (AGCAS) and Automatic Air Collision Avoidance (AACAS). The AGCAS uses Digital Terrain Elevation Data (DTED) for mapping functions, and uses navigation data to place the aircraft on the map. It then scans the DTED in front of and around the aircraft and uses the future aircraft trajectory (5g) to provide an automatic fly-up maneuver when required. The AACAS uses a data link to determine position and closing rate. It contains several canned maneuvers to avoid collision. Automatic maneuvers can occur at the last instant, and both aircraft maneuver when using the data link. The system can use a sensor in place of the data link. The second viewgraph presentation reviews the development of a flight test and an evaluation of the test. A review of the operation and a comparison of the AGCAS with a pilot's performance are given, and the same review is given for the AACAS.

  6. Automatic map generalisation from research to production

    NASA Astrophysics Data System (ADS)

    Nyberg, Rose; Johansson, Mikael; Zhang, Yang

    2018-05-01

    The manual work of map generalisation is known to be a complex and time-consuming task. With the development of technology and societies, the demands for more flexible map products with higher quality are growing. The Swedish mapping, cadastral and land registration authority Lantmäteriet has manual production lines for databases in five different scales: 1 : 10 000 (SE10), 1 : 50 000 (SE50), 1 : 100 000 (SE100), 1 : 250 000 (SE250) and 1 : 1 million (SE1M). To streamline this work, Lantmäteriet started a project to automatically generalise geographic information. The planned timespan for the project is 2015-2022. Below, the project background is described together with the methods for automatic generalisation. The paper concludes with a description of results and conclusions.

  7. ActionMap: A web-based software that automates loci assignments to framework maps.

    PubMed

    Albini, Guillaume; Falque, Matthieu; Joets, Johann

    2003-07-01

    Genetic linkage computation may be a repetitive and time consuming task, especially when numerous loci are assigned to a framework map. We thus developed ActionMap, a web-based software that automates genetic mapping on a fixed framework map without adding the new markers to the map. Using this tool, hundreds of loci may be automatically assigned to the framework in a single process. ActionMap was initially developed to map numerous ESTs with a small plant mapping population and is limited to inbred lines and backcrosses. ActionMap is highly configurable and consists of Perl and PHP scripts that automate command steps for the MapMaker program. A set of web forms were designed for data import and mapping settings. Results of automatic mapping can be displayed as tables or drawings of maps and may be exported. The user may create personal access-restricted projects to store raw data, settings and mapping results. All data may be edited, updated or deleted. ActionMap may be used either online or downloaded for free (http://moulon.inra.fr/~bioinfo/).

  9. Elementary maps on nest algebras

    NASA Astrophysics Data System (ADS)

    Li, Pengtong

    2006-08-01

    Let A and B be algebras and let M : A → B and M* : B → A be maps. An elementary map of A × B is an ordered pair (M, M*) such that M(a M*(x) b) = M(a) x M(b) and M*(x M(a) y) = M*(x) a M*(y) for all a, b ∈ A and x, y ∈ B. In this paper, the general form of surjective elementary maps on standard subalgebras of nest algebras is described. In particular, such maps are automatically additive.

  10. Automatic Computer Mapping of Terrain

    NASA Technical Reports Server (NTRS)

    Smedes, H. W.

    1971-01-01

    Computer processing of 17 wavelength bands of visible, reflective infrared, and thermal infrared scanner spectrometer data, and of three wavelength bands derived from color aerial film has resulted in successful automatic computer mapping of eight or more terrain classes in a Yellowstone National Park test site. The tests involved: (1) supervised and non-supervised computer programs; (2) special preprocessing of the scanner data to reduce computer processing time and cost, and improve the accuracy; and (3) studies of the effectiveness of the proposed Earth Resources Technology Satellite (ERTS) data channels in the automatic mapping of the same terrain, based on simulations, using the same set of scanner data. The following terrain classes have been mapped with greater than 80 percent accuracy in a 12-square-mile area with 1,800 feet of relief: (1) bedrock exposures, (2) vegetated rock rubble, (3) talus, (4) glacial kame meadow, (5) glacial till meadow, (6) forest, (7) bog, and (8) water. In addition, shadows of clouds and cliffs are depicted, but were greatly reduced by using preprocessing techniques.

  11. An interoperable standard system for the automatic generation and publication of the fire risk maps based on Fire Weather Index (FWI)

    NASA Astrophysics Data System (ADS)

    Julià Selvas, Núria; Ninyerola Casals, Miquel

    2015-04-01

    An automatic system has been implemented to predict the fire risk in the Principality of Andorra, a small country located in the eastern Pyrenees mountain range, bordered by Catalonia and France; due to its location, its landscape is a set of rugged mountains with an average elevation around 2000 meters. The system is based on the Fire Weather Index (FWI), which consists of different components, each one measuring a different aspect of the fire danger, calculated from the values of the weather variables at midday. CENMA (Centre d'Estudis de la Neu i de la Muntanya d'Andorra) has a network of around 10 automatic meteorological stations, located in different places, peaks and valleys, that measure weather data like relative humidity, wind direction and speed, surface temperature, rainfall and snow cover every ten minutes; these data are sent daily and automatically to the implemented system, where they are processed to filter out incorrect measurements and to homogenize measurement units. The data are then used to calculate all components of the FWI at midday at the level of each station, creating a database with the values of the homogeneous measurements and the FWI components for each weather station. In order to extend and model these data over the whole Andorran territory and to obtain a continuous map, an interpolation method based on a multiple regression with spline interpolation of the residuals has been implemented. This interpolation considers the FWI data as well as other relevant predictors such as latitude, altitude, global solar radiation and sea distance. The obtained values (maps) are validated using a leave-one-out cross-validation method. The discrete and continuous maps are rendered as tiled raster maps and published in a web portal conforming to the Web Map Service (WMS) standard of the Open Geospatial Consortium (OGC). Metadata and other reference maps (fuel maps, topographic maps, etc.) are also available from this geoportal.
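
    The interpolation scheme (multiple regression on terrain predictors plus spline interpolation of the residuals) can be sketched with a least-squares fit and a thin-plate-spline interpolator. Array names are illustrative; the operational system's predictor set is the one listed above.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def interpolate_fwi(xy, fwi, predictors, grid_xy, grid_predictors):
          """xy: (N, 2) station coordinates; predictors: (N, P) values such as
          altitude, latitude, solar radiation, sea distance; grid_*: the same
          for every target grid cell."""
          A = np.column_stack([np.ones(len(fwi)), predictors])
          coef, *_ = np.linalg.lstsq(A, fwi, rcond=None)     # multiple regression
          residuals = fwi - A @ coef
          spline = RBFInterpolator(xy, residuals, kernel="thin_plate_spline")
          trend = np.column_stack([np.ones(len(grid_xy)), grid_predictors]) @ coef
          return trend + spline(grid_xy)                     # trend + residual field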

  12. Automatic 3d Building Model Generations with Airborne LiDAR Data

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2017-11-01

    LiDAR systems have become more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and has major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication and mobile navigation. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, a simple and quick approach for automatic 3D building model generation is needed for the many studies which include building modelling. In this study, automatic generation of 3D building models with airborne LiDAR data is aimed at. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification uses hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules have been performed to improve the classification results using different test areas identified in the study area. The proposed approach has been tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The results obtained in the study area verified that 3D building models can be generated automatically and successfully using raw LiDAR point cloud data.

  13. Management of natural resources through automatic cartographic inventory

    NASA Technical Reports Server (NTRS)

    Rey, P. A.; Gourinard, Y.; Cambou, F. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Significant correspondence codes relating ERTS imagery to ground truth from vegetation and geology maps have been established. The use of color equidensity and color composite methods for selecting zones of equal densitometric value on ERTS imagery was perfected. The primary interest of the temporal color composite is stressed. A chain of transfer operations from ERTS imagery to the automatic mapping of natural resources was developed.

  14. Automatic Boosted Flood Mapping from Satellite Data

    NASA Technical Reports Server (NTRS)

    Coltin, Brian; McMichael, Scott; Smith, Trey; Fong, Terrence

    2016-01-01

    Numerous algorithms have been proposed to map floods from Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. However, most require human input to succeed, either to specify a threshold value or to manually annotate training data. We introduce a new algorithm based on Adaboost which effectively maps floods without any human input, allowing for a truly rapid and automatic response. The Adaboost algorithm combines multiple thresholds to achieve results comparable to state-of-the-art algorithms which do require human input. We evaluate Adaboost, as well as numerous previously proposed flood mapping algorithms, on multiple MODIS flood images, as well as on hundreds of non-flood MODIS lake images, demonstrating its effectiveness across a wide variety of conditions.
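
    Boosting over simple thresholds can be reproduced with off-the-shelf tools: AdaBoost's default base learner in scikit-learn is a depth-1 decision tree, i.e. a single threshold test. The sketch below assumes labelled training pixels are available from previously mapped scenes; it is not the paper's exact feature set.

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier

      # X: (n_pixels, n_features) per-pixel MODIS band values and ratios
      # y: (n_pixels,) labels, 0 = dry land, 1 = water (assumed available)
      def train_flood_mapper(X, y):
          clf = AdaBoostClassifier(n_estimators=50)  # combines 50 threshold stumps
          return clf.fit(X, y)

      # flood_mask = train_flood_mapper(X, y).predict(scene).reshape(height, width)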

  15. Automatic identification of IASLC-defined mediastinal lymph node stations on CT scans using multi-atlas organ segmentation

    NASA Astrophysics Data System (ADS)

    Hoffman, Joanne; Liu, Jiamin; Turkbey, Evrim; Kim, Lauren; Summers, Ronald M.

    2015-03-01

    Station-labeling of mediastinal lymph nodes is typically performed to identify the location of enlarged nodes for cancer staging. Stations are usually assigned in clinical radiology practice manually, by qualitative visual assessment on CT scans, which is time consuming and highly variable. In this paper, we developed a method that automatically recognizes the lymph node stations in thoracic CT scans based on the anatomical organs in the mediastinum. First, the trachea, lungs, and spine are automatically segmented to locate the mediastinum region. Then, eight more anatomical organs are simultaneously identified by multi-atlas segmentation. Finally, with the segmentation of those anatomical organs, we convert the text definitions of the International Association for the Study of Lung Cancer (IASLC) lymph node map into patient-specific color-coded CT image maps. Thus, a lymph node station is automatically assigned to each lymph node. We applied this system to CT scans of 86 patients with 336 mediastinal lymph nodes measuring 10 mm or greater; 84.8% of the mediastinal lymph nodes were correctly mapped to their stations.

  16. Microprocessor-controlled hemodynamics: a step towards improved efficiency and safety.

    PubMed

    Keogh, B E; Jacobs, J; Royston, D; Taylor, K M

    1989-02-01

    Manual titration of sodium nitroprusside (SNP) is widely used for treatment of hypertension following cardiac surgery. This study compared conventional manual control with control by a research prototype of an automatic infusion module based on a proportional plus integral plus derivative (PID) negative feedback loop. Two groups of coronary artery bypass patients requiring SNP for postoperative hypertension were studied prospectively. In the first group, hypertension was controlled by manual adjustment of the SNP infusion rate, and in the second, the infusion rate was controlled automatically. The actual and desired mean arterial pressures (MAP) over consecutive ten-second epochs were recorded during the period of infusion. The MAP was maintained within 10% of the desired MAP 45.8% of the time in the manual group, compared with 90.0% in the automatic group, and the mean percent error in the automatic group was significantly less than in the manual group (P < 0.01). It is concluded that adoption of such systems will result in improved patient safety and may facilitate more effective distribution of nursing staff within intensive care units.
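
    A discrete PID loop of the kind described is short to write down. The sketch below uses the study's ten-second epochs as the sampling interval; the gains are illustrative placeholders, not the prototype's tuning.

      def pid_infusion_controller(kp, ki, kd, target_map, dt=10.0):
          """Return a step function mapping measured MAP (mmHg) to an SNP
          infusion rate; gains kp, ki, kd are assumed values."""
          state = {"integral": 0.0, "prev_err": None}
          def step(measured_map):
              err = measured_map - target_map       # positive while hypertensive
              state["integral"] += err * dt
              deriv = (0.0 if state["prev_err"] is None
                       else (err - state["prev_err"]) / dt)
              state["prev_err"] = err
              out = kp * err + ki * state["integral"] + kd * deriv
              return max(0.0, out)                  # infusion rate cannot be negative
          return step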

  17. Generating Impact Maps from Automatically Detected Bomb Craters in Aerial Wartime Images Using Marked Point Processes

    NASA Astrophysics Data System (ADS)

    Kruse, Christian; Rottensteiner, Franz; Hoberg, Thorsten; Ziems, Marcel; Rebke, Julia; Heipke, Christian

    2018-04-01

    The aftermath of wartime attacks is often felt long after the war has ended, as numerous unexploded bombs may still exist in the ground. Typically, such areas are documented in so-called impact maps, which are based on the detection of bomb craters. This paper proposes a method for the automatic detection of bomb craters in aerial wartime images that were taken during the Second World War. The object model for the bomb craters is represented by ellipses. A probabilistic approach based on marked point processes determines the most likely configuration of objects within the scene. New object configurations are created by adding objects to and removing objects from the current configuration, changing object positions, and randomly modifying the ellipse parameters. Each configuration is evaluated using an energy function: high gradient magnitudes along the border of an ellipse are favored and overlapping ellipses are penalized. Reversible Jump Markov Chain Monte Carlo sampling in combination with simulated annealing provides the global energy optimum, which describes the conformance with a predefined model. To generate the impact map, a probability map is created from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites, respectively. Our results show the general potential of the method for the automatic detection of bomb craters and the automated generation of an impact map from a heterogeneous image stock.
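
    Stripped of the crater-specific energy terms, the sampler is a Metropolis accept/reject loop with a cooling temperature. A skeleton under stated assumptions (user-supplied proposal and energy functions; a full RJMCMC implementation additionally weights trans-dimensional moves):

      import math, random

      def anneal(initial, propose, energy, t0=1.0, cooling=0.9999, iters=100000):
          """propose(config) -> modified copy (birth, death, move or parameter
          change of an ellipse); energy(config) is low for configurations with
          strong edge gradients and little ellipse overlap."""
          config, e, t = initial, energy(initial), t0
          for _ in range(iters):
              cand = propose(config)
              de = energy(cand) - e
              if de < 0 or random.random() < math.exp(-de / t):  # Metropolis rule
                  config, e = cand, e + de
              t *= cooling                                       # simulated annealing
          return config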

  18. Conversion of KEGG metabolic pathways to SBGN maps including automatic layout

    PubMed Central

    2013-01-01

    Background Biologists make frequent use of databases containing large and complex biological networks. One popular database is the Kyoto Encyclopedia of Genes and Genomes (KEGG) which uses its own graphical representation and manual layout for pathways. While some general drawing conventions exist for biological networks, arbitrary graphical representations are very common. Recently, a new standard has been established for displaying biological processes, the Systems Biology Graphical Notation (SBGN), which aims to unify the look of such maps. Ideally, online repositories such as KEGG would automatically provide networks in a variety of notations including SBGN. Unfortunately, this is non‐trivial, since converting between notations may add, remove or otherwise alter map elements so that the existing layout cannot be simply reused. Results Here we describe a methodology for automatic translation of KEGG metabolic pathways into the SBGN format. We infer important properties of the KEGG layout and treat these as layout constraints that are maintained during the conversion to SBGN maps. Conclusions This allows for the drawing and layout conventions of SBGN to be followed while creating maps that are still recognizably the original KEGG pathways. This article details the steps in this process and provides examples of the final result. PMID:23953132

  19. Decision-level fusion of SAR and IR sensor information for automatic target detection

    NASA Astrophysics Data System (ADS)

    Cho, Young-Rae; Yim, Sung-Hyuk; Cho, Hyun-Woong; Won, Jin-Ju; Song, Woo-Jin; Kim, So-Hyeon

    2017-05-01

    We propose a decision-level architecture that combines synthetic aperture radar (SAR) and an infrared (IR) sensor for automatic target detection. We present a new size-based feature, called the target silhouette, to reduce the number of false alarms produced by the conventional target-detection algorithm. Boolean map visual theory is used to combine a pair of SAR and IR images to generate a target-enhanced map. Then basic belief assignment is used to transform this map into a belief map. The detection results of the sensors are combined to build the target-silhouette map. We integrate the fusion mass and the target-silhouette map at the decision level to exclude false alarms. The proposed algorithm is evaluated using a SAR and IR synthetic database generated by the SE-WORKBENCH simulator, and compared with conventional algorithms. The proposed fusion scheme achieves a higher detection rate and a lower false alarm rate than the conventional algorithms.

  20. Leveraging terminological resources for mapping between rare disease information sources.

    PubMed

    Rance, Bastien; Snyder, Michelle; Lewis, Janine; Bodenreider, Olivier

    2013-01-01

    Rare disease information sources are incompletely and inconsistently cross-referenced to one another, making it difficult for information seekers to navigate across them. The development of such cross-references manually by experts is generally labor intensive and costly. Our objective was to develop an automatic mapping between two of the major rare disease information sources, GARD and Orphanet, by leveraging terminological resources, especially the UMLS. We map the rare disease terms from Orphanet and ORDR to the UMLS and use the UMLS as a pivot to bridge between the rare disease terminologies. We compare our results to a mapping obtained through manually established cross-references to OMIM. Our mapping has a precision of 94%, a recall of 63% and an F1-score of 76%. It should help facilitate the development of more complete and consistent cross-references between GARD and Orphanet, and is applicable to other rare disease information sources as well.
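
    The pivot step is essentially a join of two term-to-CUI tables through their shared UMLS concepts. A minimal sketch with hypothetical dictionaries (the real system also handles synonymy and term normalization):

      def pivot_map(gard_to_cui, orphanet_to_cui):
          """Join two {term: set_of_CUIs} tables through shared UMLS CUIs."""
          cui_to_orpha = {}
          for term, cuis in orphanet_to_cui.items():
              for cui in cuis:
                  cui_to_orpha.setdefault(cui, set()).add(term)
          return {g: set().union(*(cui_to_orpha.get(c, set()) for c in cuis))
                  for g, cuis in gard_to_cui.items()}

      # consistency check of the reported scores: F1 = 2PR / (P + R)
      p, r = 0.94, 0.63
      print(round(2 * p * r / (p + r), 3))   # ~0.754, i.e. the reported ~76%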

  1. Open Touch/Sound Maps: A system to convey street data through haptic and auditory feedback

    NASA Astrophysics Data System (ADS)

    Kaklanis, Nikolaos; Votis, Konstantinos; Tzovaras, Dimitrios

    2013-08-01

    The use of spatial (geographic) information is becoming ever more central and pervasive in today's internet society, but most of it is currently inaccessible to visually impaired users. Access to visual maps is severely restricted for visually impaired people and people with blindness, due to their inability to interpret graphical information. Thus, alternative ways of presenting a map have to be explored in order to improve the accessibility of maps. Multiple types of sensory perception, like touch and hearing, may work as a substitute for vision in the exploration of maps. The use of multimodal virtual environments seems to be a promising alternative for people with visual impairments. The present paper introduces a tool for automatic multimodal map generation with haptic and audio feedback using OpenStreetMap data. For a desired map area, an elevation map is automatically generated and can be explored by touch, using a haptic device. A sonification and a text-to-speech (TTS) mechanism also provide audio navigation information during the haptic exploration of the map.

  2. R-on-1 automatic mapping: A new tool for laser damage testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hue, J.; Garrec, P.; Dijon, J.

    1996-12-31

    Laser damage threshold measurement is statistical in nature. For a commercial qualification or for a user, the threshold determined by the weakest point is a satisfactory characterization. When a new coating is designed, threshold mapping is very useful. It enables the technology to be improved and followed more accurately. Different statistical parameters such as the minimum, maximum, average, and standard deviation of the damage threshold, as well as spatial parameters such as the threshold uniformity of the coating, can be determined. Therefore, in order to achieve a mapping, all the tested sites should give data. This is the major interest of the R-on-1 test, in spite of the fact that the laser damage threshold obtained by this method may be different from the 1-on-1 test (smaller or greater). Moreover, on the damage laser test facility, the beam size is smaller (diameters of a few hundred micrometers) than the characteristic sizes of the components in use (diameters of several centimeters up to one meter). Hence, a laser damage threshold mapping appears very interesting, especially for applications linked to large optical components like the Megajoule project or the National Ignition Facility (NIF). On the test bench used, damage detection with a Nomarski microscope and scattered light measurement are almost equivalent. Therefore, it becomes possible to automatically detect on line the first defects induced by YAG irradiation. Scattered light mappings and laser damage threshold mappings can therefore be achieved using an X-Y automatic stage (where the test sample is located). The major difficulties due to the automatic capabilities are shown. These characterizations are illustrated at 355 nm. The numerous experiments performed show different kinds of scattering curves, which are discussed in relation with the damage mechanisms.

  3. The Influence of Endmember Selection Method in Extracting Impervious Surface from Airborne Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Wang, J.; Feng, B.

    2016-12-01

    Impervious surface area (ISA) has long been studied as an important input into moisture flux models. In general, ISA impedes groundwater recharge, increases stormflow/flood frequency, and alters in-stream and riparian habitats. Urban areas are recognized as one of the richest ISA environments, and urban ISA mapping assists flood prevention and urban planning. Hyperspectral imagery (HI), with its ability to detect subtle spectral signatures, is an ideal candidate for urban ISA mapping. Mapping ISA from HI involves endmember (EM) selection. The high degree of spatial and spectral heterogeneity of the urban environment makes this task difficult: a compromise is needed between the degree of automation and the representativeness of the method. This study tested one manual and two semi-automatic EM selection strategies. The manual method and the first semi-automatic method have been widely used in EM selection; the second semi-automatic method is rather new and had previously been proposed only for moderate-spatial-resolution satellite imagery. The manual method visually selected the EM candidates from eight landcover types in the original image. The first semi-automatic method chose the EM candidates using a threshold over the pixel purity index (PPI) map. The second semi-automatic method used the triangle shape of the HI scatter plot in the n-dimension visualizer to identify the V-I-S (vegetation-impervious surface-soil) EM candidates: the pixels located at the triangle vertices. The initial EM candidates from the three methods were further refined by three indexes (EM average RMSE, minimum average spectral angle, and count-based EM selection), generating three spectral libraries that were used to classify the test image with the spectral angle mapper. Accuracy reports for the classification results were generated: the overall accuracies are 85% for the manual method, 81% for the PPI method, and 87% for the V-I-S method. The V-I-S EM selection method performed best in this study, which proves its value not only for moderate-spatial-resolution satellite images but also for the increasingly accessible high-spatial-resolution airborne images. This semi-automatic EM selection method can be adopted for a wide range of remote sensing images and provide ISA maps for hydrology analysis.
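
    The spectral angle mapper used for the classification step is compact enough to state directly. A sketch assuming row-wise spectra (any zero-norm spectra would need masking first):

      import numpy as np

      def spectral_angle_mapper(pixels, endmembers):
          """pixels: (N, B) spectra; endmembers: (K, B) library spectra.
          Returns the index of the smallest-angle endmember for each pixel."""
          p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
          e = endmembers / np.linalg.norm(endmembers, axis=1, keepdims=True)
          angles = np.arccos(np.clip(p @ e.T, -1.0, 1.0))   # (N, K) angles in radians
          return angles.argmin(axis=1)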

  4. Development of Generation System of Simplified Digital Maps

    NASA Astrophysics Data System (ADS)

    Uchimura, Keiichi; Kawano, Masato; Tokitsu, Hiroki; Hu, Zhencheng

    In recent years, digital maps have been used in a variety of scenarios, including car navigation systems and map information services over the Internet. These digital maps are formed by multiple layers of maps at different scales; the map data most suitable for the specific situation are used. Currently, the production of map data at different scales is done by hand because of constraints related to processing time and accuracy. We conducted research on technologies for automatically generating simplified map data from detailed map data. In the present paper, the authors propose the following: (1) a method to transform street, river, and similar data that carry widths into line data, (2) a method to eliminate component points of the data (a sketch of one standard approach appears below), and (3) a method to eliminate data that lie below a certain threshold. In addition, to evaluate the proposed method, a user survey was conducted in which maps generated using the proposed method were compared with commercially available maps. From the viewpoint of data reduction and processing time, and on the basis of the survey results, we confirmed the effectiveness of automatically generating simplified maps using the proposed methods.
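
    The abstract does not name the point-elimination method; the Douglas-Peucker algorithm is one standard choice for this generalization step, sketched below with synthetic coordinates and a hypothetical tolerance.

```python
import numpy as np

def perp_dist(pt, a, b):
    """Perpendicular distance from pt to the line through a and b."""
    a, b, pt = np.asarray(a, float), np.asarray(b, float), np.asarray(pt, float)
    if np.allclose(a, b):
        return np.linalg.norm(pt - a)
    d = b - a
    return abs(d[0] * (pt - a)[1] - d[1] * (pt - a)[0]) / np.linalg.norm(d)

def douglas_peucker(points, tol):
    """Drop points that deviate less than tol from the chord."""
    if len(points) < 3:
        return points
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = int(np.argmax(dists)) + 1
    if dists[i - 1] <= tol:          # whole segment is nearly straight
        return [points[0], points[-1]]
    return (douglas_peucker(points[:i + 1], tol)[:-1]
            + douglas_peucker(points[i:], tol))

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
print(douglas_peucker(line, tol=0.5))
```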

  5. Automatic HDL firmware generation for FPGA-based reconfigurable measurement and control systems with mezzanines in FMC standard

    NASA Astrophysics Data System (ADS)

    Wojenski, Andrzej; Kasprowicz, Grzegorz; Pozniak, Krzysztof T.; Romaniuk, Ryszard

    2013-10-01

    The paper describes a concept of automatic firmware generation for reconfigurable measurement systems that use FPGA devices and measurement cards in the FMC standard. The following topics are described in detail: automatic HDL code generation for FPGA devices, automatic implementation of communication interfaces, HDL drivers for measurement cards, automatic serial connection between multiple measurement backplane boards, automatic building of the memory map (address space), and management of the automatically generated firmware. The presented solutions are required in many advanced measurement systems, such as Beam Position Monitors or GEM detectors. This work is part of a wider project for automatic firmware generation and management of reconfigurable systems. The solutions presented in this paper build on a previous SPIE publication; a small illustration of the address-space step follows.
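
    As a toy illustration of the memory-map (address space) step, the sketch below assigns aligned base addresses to a list of register blocks. The module names, sizes, and alignment rule are hypothetical; the paper does not describe its generator at this level.

```python
def build_memory_map(modules, base=0x0000, align=0x100):
    """Assign each module an aligned base address and return the map."""
    memory_map, addr = {}, base
    for name, size in modules:
        addr = (addr + align - 1) // align * align  # round up to alignment
        memory_map[name] = (addr, size)
        addr += size
    return memory_map

# Hypothetical register blocks of a measurement firmware
modules = [("adc_ctrl", 0x40), ("dma_engine", 0x120), ("trigger_unit", 0x20)]
for name, (addr, size) in build_memory_map(modules).items():
    print(f"{name:12s} base=0x{addr:04X} size=0x{size:X}")
```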

  6. Automatic Evolution of Molecular Nanotechnology Designs

    NASA Technical Reports Server (NTRS)

    Globus, Al; Lawton, John; Wipke, Todd; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper describes strategies for automatically generating designs for analog circuits at the molecular level. Software maps out the edges and vertices of potential nanotechnology systems on graphs, then selects appropriate ones through evolutionary or genetic paradigms.

  7. Automated land-use mapping from spacecraft data. [Oakland County, Michigan

    NASA Technical Reports Server (NTRS)

    Chase, P. E. (Principal Investigator); Rogers, R. H.; Reed, L. E.

    1974-01-01

    The author has identified the following significant results. In response to the need for a faster, more economical means of producing land use maps, this study evaluated the suitability of ERTS-1 computer compatible tape (CCT) data as a basis for automatic mapping. Significant findings are: (1) automatic classification accuracy greater than 90% is achieved on categories of deep and shallow water, tended grass, rangeland, extractive (bare earth), urban, forest land, and nonforested wetlands; (2) computer-generated printouts by target class provide a quantitative measure of land use; and (3) the generation of map overlays showing land use from ERTS-1 CCTs offers a significant breakthrough in the rate at which land use maps are generated. Rather than uncorrected classified imagery or computer line-printer output, the processing results in geometrically corrected, computer-driven pen drawings of land categories, drawn on transparent material at a scale specified by the operator. These map overlays are economically produced and provide an efficient means of rapidly updating maps showing land use.

  8. Morphological self-organizing feature map neural network with applications to automatic target recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Shijun; Jing, Zhongliang; Li, Jianxun

    2005-01-01

    The rotation-invariant feature of the target is obtained using the multi-direction feature extraction property of the steerable filter. Combining the morphological top-hat transform with the self-organizing feature map neural network, an adaptive topological region is selected; using the erosion operation, the topological region is shrunk. The steerable-filter-based morphological self-organizing feature map neural network is applied to automatic target recognition in binary standard patterns and real-world infrared sequence images. Compared with the Hamming network and morphological shared-weight networks, the proposed method achieves a higher correct recognition rate, robust adaptability, quick training, and better generalization.
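
    Of the stages above, the top-hat transform is standard enough to sketch safely. The example applies a white top-hat (image minus its morphological opening) to a synthetic frame to emphasize a small bright target; the steerable-filter and SOFM stages are omitted.

```python
import numpy as np
from scipy import ndimage

# White top-hat: original minus its morphological opening; it emphasizes
# small bright structures against a slowly varying background.
image = np.zeros((64, 64))
image[30:34, 30:34] = 1.0                      # small bright target
image += 0.2                                   # uniform background
tophat = ndimage.white_tophat(image, size=9)   # 9x9 structuring element
print(tophat.max(), tophat.min())              # target stands out, background ~0
```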

  9. Automatic Summarization of MEDLINE Citations for Evidence-Based Medical Treatment: A Topic-Oriented Evaluation

    PubMed Central

    Fiszman, Marcelo; Demner-Fushman, Dina; Kilicoglu, Halil; Rindflesch, Thomas C.

    2009-01-01

    As the number of electronic biomedical textual resources increases, it becomes harder for physicians to find useful answers at the point of care. Information retrieval applications provide access to databases; however, little research has been done on using automatic summarization to help navigate the documents returned by these systems. After presenting a semantic abstraction automatic summarization system for MEDLINE citations, we concentrate on evaluating its ability to identify useful drug interventions for fifty-three diseases. The evaluation methodology uses existing sources of evidence-based medicine as surrogates for a physician-annotated reference standard. Mean average precision (MAP) and a clinical usefulness score developed for this study were computed as performance metrics. The automatic summarization system significantly outperformed the baseline in both metrics. The MAP gain was 0.17 (p < 0.01) and the increase in the overall score of clinical usefulness was 0.39 (p < 0.05). PMID:19022398
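
    Mean average precision, the first metric above, can be computed as follows; the ranked lists and relevance sets are toy stand-ins for the retrieved interventions and the reference standard.

```python
def average_precision(ranked, relevant):
    """AP for one topic: mean of precision values at each relevant hit."""
    hits, precisions = 0, []
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """runs: list of (ranked_list, relevant_set) pairs, one per topic."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Toy example with two diseases (topics)
runs = [(["d1", "d2", "d3"], {"d1", "d3"}),
        (["d4", "d5"], {"d5"})]
print(round(mean_average_precision(runs), 3))   # -> 0.667
```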

  10. A new method for automatic discontinuity traces sampling on rock mass 3D model

    NASA Astrophysics Data System (ADS)

    Umili, G.; Ferrero, A.; Einstein, H. H.

    2013-02-01

    A new automatic method for discontinuity trace mapping and sampling on a rock mass digital model is described in this work. The implemented procedure automatically identifies discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines by means of the maximum and minimum principal curvature values of the vertices that constitute the model surface (a sketch of this criterion follows). Color influence and user errors, which usually affect trace mapping on images, are eliminated. Trace sampling procedures based on circular windows and circular scanlines have also been implemented; they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter, and intensity of rock discontinuities. The method is tested on a case study: results obtained by applying the automatic procedure to the DSM of a rock face are compared with those obtained by manual sampling on an orthophotograph of the same rock face.
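
    A minimal sketch of the curvature criterion, assuming per-vertex principal curvatures k1 and k2 have already been computed by the meshing package; the values and thresholds are synthetic.

```python
import numpy as np

# Flag DSM vertices whose principal curvatures exceed thresholds as
# candidate trace (breakline) points. k1/k2 would come from the mesh
# processing package; here they are synthetic.
k1 = np.random.normal(0.0, 0.05, 1000)   # maximum principal curvature
k2 = np.random.normal(0.0, 0.05, 1000)   # minimum principal curvature
t_convex, t_concave = 0.12, -0.12        # hypothetical thresholds

is_trace = (k1 > t_convex) | (k2 < t_concave)
print(f"{is_trace.sum()} of {is_trace.size} vertices flagged as trace points")
```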

  11. Acoustic emission source location in complex structures using full automatic delta T mapping technique

    NASA Astrophysics Data System (ADS)

    Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys

    2016-05-01

    An easy to use, fast to apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring (SHM) systems. Acoustic emission (AE) is a viable technique that can be used for SHM, and one of its most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of constant wave speed within the material and an uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps, which are used to locate subsequent AE sources. However, operator expertise is required to select the best data from the training maps and to choose the correct parameter to locate the sources, which can be a time-consuming process. This paper presents a new and improved fully automatic delta T mapping technique in which a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point, whilst the "Minimum Difference" approach is used to determine the source location (sketched below). This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment was conducted to evaluate the performance and robustness of the new technique. In the initial test, the results showed an excellent reduction in running time as well as improved accuracy in locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, the technique is now simple and reliable, since a potential source of error related to manual manipulation is removed.
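
    A minimal sketch of the "Minimum Difference" idea described above: compare the measured arrival-time differences against the training map and pick the grid point with the smallest summed deviation. The delta-T values are synthetic.

```python
import numpy as np

def locate_source(measured_dt, grid_dt):
    """Minimum-difference grid search for an AE source.

    measured_dt: (pairs,) observed arrival-time differences
    grid_dt:     (points, pairs) training-map delta-T for each grid point
    Returns the index of the grid point whose delta-T vector has the
    smallest summed absolute difference from the measurement.
    """
    cost = np.abs(grid_dt - measured_dt).sum(axis=1)
    return int(np.argmin(cost))

# Toy map: 4 grid points, 3 sensor pairs (values are synthetic)
grid_dt = np.array([[1.0, 2.0, 0.5],
                    [0.8, 1.9, 0.7],
                    [0.2, 1.1, 1.5],
                    [1.4, 2.3, 0.1]])
measured = np.array([0.25, 1.05, 1.45])
print("best grid point:", locate_source(measured, grid_dt))  # -> 2
```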

  12. Wildland resource information system: user's guide

    Treesearch

    Robert M. Russell; David A. Sharpnack; Elliot L. Amidon

    1975-01-01

    This user's guide provides detailed information about how to use the computer programs of WRIS, a computer system for storing and manipulating data about land areas. Instructions explain how to prepare maps, digitize by automatic scanners or by hand, produce polygon maps, and combine map layers. Support programs plot maps, store them on tapes, produce summaries,...

  13. Recognizing lexical and semantic change patterns in evolving life science ontologies to inform mapping adaptation.

    PubMed

    Dos Reis, Julio Cesar; Dinh, Duy; Da Silveira, Marcos; Pruski, Cédric; Reynaud-Delaître, Chantal

    2015-03-01

    Mappings established between life science ontologies require significant maintenance effort due to the size and frequent evolution of these ontologies. Consequently, automatic methods for applying modifications to mappings are in high demand. The accuracy of such methods relies on the available description of ontology evolution, especially regarding concepts involved in mappings. However, from one ontology version to another, a deeper understanding of the ontology changes relevant to supporting mapping adaptation is typically lacking. This research work defines a set of change patterns at the level of concept attributes and proposes original methods to automatically recognize instances of these patterns based on the similarity between the attributes denoting the evolving concepts. The investigation evaluates the benefits of the proposed methods and the influence of the recognized change patterns on the selection of strategies for mapping adaptation. The findings are summarized as follows: (1) Precision (>60%) and Recall (>35%) were achieved by comparing manually identified change patterns with the automatic ones; (2) a set of potential impacts of recognized change patterns on the way mappings are adapted was identified, with the detected correlations covering ∼66% of the mapping adaptation actions with a positive impact; and (3) the similarity coefficient calculated between concept attributes influences the performance of the recognition algorithms. The experimental evaluations conducted with real life science ontologies showed the effectiveness of our approach in accurately characterizing ontology evolution at the level of concept attributes. This investigation confirmed the relevance of the proposed change patterns to support decisions on mapping adaptation. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Threshold automatic selection hybrid phase unwrapping algorithm for digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Zhou, Meiling; Min, Junwei; Yao, Baoli; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan

    2015-01-01

    The conventional quality-guided (QG) phase unwrapping algorithm is difficult to apply to digital holographic microscopy because of its long execution time. In this paper, we present a threshold-automatic-selection hybrid phase unwrapping algorithm that combines the existing QG algorithm with the flood-fill (FF) algorithm to solve this problem. The original wrapped phase map is divided into high- and low-quality sub-maps by automatically selecting a threshold (the partition step is sketched below), and the FF and QG unwrapping algorithms are then applied to the respective sub-maps. The feasibility of the proposed method is demonstrated by experimental results, and the execution speed is shown to be much faster than that of the original QG unwrapping algorithm.
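
    The abstract does not state the threshold selection rule; the sketch below uses Otsu's method as one plausible automatic choice for splitting a quality map, with a random array standing in for a real phase-quality map.

```python
import numpy as np
from skimage.filters import threshold_otsu

# Partition step only: split a phase-quality map into high- and low-quality
# sub-maps with an automatically chosen threshold.
quality = np.random.rand(128, 128)        # stand-in for a real quality map
t = threshold_otsu(quality)
high_q = quality >= t                     # one unwrapping algorithm here
low_q = ~high_q                           # the other algorithm here
print(f"threshold={t:.3f}, high-quality fraction={high_q.mean():.2%}")
```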

  15. Leveraging Terminological Resources for Mapping between Rare Disease Information Sources

    PubMed Central

    Rance, Bastien; Snyder, Michelle; Lewis, Janine; Bodenreider, Olivier

    2015-01-01

    Background Rare disease information sources are incompletely and inconsistently cross-referenced to one another, making it difficult for information seekers to navigate across them. The development of such cross-references established manually by experts is generally labor intensive and costly. Objectives To develop an automatic mapping between two of the major rare disease information sources, GARD and Orphanet, by leveraging terminological resources, especially the UMLS. Methods We map the rare disease terms from Orphanet and ORDR to the UMLS. We use the UMLS as a pivot to bridge between the rare disease terminologies. We compare our results to a mapping obtained through manually established cross-references to OMIM. Results Our mapping has a precision of 94%, a recall of 63% and an F1-score of 76%. Our automatic mapping should help facilitate the development of more complete and consistent cross-references between GARD and Orphanet, and is applicable to other rare disease information sources as well. PMID:23920611
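
    The reported precision, recall, and F1-score can be computed from mapping sets as follows; the identifiers are illustrative, not actual GARD/Orphanet codes.

```python
def mapping_scores(predicted, reference):
    """Precision/recall/F1 for sets of (source_id, target_id) mappings."""
    tp = len(predicted & reference)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

predicted = {("GARD:1", "ORPHA:10"), ("GARD:2", "ORPHA:20")}
reference = {("GARD:1", "ORPHA:10"), ("GARD:3", "ORPHA:30")}
print(mapping_scores(predicted, reference))  # (0.5, 0.5, 0.5)
```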

  16. Enhancing the usability and performance of structured association mapping algorithms using automation, parallelization, and visualization in the GenAMap software system

    PubMed Central

    2012-01-01

    Background Structured association mapping is proving to be a powerful strategy for finding genetic polymorphisms associated with disease. However, these algorithms are often distributed as command-line implementations that require expertise and effort to customize and put into practice. Because of the difficulty of using these cutting-edge techniques, geneticists often revert to simpler, less powerful methods. Results To make structured association mapping more accessible to geneticists, we have developed an automatic processing system called Auto-SAM. Auto-SAM enables geneticists to run structured association mapping algorithms automatically, using parallelization. Auto-SAM includes algorithms to discover gene-networks and find population structure. Auto-SAM can also run popular association mapping algorithms, in addition to five structured association mapping algorithms. Conclusions Auto-SAM is available through GenAMap, a front-end desktop visualization tool. GenAMap and Auto-SAM are implemented in JAVA; binaries for GenAMap can be downloaded from http://sailing.cs.cmu.edu/genamap. PMID:22471660

  17. Eye-tracking for clinical decision support: A method to capture automatically what physicians are viewing in the EMR.

    PubMed

    King, Andrew J; Hochheiser, Harry; Visweswaran, Shyam; Clermont, Gilles; Cooper, Gregory F

    2017-01-01

    Eye-tracking is a valuable research tool that is used in laboratory and limited field environments. We take steps toward developing methods that enable widespread adoption of eye-tracking and its real-time application in clinical decision support. Eye-tracking will enhance awareness and enable intelligent views, more precise alerts, and other forms of decision support in the Electronic Medical Record (EMR). We evaluated a low-cost eye-tracking device and found the device's accuracy to be non-inferior to a more expensive device. We also developed and evaluated an automatic method for mapping eye-tracking data to interface elements in the EMR (e.g., a displayed laboratory test value). Mapping was 88% accurate across the six participants in our experiment. Finally, we piloted the use of the low-cost device and the automatic mapping method to label training data for a Learning EMR (LEMR) which is a system that highlights the EMR elements a physician is predicted to use.
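
    The abstract does not detail the mapping algorithm; a minimal bounding-box hit test over EMR interface elements conveys the idea. The element geometry is hypothetical.

```python
def map_gaze_to_element(gaze, elements):
    """Return the id of the EMR interface element containing a gaze point.

    gaze:     (x, y) in screen coordinates
    elements: dict id -> (x0, y0, x1, y1) bounding boxes (hypothetical layout)
    """
    x, y = gaze
    for elem_id, (x0, y0, x1, y1) in elements.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return elem_id
    return None

elements = {"lab:creatinine": (100, 200, 300, 220),
            "vitals:map":     (100, 240, 300, 260)}
print(map_gaze_to_element((150, 210), elements))  # -> "lab:creatinine"
```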

  18. Eye-tracking for clinical decision support: A method to capture automatically what physicians are viewing in the EMR

    PubMed Central

    King, Andrew J.; Hochheiser, Harry; Visweswaran, Shyam; Clermont, Gilles; Cooper, Gregory F.

    2017-01-01

    Eye-tracking is a valuable research tool that is used in laboratory and limited field environments. We take steps toward developing methods that enable widespread adoption of eye-tracking and its real-time application in clinical decision support. Eye-tracking will enhance awareness and enable intelligent views, more precise alerts, and other forms of decision support in the Electronic Medical Record (EMR). We evaluated a low-cost eye-tracking device and found the device’s accuracy to be non-inferior to a more expensive device. We also developed and evaluated an automatic method for mapping eye-tracking data to interface elements in the EMR (e.g., a displayed laboratory test value). Mapping was 88% accurate across the six participants in our experiment. Finally, we piloted the use of the low-cost device and the automatic mapping method to label training data for a Learning EMR (LEMR) which is a system that highlights the EMR elements a physician is predicted to use. PMID:28815151

  19. Landslide susceptibility mapping using decision-tree based Chi-squared automatic interaction detection (CHAID) and Logistic regression (LR) integration

    NASA Astrophysics Data System (ADS)

    Althuwaynee, Omar F.; Pradhan, Biswajeet; Ahmad, Noordin

    2014-06-01

    This article uses a methodology based on chi-squared automatic interaction detection (CHAID), a multivariate method with an automatic classification capacity for analysing large numbers of landslide conditioning factors. The algorithm was applied to overcome the subjectivity of manually categorizing the scale data of landslide conditioning factors, and to predict a rainfall-induced landslide susceptibility map for Kuala Lumpur city and surrounding areas using a geographic information system (GIS). The main objective of this article is to use CHAID to find the best classification fit for each conditioning factor and then to combine it with logistic regression (LR); the LR model finds the coefficients of the best-fitting function that assesses the optimal terminal nodes. A cluster pattern of landslide locations had been extracted in a previous study using the nearest neighbour index (NNI) and was used here to identify the range of clustered landslide locations. The clustered locations served as model training data together with 14 landslide conditioning factors, such as topographically derived parameters, lithology, NDVI, and land use and land cover maps. The Pearson chi-squared value was used to find the best classification fit between the dependent variable and the conditioning factors. Finally, the relationships between conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. The area under the curve (AUC) was used to test the model's reliability and prediction capability with the training and validation landslide locations, respectively. This study demonstrates the efficiency and reliability of the decision tree (DT) model in landslide susceptibility mapping and provides a valuable scientific basis for spatial decision making in planning and urban management studies.
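
    A rough sketch of the tree-then-regression idea, with stand-ins: scikit-learn has no CHAID, so a CART tree bins a single synthetic conditioning factor and logistic regression is fitted on the one-hot leaf memberships (requires scikit-learn >= 1.2 for sparse_output).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Synthetic data: slope as a conditioning factor, binary landslide label.
rng = np.random.default_rng(0)
slope = rng.uniform(0, 60, size=(500, 1))
landslide = (slope[:, 0] + rng.normal(0, 10, 500) > 35).astype(int)

# A tree (CART here, standing in for CHAID) bins the factor into leaves.
tree = DecisionTreeClassifier(max_leaf_nodes=4).fit(slope, landslide)
leaves = tree.apply(slope).reshape(-1, 1)            # terminal-node ids

# Logistic regression on one-hot leaf memberships gives per-bin coefficients.
X = OneHotEncoder(sparse_output=False).fit_transform(leaves)
lr = LogisticRegression().fit(X, landslide)
print("per-bin coefficients:", np.round(lr.coef_, 2))
```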

  20. Towards natural language question generation for the validation of ontologies and mappings.

    PubMed

    Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos

    2016-08-08

    The increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point-of-view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reducing the number of questions and validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mapping evolution over time and highlights the importance of semi-automatic validation.

  1. User Guide for the Anvil Threat Corridor Forecast Tool V2.4 for AWIPS

    NASA Technical Reports Server (NTRS)

    Barett, Joe H., III; Bauman, William H., III

    2008-01-01

    The Anvil Tool GUI allows users to select a Data Type, toggle the map refresh on/off, place labels, and choose the Profiler Type (source of the KSC 50 MHz profiler data), the Date-Time of the data, the Center of Plot, and the Station (location of the RAOB or 50 MHz profiler). If the Data Type is Models, the user selects a Fcst Hour (forecast hour) instead of a Station. There are menus for User Profiles, Circle Label Options, and Frame Label Options. Labels can be placed near the center circle of the plot and/or at a specified distance and direction from the center of the circle (Center of Plot). The default selection for the map refresh is "ON". When the user creates a new Anvil Tool map with Refresh Map "ON", the plot is automatically displayed in the AWIPS frame. If another Anvil Tool map is already displayed and the user does not change the existing map number shown at the bottom of the GUI, the new Anvil Tool map will overwrite the old one. If the user turns the Refresh Map "OFF", the new Anvil Tool map is created but not automatically displayed. The user can still display the Anvil Tool map through the Maps dropdown menu, as shown in Figure 4.

  2. DyKOSMap: A framework for mapping adaptation between biomedical knowledge organization systems.

    PubMed

    Dos Reis, Julio Cesar; Pruski, Cédric; Da Silveira, Marcos; Reynaud-Delaître, Chantal

    2015-06-01

    Knowledge Organization Systems (KOS) and their associated mappings play a central role in several decision support systems. However, by virtue of knowledge evolution, KOS entities are modified over time, impacting mappings and potentially rendering them invalid. This requires semi-automatic methods to keep such semantic correspondences up to date as the KOS evolve. We define a complete and original framework based on formal heuristics that drives the adaptation of KOS mappings. Our approach takes into account the definition of established mappings, the evolution of the KOS, and the possible changes that can be applied to mappings. This study experimentally evaluates the proposed heuristics and the entire framework on realistic case studies borrowed from the biomedical domain, using official mappings between several biomedical KOS. We demonstrate the overall performance of the approach on biomedical datasets of different characteristics and sizes. Our findings reveal the effectiveness, in terms of precision, recall, and F-measure, of the suggested heuristics and of the methods defining the framework for adapting mappings affected by KOS evolution. The obtained results contribute to and improve the quality of mappings over time. The proposed framework can adapt mappings largely automatically, thus facilitating the maintenance task. The implemented algorithms and tools support and minimize the work of users in charge of KOS mapping maintenance. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Vegetation survey in Amazonia using LANDSAT data. [Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Shimabukuro, Y. E.; Dossantos, J. R.; Deaquino, L. C. S.

    1982-01-01

    Automatic Image-100 analysis of LANDSAT data was performed using the MAXVER classification algorithm. In the pilot area, four vegetation units were mapped automatically, in addition to the areas occupied by agricultural activities. The Image-100 classification results, together with a soil map and information from RADAR images, permitted the establishment of a final legend with six classes: semi-deciduous tropical forest; lowland evergreen tropical forest; secondary vegetation; tropical forest of humid areas; predominant pastureland; and flood plains. Two water types were identified based on their sediments, indicating different geological and geomorphological aspects.

  4. a Model Study of Small-Scale World Map Generalization

    NASA Astrophysics Data System (ADS)

    Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.

    2018-04-01

    With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics, and there is a surging demand worldwide for small-scale world maps in large formats. Further study of automated mapping technology, especially the realization of small-scale production of large-format global maps, is a key problem that the cartographic field needs to solve. In light of this, this paper adopts an improved model (with the map and data separated) for map generalization, which separates geographic data from mapping data and mainly comprises a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, the symbols and the physical symbols in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1086 subtypes, 21845 basic algorithms, and over 2500 relevant functional modules. To evaluate the accuracy and visual effect of our model on topographic and thematic maps, we take world map generalization at small scale as an example. After the generalization process, combining and simplifying the scattered islands makes the map more legible at 1:2.1 billion scale, and the map features become more complete and accurate. The model not only enhances map generalization significantly at various scales but also achieves integration among map-making at various scales, suggesting that it provides a reference for cartographic generalization across scales.

  5. An Investigation of Automatic Change Detection for Topographic Map Updating

    NASA Astrophysics Data System (ADS)

    Duncan, P.; Smit, J.

    2012-08-01

    Changes to the landscape are constantly occurring, and it is essential for geospatial and mapping organisations that these changes are regularly detected and captured so that map databases can be updated to reflect the current status of the landscape. The Chief Directorate of National Geospatial Information (CD: NGI), South Africa's national mapping agency, currently relies on manual methods of detecting and capturing changes. These manual methods are time consuming and labour intensive, and rely on the skills and interpretation of the operator. It is therefore necessary to move towards more automated methods in the production process at CD: NGI. The aim of this research is to investigate a methodology for automatic or semi-automatic change detection for the purpose of updating topographic databases. The investigated method detects changes through image classification as well as spatial analysis, and is focussed on urban landscapes. The major data inputs to this study are high-resolution aerial imagery and existing topographic vector data. Initial results indicate that traditional pixel-based image classification approaches are unsatisfactory for large-scale land-use mapping and that object-oriented approaches hold more promise. Even with object-oriented image classification, however, broad-scale generalization of techniques has produced inconsistent results. A solution may lie in a hybrid approach of pixel-based and object-oriented techniques.

  6. Does training under consistent mapping conditions lead to automatic attention attraction to targets in search tasks?

    PubMed

    Lefebvre, Christine; Cousineau, Denis; Larochelle, Serge

    2008-11-01

    Schneider and Shiffrin (1977) proposed that training under consistent stimulus-response mapping (CM) leads to automatic target detection in search tasks. Other theories, such as Treisman and Gelade's (1980) feature integration theory, consider target-distractor discriminability as the main determinant of search performance. The first two experiments pit these two principles against each other. The results show that CM training is neither necessary nor sufficient to achieve optimal search performance. Two other experiments examine whether CM trained targets, presented as distractors in unattended display locations, attract attention away from current targets. The results are again found to vary with target-distractor similarity. Overall, the present study strongly suggests that CM training does not invariably lead to automatic attention attraction in search tasks.

  7. Automated Bone Segmentation and Surface Evaluation of a Small Animal Model of Post-Traumatic Osteoarthritis.

    PubMed

    Ramme, Austin J; Voss, Kevin; Lesporis, Jurinus; Lendhey, Matin S; Coughlin, Thomas R; Strauss, Eric J; Kennedy, Oran D

    2017-05-01

    MicroCT imaging allows for noninvasive microstructural evaluation of mineralized bone tissue and is essential in studies of small animal models of bone and joint diseases. Automatic segmentation and evaluation of articular surfaces is challenging. Here, we present a novel method to create knee joint surface models for the evaluation of PTOA-related joint changes in the rat, using an atlas-based diffeomorphic registration to automatically isolate bone from surrounding tissues. As validation, two independent raters manually segmented datasets, and the resulting segmentations were compared with those from our automatic segmentation process. Data were evaluated using label map volumes, overlap metrics, Euclidean distance mapping, and a time trial. Intraclass correlation coefficients calculated to compare the methods were greater than 0.90. Total overlap, union overlap, and mean overlap between the automatic and manual methods ranged from 0.85 to 0.99. A Euclidean distance comparison showed no measurable difference between manual and automatic segmentations. Furthermore, the new method was 18 times faster than manual segmentation. Overall, this study describes a reliable, accurate, and automatic segmentation method for mineralized knee structures from microCT images that will allow efficient assessment of bony changes in small animal models of PTOA.
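
    Union overlap is commonly the Jaccard coefficient and mean overlap the Dice coefficient; assuming those definitions, the sketch below computes both from two binary label maps.

```python
import numpy as np

def overlap_metrics(auto_mask, manual_mask):
    """Union (Jaccard) and mean (Dice) overlap between two label maps."""
    a, m = auto_mask.astype(bool), manual_mask.astype(bool)
    inter = np.logical_and(a, m).sum()
    union = np.logical_or(a, m).sum()
    jaccard = inter / union if union else 1.0
    dice = 2 * inter / (a.sum() + m.sum()) if (a.sum() + m.sum()) else 1.0
    return jaccard, dice

# Toy masks standing in for automatic vs. manual bone segmentations
auto = np.zeros((10, 10), bool); auto[2:8, 2:8] = True
manual = np.zeros((10, 10), bool); manual[3:9, 3:9] = True
print(overlap_metrics(auto, manual))
```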

  8. A fully automatic three-step liver segmentation method on LDA-based probability maps for multiple contrast MR images.

    PubMed

    Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf

    2010-07-01

    Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computed tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based on a modified region-growing approach and an additional thresholding technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to normal and fat-accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.

  9. Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping.

    PubMed

    Jaakkola, Anttoni; Hyyppä, Juha; Hyyppä, Hannu; Kukko, Antero

    2008-09-01

    Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the scanning geometry and point density differ from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying road marking and kerbstone points and for modelling the road surface as a triangulated irregular network (a sketch follows). On the basis of experimental tests, the mean classification accuracies obtained using the automatic method for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.
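
    A minimal sketch of the TIN step, triangulating classified road-surface points in the horizontal plane with SciPy; the points are synthetic stand-ins for mobile-mapping laser returns.

```python
import numpy as np
from scipy.spatial import Delaunay

# Triangulate x-y positions; z values ride along for later surface queries.
pts = np.random.rand(50, 3) * [20.0, 5.0, 0.2]   # x, y, z (metres)
tin = Delaunay(pts[:, :2])                       # 2.5-D triangulation
print(f"{len(tin.simplices)} triangles over {len(pts)} points")
```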

  10. Study of smartphone suitability for mapping of skin chromophores

    NASA Astrophysics Data System (ADS)

    Kuzmina, Ilona; Lacis, Matiss; Spigulis, Janis; Berzina, Anna; Valeine, Lauma

    2015-09-01

    RGB (red-green-blue) technique for mapping skin chromophores by smartphones is proposed and studied. Three smartphones from different manufacturers were tested on skin phantoms and in vivo on benign skin lesions using a specially designed light source for illumination. Hemoglobin and melanin indices obtained by these smartphones showed differences in both tests. In vitro tests showed an increase of hemoglobin and melanin indices with the concentration of chromophores in the phantoms. In vivo tests indicated a higher hemoglobin index in hemangiomas than in nevi and healthy skin, and nevi showed a higher melanin index compared with healthy skin. Smartphones that allow switching off the automatic camera settings provided useful data, while those with "embedded" automatic settings appear to be useless for distant skin chromophore mapping.

  11. Study of smartphone suitability for mapping of skin chromophores.

    PubMed

    Kuzmina, Ilona; Lacis, Matiss; Spigulis, Janis; Berzina, Anna; Valeine, Lauma

    2015-09-01

    RGB (red-green-blue) technique for mapping skin chromophores by smartphones is proposed and studied. Three smartphones from different manufacturers were tested on skin phantoms and in vivo on benign skin lesions using a specially designed light source for illumination. Hemoglobin and melanin indices obtained by these smartphones showed differences in both tests. In vitro tests showed an increase of hemoglobin and melanin indices with the concentration of chromophores in the phantoms. In vivo tests indicated a higher hemoglobin index in hemangiomas than in nevi and healthy skin, and nevi showed a higher melanin index compared with healthy skin. Smartphones that allow switching off the automatic camera settings provided useful data, while those with “embedded” automatic settings appear to be useless for distant skin chromophore mapping.

  12. Basic forest cover mapping using digitized remote sensor data and automated data processing techniques

    NASA Technical Reports Server (NTRS)

    Coggeshall, M. E.; Hoffer, R. M.

    1973-01-01

    Remote sensing equipment and automatic data processing techniques were employed as aids in instituting improved forest resource management methods. On the basis of automatically calculated statistics derived from manually selected training samples, the feature selection processor of LARSYS considered various groups of the four available spectral regions and selected a series of channel combinations. The automatic classification performance of these combinations (for six cover types, including both deciduous and coniferous forest) was tested, analyzed, and further compared with automatic classification results obtained from digitized color infrared photography.

  13. Application of Multilayer Perceptron with Automatic Relevance Determination on Weed Mapping Using UAV Multispectral Imagery

    PubMed Central

    Tamouridou, Afroditi A.; Lagopodi, Anastasia L.; Kashefi, Javid; Kasampalis, Dimitris; Kontouris, Georgios; Moshou, Dimitrios

    2017-01-01

    Remote sensing techniques are routinely used in plant species discrimination and in weed mapping. In the presented work, successful Silybum marianum detection and mapping using multilayer neural networks is demonstrated. A multispectral camera (green-red-near infrared) attached to a fixed-wing unmanned aerial vehicle (UAV) was utilized for the acquisition of high-resolution images (0.1 m resolution). The Multilayer Perceptron with Automatic Relevance Determination (MLP-ARD) was used to identify S. marianum among other vegetation, mostly Avena sterilis L. The three spectral bands of Red, Green, and Near Infrared (NIR) and the texture layer resulting from local variance were used as input. The S. marianum identification rates using MLP-ARD reached an accuracy of 99.54%. The study had a one-year duration, meaning that the results are specific to that period, although the accuracy shows the interesting potential of S. marianum mapping with MLP-ARD on multispectral UAV imagery. PMID:29019957

  14. Metaphorical mapping between raw-cooked food and strangeness-familiarity in Chinese culture.

    PubMed

    Deng, Xiaohong; Qu, Yuan; Zheng, Huihui; Lu, Yang; Zhong, Xin; Ward, Anne; Li, Zijun

    2017-02-01

    Previous research has demonstrated metaphorical mappings between physical coldness-warmth and social distance-closeness. Since the concepts of interpersonal warmth are frequently expressed in terms of food-related words in Chinese, the present study sought to explore whether the concept of raw-cooked food could be unconsciously and automatically mapped onto strangeness-familiarity. After rating the nutritive value of raw or cooked foods, participants were presented with morphing movies in which their acquaintances gradually transformed into strangers or strangers gradually morphed into acquaintances, and were asked to stop the movies when the combined images became predominantly target faces. The results demonstrated that unconscious and automatic metaphorical mappings between raw-cooked food and strangeness-familiarity exist. This study provides a foundation for testing whether Chinese people can think about interpersonal familiarity using mental representations of raw-cooked food and supports cognitive metaphor theory from a crosslinguistic perspective.

  15. Standardized mappings--a framework to combine different semantic mappers into a standardized web-API.

    PubMed

    Neuhaus, Philipp; Doods, Justin; Dugas, Martin

    2015-01-01

    Automatic coding of medical terms is an important but highly complicated and laborious task. To compare and evaluate different strategies, a framework with a standardized web interface was created. Two UMLS mapping strategies are compared to demonstrate the interface. The framework is a Java Spring application running on a Tomcat application server; it accepts different parameters and returns results in JSON format. To demonstrate the framework, a list of medical data items was mapped by two different methods: similarity search in a large table of terminology codes versus search in a manually curated repository. These mappings were reviewed by a specialist. The evaluation shows that the framework is flexible (owing to standardized interfaces such as HTTP and JSON), performant, and reliable. The accuracy of automatically assigned codes is limited (up to 40%). Combining different semantic mappers into a standardized web API is feasible, and the framework can be easily enhanced thanks to its modular design.

  16. Application of Multilayer Perceptron with Automatic Relevance Determination on Weed Mapping Using UAV Multispectral Imagery.

    PubMed

    Tamouridou, Afroditi A; Alexandridis, Thomas K; Pantazi, Xanthoula E; Lagopodi, Anastasia L; Kashefi, Javid; Kasampalis, Dimitris; Kontouris, Georgios; Moshou, Dimitrios

    2017-10-11

    Remote sensing techniques are routinely used in plant species discrimination and in weed mapping. In the presented work, successful Silybum marianum detection and mapping using multilayer neural networks is demonstrated. A multispectral camera (green-red-near infrared) attached to a fixed-wing unmanned aerial vehicle (UAV) was utilized for the acquisition of high-resolution images (0.1 m resolution). The Multilayer Perceptron with Automatic Relevance Determination (MLP-ARD) was used to identify S. marianum among other vegetation, mostly Avena sterilis L. The three spectral bands of Red, Green, and Near Infrared (NIR) and the texture layer resulting from local variance were used as input. The S. marianum identification rates using MLP-ARD reached an accuracy of 99.54%. The study had a one-year duration, meaning that the results are specific to that period, although the accuracy shows the interesting potential of S. marianum mapping with MLP-ARD on multispectral UAV imagery.

  17. Automatic concrete cracks detection and mapping of terrestrial laser scan data

    NASA Astrophysics Data System (ADS)

    Rabah, Mostafa; Elhattab, Ahmed; Fayad, Atef

    2013-12-01

    Terrestrial laser scanning has become one of the standard technologies for object acquisition in surveying engineering. The high spatial resolution of imaging and the excellent capability of measuring 3D space by laser scanning offer great potential when combined for both data acquisition and data compilation. Automatic crack detection from concrete surface images is very effective for nondestructive testing, as the crack information can be used to decide the appropriate rehabilitation method for cracked structures and to prevent catastrophic failure. In practice, cracks on concrete surfaces are traced manually for diagnosis, so automatic crack detection is highly desirable for efficient and objective crack assessment. The current paper presents a method for automatic concrete crack detection and mapping from data obtained during a laser scanning survey. Cracks are detected and mapped in three steps: shading correction of the original image, crack detection, and crack mapping and processing. The detected crack is defined in a pixel coordinate system; to remap the crack into the referred coordinate system, reverse engineering is used. This is achieved by a hybrid concept of terrestrial laser-scanner point clouds and the corresponding camera image, i.e. a conversion from the pixel coordinate system to the terrestrial laser-scanner or global coordinate system. The results of the experiment show that the mean differences between the terrestrial laser scan and the total station are about 30.5, 16.4 and 14.3 mm in the x, y and z directions, respectively.

  18. WRIS: a resource information system for wildland management

    Treesearch

    Robert M. Russell; David A. Sharpnack; Elliot Amidon

    1975-01-01

    WRIS (Wildland Resource Information System) is a computer system for processing, storing, retrieving, updating, and displaying geographic data. The polygon, representing a land area boundary, forms the building block of WRIS. Polygons form a map. Maps are digitized manually or by automatic scanning. Computer programs can extract and produce polygon maps and can overlay...

  19. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowdsourced information provided by a large number of users as they walk through the buildings as the source of location-fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions for the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
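
    The abstract does not give the online positioning step; the standard way a finished radio-map is used is weighted k-nearest-fingerprint matching, sketched here with synthetic RSS vectors and positions.

```python
import numpy as np

def knn_locate(observed_rss, radio_map, k=3):
    """Weighted k-nearest-fingerprint position estimate.

    observed_rss: (n_aps,) RSS vector measured online
    radio_map:    list of ((x, y) position, rss_vector) entries
    """
    dists = np.array([np.linalg.norm(observed_rss - rss)
                      for _, rss in radio_map])
    idx = np.argsort(dists)[:k]                 # k closest fingerprints
    w = 1.0 / (dists[idx] + 1e-6)               # inverse-distance weights
    pos = np.array([radio_map[i][0] for i in idx])
    return tuple((pos * w[:, None]).sum(axis=0) / w.sum())

radio_map = [((0, 0), np.array([-40., -70., -80.])),
             ((5, 0), np.array([-55., -60., -75.])),
             ((0, 5), np.array([-50., -72., -65.]))]
print(knn_locate(np.array([-48., -66., -72.]), radio_map, k=2))
```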

  20. Application of nonlinear transformations to automatic flight control

    NASA Technical Reports Server (NTRS)

    Meyer, G.; Su, R.; Hunt, L. R.

    1984-01-01

    The theory of transformations of nonlinear systems to linear ones is applied to the design of an automatic flight controller for the UH-1H helicopter. The helicopter mathematical model is described and it is shown to satisfy the necessary and sufficient conditions for transformability. The mapping is constructed, taking the nonlinear model to canonical form. The performance of the automatic control system in a detailed simulation on the flight computer is summarized.

  1. Automatic detection of artifacts in converted S3D video

    NASA Astrophysics Data System (ADS)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.

  2. MetaMapping the nursing procedure manual.

    PubMed

    Peace, Jane; Brennan, Patricia Flatley

    2006-01-01

    Nursing procedure manuals are an important resource for practice, but ensuring that the correct procedure can be located when needed is an ongoing challenge. This poster presents an approach used to automatically index nursing procedures with standardized nursing terminology. Although indexing yielded a low number of mappings, examination of successfully mapped terms, incorrect mappings, and unmapped terms reveals important information about the reasons automated indexing fails.

  3. Potential and limitations of webcam images for snow cover monitoring in the Swiss Alps

    NASA Astrophysics Data System (ADS)

    Dizerens, Céline; Hüsler, Fabia; Wunderle, Stefan

    2017-04-01

    In Switzerland, several thousand outdoor webcams are currently connected to the Internet. They deliver freely available images that can be used to analyze snow cover variability at high spatio-temporal resolution. To make use of this big data source, we have implemented a webcam-based snow cover mapping procedure that derives snow cover maps from such webcam images almost automatically. As there is usually no information available about the webcams and their parameters, our registration approach automatically resolves these parameters (camera orientation, principal point, field of view) using an estimate of the webcam's position, the mountain silhouette, and a high-resolution digital elevation model (DEM). Combined with automatic snow classification and image alignment using SIFT features (sketched below), our procedure can be applied to arbitrary images to generate snow cover maps with a minimum of effort. The resulting snow cover maps have the same resolution as the digital elevation model and indicate whether each grid cell is snow-covered, snow-free, or hidden from the webcam's position. Up to now, we have processed images from about 290 webcams in our archive and evaluated images from 20 webcams using manually selected ground control points (GCPs) to assess the mapping accuracy of our procedure. We present methodological limitations and ongoing improvements, show some applications of our snow cover maps, and demonstrate that webcams not only offer a great opportunity to complement satellite-derived snow retrieval under cloudy conditions, but also serve as a reference for improved validation of satellite-based approaches.
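
    A minimal OpenCV sketch of the SIFT alignment step: match features between a reference frame and a new frame, estimate a homography, and warp. File names are placeholders, and the thresholds are conventional defaults rather than the authors' settings.

```python
import cv2
import numpy as np

# Placeholder file names; real webcam frames would be loaded here.
ref = cv2.imread("reference_frame.jpg", cv2.IMREAD_GRAYSCALE)
new = cv2.imread("new_frame.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(new, None)

# Lowe's ratio test keeps only distinctive matches.
matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the new frame into the reference geometry before applying the
# precomputed snow-cover mapping.
aligned = cv2.warpPerspective(new, H, ref.shape[::-1])
```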

  4. Ultramap v3 - a Revolution in Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.

    2012-07-01

    In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system that enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images into a homogeneous block. UltraMap V3 continues this innovation and offers a revolution in ortho processing. A fully automated dense-matching module produces high-precision digital surface models (DSMs), calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap v3 is the first fully integrated and interactive solution for making the best use of UltraCam images to deliver DSM and ortho imagery.

  5. Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection

    PubMed Central

    Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods ℓ1-regularized susceptibility mapping is accelerated by variable splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers a 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering, and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes with the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than with the nonlinear CG approach. The utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
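
    The per-iteration shrinkage step named above is the soft-thresholding (ℓ1 proximal) operator; a minimal version follows. The full variable-splitting solver with FFT-based dipole inversion is beyond a sketch.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam*||x||_1: shrink each entry toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([-2.0, -0.3, 0.1, 1.5])
print(soft_threshold(x, lam=0.5))   # [-1.5, -0., 0., 1.]
```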

  6. Significantly Reduced Blood Pressure Measurement Variability for Both Normotensive and Hypertensive Subjects: Effect of Polynomial Curve Fitting of Oscillometric Pulses

    PubMed Central

    Zhu, Mingping; Chen, Aiqing

    2017-01-01

    This study aimed to compare within-subject blood pressure (BP) variabilities obtained from different measurement techniques. Cuff pressures from three repeated BP measurements were obtained from 30 normotensive and 30 hypertensive subjects. Automatic BPs were determined from the pulses with normalised peak amplitude larger than a threshold (0.5 for SBP, 0.7 for DBP, and 1.0 for MAP). They were also determined from the cuff pressures associated with the same thresholds on a polynomial curve fitted to the oscillometric pulse peaks. Finally, the standard deviation (SD) of the three repeats and its coefficient of variability (CV) were compared between the two automatic techniques. For the normotensive group, polynomial curve fitting significantly reduced the SD of repeats from 3.6 to 2.5 mmHg for SBP and from 3.7 to 2.1 mmHg for MAP, and reduced the CV from 3.0% to 2.2% for SBP and from 4.3% to 2.4% for MAP (all P < 0.01). For the hypertensive group, the SD of repeats decreased from 6.5 to 5.5 mmHg for SBP and from 6.7 to 4.2 mmHg for MAP, and the CV decreased from 4.2% to 3.6% for SBP and from 5.8% to 3.8% for MAP (all P < 0.05). In conclusion, polynomial curve fitting of oscillometric pulses reduced automatic BP measurement variability. PMID:28785580
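
    A sketch of the envelope-fitting idea, assuming synthetic pulse-peak amplitudes versus cuff pressure, a fourth-order polynomial, and the SBP threshold quoted above; none of the numbers are from the study.

```python
import numpy as np

# Synthetic oscillometric envelope: pulse-peak amplitude vs. cuff pressure.
cuff = np.array([160, 150, 140, 130, 120, 110, 100, 90, 80, 70.])
amp = np.array([0.2, 0.35, 0.55, 0.8, 1.0, 0.95, 0.85, 0.7, 0.5, 0.3])

coeffs = np.polyfit(cuff, amp, deg=4)           # fit the envelope
fine = np.linspace(cuff.min(), cuff.max(), 1000)
env = np.polyval(coeffs, fine)
env /= env.max()                                # normalise fitted envelope

map_est = fine[np.argmax(env)]                  # MAP at the envelope maximum
above = fine > map_est                          # SBP lies above MAP
sbp_est = fine[above][np.argmin(np.abs(env[above] - 0.5))]  # 0.5 threshold
print(f"MAP ~ {map_est:.0f} mmHg, SBP ~ {sbp_est:.0f} mmHg")
```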

  7. A Graphics System for Pole-Zero Map Analysis.

    ERIC Educational Resources Information Center

    Beyer, William Fred, III

    Computer scientists have developed an interactive, graphical display system for pole-zero map analysis. They designed it for use as an educational tool in teaching introductory courses in automatic control systems. The facilities allow the user to specify a control system and an input function in the form of a pole-zero map and then examine the…

  8. Improving Critical Thinking Using Web Based Argument Mapping Exercises with Automated Feedback

    ERIC Educational Resources Information Center

    Butchart, Sam; Forster, Daniella; Gold, Ian; Bigelow, John; Korb, Kevin; Oppy, Graham; Serrenti, Alexandra

    2009-01-01

    In this paper we describe a simple software system that allows students to practise their critical thinking skills by constructing argument maps of natural language arguments. As the students construct their maps of an argument, the system provides automatic, real time feedback on their progress. We outline the background and theoretical framework…

  9. Near-real-time simulation and internet-based delivery of forecast-flood inundation maps using two-dimensional hydraulic modeling--A pilot study for the Snoqualmie River, Washington

    USGS Publications Warehouse

    Jones, Joseph L.; Fulford, Janice M.; Voss, Frank D.

    2002-01-01

    A system of numerical hydraulic modeling, geographic information system processing, and Internet map serving, supported by new data sources and application automation, was developed that generates inundation maps for forecast floods in near real time and makes them available through the Internet. Forecasts for flooding are generated by the National Weather Service (NWS) River Forecast Center (RFC); these forecasts are retrieved automatically by the system and prepared for input to a hydraulic model. The model, TrimR2D, is a new, robust, two-dimensional model capable of simulating a wide variety of discharge hydrographs and relatively long stream reaches. TrimR2D was calibrated for a 28-kilometer reach of the Snoqualmie River in Washington State, and is used to estimate flood extent, depth, arrival time, and peak time for the RFC forecast. The results of the model are processed automatically by a Geographic Information System (GIS) into maps of flood extent, depth, and arrival and peak times. These maps are subsequently processed into formats acceptable to an Internet map server (IMS). The IMS application is a user-friendly interface for accessing the maps over the Internet; it allows users to select what information they wish to see presented and allows the authors to define scale-dependent availability of map layers and their symbology (appearance of map features). For example, the IMS presents a background of a digital USGS 1:100,000-scale quadrangle at smaller scales, and automatically switches to an ortho-rectified aerial photograph (a digital photograph that has camera angle and tilt distortions removed) at larger scales so viewers can see ground features that help them identify their area of interest more effectively. The user retains the option to select either background at any scale. Similar options are provided for both the map creator and the viewer for the various flood maps. This combination of a robust model, emerging IMS software, and application interface programming should allow the technology developed in the pilot study to be applied to other river systems where NWS forecasts are provided routinely.

  10. ShakeCast Manual

    USGS Publications Warehouse

    Lin, Kuo-Wan; Wald, David J.

    2008-01-01

    ShakeCast is a freely available, post-earthquake situational awareness application that automatically retrieves earthquake shaking data from ShakeMap, compares intensity measures against users? facilities, and generates potential damage assessment notifications, facility damage maps, and other Web-based products for emergency managers and responders.

  11. Using airborne LiDAR in geoarchaeological contexts: Assessment of an automatic tool for the detection and the morphometric analysis of grazing archaeological structures (French Massif Central).

    NASA Astrophysics Data System (ADS)

    Roussel, Erwan; Toumazet, Jean-Pierre; Florez, Marta; Vautier, Franck; Dousteyssier, Bertrand

    2014-05-01

    Airborne laser scanning (ALS) of archaeological regions of interest is nowadays a widely used and established method for accurate topographic and microtopographic survey. The penetration of the vegetation cover by the laser beam allows the reconstruction of reliable digital terrain models (DTMs) of forested areas where traditional prospection methods are inefficient, time-consuming and non-exhaustive. ALS technology provides the opportunity to discover new archaeological features hidden by vegetation and delivers a comprehensive survey of cultural heritage sites within their environmental context. However, the post-processing of LiDAR point clouds produces a huge quantity of data in which relevant archaeological features are not easily detectable with common visualizing and analysing tools. There is thus an urgent need to automate structure detection and morphometric extraction techniques, especially for the "archaeological desert" in densely forested areas. This presentation deals with the development of automatic detection procedures applied to archaeological structures located in the French Massif Central, in the western forested part of the Puy-de-Dôme volcano between 950 and 1100 m a.s.l. These previously unknown archaeological sites were discovered by the March 2011 ALS mission and display a high density of subcircular depressions with corridor access. The spatial organization of these depressions varies from isolated to aggregated or aligned features. Functionally, they appear to be former grazing constructions built from the medieval to the modern period. Similar grazing structures are known in other locations of the French Massif Central (Sancy, Artense, Cézallier) where the ground is vegetation-free. In order to develop a reliable process for the automatic detection and mapping of these archaeological structures, a learning zone was delineated within the ALS-surveyed area. The grazing features were mapped and typical morphometric attributes were calculated using two methods: (i) mapping of the archaeological structures by a human operator using common visualisation tools (DTM, multi-direction hillshading and local relief models) within a GIS environment; (ii) automatic detection and mapping performed by a recognition algorithm based on a user-defined geometric pattern of the grazing structures. The efficiency of the automatic tool was assessed by comparing the number of structures detected and the morphometric attributes calculated by the two methods. Our results indicate that the algorithm is efficient for the detection and location of grazing structures. Concerning the morphometric results, a discrepancy remains between the automatic and expert calculations, due to both the expert's mapping choices and the algorithm calibration.

  12. 30 CFR 75.1103-5 - Automatic fire warning devices; actions and response.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... level reaches 10 parts per million above the established ambient level at any sensor location, automatic fire sensor and warning device systems shall provide an effective warning signal at the following... endangered and (ii) A map or schematic that shows the locations of sensors, and the intended air flow...

  13. Comparison of Landsat-8, ASTER and Sentinel 1 satellite remote sensing data in automatic lineaments extraction: A case study of Sidi Flah-Bouskour inlier, Moroccan Anti Atlas

    NASA Astrophysics Data System (ADS)

    Adiri, Zakaria; El Harti, Abderrazak; Jellouli, Amine; Lhissou, Rachid; Maacha, Lhou; Azmi, Mohamed; Zouhair, Mohamed; Bachaoui, El Mostafa

    2017-12-01

    Lineament mapping plays an important role in many fields, including geology, hydrogeology and topography. With remote sensing techniques, lineaments can be identified more reliably thanks to strong advances in the data and methods used, going beyond the usual classical procedures and achieving more precise results. The aim of this work is to compare ASTER, Landsat-8 and Sentinel 1 data in automatic lineament extraction. In addition to the image data, the approach uses the pre-existing geological map, a Digital Elevation Model (DEM) and ground truth. Through a fully automatic approach consisting of a combination of an edge detection algorithm and a line-linking algorithm, we found the optimal parameters for automatic lineament extraction in the study area. The comparison and validation of the results showed that the Sentinel 1 data are the most effective at delineating lineaments, indicating that radar data outperform optical data in this kind of study.
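
    The abstract does not name the specific edge-detection and line-linking algorithms used; as a stand-in illustration of that two-stage structure, the sketch below chains OpenCV's Canny detector with the probabilistic Hough transform, whose parameters would have to be tuned per scene, much as the authors tuned theirs for the study area:

      import cv2
      import numpy as np

      # Hypothetical single-band input (e.g., a principal component or SAR backscatter image)
      img = cv2.imread('scene_band.tif', cv2.IMREAD_GRAYSCALE)

      # Stage 1: edge detection
      edges = cv2.Canny(img, threshold1=50, threshold2=150)

      # Stage 2: line linking -- group edge pixels into straight segments
      segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                                 minLineLength=40, maxLineGap=5)

      # Each segment is a candidate lineament in pixel coordinates; georeferencing
      # and screening against the geological map and DEM would follow.
      if segments is not None:
          for x1, y1, x2, y2 in segments[:, 0]:
              print((x1, y1), '->', (x2, y2))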

  14. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    NASA Astrophysics Data System (ADS)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Recently, aerial photography with unmanned aerial vehicle (UAV) systems has relied on remote control through a ground control system using a radio frequency (RF) modem with a bandwidth of about 430 MHz. As mentioned, however, this RF-modem approach is limited in long-distance communication. We therefore developed a UAV communication module that uses a smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi connectivity, and carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device carried by the drone over the areas that need image capture, together with software for operating the smart camera and managing it; it comprises automatic shooting driven by the smart camera's sensors and a shooting catalog that manages the captured images and their metadata. The UAV imagery was processed with Open Drone Map. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open-source tools used included Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  15. Technical Note: A 3-D rendering algorithm for electromechanical wave imaging of a beating heart.

    PubMed

    Nauleau, Pierre; Melki, Lea; Wan, Elaine; Konofagou, Elisa

    2017-09-01

    Arrhythmias can be treated by ablating the heart tissue in the regions of abnormal contraction. The current clinical standard provides electroanatomic 3-D maps to visualize the electrical activation and locate the arrhythmogenic sources. However, the procedure is time-consuming and invasive. Electromechanical wave imaging is an ultrasound-based noninvasive technique that can provide 2-D maps of the electromechanical activation of the heart. In order to fully visualize the complex 3-D pattern of activation, several 2-D views are acquired and processed separately. They are then manually registered with a 3-D rendering software to generate a pseudo-3-D map. However, this last step is operator-dependent and time-consuming. This paper presents a method to generate a full 3-D map of the electromechanical activation using multiple 2-D images. Two canine models were considered to illustrate the method: one in normal sinus rhythm and one paced from the lateral region of the heart. Four standard echographic views of each canine heart were acquired. Electromechanical wave imaging was applied to generate four 2-D activation maps of the left ventricle. The radial positions and activation timings of the walls were automatically extracted from those maps. In each slice, from apex to base, these values were interpolated around the circumference to generate a full 3-D map. In both cases, a 3-D activation map and a cine-loop of the propagation of the electromechanical wave were automatically generated. The 3-D map showing the electromechanical activation timings overlaid on realistic anatomy assists with the visualization of the sources of earlier activation (which are potential arrhythmogenic sources). The earliest sources of activation corresponded to the expected ones: septum for the normal rhythm and lateral for the pacing case. The proposed technique provides, automatically, a 3-D electromechanical activation map with a realistic anatomy. This represents a step towards a noninvasive tool to efficiently localize arrhythmias in 3-D. © 2017 American Association of Physicists in Medicine.

  16. A New Automatic Method of Urban Areas Mapping in East Asia from LANDSAT Data

    NASA Astrophysics Data System (ADS)

    XU, R.; Jia, G.

    2012-12-01

    Cities, as places where human activities are concentrated, account for a small percentage of global land cover but are frequently cited as chief causes of, and solutions to, climate, biogeochemical, and hydrological processes at local, regional, and global scales. Accompanying uncontrolled economic growth, urban sprawl has been attributed to the accelerating integration of East Asia into the world economy and has involved dramatic changes in urban form and land use. To understand the impact of urban extent on biogeophysical processes, reliable mapping of built-up areas is particularly essential in East Asian cities, which are characterized by smaller patches, greater fragmentation, and a lower fraction of natural cover within the urban landscape than in the West. Segmentation of urban land from other land-cover types in remote sensing imagery can be done by standard classification processes as well as by a logic rule based on spectral indices and their derivations. Efforts to establish such a logic rule that requires no fixed threshold, so that mapping can run automatically, are highly worthwhile. Existing automatic methods are reviewed, and a new approach is introduced, including the calculation of a new index and an improved logic rule. Existing automatic methods and the proposed approach are then compared in a common context. Afterwards, the proposed approach is tested separately on cities of large, medium, and small scale in East Asia selected from different LANDSAT images. The results are promising, as the approach can efficiently segment urban areas, even in the more complex cities of East Asia. Key words: urban extraction; automatic method; logic rule; LANDSAT images; East Asia
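
    The abstract does not spell out the new index or the improved rule; as an illustration of what a threshold-free logic rule looks like, the sketch below uses the classic comparison of built-up and vegetation indices (built-up where NDBI exceeds NDVI), with a simple water test added, on LANDSAT reflectance bands:

      import numpy as np

      def builtup_mask(green, red, nir, swir):
          # Spectral indices; the small epsilon guards against zero denominators.
          ndvi = (nir - red) / (nir + red + 1e-9)
          ndbi = (swir - nir) / (swir + nir + 1e-9)
          ndwi = (green - nir) / (green + nir + 1e-9)
          # Logic rule with no fixed threshold: built-up response dominates
          # vegetation response, and the pixel is not water.
          return (ndbi > ndvi) & (ndwi < 0)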

  17. [The automatic iris map overlap technology in computer-aided iridiagnosis].

    PubMed

    He, Jia-feng; Ye, Hu-nian; Ye, Miao-yuan

    2002-11-01

    In this paper, iridology and computer-aided iridiagnosis technologies are briefly introduced, and an extraction method for the collarette contour is investigated. The iris map can then be overlapped on the original iris image based on the extracted collarette contour. Research on collarette contour extraction and iris map overlap is of great importance to computer-aided iridiagnosis technologies.

  18. An effective self-assessment based on concept map extraction from test-sheet for personalized learning

    NASA Astrophysics Data System (ADS)

    Liew, Keng-Hou; Lin, Yu-Shih; Chang, Yi-Chun; Chu, Chih-Ping

    2013-12-01

    Examination is a traditional way to assess learners' learning status, progress and performance after a learning activity. Beyond the test grade, a test sheet hides implicit information such as the test concepts, their relationships, importance, and prerequisites. This implicit information can be extracted to construct a concept map, on the premises that (1) test concepts covered in the same question have strong relationships, and (2) questions on the same test sheet involve related test concepts. Concept maps have been successfully employed in many studies to help instructors and learners organize relationships among concepts. However, concept map construction depends on experts, who need considerable effort and time to organize the domain knowledge. In addition, previous research on automatic concept map construction has considered all learners of a class together and has not addressed personalized learning. To cope with this problem, this paper proposes a new approach to automatically extract and construct a concept map from the implicit information in a test sheet. The proposed approach can also help learners with self-assessment and self-diagnosis. Finally, an example is given to illustrate the effectiveness of the proposed approach.
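
    A minimal sketch of the extraction idea: with a question-by-concept incidence matrix, premise (1) turns co-occurrence within a question into weighted concept-concept links (the concept names below are illustrative):

      import numpy as np

      # Rows = questions, columns = test concepts; 1 if the question covers the concept.
      Q = np.array([[1, 1, 0, 0],
                    [0, 1, 1, 0],
                    [0, 0, 1, 1]])

      # Concept-concept co-occurrence: edge weight counts shared questions.
      W = Q.T @ Q
      np.fill_diagonal(W, 0)

      concepts = ['fractions', 'ratios', 'percentages', 'interest']
      for i, j in zip(*np.nonzero(np.triu(W))):
          print(f'{concepts[i]} -- {concepts[j]} (weight {W[i, j]})')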

  19. Automating the selection of standard parallels for conic map projections

    NASA Astrophysics Data System (ADS)

    Šavrič, Bojan; Jenny, Bernhard

    2016-05-01

    Conic map projections are appropriate for mapping regions at medium and large scales with east-west extents at intermediate latitudes, because they show the mapped area with less distortion than other projections. In order to minimize the distortion of the mapped area, the two standard parallels of conic projections need to be selected carefully. Rules of thumb exist for placing the standard parallels based on the width-to-height ratio of the map. These rules of thumb are simple to apply, but do not result in maps with minimum distortion. More sophisticated methods also exist that determine standard parallels such that distortion in the mapped area is minimized, but they are computationally expensive and cannot be used for real-time web mapping and GIS applications where the projection is adjusted automatically to the displayed area. This article presents a polynomial model that quickly provides the standard parallels for the three most common conic map projections: the Albers equal-area, the Lambert conformal, and the equidistant conic projection. The model defines the standard parallels with polynomial expressions based on the spatial extent of the mapped area, which is defined by the length of the mapped central meridian segment, the central latitude of the displayed area, and the width-to-height ratio of the map. The polynomial model was derived from 3825 maps, each with a different spatial extent and with computationally determined standard parallels that minimize the mean scale distortion index. The resulting model is computationally simple and can be used for the automatic selection of the standard parallels of conic map projections in GIS software and web mapping applications.
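
    For contrast, a common rule of thumb places the standard parallels one sixth of the latitude range inside the top and bottom edges of the map; the article's model replaces this with polynomials in the spatial-extent parameters. In the sketch below the sixth rule is exact, while the second function only shows the schematic form of such a model: its coefficients are placeholders, not the published values:

      def standard_parallels_sixth_rule(lat_min, lat_max):
          # Classic rule of thumb: parallels 1/6 of the latitude span inside each edge.
          span = lat_max - lat_min
          return lat_min + span / 6.0, lat_max - span / 6.0

      def standard_parallels_polynomial(lat_center, meridian_length, ratio, coeffs):
          # Schematic polynomial model: the parallels' offset from the central
          # latitude is a polynomial in the extent parameters (coeffs hypothetical).
          terms = [1.0, lat_center, meridian_length, ratio, lat_center * ratio]
          offset = sum(c * t for c, t in zip(coeffs, terms))
          return lat_center - offset, lat_center + offset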

  20. Automatic Classification of Protein Structure Using the Maximum Contact Map Overlap Metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andonov, Rumen; Djidjev, Hristo Nikolov; Klau, Gunnar W.

    In this paper, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein representations. Having a metric in that space allows one to avoid pairwise comparisons on the entire database and, thus, to significantly accelerate exploring the protein space compared to no-metric spaces. We show on a gold standard superfamily classification benchmark set of 6759 proteins that our exact k-nearest neighbor (k-NN) scheme classifies up to 224 out of 236 queries correctly, and on a larger, extended version of the benchmark with 60,850 additional structures, up to 1361 out of 1369 queries. Our k-NN classification thus provides a promising approach for the automatic classification of protein structures based on flexible contact map overlap alignments.
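
    The practical payoff of having a true metric is triangle-inequality pruning: with distances to a reference structure precomputed, |d(q, pivot) - d(x, pivot)| lower-bounds d(q, x), so most expensive max-CMO evaluations can be skipped. A schematic sketch, with a hypothetical dist function and a database of (structure, precomputed-distance-to-pivot) pairs:

      import heapq

      def knn_with_pruning(query, pivot, database, dist, k=1):
          # database: iterable of (structure, precomputed_distance_to_pivot)
          d_qp = dist(query, pivot)
          heap = []                                  # max-heap of best k via negation
          for x, d_xp in database:
              lower = abs(d_qp - d_xp)               # triangle-inequality lower bound
              if len(heap) == k and lower >= -heap[0][0]:
                  continue                           # pruned: cannot enter the top k
              d = dist(query, x)                     # exact, expensive metric evaluation
              heapq.heappush(heap, (-d, id(x), x))   # id() breaks ties in comparisons
              if len(heap) > k:
                  heapq.heappop(heap)
          return sorted(((-nd, x) for nd, _, x in heap), key=lambda t: t[0])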

  1. Automatic mapping of event landslides at basin scale in Taiwan using a Monte Carlo approach and synthetic land cover fingerprints

    NASA Astrophysics Data System (ADS)

    Mondini, Alessandro C.; Chang, Kang-Tsung; Chiang, Shou-Hao; Schlögel, Romy; Notarnicola, Claudia; Saito, Hitoshi

    2017-12-01

    We propose a framework to systematically generate event landslide inventory maps from satellite images in southern Taiwan, where landslides are frequent and abundant. The spectral information is used to assess each pixel's land cover class membership probability through a Maximum Likelihood classifier trained with randomly generated synthetic land cover spectral fingerprints, which are obtained from an independent training image dataset. Pixels are classified as landslides when the calculated landslide class membership probability, weighted by a susceptibility model, is higher than the membership probabilities of the other classes. We generated synthetic fingerprints from two FORMOSAT-2 images acquired in 2009 and tested the procedure on two other images, one from 2005 and the other from 2009. We also obtained two landslide maps through manual interpretation. The agreement between the two sets of inventories is given by Cohen's kappa coefficients of 0.62 and 0.64, respectively. This procedure can now classify a new FORMOSAT-2 image automatically, facilitating the production of landslide inventory maps.
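
    A minimal sketch of the classification rule as described: Gaussian maximum likelihood per land-cover class, with the landslide class's membership weighted by a susceptibility map before the arg-max. Class statistics would come from the synthetic fingerprints; all names and shapes here are illustrative:

      import numpy as np
      from scipy.stats import multivariate_normal

      def classify_pixels(X, class_stats, weights):
          # X: (n_pixels, n_bands); class_stats: name -> (mean, covariance);
          # weights: name -> scalar or per-pixel array (the susceptibility
          # model for the 'landslide' class, uniform for the others).
          names = list(class_stats)
          scores = np.vstack([
              multivariate_normal(mu, cov).pdf(X) * weights[name]
              for name, (mu, cov) in class_stats.items()])
          return np.array(names)[scores.argmax(axis=0)]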

  2. Automatic Classification of Protein Structure Using the Maximum Contact Map Overlap Metric

    DOE PAGES

    Andonov, Rumen; Djidjev, Hristo Nikolov; Klau, Gunnar W.; ...

    2015-10-09

    In this paper, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein representations. Having a metric in that space allows one to avoid pairwise comparisons on the entire database and, thus, to significantly accelerate exploring the protein space compared to no-metric spaces. We show on a gold standard superfamily classification benchmark set of 6759 proteins that our exact k-nearest neighbor (k-NN) scheme classifies up to 224 out of 236 queries correctly, and on a larger, extended version of the benchmark with 60,850 additional structures, up to 1361 out of 1369 queries. Our k-NN classification thus provides a promising approach for the automatic classification of protein structures based on flexible contact map overlap alignments.

  3. Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm

    NASA Astrophysics Data System (ADS)

    Foroutan, M.; Zimbelman, J. R.

    2017-09-01

    Increased application of high-resolution spatial data, such as high-resolution satellite or Unmanned Aerial Vehicle (UAV) images of Earth and High Resolution Imaging Science Experiment (HiRISE) images of Mars, makes it necessary to develop automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation with repeated images in environmental management studies, such as studies of climate-related change, along with increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints from satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high-resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis and to improve the accuracy of the results. About 98% overall accuracy and a 0.001 quantization error in the recognition of small linear-trending bedforms demonstrate a promising framework.
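
    For readers unfamiliar with the algorithm, a compact NumPy version of SOM training (competitive learning with a shrinking Gaussian neighbourhood); the grid size and decay schedules are illustrative, and real use would feed per-pixel feature vectors:

      import numpy as np

      def train_som(data, grid=(10, 10), n_iter=5000, lr0=0.5, sigma0=3.0, seed=0):
          rng = np.random.default_rng(seed)
          h, w = grid
          weights = rng.random((h, w, data.shape[1]))
          yy, xx = np.mgrid[0:h, 0:w]
          for t in range(n_iter):
              x = data[rng.integers(len(data))]
              # Best-matching unit: node whose weight vector is closest to the sample
              d2 = ((weights - x) ** 2).sum(axis=2)
              bi, bj = np.unravel_index(d2.argmin(), d2.shape)
              # Learning rate and neighbourhood radius decay over time
              frac = 1.0 - t / n_iter
              lr, sigma = lr0 * frac, sigma0 * frac + 0.5
              # Pull the BMU and its grid neighbours toward the sample
              g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
              weights += lr * g[..., None] * (x - weights)
          return weights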

  4. Proceeding of the ACM/IEEE-CS Joint Conference on Digital Libraries (1st, Roanoke, Virginia, June 24-28, 2001).

    ERIC Educational Resources Information Center

    Association for Computing Machinery, New York, NY.

    Papers in this Proceedings of the ACM/IEEE-CS Joint Conference on Digital Libraries (Roanoke, Virginia, June 24-28, 2001) discuss: automatic genre analysis; text categorization; automated name authority control; automatic event generation; linked active content; designing e-books for legal research; metadata harvesting; mapping the…

  5. Cross-terminology mapping challenges: a demonstration using medication terminological systems.

    PubMed

    Saitwal, Himali; Qing, David; Jones, Stephen; Bernstam, Elmer V; Chute, Christopher G; Johnson, Todd R

    2012-08-01

    Standardized terminological systems for biomedical information have provided considerable benefits to biomedical applications and research. However, practical use of this information often requires mapping across terminological systems, a complex and time-consuming process. This paper demonstrates the complexity and challenges of mapping across terminological systems in the context of medication information. It provides a review of medication terminological systems and their linkages, then describes a case study in which we mapped proprietary medication codes from an electronic health record to SNOMED CT and the UMLS Metathesaurus. The goal was to create a polyhierarchical classification system for querying an i2b2 clinical data warehouse. We found that three methods were required to accurately map the majority of actively prescribed medications. Only 62.5% of source medication codes could be mapped automatically. The remaining codes were mapped using a combination of semi-automated string comparison with expert selection, and a completely manual approach. Compound drugs were especially difficult to map: only 7.5% could be mapped using the automatic method. General challenges to mapping across terminological systems include (1) the availability of up-to-date information to assess the suitability of a given terminological system for a particular use case, and to assess the quality and completeness of cross-terminology links; (2) the difficulty of correctly using complex, rapidly evolving, modern terminologies; (3) the time and effort required to complete and evaluate the mapping; (4) the need to address differences in granularity between the source and target terminologies; and (5) the need to continuously update the mapping as terminological systems evolve. Copyright © 2012 Elsevier Inc. All rights reserved.
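
    The semi-automated string-comparison step can be sketched with the Python standard library alone; the candidate terms and similarity cutoff below are illustrative, and the resulting short list would go to an expert for final selection:

      from difflib import SequenceMatcher

      def suggest_mappings(source_name, target_terms, top_n=5, cutoff=0.6):
          # Rank candidate target terms by string similarity to the source code's name.
          scored = [(SequenceMatcher(None, source_name.lower(), t.lower()).ratio(), t)
                    for t in target_terms]
          return [t for score, t in sorted(scored, reverse=True) if score >= cutoff][:top_n]

      print(suggest_mappings('acetaminophen 325mg tab',
                             ['Acetaminophen 325 MG Oral Tablet',
                              'Aspirin 325 MG Oral Tablet',
                              'Acetaminophen 500 MG Oral Tablet']))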

  6. Cross-terminology mapping challenges: A demonstration using medication terminological systems

    PubMed Central

    Saitwal, Himali; Qing, David; Jones, Stephen; Bernstam, Elmer; Chute, Christopher G.; Johnson, Todd R.

    2015-01-01

    Standardized terminological systems for biomedical information have provided considerable benefits to biomedical applications and research. However, practical use of this information often requires mapping across terminological systems—a complex and time-consuming process. This paper demonstrates the complexity and challenges of mapping across terminological systems in the context of medication information. It provides a review of medication terminological systems and their linkages, then describes a case study in which we mapped proprietary medication codes from an electronic health record to SNOMED-CT and the UMLS Metathesaurus. The goal was to create a polyhierarchical classification system for querying an i2b2 clinical data warehouse. We found that three methods were required to accurately map the majority of actively prescribed medications. Only 62.5% of source medication codes could be mapped automatically. The remaining codes were mapped using a combination of semi-automated string comparison with expert selection, and a completely manual approach. Compound drugs were especially difficult to map: only 7.5% could be mapped using the automatic method. General challenges to mapping across terminological systems include (1) the availability of up-to-date information to assess the suitability of a given terminological system for a particular use case, and to assess the quality and completeness of cross-terminology links; (2) the difficulty of correctly using complex, rapidly evolving, modern terminologies; (3) the time and effort required to complete and evaluate the mapping; (4) the need to address differences in granularity between the source and target terminologies; and (5) the need to continuously update the mapping as terminological systems evolve. PMID:22750536

  7. Recovery of chemical Estimates by Field Inhomogeneity Neighborhood Error Detection (REFINED): Fat/Water Separation at 7T

    PubMed Central

    Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.

    2012-01-01

    Purpose To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815
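
    One of the two error-detection devices, clustering on field-map intensities, can be sketched as follows; the cluster count and outlier criterion are illustrative choices, not the paper's calibration:

      import numpy as np
      from sklearn.cluster import KMeans

      def flag_error_regions(field_map, n_clusters=3):
          # Cluster the B0 field-map values and flag clusters whose centers sit
          # far from the dominant (presumably correct) cluster.
          vals = field_map.reshape(-1, 1)
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(vals)
          centers = km.cluster_centers_.ravel()
          main = np.bincount(km.labels_).argmax()
          suspects = [c for c in range(n_clusters)
                      if abs(centers[c] - centers[main]) > 2 * field_map.std()]
          # Boolean mask of regions to reinitialise before re-solving the field map
          return np.isin(km.labels_.reshape(field_map.shape), suspects)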

  8. Recovery of chemical estimates by field inhomogeneity neighborhood error detection (REFINED): fat/water separation at 7 tesla.

    PubMed

    Narayan, Sreenath; Kalhan, Satish C; Wilson, David L

    2013-05-01

    To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.

  9. Collaboration spotting for dental science.

    PubMed

    Leonardi, E; Agocs, A; Fragkiskos, S; Kasfikis, N; Le Goff, J M; Cristalli, M P; Luzzi, V; Polimeni, A

    2014-10-06

    The goal of the Collaboration Spotting project is to create an automatic system to collect information about publications and patents related to a given technology, to identify the key players involved, and to highlight collaborations and related technologies. The collected information can be visualized in a web browser as interactive graphical maps showing in an intuitive way the players and their collaborations (Sociogram) and the relations among the technologies (Technogram). We propose to use the system to study technologies related to Dental Science. In order to create a Sociogram, we create a logical filter based on a set of keywords related to the technology under study. This filter is used to extract a list of publications from the Web of Science™ database. The list is validated by an expert in the technology and sent to CERN, where it is inserted in the Collaboration Spotting database. Here, an automatic software system uses the data to generate the final maps. We studied a set of recent technologies related to bone regeneration procedures of oro-maxillo-facial critical size defects, namely the use of Porous HydroxyApatite (HA) as a bone substitute alone (bone graft) or as a tridimensional support (scaffold) for insemination and differentiation ex vivo of Mesenchymal Stem Cells. We produced the Sociograms for these technologies and the resulting maps are now accessible on-line. The Collaboration Spotting system allows the automatic creation of interactive maps to show the current and historical state of research on a specific technology. These maps are an ideal tool both for researchers who want to assess the state-of-the-art in a given technology, and for research organizations who want to evaluate their contribution to the technological development in a given field. We demonstrated that the system can be used for Dental Science and produced the maps for an initial set of technologies in this field. We now plan to enlarge the set of mapped technologies in order to make the Collaboration Spotting system a useful reference tool for Dental Science research.

  10. Collaboration Spotting for oral medicine.

    PubMed

    Leonardi, E; Agocs, A; Fragkiskos, S; Kasfikis, N; Le Goff, J M; Cristalli, M P; Luzzi, V; Polimeni, A

    2014-09-01

    The goal of the Collaboration Spotting project is to create an automatic system to collect information about publications and patents related to a given technology, to identify the key players involved, and to highlight collaborations and related technologies. The collected information can be visualized in a web browser as interactive graphical maps showing in an intuitive way the players and their collaborations (Sociogram) and the relations among the technologies (Technogram). We propose to use the system to study technologies related to oral medicine. In order to create a sociogram, we create a logical filter based on a set of keywords related to the technology under study. This filter is used to extract a list of publications from the Web of Science™ database. The list is validated by an expert in the technology and sent to CERN, where it is inserted in the Collaboration Spotting database. Here, an automatic software system uses the data to generate the final maps. We studied a set of recent technologies related to bone regeneration procedures of oro-maxillo-facial critical size defects, namely the use of porous hydroxyapatite (HA) as a bone substitute alone (bone graft) or as a tridimensional support (scaffold) for insemination and differentiation ex vivo of mesenchymal stem cells. We produced the sociograms for these technologies and the resulting maps are now accessible on-line. The Collaboration Spotting system allows the automatic creation of interactive maps to show the current and historical state of research on a specific technology. These maps are an ideal tool both for researchers who want to assess the state-of-the-art in a given technology, and for research organizations who want to evaluate their contribution to the technological development in a given field. We demonstrated that the system can be used in oral medicine and produced the maps for an initial set of technologies in this field. We now plan to enlarge the set of mapped technologies in order to make the Collaboration Spotting system a useful reference tool for oral medicine research.

  11. Real-time Shakemap implementation in Austria

    NASA Astrophysics Data System (ADS)

    Weginger, Stefan; Jia, Yan; Papi Isaba, Maria; Horn, Nikolaus

    2017-04-01

    ShakeMaps provide near-real-time maps of ground motion and shaking intensity following significant earthquakes. They are generated automatically within a few minutes of an earthquake's occurrence. We tested and integrated the USGS ShakeMap 4.0 (experimental, Python-based code) into the Antelope real-time system, with locally modified GMPEs and site effects reflecting the conditions in Austria. The ShakeMaps are provided in terms of intensity, PGA, PGV and PSA. Future work will present ShakeMap contour lines and ground-motion parameters through interactive maps, with data exchange over web services.

  12. Landslide Inventory Mapping from Bitemporal 10 m SENTINEL-2 Images Using Change Detection Based Markov Random Field

    NASA Astrophysics Data System (ADS)

    Qin, Y.; Lu, P.; Li, Z.

    2018-04-01

    Landslide inventory mapping is essential for hazard assessment and mitigation. In most previous studies, landslide mapping was achieved by visual interpretation of aerial photos and remote sensing images. However, such a method is labor-intensive and time-consuming, especially over large areas. Although a number of semi-automatic landslide mapping methods have been proposed over the past few years, limitations remain in terms of their applicability across different study areas and data, and there is large room for improvement in accuracy and degree of automation. For these reasons, we developed a change detection-based Markov Random Field (CDMRF) method for landslide inventory mapping. The proposed method mainly includes two steps: 1) change detection-based multi-thresholding for training sample generation and 2) MRF for landslide inventory mapping. Compared with previous methods, the proposed method has three advantages: 1) it combines multiple image difference techniques with a multi-threshold method to generate reliable training samples; 2) it takes the spectral characteristics of landslides into account; and 3) it is highly automatic, with little parameter tuning. The proposed method was applied to regional landslide mapping from 10 m Sentinel-2 images in Western China. Results corroborated the effectiveness and applicability of the proposed method, especially its capability for rapid landslide mapping. Some directions for future research are offered. To our knowledge, this study is the first attempt to map landslides from free and medium-resolution satellite (i.e., Sentinel-2) images in China.
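
    A sketch of step 1, conservative training-sample generation by differencing and double thresholding; the paper combines several difference images before the MRF labelling, whereas one difference image and illustrative thresholds are shown here:

      import numpy as np

      def training_samples(nir_pre, nir_post, k=1.5):
          # Difference a vegetation-sensitive band between the two acquisitions.
          d = nir_post.astype(float) - nir_pre.astype(float)
          mu, sd = d.mean(), d.std()
          # Conservative thresholds so that only confident pixels become samples.
          landslide = d < mu - k * sd            # strong NIR drop: vegetation stripped
          stable = np.abs(d - mu) < 0.5 * sd     # confidently unchanged background
          return landslide, stable               # seed labels for the MRF step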

  13. Research on the Intensity Analysis and Result Visualization of Construction Land in Urban Planning

    NASA Astrophysics Data System (ADS)

    Cui, J.; Dong, B.; Li, J.; Li, L.

    2017-09-01

    As a fundamental task in urban planning, the intensity analysis of construction land involves many repetitive data-processing steps that are prone to errors or loss of data precision, and current urban planning lacks efficient methods and tools for visualizing the analysis results. In this research, a portable tool was developed using the Model Builder technique embedded in ArcGIS to provide automatic data processing and rapid result visualization for this work. A series of basic modules provided by ArcGIS are linked together to form a complete data-processing chain in the tool. Once the required data are imported, the analysis results and related maps and graphs, including the intensity values, the zoning map, the skyline analysis map, etc., are produced automatically. Finally, the tool is installation-free and can be dispatched quickly between planning teams.

  14. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

    This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially pictureprints, resembling low resolution photographs, were generated in each of the four ERTS-1 channels. Data found within rectangular training fields was then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data was classified with a high degree of accuracy (greater than 95%), and progress is being made towards identifying the mapped spectral classes.

  15. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

    This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially pictureprints, resembling low resolution photographs, were generated in each of the four ERTS-1 channels. Data found within rectangular training fields was then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data was classified with a high degree of accuracy (greater than 95 percent), and progress is being made towards identifying the mapped spectral classes.

  16. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal.

    PubMed

    Baker, Ed

    2013-01-01

    Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated fields in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, and it is therefore possible to upload hundreds of images easily, with automatic metadata (EXIF, XMP and IPTC) extraction and mapping.
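
    A sketch of the extraction side using Pillow; the form-field names in the mapping are hypothetical stand-ins for a user's customised configuration:

      from PIL import Image, ExifTags

      def extract_exif(path):
          # Read embedded EXIF and translate numeric tag ids to readable names.
          exif = Image.open(path).getexif()
          return {ExifTags.TAGS.get(tag_id, tag_id): value
                  for tag_id, value in exif.items()}

      # Customised mapping from metadata keys to (hypothetical) form fields
      FIELD_MAP = {'Artist': 'photographer', 'Copyright': 'licence',
                   'DateTime': 'capture_date'}

      meta = extract_exif('specimen_0001.jpg')
      record = {field: meta[tag] for tag, field in FIELD_MAP.items() if tag in meta}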

  17. The NavTrax fleet management system

    NASA Astrophysics Data System (ADS)

    McLellan, James F.; Krakiwsky, Edward J.; Schleppe, John B.; Knapp, Paul L.

    The NavTrax System, a dispatch-type automatic vehicle location and navigation system, is discussed. Attention is given to its positioning, communication, digital mapping, and dispatch center components. The positioning module is a robust GPS (Global Positioning System)-based system integrated with dead reckoning devices by a decentralized-federated filter, making the module fault tolerant. The error behavior and characteristics of GPS, rate gyro, compass, and odometer sensors are discussed. The communications module, as presently configured, utilizes UHF radio technology, and plans are being made to employ a digital cellular telephone system. Polling and automatic smart vehicle reporting are also discussed. The digital mapping component is an intelligent digital single line road network database stored in vector form with full connectivity and address ranges. A limited form of map matching is performed for the purposes of positioning, but its main purpose is to define location once position is determined.

  18. Edge map analysis in chest X-rays for automatic pulmonary abnormality screening.

    PubMed

    Santosh, K C; Vajda, Szilárd; Antani, Sameer; Thoma, George R

    2016-09-01

    Our particular motivator is the need for screening HIV+ populations in resource-constrained regions for evidence of tuberculosis, using posteroanterior chest radiographs (CXRs). The proposed method is motivated by the observation that abnormal CXRs tend to exhibit corrupted and/or deformed thoracic edge maps. We study histograms of thoracic edges for all possible orientations of gradients in the range [Formula: see text] at different numbers of bins and different pyramid levels, using five different region-of-interest selections. We used two CXR benchmark collections made available by the U.S. National Library of Medicine and achieved a maximum abnormality detection accuracy (ACC) of 86.36% and area under the ROC curve (AUC) of 0.93 at 1 s per image, on average. We have presented an automatic method for screening pulmonary abnormalities using the thoracic edge map in CXR images. The proposed method outperforms previously reported state-of-the-art results.
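
    The core descriptor can be sketched in a few lines: a magnitude-weighted histogram of gradient orientations restricted to edge pixels (pyramid levels and region-of-interest selection omitted; the bin count is illustrative):

      import numpy as np

      def edge_orientation_histogram(edge_map, grad_x, grad_y, bins=32):
          theta = np.arctan2(grad_y, grad_x)          # orientation in [-pi, pi]
          mag = np.hypot(grad_x, grad_y)
          sel = edge_map > 0                          # restrict to thoracic edge pixels
          hist, _ = np.histogram(theta[sel], bins=bins, range=(-np.pi, np.pi),
                                 weights=mag[sel])
          return hist / (hist.sum() + 1e-9)           # normalised feature vector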

  19. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal

    PubMed Central

    2013-01-01

    Abstract Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated fields in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, and it is therefore possible to upload hundreds of images easily, with automatic metadata (EXIF, XMP and IPTC) extraction and mapping. PMID:24723768

  20. Automatic face recognition in HDR imaging

    NASA Astrophysics Data System (ADS)

    Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.

    2014-05-01

    The growing popularity of new High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone mapping methods for appropriate visualization on conventional, inexpensive LDR displays. Different visualization methods can produce completely different renderings, raising several privacy-intrusion issues: some result in perceptual recognition of the individuals, while others do not reveal any identity. Although perceptual recognition might be possible, a natural question is how computer-based recognition performs on tone-mapped images. In this paper, we present a study in which automatic face recognition using sparse representation is tested on images resulting from common tone mapping operators applied to HDR images, and its ability to recognize face identity is described. Furthermore, typical LDR images are used for training the face recognition.

  1. DTM-based automatic mapping and fractal clustering of putative mud volcanoes in Arabia Terra craters

    NASA Astrophysics Data System (ADS)

    Pozzobon, R. P.; Mazzarini, F. M.; Massironi, M. M.; Cremonese, G. C.; Rossi, A. P. R.; Pondrelli, M. P.; Marinangeli, L. M.

    2017-09-01

    Arabia Terra is a region of Mars where past water activity manifests at the surface and in the subsurface. To date, several landforms associated with this activity have been recognized and mapped, directly informing models of fluid circulation. In particular, within several craters such as Firsoff and an unnamed crater to its south, putative mud volcanoes have been described by several authors. Numerous mounds (from 30 m in diameter in the case of monogenic cones up to 300-400 m in the case of coalescing mounds) present an apical vent-like depression, resembling the subaerial mud volcanoes and gryphons of Azerbaijan. Until now, landform analysis through the topographic position index and topography-based curvatures had not been attempted. We present a landform classification method suitable for automatic mound mapping. The resulting spatial distribution is then studied in terms of self-similar clustering.
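
    The topographic position index at the heart of such a classification is simply the elevation anomaly relative to a moving-window mean; the window radius and class thresholds below are illustrative:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def tpi(dem, radius=10):
          # Elevation minus mean elevation within a (2r+1)-pixel square window.
          return dem - uniform_filter(dem.astype(float), size=2 * radius + 1)

      def landform_classes(dem, radius=10, t=1.0):
          x = tpi(dem, radius)
          # Strong local lows suggest vent-like depressions, highs the mound bodies.
          return np.where(x < -t, 'vent', np.where(x > t, 'mound', 'flat'))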

  2. Chocolate equals stop. Chocolate-specific inhibition training reduces chocolate intake and go associations with chocolate.

    PubMed

    Houben, Katrijn; Jansen, Anita

    2015-04-01

    Earlier research has demonstrated that food-specific inhibition training wherein food cues are repeatedly and consistently mapped onto stop signals decreases food intake and bodyweight. The mechanisms underlying these training effects, however, remain unclear. It has been suggested that consistently pairing stimuli with stop signals induces automatic stop associations with those stimuli, thereby facilitating automatic, bottom-up inhibition. This study examined this hypothesis with respect to food-inhibition training. Participants performed a training that consistently paired chocolate with no go cues (chocolate/no-go) or with go cues (chocolate/go). Following training, we measured automatic associations between chocolate and stop versus go, as well as food intake and desire to eat. As expected, food that was consistently mapped onto stopping was indeed more associated with stopping versus going afterwards. In replication of previous results, participants in the no-go condition also showed less desire to eat and reduced food intake relative to the go condition. Together these findings support the idea that food-specific inhibition training prompts the development of automatic inhibition associations, which subsequently facilitate inhibitory control over unwanted food-related urges. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Recognition of surface lithologic and topographic patterns in southwest Colorado with ADP techniques

    NASA Technical Reports Server (NTRS)

    Melhorn, W. N.; Sinnock, S.

    1973-01-01

    Analysis of ERTS-1 multispectral data by automatic pattern recognition procedures is applicable to grappling with current and future resource stresses by providing a means for refining existing geologic maps. The procedures used in the current analysis already yield encouraging results toward the eventual machine recognition of extensive surface lithologic and topographic patterns. Automatic mapping of a series of hogbacks, strike valleys, and alluvial surfaces along the northwest flank of the San Juan Basin in Colorado can be obtained with minimal man-machine interaction. Determining the causes of separable spectral signatures depends on extensive correlation of micro- and macro-scale field-based ground truth observations and aircraft underflight data with the satellite data.

  4. Automatic Generation of Building Models with Levels of Detail 1-3

    NASA Astrophysics Data System (ADS)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start with orienting unsorted image sets employing (Mayer et al., 2012), we compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  5. Rheticus Displacement: an Automatic Geo-Information Service Platform for Ground Instabilities Detection and Monitoring

    NASA Astrophysics Data System (ADS)

    Chiaradia, M. T.; Samarelli, S.; Agrimano, L.; Lorusso, A. P.; Nutricato, R.; Nitti, D. O.; Morea, A.; Tijani, K.

    2016-12-01

    Rheticus® is an innovative cloud-based data and services hub able to deliver Earth Observation added-value products through automatic complex processes and minimal interaction with human operators. This target is achieved by means of programmable components working as different software layers in a modern enterprise system that relies on a SOA (service-oriented architecture) model. Due to its architecture, where every functionality is well defined and encapsulated in a standalone component, Rheticus is potentially highly scalable and distributable, allowing different configurations depending on user needs. Rheticus offers a portfolio of services, ranging from the detection and monitoring of geohazards and infrastructural instabilities, to marine water quality monitoring, wildfire detection and land cover monitoring. In this work, we outline the overall cloud-based platform and focus on the "Rheticus Displacement" service, aimed at providing accurate information to monitor movements occurring across landslide features or structural instabilities that could affect buildings or infrastructures. Using Sentinel-1 (S1) open data images and Multi-Temporal SAR Interferometry techniques (i.e., SPINUA), the service is complementary to traditional survey methods, providing a long-term solution to slope instability monitoring. Rheticus automatically browses and accesses (on a weekly basis) the products of the rolling archive of the ESA S1 Scientific Data Hub; S1 data are then handled by a mature processing chain, which produces displacement maps immediately usable to measure, with sub-centimetric precision, the movements of coherent points. Examples are provided concerning the automatic displacement map generation process, the integration of point and distributed scatterers, the integration of multi-sensor displacement maps (e.g., Sentinel-1 IW and COSMO-SkyMed HIMAGE), and the combination of displacement rate maps acquired along both ascending and descending passes. ACK: Study carried out in the framework of the FAST4MAP project and co-funded by the Italian Space Agency (Contract n. 2015-020-R.0). Sentinel-1A products provided by ESA. CSK® Products, ASI, provided by ASI under a license to use. Rheticus® is a registered trademark of Planetek Italia srl.

  6. Automated Glacier Mapping using Object Based Image Analysis. Case Studies from Nepal, the European Alps and Norway

    NASA Astrophysics Data System (ADS)

    Vatle, S. S.

    2015-12-01

    Frequent and up-to-date glacier outlines are needed for many applications of glaciology: not only glacier area change analysis, but also masks for volume or velocity analyses, the estimation of water resources, and model input data. Remote sensing offers a good option for creating glacier outlines over large areas, but manual correction is frequently necessary, especially in areas containing supraglacial debris. We show three different workflows for mapping clean ice and debris-covered ice within Object Based Image Analysis (OBIA). By working at the object level as opposed to the pixel level, OBIA facilitates the use of contextual, spatial and hierarchical information when assigning classes, and additionally permits the handling of multiple data sources. Our first example shows the mapping of debris-covered ice in the Manaslu Himalaya, Nepal, where SAR coherence data are used in combination with optical and topographic data to classify debris-covered ice, obtaining an accuracy of 91%. Our second example uses a high-resolution LiDAR-derived DEM over the Hohe Tauern National Park in Austria; breaks in surface morphology are used in creating image objects, and debris-covered ice is then classified using a combination of spectral, thermal and topographic properties. Lastly, we show a completely automated workflow for mapping glacier ice in Norway. The NDSI and the NIR/SWIR band ratio are used to map clean ice over the entire country, but the thresholds are calculated automatically from a histogram of each image subset. This means that, in principle, any Landsat scene can be input and the clean ice extracted automatically. Debris-covered ice can be included semi-automatically using contextual and morphological information.
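
    The automatic-threshold step of the Norwegian workflow can be sketched with a per-scene histogram threshold; Otsu's method is used here as one concrete choice (the abstract does not name the histogram rule), and the band names follow the usual Landsat conventions:

      from skimage.filters import threshold_otsu

      def clean_ice_mask(green, nir, swir):
          # Band ratios commonly used for clean-ice mapping
          ndsi = (green - swir) / (green + swir + 1e-9)
          ratio = nir / (swir + 1e-9)
          # Scene-specific thresholds derived from each image's own histogram,
          # so any Landsat scene can be processed without a fixed global value.
          return (ndsi > threshold_otsu(ndsi)) & (ratio > threshold_otsu(ratio))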

  7. Integration of tools for binding archetypes to SNOMED CT.

    PubMed

    Sundvall, Erik; Qamar, Rahil; Nyström, Mikael; Forss, Mattias; Petersson, Håkan; Karlsson, Daniel; Ahlfeldt, Hans; Rector, Alan

    2008-10-27

    The Archetype formalism and the associated Archetype Definition Language have been proposed as an ISO standard for specifying models of components of electronic healthcare records as a means of achieving interoperability between clinical systems. This paper presents an archetype editor with support for manual or semi-automatic creation of bindings between archetypes and terminology systems. Lexical and semantic methods are applied in order to obtain automatic mapping suggestions. Information visualisation methods are also used to assist the user in exploration and selection of mappings. An integrated tool for archetype authoring, semi-automatic SNOMED CT terminology binding assistance and terminology visualization was created and released as open source. Finding the right terms to bind is a difficult task but the effort to achieve terminology bindings may be reduced with the help of the described approach. The methods and tools presented are general, but here only bindings between SNOMED CT and archetypes based on the openEHR reference model are presented in detail.
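
    The lexical side of such mapping suggestions can be sketched simply (Python; the term lists are hypothetical, and a real binding tool would combine this signal with the semantic and visualisation methods mentioned above):

        from difflib import SequenceMatcher

        def suggest_bindings(archetype_terms, snomed_descriptions, cutoff=0.5):
            """For each archetype node name, rank candidate SNOMED CT
            descriptions by simple lexical similarity; one of several
            signals a binding tool could combine."""
            out = {}
            for term in archetype_terms:
                scored = sorted(
                    ((SequenceMatcher(None, term.lower(), d.lower()).ratio(), d)
                     for d in snomed_descriptions), reverse=True)
                out[term] = [d for score, d in scored if score >= cutoff]
            return out

        # Hypothetical inputs:
        print(suggest_bindings(["Body temperature"],
                               ["Body temperature (observable entity)",
                                "Blood pressure (observable entity)"]))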

  8. Integration of tools for binding archetypes to SNOMED CT

    PubMed Central

    Sundvall, Erik; Qamar, Rahil; Nyström, Mikael; Forss, Mattias; Petersson, Håkan; Karlsson, Daniel; Åhlfeldt, Hans; Rector, Alan

    2008-01-01

    Background: The Archetype formalism and the associated Archetype Definition Language have been proposed as an ISO standard for specifying models of components of electronic healthcare records as a means of achieving interoperability between clinical systems. This paper presents an archetype editor with support for manual or semi-automatic creation of bindings between archetypes and terminology systems. Methods: Lexical and semantic methods are applied in order to obtain automatic mapping suggestions. Information visualisation methods are also used to assist the user in exploration and selection of mappings. Results: An integrated tool for archetype authoring, semi-automatic SNOMED CT terminology binding assistance and terminology visualization was created and released as open source. Conclusion: Finding the right terms to bind is a difficult task but the effort to achieve terminology bindings may be reduced with the help of the described approach. The methods and tools presented are general, but here only bindings between SNOMED CT and archetypes based on the openEHR reference model are presented in detail. PMID:19007444

  9. Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support

    NASA Astrophysics Data System (ADS)

    Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar

    This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including pipelined datapath, memory hierarchy and branch delay model, is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using an MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real time and showed less than 10% timing error compared to board measurements.

  10. Beyond left and right: Automaticity and flexibility of number-space associations.

    PubMed

    Antoine, Sophie; Gevers, Wim

    2016-02-01

    Close links exist between the processing of numbers and the processing of space: relatively small numbers are preferentially associated with a left-sided response, while relatively large numbers are associated with a right-sided response (the SNARC effect). Previous work demonstrated that the SNARC effect is triggered in an automatic manner and is highly flexible. Besides the left-right dimension, numbers associate with other spatial response mappings, such as close/far responses, where small numbers are associated with a close response and large numbers with a far response. In two experiments we investigate the nature of this association. Associations between magnitude and close/far responses were observed using a magnitude-irrelevant task (Experiment 1: automaticity) and a variable-referent task (Experiment 2: flexibility). While drawing a strong parallel between the two response mappings, the present results are also informative about the type of processing mechanism that underlies both the SNARC effect and the association between numerical magnitude and close/far response locations.

  11. Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text.

    PubMed

    Park, Albert; Hartzler, Andrea L; Huh, Jina; McDonald, David W; Pratt, Wanda

    2015-08-31

    The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical natural language processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. In addition to being constructed for different types of text, other challenges of using existing NLP tools include constantly changing technologies, source vocabularies, and characteristics of text. These continuously evolving challenges warrant the need for low-cost, systematic assessment. However, the primarily accepted evaluation method in NLP, manual annotation, requires tremendous effort and time. The primary objective of this study is to explore an alternative approach: using low-cost, automated methods to detect failures (e.g., incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools make in processing online community text. We then demonstrate the feasibility of our automated approach in detecting these common failures using one of the most popular biomedical NLP tools, MetaMap. Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we first manually reviewed MetaMap's commonly occurring failures, grouped the inaccurate mappings into failure types, and then identified causes of the failures through iterative rounds of manual review using open coding; and (2) to automatically detect these failure types, we then explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures. From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed term failures, and (3) word ambiguity failures. Within these three failure types, we discovered 12 causes of inaccurate concept mappings. Our automated methods flagged almost half of MetaMap's 383,572 mappings as problematic. Word sense ambiguity failure was the most widely occurring, comprising 82.22% of failures. Boundary failure was the second most frequent, amounting to 15.90% of failures, while missed term failures were the least common, making up 1.88% of failures. The automated failure detection achieved precision, recall, accuracy, and F1 score of 83.00%, 92.57%, 88.17%, and 87.52%, respectively. We illustrate the challenges of processing patient-generated online health community text and characterize failures of NLP tools on this text, demonstrating the feasibility of our low-cost approach to automatically detect those failures. Our approach shows the potential for scalable and effective solutions to automatically assess constantly evolving NLP tools and source vocabularies for processing patient-generated text.

  12. 3D models mapping optimization through an integrated parameterization approach: cases studies from Ravenna

    NASA Astrophysics Data System (ADS)

    Cipriani, L.; Fantini, F.; Bertacchi, S.

    2014-06-01

    Image-based modelling tools based on SfM algorithms have gained great popularity since several software houses provided applications able to produce 3D textured models easily and automatically. The aim of this paper is to point out the importance of controlling the model parameterization process, considering that the automatic solutions included in these modelling tools can produce poor results in terms of texture utilization. In order to achieve better quality textured models from image-based modelling applications, this research presents a series of practical strategies aimed at providing a better balance between the geometric resolution of models from passive sensors and their corresponding (u,v) map reference systems. This aspect is essential for achieving a high-quality 3D representation, since "apparent colour" is a fundamental aspect in the field of Cultural Heritage documentation. Complex meshes without native parameterization have to be "flattened" or "unwrapped" in the (u,v) parameter space, with the main objective of mapping them with a single image. This result can be obtained using two different strategies: the former automatic and faster, the latter manual and time-consuming. Reverse modelling applications provide automatic solutions based on splitting the models by means of different algorithms, producing a sort of "atlas" of the original model in the parameter space that is in many instances inadequate and negatively affects the overall quality of representation. By using different solutions in synergy, ranging from semantic-aware modelling techniques to quad-dominant meshes achieved using retopology tools, it is possible to obtain complete control of the parameterization process.

  13. Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta

    2012-10-01

    A mobile mapping system (MMS) is the answer of the geoinformation community to the exponentially growing demand for various geospatial data with increasingly higher accuracies, captured by multiple sensors. As mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure which is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras which is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual-information-based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests are performed under various lighting conditions, demonstrating the methodology's robustness by showing high absolute stereo measurement accuracies of a few centimeters.

  14. Perfusion CT in acute stroke: effectiveness of automatically-generated colour maps.

    PubMed

    Ukmar, Maja; Degrassi, Ferruccio; Pozzi Mucelli, Roberta Antea; Neri, Francesca; Mucelli, Fabio Pozzi; Cova, Maria Assunta

    2017-04-01

    To evaluate the accuracy of perfusion CT (pCT) in the definition of the infarcted core and the penumbra, comparing the data obtained from the evaluation of parametric maps [cerebral blood volume (CBV), cerebral blood flow (CBF) and mean transit time (MTT)] with software-generated colour maps. A retrospective analysis was performed to identify patients with suspected acute ischaemic strokes who had undergone unenhanced CT and pCT within 4.5 h from the onset of symptoms. A qualitative evaluation of the CBV, CBF and MTT maps was performed, followed by an analysis of the colour maps automatically generated by the software. 26 patients were identified, but a direct CT follow-up was performed on only 19 patients after 24-48 h. In the qualitative analysis, 14 patients showed perfusion abnormalities. Specifically, 29 perfusion deficit areas were detected, of which 15 suggested penumbra and the remaining 14 suggested infarct. As for the automatically software-generated maps, 12 patients showed perfusion abnormalities; 25 perfusion deficit areas were identified, 15 of which suggested penumbra and the other 10 infarct. McNemar's test showed no statistically significant difference between the two methods of evaluation in highlighting infarcted areas later confirmed at CT follow-up. We demonstrated that pCT provides good diagnostic accuracy in the identification of acute ischaemic lesions. The limits of identification of lesions mainly lie at the level of the pons and in the basal ganglia area. Qualitative analysis proved more efficient in identifying perfusion lesions than software-generated maps. However, software-generated maps have proven to be very useful in the emergency setting. Advances in knowledge: The use of CT perfusion is requested for increasingly many patients in order to optimize treatment, thanks also to the technological evolution of CT, which now allows whole-brain studies. The need to perform CT perfusion studies in the emergency setting could represent a problem for physicians who are not used to interpreting the parametric maps (CBV, MTT, etc.). Software-generated maps could be of value in these settings, helping the less expert physician to differentiate between the different areas.

  15. Automatic Rock Detection and Mapping from HiRISE Imagery

    NASA Technical Reports Server (NTRS)

    Huertas, Andres; Adams, Douglas S.; Cheng, Yang

    2008-01-01

    This system includes a C-code software program and a set of MATLAB software tools for statistical analysis and rock distribution mapping. The major functions include rock detection and rock detection validation. The rock detection code has been evolved into a production tool that can be used by engineers and geologists with minor training.

  16. Automatic Aircraft Collision Avoidance System and Method

    NASA Technical Reports Server (NTRS)

    Skoog, Mark (Inventor); Hook, Loyd (Inventor); McWherter, Shaun (Inventor); Willhite, Jaimie (Inventor)

    2014-01-01

    The invention is a system and method of compressing a digital terrain map (DTM) for use in an Auto-GCAS (automatic ground collision avoidance) system, using a semi-regular geometric compression algorithm. In general, the invention operates by first selecting the boundaries of the three-dimensional map to be compressed and dividing the map data into regular areas. Next, a type of free-edged, flat geometric surface is selected to approximate the terrain data of the three-dimensional map. The flat geometric surface is used to approximate the terrain data for each regular area, and the approximations are checked against selected tolerances. If the approximation for a specific regular area is within the specified tolerance, the data are saved for that area. If the approximation falls outside the specified tolerances, the area is divided and a flat geometric surface approximation is made for each of the divided areas. This process is repeated recursively until all of the regular areas are approximated by flat geometric surfaces. Finally, the compressed three-dimensional map data are provided to the automatic ground collision avoidance system of the aircraft.
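
    A minimal sketch of such recursive subdivision (Python/numpy, with a least-squares plane standing in for the free-edged flat geometric surface; tile handling and tolerances are illustrative, not the patented implementation):

        import numpy as np

        def fit_plane(tile):
            """Least-squares plane z = a*x + b*y + c over one regular area;
            returns coefficients and the maximum absolute residual."""
            ys, xs = np.mgrid[0:tile.shape[0], 0:tile.shape[1]]
            A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(tile.size)])
            coef, *_ = np.linalg.lstsq(A, tile.ravel(), rcond=None)
            return coef, np.abs(A @ coef - tile.ravel()).max()

        def compress(tile, tol_m=5.0, min_size=4):
            """Recursively approximate a terrain tile by planes that fit
            within tol_m; subdivide any area that fails the tolerance."""
            coef, resid = fit_plane(tile)
            if resid <= tol_m or min(tile.shape) <= min_size:
                return [(tile.shape, coef)]             # store the surface
            h, w = tile.shape[0] // 2, tile.shape[1] // 2
            parts = []
            for sub in (tile[:h, :w], tile[:h, w:], tile[h:, :w], tile[h:, w:]):
                parts += compress(sub, tol_m, min_size)
            return parts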

  17. Harvesting geographic features from heterogeneous raster maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi

    2010-11-01

    Raster maps offer a great deal of geospatial information and are easily accessible compared to other geospatial data. However, harvesting geographic features locked in heterogeneous raster maps to obtain that geospatial information is challenging. This is because of the varying image quality of raster maps (e.g., scanned maps with poor image quality and computer-generated maps with good image quality), the overlapping geographic features in maps, and the typical lack of metadata (e.g., map geocoordinates, map source, and original vector data). Previous work on map processing is typically limited to a specific type of map and often relies on intensive manual work. In contrast, this thesis investigates a general approach that does not rely on any prior knowledge and requires minimal user effort to process heterogeneous raster maps. This approach includes automatic and supervised techniques to process raster maps, separating individual layers of geographic features from the maps and recognizing geographic features in the separated layers (i.e., detecting road intersections, generating and vectorizing road geometry, and recognizing text labels). The automatic technique eliminates user intervention by exploiting common map properties of how road lines and text labels are drawn in raster maps: road lines are elongated linear objects, for example, while characters are small connected objects. The supervised technique utilizes labels of road and text areas to handle complex raster maps, or maps with poor image quality, and can process a variety of raster maps with minimal user input. The results show that the general approach can handle raster maps with varying map complexity, color usage, and image quality. By matching extracted road intersections to another geospatial dataset, we can identify the geocoordinates of a raster map and further align the raster map, the separated feature layers, and the recognized features with the geospatial dataset. The road vectorization and text recognition results outperform state-of-the-art commercial products, with considerably less user input. The approach in this thesis allows us to make use of the geospatial information of heterogeneous maps locked in raster format.
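
    The size/elongation cue exploited by the automatic technique can be illustrated in a few lines (a Python/scipy sketch on a binarised foreground layer; the area and elongation thresholds are illustrative):

        import numpy as np
        from scipy import ndimage

        def split_text_and_lines(foreground, max_char_area=200, min_elong=3.0):
            """Separate small connected objects (likely characters) from
            elongated ones (likely road lines) in a binary map layer."""
            labels, _ = ndimage.label(foreground)
            text = np.zeros(foreground.shape, dtype=bool)
            lines = np.zeros(foreground.shape, dtype=bool)
            for lab, sl in enumerate(ndimage.find_objects(labels), start=1):
                comp = labels[sl] == lab
                h, w = comp.shape
                elong = max(h, w) / max(min(h, w), 1)  # bounding-box elongation
                if comp.sum() <= max_char_area and elong < min_elong:
                    text[sl] |= comp
                else:
                    lines[sl] |= comp
            return text, lines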

  18. Bibliometric mapping: eight decades of analytical chemistry, with special focus on the use of mass spectrometry.

    PubMed

    Waaijer, Cathelijn J F; Palmblad, Magnus

    2015-01-01

    In this Feature we use automatic bibliometric mapping tools to visualize the history of analytical chemistry from the 1920s until the present. In particular, we have focused on the application of mass spectrometry in different fields. The analysis shows major shifts in research focus and use of mass spectrometry. We conclude by discussing the application of bibliometric mapping and visualization tools in analytical chemists' research.

  19. A new automatic synthetic aperture radar-based flood mapping application hosted on the European Space Agency's Grid Processing on Demand Fast Access to Imagery environment

    NASA Astrophysics Data System (ADS)

    Matgen, Patrick; Giustarini, Laura; Hostache, Renaud

    2012-10-01

    This paper introduces an automatic flood mapping application hosted on the Grid Processing on Demand (G-POD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to operationally deliver maps of flooded areas using both recent and historical acquisitions of SAR data. With the flooding-related exploitation of data from the upcoming ESA Sentinel-1 SAR mission as a short-term target, the flood mapping application consists of two building blocks: i) a set of query tools for selecting the "crisis image" and the optimal corresponding "reference image" from the G-POD archive, and ii) an algorithm for extracting flooded areas via change detection using the previously selected crisis and reference images. Stakeholders in flood management and service providers are able to log onto the flood mapping application to get support for retrieving the most appropriate reference image from the rolling archive. Potential users will also be able to apply the implemented flood delineation algorithm. The latter combines histogram thresholding, region growing and change detection, an approach enabling the automatic, objective and reliable extraction of flood extent from SAR images. Both algorithms are computationally efficient and operate with minimum data requirements. The case study of the high-magnitude flooding event that occurred in July 2007 on the Severn River, UK, observed with a moderate-resolution SAR sensor as well as airborne photography, highlights the performance of the proposed online application. The flood mapping application on G-POD can be used sporadically, i.e. whenever a major flood event occurs and there is a demand for SAR-based flood extent maps. In the long term, a potential extension of the application could consist of systematically extracting flooded areas from all SAR images acquired on a daily, weekly or monthly basis.
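
    In outline, the delineation step combines an automatic threshold, change detection against the reference image, and region growing. A minimal Python sketch (skimage's Otsu threshold and flood function stand in for the application's specific estimators, and only a single seed is grown for brevity):

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.segmentation import flood

        def flood_extent(crisis_db, reference_db, drop_db=3.0):
            """Seed pixels are dark in the crisis image AND markedly darker
            than in the reference image (change detection); one region is
            then grown from the darkest seed for spatial coherence."""
            t = threshold_otsu(crisis_db)
            seeds = (crisis_db < t) & (reference_db - crisis_db > drop_db)
            if not seeds.any():
                return np.zeros(crisis_db.shape, dtype=bool)
            seed = np.unravel_index(
                np.argmin(np.where(seeds, crisis_db, np.inf)), crisis_db.shape)
            return flood(crisis_db, seed, tolerance=2.0) & (crisis_db < t)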

  20. Querying XML Data with SPARQL

    NASA Astrophysics Data System (ADS)

    Bikakis, Nikos; Gioldasis, Nektarios; Tsinaraki, Chrisa; Christodoulakis, Stavros

    SPARQL is today the standard access language for Semantic Web data. In recent years, XML databases have also acquired industrial importance due to the widespread applicability of XML in the Web. In this paper we present a framework that bridges the heterogeneity gap and creates an interoperable environment where SPARQL queries are used to access XML databases. Our approach assumes that fairly generic mappings between ontology constructs and XML Schema constructs have been automatically derived or manually specified. The mappings are used to automatically translate SPARQL queries into semantically equivalent XQuery queries, which are then used to access the XML databases. We present the algorithms and the implementation of the SPARQL2XQuery framework, which is used for answering SPARQL queries over XML databases.

  1. Automatic recognition of seismic intensity based on RS and GIS: a case study in Wenchuan Ms8.0 earthquake of China.

    PubMed

    Zhang, Qiuwen; Zhang, Yan; Yang, Xiaohong; Su, Bin

    2014-01-01

    In recent years, earthquakes have occurred frequently all over the world, causing heavy casualties and economic losses. Obtaining a seismic intensity map quickly is essential for understanding the distribution of the disaster and supporting rapid earthquake relief. Compared with traditional methods of drawing seismic intensity maps, which require extensive field investigation in the earthquake area or depend heavily on empirical formulas, spatial information technologies such as Remote Sensing (RS) and Geographical Information Systems (GIS) provide a fast and economical way to automatically recognize seismic intensity. Through the integrated application of RS and GIS, this paper proposes an RS/GIS-based approach for automatic recognition of seismic intensity, in which RS is used to retrieve and extract information on damage caused by the earthquake, and GIS is applied to manage and display the seismic intensity data. The case study of the Wenchuan Ms8.0 earthquake in China shows that information on seismic intensity can be automatically extracted from remotely sensed images soon after earthquake occurrence, and that the Digital Intensity Model (DIM) can be used to visually query and display the distribution of seismic intensity.

  2. Automatic Polyp Detection via A Novel Unified Bottom-up and Top-down Saliency Approach.

    PubMed

    Yuan, Yixuan; Li, Dengwang; Meng, Max Q-H

    2017-07-31

    In this paper, we propose a novel automatic computer-aided method to detect polyps in colonoscopy videos. To find the perceptually and semantically meaningful salient polyp regions, we first segment images into multilevel superpixels, each level corresponding to a different superpixel size. Rather than adopting hand-designed features to describe these superpixels, we employ a sparse autoencoder (SAE) to learn discriminative features in an unsupervised way. Then a novel unified bottom-up and top-down saliency method is proposed to detect polyps. In the first stage, we propose a weak bottom-up (WBU) saliency map by fusing contrast-based saliency and object-center-based saliency. The contrast-based saliency map highlights image parts that differ in appearance from surrounding areas, while the object-center-based saliency map emphasizes the center of the salient object. In the second stage, a strong classifier with Multiple Kernel Boosting (MKB) is learned to calculate the strong top-down (STD) saliency map based on samples drawn directly from the multilevel WBU saliency maps. We finally integrate the two-stage saliency maps from all levels to highlight polyps. Experimental results achieve 0.818 recall for saliency calculation, validating the effectiveness of our method. Extensive experiments on public polyp datasets demonstrate that the proposed saliency algorithm performs favorably against state-of-the-art saliency methods in detecting polyps.

  3. Automatic public access to documents and maps stored on an internal secure system.

    NASA Astrophysics Data System (ADS)

    Trench, James; Carter, Mary

    2013-04-01

    The Geological Survey of Ireland operates a document management system that provides documents and maps, stored internally in high resolution within a highly secure environment, to an external service where the documents are automatically presented to members of the public at lower resolution. Security is managed through roles and individual users, with permissions set at both role and folder level. The application is an electronic document/data management (EDM) system with an integrated Geographical Information System (GIS) component that allows users to query an interactive map of Ireland for data relating to a particular area of interest. The data stored in the database consist of Bedrock Field Sheets, Bedrock Notebooks, Bedrock Maps, Geophysical Surveys, Geotechnical Maps & Reports, Groundwater, GSI Publications, Marine, Mine Records, Mineral Localities, Open File, Quaternary and Unpublished Reports. The Konfig application is both an internal and a public-facing tool. Internally, it supports high-resolution data entry into a high-resolution vault. The public-facing application mirrors the internal one and differs only in that it converts the high-resolution data into a low-resolution format stored in a low-resolution vault, making the data web-friendly for end users to download.

  4. DiffNet: automatic differential functional summarization of dE-MAP networks.

    PubMed

    Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes

    2014-10-01

    The study of genetic interaction networks that respond to changing conditions is an emerging research problem. Recently, Bandyopadhyay et al. (2010) proposed a technique to construct a differential network (dE-MAP network) from two static gene interaction networks in order to map the interaction differences between them under environment or condition change (e.g., a DNA-damaging agent). This differential network is then manually analyzed to conclude that DNA repair is differentially affected by the condition change. Unfortunately, manual construction of a differential functional summary from a dE-MAP network that summarizes all pertinent functional responses is time-consuming, laborious and error-prone, impeding large-scale analysis. To this end, we propose DiffNet, a novel data-driven algorithm that leverages Gene Ontology (GO) annotations to automatically summarize a dE-MAP network and obtain a high-level map of functional responses to condition change. We tested DiffNet on the dynamic interaction networks following MMS treatment and demonstrated the superiority of our approach in generating differential functional summaries compared to state-of-the-art graph clustering methods. We studied the effects of DiffNet's parameters in controlling the quality of the summary and performed a case study that illustrates its utility. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Hybrid Semantic Analysis for Mapping Adverse Drug Reaction Mentions in Tweets to Medical Terminology.

    PubMed

    Emadzadeh, Ehsan; Sarker, Abeed; Nikfarjam, Azadeh; Gonzalez, Graciela

    2017-01-01

    Social networks, such as Twitter, have become important sources for active monitoring of user-reported adverse drug reactions (ADRs). Automatic extraction of ADR information can be crucial for healthcare providers, drug manufacturers, and consumers. However, because of the non-standard nature of social media language, automatically extracted ADR mentions need to be mapped to standard forms before they can be used by operational pharmacovigilance systems. We propose a modular natural language processing pipeline for mapping (normalizing) colloquial mentions of ADRs to their corresponding standardized identifiers. We seek both to accomplish this task and to enable customization of the pipeline, so that distinct unlabeled free-text resources can be incorporated and the system used for other normalization tasks. Our approach, which we call Hybrid Semantic Analysis (HSA), sequentially employs rule-based and semantic matching algorithms for mapping user-generated mentions to concept IDs in the Unified Medical Language System vocabulary. The semantic matching component of HSA is adaptive in nature and uses a regression model to combine various measures of semantic relatedness and resources to optimize normalization performance on the selected data source. On a publicly available corpus, our normalization method achieves 0.502 recall and 0.823 precision (F-measure: 0.624). Our proposed method outperforms a baseline based on latent semantic analysis and another that uses MetaMap.
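
    The adaptive scoring step can be pictured as a small regression over several relatedness signals. A hedged Python/scikit-learn sketch with made-up feature names and values (the actual HSA features, resources and training data differ):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Each row: similarity signals between an ADR mention and one
        # candidate UMLS concept (hypothetical features and values):
        # [string_similarity, embedding_similarity, synonym_overlap]
        X_train = np.array([[0.9, 0.8, 1.0],
                            [0.4, 0.3, 0.0],
                            [0.7, 0.9, 1.0]])
        y_train = np.array([1.0, 0.0, 1.0])   # 1 = correct mapping

        scorer = LinearRegression().fit(X_train, y_train)

        def best_concept(candidate_features, concept_ids):
            """Return the concept whose combined score is highest."""
            scores = scorer.predict(np.asarray(candidate_features))
            return concept_ids[int(np.argmax(scores))]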

  6. Test of a potential link between analytic and nonanalytic category learning and automatic, effortful processing.

    PubMed

    Tracy, J I; Pinsk, M; Helverson, J; Urban, G; Dietz, T; Smith, D J

    2001-08-01

    The link between automatic and effortful processing and nonanalytic and analytic category learning was evaluated in a sample of 29 college undergraduates using declarative memory, semantic category search, and pseudoword categorization tasks. Automatic and effortful processing measures were hypothesized to be associated with nonanalytic and analytic categorization, respectively. Results suggested that, contrary to prediction, strong criterion-attribute (analytic) responding on the pseudoword categorization task was associated with strong automatic, implicit memory encoding of frequency-of-occurrence information. Data are discussed in terms of the possibility that criterion-attribute category knowledge, once established, may be expressed with few attentional resources. The data indicate that attention resource requirements, even for the same stimuli and task, vary depending on the category rule system utilized. Also, the automaticity emerging from familiarity with analytic category exemplars is very different from the automaticity arising from extensive practice on a semantic category search task. The data do not support any simple mapping of analytic and nonanalytic forms of category learning onto the automatic and effortful processing dichotomy, and they challenge simple models of brain asymmetries for such procedures. Copyright 2001 Academic Press.

  7. Mapping urban climate zones and quantifying climate behaviors--an application on Toulouse urban area (France).

    PubMed

    Houet, Thomas; Pigeon, Grégoire

    2011-01-01

    Given the concern of the population about its environment and climatic change, city planners now consider the urban climate in their planning choices. The use of climatic maps, such as Urban Climate Zone (UCZ) maps, is well suited to this kind of application. The objective of this paper is to demonstrate that the UCZ classification, integrated in the World Meteorological Organization guidelines, first, can be automatically determined for sample areas and, second, is meaningful with respect to climatic variables. The analysis presented is applied to the Toulouse urban area (France). Results show, first, that UCZs differ in air and surface temperature. It was possible to determine the membership of sample areas to a UCZ using landscape descriptors automatically computed with GIS and remotely sensed data. The analysis also emphasizes that the climatic behavior and magnitude of UCZs may vary from winter to summer. Finally, we discuss the influence of climate data and the scale of observation on UCZ mapping and climate characterization. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Automatic co-registration of 3D multi-sensor point clouds

    NASA Astrophysics Data System (ADS)

    Persad, Ravi Ancil; Armenakis, Costas

    2017-08-01

    We propose an approach for the automatic coarse alignment of 3D point clouds which have been acquired from various platforms. The method is based on 2D keypoint matching performed on height map images of the point clouds. Initially, a multi-scale wavelet keypoint detector is applied, followed by adaptive non-maxima suppression. A scale, rotation and translation-invariant descriptor is then computed for all keypoints. The descriptor is built using the log-polar mapping of Gabor filter derivatives in combination with the so-called Rapid Transform. In the final step, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour similarity check, together with a threshold-free modified-RANSAC. Experiments with urban and non-urban scenes are presented and results show scale errors ranging from 0.01 to 0.03, 3D rotation errors in the order of 0.2° to 0.3° and 3D translation errors from 0.09 m to 1.1 m.
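
    The bi-directional nearest-neighbour similarity check of the final step can be written in a few lines. A minimal Python/numpy sketch (brute-force distances, with descriptors assumed to be rows of two matrices; the threshold-free modified RANSAC applied afterwards is not shown):

        import numpy as np

        def mutual_nn_matches(desc_src, desc_tgt):
            """Keep keypoint pairs that are each other's nearest neighbour
            in descriptor space (brute force; fine for a sketch)."""
            d = np.linalg.norm(desc_src[:, None, :] - desc_tgt[None, :, :], axis=2)
            fwd = d.argmin(axis=1)   # best target for each source keypoint
            bwd = d.argmin(axis=0)   # best source for each target keypoint
            return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]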

  9. Investigating the Potential of Deep Neural Networks for Large-Scale Classification of Very High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.

    2017-05-01

    Semantic classification is a core remote sensing task, as it provides the fundamental input for land-cover map generation. The recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks, including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Current architectures are therefore well tailored to urban scenes over restricted areas, but they are not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed, efficiently discriminating the main classes of interest (namely buildings, roads, water, crops, vegetated areas) by exploiting existing VHR land-cover maps for training.

  10. Method for Stereo Mapping Based on Objectarx and Pipeline Technology

    NASA Astrophysics Data System (ADS)

    Liu, F.; Chen, T.; Lin, Z.; Yang, Y.

    2012-07-01

    Stereo mapping is an important way to acquire 4D products. Building on developments in stereo mapping and the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme is offered that uses ObjectARX and pipeline technology to realize interaction between AutoCAD and a digital photogrammetry system. An experiment was carried out to verify its feasibility using the software MAP-AT (Modern Aerial Photogrammetry Automatic Triangulation); the experimental results show that the scheme is feasible and of significant value for integrating data acquisition and editing.

  11. Automatic circuit interrupter

    NASA Technical Reports Server (NTRS)

    Dwinell, W. S.

    1979-01-01

    In this technique, voice circuits connecting the crew's cabin to the launch station through an umbilical connector automatically disconnect the unused, or deadened, portion of the circuits immediately after the vehicle is launched, eliminating both the possibility that unused wiring interferes with voice communications inside the vehicle and the need for a manual cutoff switch and its associated wiring. The technique can be applied to other types of electrical actuation circuits and to the launch of other vehicles, such as balloons, submarines, test sleds, and test chambers, all of which require the assistance of a ground crew.

  12. Use of an automatic earth resistivity system for detection of abandoned mine workings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, W.R.; Burdick, R.

    1982-04-01

    Under the sponsorship of the US Bureau of Mines, a surface-operated automatic high resolution earth resistivity system and associated computer data processing techniques have been designed and constructed for use as a potential means of detecting abandoned coal mine workings. The hardware and software aspects of the new system are described together with applications of the method to the survey and mapping of abandoned mine workings.

  13. Open-Source Programming for Automated Generation of Graphene Raman Spectral Maps

    NASA Astrophysics Data System (ADS)

    Vendola, P.; Blades, M.; Pierre, W.; Jedlicka, S.; Rotkin, S. V.

    Raman microscopy is a useful tool for studying the structural characteristics of graphene deposited onto substrates. However, extracting useful information from the Raman spectra requires data processing and 2D map generation. An existing home-built confocal Raman microscope was optimized for graphene samples and programmed to automatically generate Raman spectral maps across a specified area. In particular, an open source data collection scheme was generated to allow the efficient collection and analysis of the Raman spectral data for future use. NSF ECCS-1509786.
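
    The scan-and-collect loop behind such automated map generation might look as follows (a sketch only; `stage` and `spectrometer` and their methods are hypothetical placeholders for the home-built instrument's drivers):

        import numpy as np

        def acquire_raman_map(stage, spectrometer, x_range, y_range, step_um):
            """Raster-scan a rectangular area and collect one spectrum per
            point; returns a (ny, nx, n_wavenumbers) data cube.
            `stage` and `spectrometer` are hypothetical driver objects."""
            xs = np.arange(x_range[0], x_range[1], step_um)
            ys = np.arange(y_range[0], y_range[1], step_um)
            cube = []
            for y in ys:
                row = []
                for x in xs:
                    stage.move_to(x, y)                  # hypothetical call
                    row.append(spectrometer.acquire())   # hypothetical call
                cube.append(row)
            return np.asarray(cube)

        # A 2D map of, e.g., a chosen Raman band is then one aggregate slice:
        # band_map = cube[:, :, band_indices].max(axis=2)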

  14. Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census

    NASA Astrophysics Data System (ADS)

    Li, C.; Guo, P.; Liu, X.

    2017-09-01

    Some attributes of hydrologic feature data in the national geographic census are unclear; the current solution is manual filling, which is inefficient and error-prone. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of structural characteristics and topological relations, we put forward three basic principles of correction: network proximity, structure robustness and topology ductility. The automatic correction of hydrologic features is implemented on the WJ-III map workstation. Finally, practical data are used to validate the method; the results show that it is reasonable and efficient.

  15. An automatic locating system for cloud-to-ground lightning. [which utilizes a microcomputer

    NASA Technical Reports Server (NTRS)

    Krider, E. P.; Pifer, A. E.; Uman, M. A.

    1980-01-01

    Automatic locating systems which respond to cloud-to-ground lightning and discriminate against cloud discharges and background noise are described. Subsystems of the locating system, including the direction finder and the position analyzer, are discussed. The direction finder senses the electromagnetic fields radiated by lightning on two orthogonal magnetic loop antennas and on a flat-plate electric antenna. The position analyzer is a preprogrammed microcomputer system which automatically computes, maps, and records lightning locations in real time using data inputs from the direction finder. The use of the locating systems for wildfire management and fire weather forecasting is discussed.

  16. Lithology and aggregate quality attributes for the digital geologic map of Colorado

    USGS Publications Warehouse

    Knepper, Daniel H.; Green, Gregory N.; Langer, William H.

    1999-01-01

    This geologic map was prepared as a part of a study of digital methods and techniques as applied to complex geologic maps. The geologic map was digitized from the original scribe sheets used to prepare the published Geologic Map of Colorado (Tweto 1979). Consequently the digital version is at 1:500,000 scale using the Lambert Conformal Conic map projection parameters of the state base map. Stable base contact prints of the scribe sheets were scanned on a Tektronix 4991 digital scanner. The scanner automatically converts the scanned image to an ASCII vector format. These vectors were transferred to a VAX minicomputer, where they were then loaded into ARC/INFO. Each vector and polygon was given attributes derived from the original 1979 geologic map.

  17. Statewide Cellular Coverage Map

    DOT National Transportation Integrated Search

    2002-02-01

    The role of wireless communications in transportation is becoming increasingly important. Wireless communications are critical for many applications of Intelligent Transportation Systems (ITS) such as Automatic Vehicle Location (AVL) and Automated Co...

  18. 3D model assisted fully automated scanning laser Doppler vibrometer measurements

    NASA Astrophysics Data System (ADS)

    Sels, Seppe; Ribbens, Bart; Bogaerts, Boris; Peeters, Jeroen; Vanlanduit, Steve

    2017-12-01

    In this paper, a new fully automated scanning laser Doppler vibrometer (LDV) measurement technique is presented. In contrast to existing scanning LDV techniques, which use a 2D camera for the manual selection of sample points, we use a 3D time-of-flight camera in combination with a CAD file of the test object to automatically obtain measurements at pre-defined locations. The proposed procedure allows users to test prototypes in a shorter time because physical measurement locations are determined without user interaction. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. The proposed method is illustrated with vibration measurements of an unmanned aerial vehicle.

  19. 40 CFR 1065.510 - Engine mapping.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... Configure any auxiliary work inputs and outputs such as hybrid, turbo-compounding, or thermoelectric systems... intended primarily for propulsion of a vehicle with an automatic transmission where that engine is subject...

  20. Glacier Surface Lowering and Stagnation in the Manaslu Region of Nepal

    NASA Astrophysics Data System (ADS)

    Robson, B. A.; Nuth, C.; Nielsen, P. R.; Hendrickx, M.; Dahl, S. O.

    2015-12-01


  1. Computer vision-based diameter maps to study fluoroscopic recordings of small intestinal motility from conscious experimental animals.

    PubMed

    Ramírez, I; Pantrigo, J J; Montemayor, A S; López-Pérez, A E; Martín-Fontelles, M I; Brookes, S J H; Abalo, R

    2017-08-01

    When available, fluoroscopic recordings are a relatively cheap, non-invasive and technically straightforward way to study gastrointestinal motility. Spatiotemporal maps have been used to characterize motility of intestinal preparations in vitro, or in anesthetized animals in vivo. Here, a new automated computer-based method was used to construct spatiotemporal motility maps from fluoroscopic recordings obtained in conscious rats. Conscious, non-fasted, adult, male Wistar rats (n=8) received intragastric administration of barium contrast, and 1-2 hours later, when several loops of the small intestine were well defined, a 2-minute fluoroscopic recording was obtained. Spatiotemporal diameter maps (Dmaps) were automatically calculated from the recordings. Three recordings were also manually analyzed for comparison. Frequency analysis was performed in order to calculate relevant motility parameters. In each conscious rat, a stable recording (17-20 seconds) was analyzed. The Dmaps obtained manually and automatically from the same recording were comparable, but the automated process was faster and provided higher resolution. Two frequencies of motor activity dominated; the lower-frequency contractions (15.2±0.9 cpm) had an amplitude approximately five times greater than the higher-frequency events (32.8±0.7 cpm). The automated method developed here needed little investigator input, provided high-resolution results with short computing times, and automatically compensated for breathing and other small movements, allowing recordings to be made without anesthesia. Although slow and/or infrequent events could not be detected in the short recording periods analyzed to date (17-20 seconds), this novel system enhances the analysis of in vivo motility in conscious animals. © 2017 John Wiley & Sons Ltd.
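
    In essence, a Dmap collapses each frame to one diameter profile along the gut, and the frequency analysis is a per-position spectrum. A minimal numpy sketch (the axis layout and frame rate are illustrative, not taken from the paper):

        import numpy as np

        def diameter_map(binary_video):
            """Dmap from a binarised recording of one intestinal loop laid
            out along the image columns: diameter = contrast pixels per
            cross-section, giving an array of shape (frames, positions)."""
            return binary_video.sum(axis=1)

        def dominant_frequency_cpm(dmap, fps=15.0):
            """Dominant contraction frequency (cycles per minute) at each
            position along the intestine."""
            sig = dmap - dmap.mean(axis=0)              # remove DC offset
            spec = np.abs(np.fft.rfft(sig, axis=0))
            freqs = np.fft.rfftfreq(dmap.shape[0], d=1.0 / fps)   # in Hz
            return freqs[spec[1:].argmax(axis=0) + 1] * 60.0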

  2. Towards SWOT data assimilation for hydrology : automatic calibration of global flow routing model parameters in the Amazon basin

    NASA Astrophysics Data System (ADS)

    Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Biancamaria, S.; Boone, A.; Mognard, N. M.; Rogel, P.

    2011-12-01

    The Surface Water and Ocean Topography (SWOT) mission is a swath-mapping radar interferometer that will provide global measurements of water surface elevation (WSE). The number of revisits depends upon latitude and varies from two (low latitudes) to ten (high latitudes) per 22-day orbit repeat period. The high resolution and global coverage of SWOT data open the way for new hydrology studies. Here, the aim is to investigate the use of virtually generated SWOT data to improve discharge simulation using data assimilation techniques. In the framework of the SWOT virtual mission (VM), this study presents the first results of the automatic calibration of a global flow routing (GFR) scheme using SWOT VM measurements for the Amazon basin. The Hydrological Modeling and Analysis Platform (HyMAP) is used along with the MOCOM-UA multi-criteria global optimization algorithm. HyMAP has a 0.25-degree spatial resolution and runs at a daily time step to simulate discharge, water levels and floodplains. The surface runoff and baseflow drainage derived from the Interactions Sol-Biosphère-Atmosphère (ISBA) model are used as inputs for HyMAP. Previous work showed that the use of ENVISAT data enables a reduction of the uncertainty in some of the hydrological model parameters, such as river width and depth, Manning roughness coefficient and groundwater time delay. In the framework of the SWOT preparation work, the automatic calibration procedure was applied using SWOT VM measurements. For this Observing System Experiment (OSE), synthetic data were obtained by applying an instrument simulator (representing realistic SWOT errors) to one hydrological year of HyMAP-simulated WSE generated with a "true" set of parameters. Only pixels representing rivers wider than 100 meters within the Amazon basin are considered to produce SWOT VM measurements. The automatic calibration procedure then estimates optimal parameters by minimizing objective functions that formulate the difference between SWOT observations and WSE modeled with a perturbed set of parameters. Different formulations of the objective function were used, especially to account for SWOT observation errors, as well as various sets of calibration parameters.
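
    In outline, the calibration minimizes an objective built from the SWOT-minus-model misfit. A hedged Python sketch (scipy's differential evolution stands in for MOCOM-UA, which is not a packaged library routine, and run_hymap is a hypothetical wrapper around the model run; parameter bounds are illustrative):

        import numpy as np
        from scipy.optimize import differential_evolution

        def objective(params, swot_wse, swot_sigma):
            """Error-weighted RMSE between simulated and SWOT-observed WSE;
            run_hymap is a hypothetical wrapper around the model run."""
            wse = run_hymap(river_width=params[0], river_depth=params[1],
                            manning_n=params[2])
            return np.sqrt(np.mean(((wse - swot_wse) / swot_sigma) ** 2))

        bounds = [(50.0, 2000.0),   # river width (m)
                  (1.0, 60.0),      # river depth (m)
                  (0.01, 0.08)]     # Manning roughness coefficient
        # result = differential_evolution(objective, bounds,
        #                                 args=(swot_wse, swot_sigma))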

  3. Building the Joint Battlespace Infosphere. Volume 1: Summary

    DTIC Science & Technology

    1999-12-17

    portable devices, including wearable computer technology for mobile or field application. 7.1.4.4.3 The Far Term (2009): The technology will be ... graphic on a 2-D map image, or change the list of weapons to be loaded on an F/A-18, or sound an audible alarm in conjunction with flashing red ... information automatically through a subscribe process. (3) At the same time, published information can be automatically changed into a new representation or ...

  4. Satellite image based methods for fuels maps updating

    NASA Astrophysics Data System (ADS)

    Alonso-Benito, Alfonso; Hernandez-Leal, Pedro A.; Arbelo, Manuel; Gonzalez-Calvo, Alejandro; Moreno-Ruiz, Jose A.; Garcia-Lazaro, Jose R.

    2016-10-01

    Regular updating of fuels maps is important for forest fire management. Nevertheless, complex and time-consuming field work is usually necessary for this purpose, which prevents more frequent updates. That is why assessing the usefulness of satellite data and developing remote sensing techniques that enable the automatic updating of these maps is of vital interest. In this work, we have tested the use of the spectral bands of the OLI (Operational Land Imager) sensor on board the Landsat 8 satellite for updating the fuels map of El Hierro Island (Spain). From the previously digitized map, a set of 200 reference plots for different fuel types was created. Half of the plots were randomly used as a training set and the rest were kept for validation. Six supervised and two unsupervised classification methods were applied, considering two levels of detail: a first level with only 5 classes (Meadow, Brushwood, Undergrowth canopy cover >50%, Undergrowth canopy cover <15%, and Xeric formations), and a second one containing 19 fuel types. The level 1 classification methods yielded overall accuracies ranging from 44% for Parallelepiped to 84% for Maximum Likelihood. Meanwhile, the level 2 results showed, at best, an unacceptable overall accuracy of 34%, which prevents the use of these data for such a detailed characterization. Nevertheless, it has been demonstrated that, under some conditions, images of medium spatial resolution, like Landsat 8-OLI, can be a valid tool for the automatic updating of fuels maps, minimizing costs and complementing traditional methodologies.

  5. eWaterCycle visualisation. combining the strength of NetCDF and Web Map Service: ncWMS

    NASA Astrophysics Data System (ADS)

    Hut, R.; van Meersbergen, M.; Drost, N.; Van De Giesen, N.

    2016-12-01

    As a result of the eWaterCycle global hydrological forecast, we have created Cesium-ncWMS, a web application based on ncWMS and Cesium. ncWMS is a server-side application capable of reading any NetCDF file written using the Climate and Forecasting (CF) conventions and making the data available as a Web Map Service (WMS). ncWMS automatically determines the available variables in a file and creates maps colored according to the map data and a user-selected color scale. Cesium is a JavaScript 3D virtual globe library. It uses WebGL for rendering, which makes it very fast, and it is capable of displaying a wide variety of data types such as vectors, 3D models, and 2D maps. The forecast results are automatically uploaded to our web server running ncWMS. In turn, the web application can be used to change the settings for color maps and displayed data. The server uses the settings provided by the web application, together with the data in NetCDF, to provide WMS image tiles, time series data and legend graphics to the Cesium-ncWMS web application. The user can simultaneously zoom in to the very high resolution forecast results anywhere in the world and get time series data for any point on the globe. The Cesium-ncWMS visualisation combines a global overview with locally relevant information in any browser. See the visualisation live at forecast.ewatercycle.org
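
    Because ncWMS speaks standard WMS, a coloured map tile can be fetched with an ordinary GetMap request. A sketch (the endpoint, layer and style names are placeholders, not the actual eWaterCycle configuration):

        import requests

        params = {
            "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
            "LAYERS": "forecast/discharge",     # placeholder layer name
            "STYLES": "boxfill/rainbow",        # an ncWMS-style palette
            "CRS": "CRS:84", "BBOX": "4.0,51.5,5.0,52.5",   # lon/lat box
            "WIDTH": 512, "HEIGHT": 512, "FORMAT": "image/png",
            "TIME": "2016-12-01T00:00:00Z",     # CF time axis, if present
        }
        # resp = requests.get("https://example.org/ncWMS/wms", params=params)
        # open("tile.png", "wb").write(resp.content)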

  6. IntegratedMap: a Web interface for integrating genetic map data.

    PubMed

    Yang, Hongyu; Wang, Hongyu; Gingle, Alan R

    2005-05-01

    IntegratedMap is a Web application and database schema for storing and interactively displaying genetic map data. Its Web interface includes a menu for direct chromosome/linkage group selection, a search form for selection based on mapped object location, and linkage group displays. An overview display provides convenient access to the full range of mapped and anchored object types, with genetic locus details, such as the numbers, types and names of mapped/anchored objects, displayed in a compact scrollable list box that updates automatically based on the selected map location and object type. Multi-linkage-group and localized map views are also available, along with links that can be configured for integration with other Web resources. IntegratedMap is implemented in C#/ASP.NET and the package, including a MySQL schema creation script, is available from http://cggc.agtec.uga.edu/Data/download.asp

  7. a Conceptual Framework for Indoor Mapping by Using Grammars

    NASA Astrophysics Data System (ADS)

    Hu, X.; Fan, H.; Zipf, A.; Shang, J.; Gu, F.

    2017-09-01

    Maps are the foundation of indoor location-based services. Many automatic indoor mapping approaches have been proposed, but they rely heavily on sensor data, such as point clouds and users' location traces. To address this issue, this paper presents a conceptual framework that represents the layout principles of research buildings using grammars. This framework can benefit the indoor mapping process by improving the accuracy of generated maps and by dramatically reducing the volume of sensor data required by traditional reconstruction approaches. In addition, we present details of several core modules of the framework. An example using the proposed framework is given to show the generation process of a semantic map. This framework is part of ongoing research towards an approach for reconstructing semantic maps.

  8. Rapid Mapping Of Floods Using SAR Data: Opportunities And Critical Aspects

    NASA Astrophysics Data System (ADS)

    Pulvirenti, Luca; Pierdicca, Nazzareno; Chini, Marco

    2013-04-01

    The potential of spaceborne Synthetic Aperture Radar (SAR) for flood mapping has been demonstrated by several past investigations. The synoptic view, the capability to operate in almost all weather conditions during both day and night, and the sensitivity of the microwave band to water are the key features that make SAR data useful for monitoring inundation events. In addition, their high spatial resolution, which can reach 1 m with the new generation of X-band instruments such as TerraSAR-X and COSMO-SkyMed (CSK), allows emergency managers to use flood maps at very high spatial resolution. CSK also offers the possibility of observing flood-hit regions frequently, thanks to its four-satellite constellation. Current research on flood mapping using SAR is focused on the development of automatic algorithms to be used in near-real-time applications. The approaches are generally based on the low radar return from smooth open water bodies, which behave as specular reflectors and appear dark in SAR images. The major advantage of automatic algorithms is the computational efficiency that makes them suitable for rapid mapping purposes. The choice of the threshold value that, in this kind of algorithm, separates flooded from non-flooded areas is a critical aspect because it depends on the characteristics of the observed scenario and on system parameters. To deal with this aspect, an algorithm for automatic detection of regions of low backscatter has been developed. It basically accomplishes three steps: 1) division of the SAR image into a set of non-overlapping sub-images or splits; 2) selection of inhomogeneous sub-images that contain (at least) two populations of pixels, one of which is formed by dark pixels; 3) the application, in sequence, of an automatic thresholding algorithm and a region growing algorithm in order to produce a homogeneous map of flooded areas. Besides the aforementioned choice of the threshold, rapid mapping of floods may present other critical aspects. Searching only for areas of low SAR backscatter may cause inaccuracies because flooded soils do not always act as smooth open water bodies. The presence of wind or of vegetation emerging above the water surface may give rise to an increase in radar backscatter. In particular, mapping flooded vegetation using SAR data may represent a difficult task, since backscattering phenomena in the volume between canopy, trunks and floodwater are quite complex. A typical phenomenon is the double-bounce effect involving soil and stems or trunks, which is generally enhanced by floodwater, so that flooded vegetation may appear very bright in a SAR image. Even in the absence of dense vegetation or wind, some regions may appear dark because of artefacts due to topography (shadowing), absorption caused by wet snow, and attenuation caused by heavy precipitating clouds (X-band SARs). Examples of the aforementioned effects that may limit the reliability of flood maps will be presented at the conference, and some guidance on dealing with these effects (e.g., the presence of vegetation and of artefacts) will be provided.
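
    A compact sketch of the first two steps on a calibrated SAR image in dB (Python; Otsu's method and a crude class-separation test stand in for the paper's specific threshold and bimodality estimators, and the final region-growing step is as sketched earlier for the G-POD flood application):

        import numpy as np
        from skimage.filters import threshold_otsu

        def global_threshold(sar_db, split=128, min_sep_db=4.0):
            """Steps 1-2: tile the image; keep a tile's Otsu threshold only
            if its two classes are well separated (a crude stand-in for the
            bimodality test); pool the accepted thresholds."""
            ts = []
            for i in range(0, sar_db.shape[0] - split + 1, split):
                for j in range(0, sar_db.shape[1] - split + 1, split):
                    tile = sar_db[i:i + split, j:j + split]
                    if tile.min() == tile.max():
                        continue          # flat tile: nothing to threshold
                    t = threshold_otsu(tile)
                    lo, hi = tile[tile < t], tile[tile >= t]
                    if lo.size and hi.size and hi.mean() - lo.mean() > min_sep_db:
                        ts.append(t)
            return float(np.median(ts)) if ts else None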

  9. Fusion of multi-source remote sensing data for agriculture monitoring tasks

    NASA Astrophysics Data System (ADS)

    Skakun, S.; Franch, B.; Vermote, E.; Roger, J. C.; Becker Reshef, I.; Justice, C. O.; Masek, J. G.; Murphy, E.

    2016-12-01

    Remote sensing data are an essential source of information for monitoring and quantifying crop state at global and regional scales. Crop mapping, state assessment, area estimation and yield forecasting are the main tasks addressed within GEO-GLAM. The efficiency of agriculture monitoring can be improved when heterogeneous multi-source remote sensing datasets are integrated. Here, we present several case studies utilizing MODIS, Landsat-8 and Sentinel-2 data along with meteorological data (growing degree days, GDD) for winter wheat yield forecasting, mapping and area estimation. Archived coarse-spatial-resolution data, such as MODIS, VIIRS and AVHRR, provide daily global observations that, coupled with statistical data on crop yield, enable the development of empirical models for timely yield forecasting at the national level. With the availability of high-temporal- and high-spatial-resolution Landsat-8 and Sentinel-2A imagery, coarse-resolution empirical yield models can be downscaled to provide yield estimates at regional and field scales. In particular, we present a case study of downscaling the MODIS CMG-based generalized winter wheat yield forecasting model to high-spatial-resolution data sets, namely the harmonized Landsat-8 - Sentinel-2A surface reflectance product (HLS). Since the yield model requires corresponding in-season crop masks, we propose an automatic approach to extract winter crop maps from MODIS NDVI and MERRA-2-derived GDD using a Gaussian mixture model (GMM). Validation for the state of Kansas (US) and Ukraine showed that the approach can yield accuracies > 90% without using reference (ground truth) data sets. Another application of yearly derived winter crop maps is their use for stratification within area frame sampling for crop area estimation; in particular, one can simulate the dependence of the error (coefficient of variation) on the number of samples and strata size. This approach was used to estimate the area of winter crops in Ukraine for 2013-2016. The GMM-GDD approach is further extended to HLS data to provide automatic winter crop mapping at 30 m resolution for the crop yield model and area estimation. In cases of persistent cloudiness, the addition of Sentinel-1A synthetic aperture radar (SAR) images is explored for automatic winter crop mapping.
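
    A minimal sketch of the unsupervised winter-crop mapping idea described above, assuming illustrative features (early-season NDVI gated by accumulated GDD plus peak NDVI) and a two-component Gaussian mixture; the paper's actual feature construction may differ.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def winter_crop_mask(ndvi_stack, gdd):
        """ndvi_stack: (T, H, W) NDVI time series; gdd: (T,) accumulated
        growing degree days per acquisition. Winter crops green up while
        GDD is still low, so early-season NDVI separates the populations."""
        T, H, W = ndvi_stack.shape
        early = gdd < np.percentile(gdd, 30)              # early-season scenes
        feat = np.stack([ndvi_stack[early].mean(axis=0),  # early greenness
                         ndvi_stack.max(axis=0)], axis=-1)  # seasonal peak
        gmm = GaussianMixture(n_components=2, random_state=0)
        labels = gmm.fit_predict(feat.reshape(-1, 2)).reshape(H, W)
        winter = np.argmax(gmm.means_[:, 0])  # component with high early NDVI
        return labels == winter
    ```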

  10. Towards the Optimal Pixel Size of dem for Automatic Mapping of Landslide Areas

    NASA Astrophysics Data System (ADS)

    Pawłuszek, K.; Borkowski, A.; Tarolli, P.

    2017-05-01

    Determining the appropriate spatial resolution of a digital elevation model (DEM) is a key step for effective landslide analysis based on remote sensing data. Several studies have demonstrated that choosing the finest DEM resolution is not always the best solution, and various DEM resolutions may suit different landslide applications. This study therefore assesses the influence of spatial resolution on automatic landslide mapping. A pixel-based approach using parametric and non-parametric classification methods, namely maximum likelihood (ML) classification and a feed-forward neural network (FFNN), was applied. This also allowed us to determine the impact of the classification method on the selection of DEM resolution. Landslide-affected areas were mapped based on four DEMs generated at 1 m, 2 m, 5 m and 10 m spatial resolution from airborne laser scanning (ALS) data. The performance of the landslide mapping was then evaluated against a landslide inventory map by computing confusion matrices. The results suggest that the finest DEM scale is not always the best fit, although working at 1 m resolution, at the micro-topography scale, can show different results. The best performance was found at 5 m DEM resolution for the FFNN and at 1 m DEM resolution for ML classification.

  11. A Computational Solution to Automatically Map Metabolite Libraries in the Context of Genome Scale Metabolic Networks.

    PubMed

    Merlet, Benjamin; Paulhe, Nils; Vinson, Florence; Frainay, Clément; Chazalviel, Maxime; Poupin, Nathalie; Gloaguen, Yoann; Giacomoni, Franck; Jourdan, Fabien

    2016-01-01

    This article describes a generic programmatic method for mapping chemical compound libraries onto organism-specific metabolic networks from various databases (KEGG, BioCyc) and flat file formats (SBML and Matlab files). We show how this pipeline was successfully applied to decipher the coverage of chemical libraries set up by two metabolomics facilities, MetaboHub (French national infrastructure for metabolomics and fluxomics) and Glasgow Polyomics (GP), on the metabolic networks available in the MetExplore web server. The present generic protocol is designed to formalize and reduce the volume of information transferred between the library and the network database. Matching of metabolites between libraries and metabolic networks is based on InChIs or InChIKeys and therefore requires that these identifiers are specified in both libraries and networks. In addition to providing coverage statistics, the pipeline also allows the visualization of mapping results in the context of metabolic networks. To achieve this goal, we addressed issues of programmatic interaction between two servers, improvement of metabolite annotation in metabolic networks, and automatic loading of a mapping into the genome-scale metabolic network analysis tool MetExplore. It is important to note that this mapping can also be performed on a single organism of interest or a selection of organisms, and is thus not limited to large facilities.
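
    As a minimal sketch of the identifier-based matching step (the data layout is assumed; MetExplore's actual API is not modeled here), library compounds can be mapped onto network metabolites through an InChIKey index:

    ```python
    def map_library_on_network(library, network):
        """library: list of dicts with an 'inchikey' field;
        network: list of dicts with 'id' and 'inchikey' fields.
        Returns the matches and the library coverage."""
        index = {}
        for met in network:
            if met.get("inchikey"):              # index network metabolites
                index.setdefault(met["inchikey"], []).append(met["id"])
        matches = {c["inchikey"]: index[c["inchikey"]]
                   for c in library if c.get("inchikey") in index}
        coverage = len(matches) / len(library) if library else 0.0
        return matches, coverage
    ```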

  12. Automated crystallographic ligand building using the medial axis transform of an electron-density isosurface.

    PubMed

    Aishima, Jun; Russel, Daniel S; Guibas, Leonidas J; Adams, Paul D; Brunger, Axel T

    2005-10-01

    Automatic fitting methods that build molecules into electron-density maps usually fail below 3.5 Å resolution. As a first step towards addressing this problem, an algorithm has been developed that uses an approximation of the medial axis to simplify an electron-density isosurface. This approximation captures the central axis of the isosurface with a graph, which is then matched against a graph of the molecular model. One of the first applications of the medial axis to X-ray crystallography is presented here. When applied to ligand fitting, the method performs at least as well as methods based on selecting peaks in electron-density maps. Generalization of the method to the recognition of common features across multiple contour levels could lead to powerful automatic fitting methods that perform well even at low resolution.

  13. Automatic Detection and Evaluation of Solar Cell Micro-Cracks in Electroluminescence Images Using Matched Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spataru, Sergiu; Hacke, Peter; Sera, Dezso

    A method for detecting micro-cracks in solar cells using two-dimensional matched filters was developed, the filters being derived from the electroluminescence intensity profile of typical micro-cracks. We describe the image processing steps that yield a binary map of micro-crack locations. Finally, we show how to automatically estimate the total length of each micro-crack from these maps, and propose a method to identify severe types of micro-cracks, such as parallel, dendritic, and multiply oriented cracks. With an optimized threshold parameter, the technique detects over 90% of cracks larger than 3 cm in length. The method shows great potential for quantifying micro-crack damage after manufacturing or module transportation and for establishing a module quality criterion for cell cracking in photovoltaic modules.
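
    The core of the detection step can be sketched as follows: a zero-mean dark-line template is correlated with the EL image at a few orientations, and the maximum response is thresholded into a binary crack map. The kernel shape, orientations and threshold are illustrative assumptions, not the authors' calibrated values.

    ```python
    import numpy as np
    from scipy.ndimage import correlate, rotate

    def crack_map(el_image, length=15, width=3, threshold=0.35):
        # dark-line template: negative stripe (crack) on a zero-mean background
        k = np.zeros((length, length))
        c = length // 2
        k[c - width // 2:c + width // 2 + 1, :] = -1.0
        k -= k.mean()
        k /= np.abs(k).sum()
        responses = [correlate(el_image.astype(float),
                               rotate(k, angle, reshape=False))
                     for angle in (0, 45, 90, 135)]   # small orientation bank
        resp = np.max(responses, axis=0)              # best match per pixel
        return resp > threshold * resp.max()          # binary micro-crack map
    ```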

  14. Zebra Crossing Spotter: Automatic Population of Spatial Databases for Increased Safety of Blind Travelers

    PubMed Central

    Ahmetovic, Dragan; Manduchi, Roberto; Coughlan, James M.; Mascetti, Sergio

    2016-01-01

    In this paper we propose a computer vision-based technique that mines existing spatial image databases for discovery of zebra crosswalks in urban settings. Knowing the location of crosswalks is critical for a blind person planning a trip that includes street crossing. By augmenting existing spatial databases (such as Google Maps or OpenStreetMap) with this information, a blind traveler may make more informed routing decisions, resulting in greater safety during independent travel. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm could also be complemented by a final crowdsourcing validation stage for increased accuracy. PMID:26824080

  15. A CityGML extension for traffic-sign objects that guides the automatic processing of data collected using Mobile Mapping technology

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; Riveiro, B.; Arias-Sánchez, P.; González-Jorge, H.; Martínez-Sánchez, J.

    2014-11-01

    The rapid evolution of integral schemes accounting for geometric and semantic data has been largely motivated by advances in mobile laser scanning technology over the last decade; automation in data processing has also recently influenced the expansion of new model concepts. This paper reviews some important issues involved in the new paradigms of city 3D modelling: an interoperable schema for city 3D modelling (CityGML) and mobile mapping technology to provide the features that compose the city model. The paper focuses on traffic signs, discussing their characterization using CityGML in order to ease the implementation of LiDAR technology in road management software, and analysing some limitations of current technology in automatic detection and classification.

  16. Automated kidney detection for 3D ultrasound using scan line searching

    NASA Astrophysics Data System (ADS)

    Noll, Matthias; Nadolny, Anne; Wesarg, Stefan

    2016-04-01

    Ultrasound (U/S) is a fast and inexpensive imaging modality used to examine various anatomical structures, e.g. the kidneys. One important prerequisite for automatic organ tracking or computer-aided diagnosis is the identification of the organ region. During this process, exact information about the transducer location and orientation is usually unavailable, which makes the implementation of such automatic methods exceedingly challenging. In this work we introduce a new automatic method for detecting the kidney in 3D U/S images. The technique analyses the U/S image data along virtual scan lines, searching for the characteristic texture changes that occur when entering and leaving the symmetric tissue regions of the renal cortex. A subsequent feature accumulation along a second scan direction produces a 2D heat map of renal cortex candidates, from which the kidney location is extracted in two steps. First, the strongest candidate and its counterpart are extracted by heat-map intensity ranking and renal cortex size analysis; this process exploits the heat-map gap caused by the renal pelvis region. Substituting renal pelvis detection with this combined cortex tissue feature increases the detection robustness. In contrast to model-based methods that generate characteristic pattern matches, our method is simpler and therefore faster. An evaluation on 61 3D U/S data sets showed that the kidney location could be correctly identified in the 55 cases exhibiting no or only minor shadowing.
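
    Purely as an illustration of the scan-line idea (the enter/leave criterion below is an invented stand-in for the paper's texture features), each column of a slice can be scanned for a paired rise and fall of intensity and the hits accumulated into a 2D heat map:

    ```python
    import numpy as np

    def cortex_heat_map(volume, edge_gain=2.0):
        """volume: (Z, Y, X) ultrasound intensities; returns a (Z, X) map."""
        heat = np.zeros((volume.shape[0], volume.shape[2]))
        for z in range(volume.shape[0]):
            sl = volume[z].astype(float)
            grad = np.diff(sl, axis=0)            # change along each scan line
            limit = edge_gain * sl.std()
            rises = (grad > limit).sum(axis=0)    # entering bright cortex
            falls = (grad < -limit).sum(axis=0)   # leaving it again
            heat[z] = np.minimum(rises, falls)    # paired transitions only
        return heat
    ```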

  17. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    NASA Astrophysics Data System (ADS)

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches lead to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps. First, a mesh parameterization procedure comprising mesh segmentation and mesh unfolding is performed to reduce geometric distortion when mapping 2D texture onto the 3D model. Second, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Third, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured with the created texture atlas without resampling. Experimental results show that our method effectively mitigates texture fragmentation, demonstrating that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.

  18. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    NASA Astrophysics Data System (ADS)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

    In epidemiological studies as well as in clinical practice, the amount of medical image data produced has strongly increased in the last decade. In this context, organ segmentation in MR volume data has gained increasing attention for medical applications. Especially in large-scale population-based studies, organ volumetry is highly relevant and requires exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework with a two-step probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined using several extended segmentation strategies. We present a three-class support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level set segmentation technique, are used in subsequent processing modules to segment the renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important for analyzing renal function. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.

  19. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data.

    PubMed

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-21

    In epidemiological studies as well as in clinical practice, the amount of medical image data produced has strongly increased in the last decade. In this context, organ segmentation in MR volume data has gained increasing attention for medical applications. Especially in large-scale population-based studies, organ volumetry is highly relevant and requires exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework with a two-step probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined using several extended segmentation strategies. We present a three-class support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level set segmentation technique, are used in subsequent processing modules to segment the renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important for analyzing renal function. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.

  20. Automatic Registration of Scanned Satellite Imagery with a Digital Map Data Base.

    DTIC Science & Technology

    1980-11-01


  1. Intelligent process mapping through systematic improvement of heuristics

    NASA Technical Reports Server (NTRS)

    Ieumwananonthachai, Arthur; Aizawa, Akiko N.; Schwartz, Steven R.; Wah, Benjamin W.; Yan, Jerry C.

    1992-01-01

    The present system for the automatic learning and evaluation of novel heuristic methods, applicable to the mapping of communication-process sets onto a computer network, is based on testing a population of competing heuristic methods within a fixed time constraint. The TEACHER 4.1 prototype learning system, implemented for learning new heuristic methods through postgame analysis, iteratively generates and refines the mappings of a set of communicating processes on a computer network. A systematic exploration of the space of possible heuristic methods is shown to promise significant improvement.

  2. Adaptive optimization of reference intensity for optical coherence imaging using galvanometric mirror tilting method

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2015-09-01

    Integration time and reference intensity are important factors for achieving a high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive method for optimizing the reference intensity of an OCT setup. The reference intensity is automatically controlled by tilting the beam position with a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensity and color-space variables using false-color mapping. The system then increases or decreases the reference intensity following the map data, optimizing it with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, allowed the integration time to be changed without manual recalibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. SNR and sensitivity could also be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid the optimization of SNR and sensitivity for optical coherence tomography systems.

  3. Automatic Gleason grading of prostate cancer using SLIM and machine learning

    NASA Astrophysics Data System (ADS)

    Nguyen, Tan H.; Sridharan, Shamira; Marcias, Virgilia; Balla, Andre K.; Do, Minh N.; Popescu, Gabriel

    2016-03-01

    In this paper, we present an updated automatic diagnostic procedure for prostate cancer using quantitative phase imaging (QPI). In a recent report [1], we demonstrated the use of Random Forest for image segmentation on prostate cores imaged using QPI. Based on these label maps, we developed an algorithm to discriminate between regions with Gleason grade 3 and grade 4 prostate cancer in prostatectomy tissue. An area under the curve (AUC) of 0.79 for the receiver operating characteristic (ROC) curve is obtained for Gleason grade 4 detection in a binary classification between grade 3 and grade 4. Our dataset includes 280 benign cases and 141 malignant cases. We show that textural features in phase maps have strong diagnostic value, since they can be used in combination with the label map to detect the presence or absence of basal cells, a strong indicator of prostate carcinoma. A support vector machine (SVM) classifier trained on this new feature vector can classify cancer/non-cancer with an error rate of 0.23 and an AUC of 0.83.
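
    The final classification stage can be sketched as below, assuming texture features and labels are already extracted into arrays (the file names are placeholders); only the SVM setup and AUC evaluation mirror the text.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    X = np.load("phase_texture_features.npy")  # hypothetical (n_cores, n_features)
    y = np.load("labels.npy")                  # 1 = malignant, 0 = benign
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]
    print("AUC:", roc_auc_score(y_te, scores))  # the paper reports ~0.83
    ```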

  4. MultiMap: A Tool to Automatically Extract and Analyse Spatial Microscopic Data From Large Stacks of Confocal Microscopy Images

    PubMed Central

    Varando, Gherardo; Benavides-Piccione, Ruth; Muñoz, Alberto; Kastanauskaite, Asta; Bielza, Concha; Larrañaga, Pedro; DeFelipe, Javier

    2018-01-01

    The development of 3D visualization and reconstruction methods to analyse microscopic structures at different levels of resolution is of great importance for defining brain microorganization and connectivity. MultiMap is a new tool that allows the visualization, 3D segmentation and quantification of fluorescent structures selectively in the neuropil from large stacks of confocal microscopy images. The major contribution of this tool is the possibility of easily navigating and creating regions of interest of any shape and size within a large brain area, which are then automatically 3D-segmented and quantified to determine the density of puncta in the neuropil. As a proof of concept, we focused on the analysis of glutamatergic and GABAergic presynaptic axon terminals in the mouse hippocampal region to demonstrate its use as a tool to provide putative excitatory and inhibitory synaptic maps. The segmentation and quantification method has been validated on expert-labeled images of the mouse hippocampus and on two benchmark datasets, obtaining results comparable to the expert detections. PMID:29875639

  5. MultiMap: A Tool to Automatically Extract and Analyse Spatial Microscopic Data From Large Stacks of Confocal Microscopy Images.

    PubMed

    Varando, Gherardo; Benavides-Piccione, Ruth; Muñoz, Alberto; Kastanauskaite, Asta; Bielza, Concha; Larrañaga, Pedro; DeFelipe, Javier

    2018-01-01

    The development of 3D visualization and reconstruction methods to analyse microscopic structures at different levels of resolution is of great importance for defining brain microorganization and connectivity. MultiMap is a new tool that allows the visualization, 3D segmentation and quantification of fluorescent structures selectively in the neuropil from large stacks of confocal microscopy images. The major contribution of this tool is the possibility of easily navigating and creating regions of interest of any shape and size within a large brain area, which are then automatically 3D-segmented and quantified to determine the density of puncta in the neuropil. As a proof of concept, we focused on the analysis of glutamatergic and GABAergic presynaptic axon terminals in the mouse hippocampal region to demonstrate its use as a tool to provide putative excitatory and inhibitory synaptic maps. The segmentation and quantification method has been validated on expert-labeled images of the mouse hippocampus and on two benchmark datasets, obtaining results comparable to the expert detections.

  6. Autonomous mental development with selective attention, object perception, and knowledge representation

    NASA Astrophysics Data System (ADS)

    Ban, Sang-Woo; Lee, Minho

    2008-04-01

    Knowledge-based clustering and autonomous mental development remain high-priority research topics, in which neural network learning techniques are used to achieve optimal performance. In this paper, we present a new framework that can automatically generate a relevance map from sensory data, representing knowledge about objects and inferring new knowledge about novel objects. The proposed model is based on an understanding of the visual 'what' pathway in the brain. A stereo saliency map model selectively decides salient object areas by additionally considering a local symmetry feature. The incremental object perception model builds clusters for the construction of an ontology map in the color and form domains in order to perceive an arbitrary object, implemented by a growing fuzzy topology adaptive resonance theory (GFTART) network. Log-polar-transformed color and form features of a selected object are used as inputs to the GFTART. The clustered information is relevant for describing specific objects, and the proposed model can automatically infer an unknown object using the learned information. Experimental results with real data demonstrate the validity of this approach.

  7. Approach of automatic 3D geological mapping: the case of the Kovdor phoscorite-carbonatite complex, NW Russia.

    PubMed

    Kalashnikov, A O; Ivanyuk, G Yu; Mikhailova, J A; Sokharev, V A

    2017-07-31

    We have developed an approach for automatic 3D geological mapping based on the conversion of the chemical composition of rocks to mineral composition by logical computation. It allows the mineral composition to be calculated from bulk-rock chemistry, interpolated in the same way as chemical composition and, finally, used to build a 3D geological model. The approach was developed for the Kovdor phoscorite-carbonatite complex containing the Kovdor baddeleyite-apatite-magnetite deposit. We used four bulk-rock chemistry analyses: Femagn, P2O5, CO2 and SiO2. We applied four techniques for predicting rock types: calculation of normative mineral compositions (norms), multiple regression, an artificial neural network, and the logical evaluation developed here. The latter two performed best. As a result, we distinguished 14 types of phoscorite (forsterite-apatite-magnetite-carbonate rock), carbonatite and host rocks. The results show good agreement with our petrographic studies of the deposit and with recent manually built maps. The proposed approach can be used as a tool for reconstructing deposit genesis and for preliminary geometallurgical modelling.
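
    A toy illustration of "conversion of chemical composition to mineral composition by logical computation": threshold rules over the four assayed components assign a coarse rock type per sample. The thresholds and class names below are invented for illustration and are not the authors' rules.

    ```python
    def rock_type(fe_magn, p2o5, co2, sio2):
        # invented decision rules, ordered from most to least diagnostic
        if co2 > 20:
            return "carbonatite"
        if fe_magn > 15 and p2o5 > 5:
            return "apatite-magnetite phoscorite"
        if sio2 > 40:
            return "host silicate rock"
        return "forsterite-dominant phoscorite"

    samples = [(25.0, 6.0, 8.0, 10.0), (3.0, 1.0, 30.0, 5.0)]
    print([rock_type(*s) for s in samples])
    ```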

  8. Stacked Multilayer Self-Organizing Map for Background Modeling.

    PubMed

    Zhao, Zhenjie; Zhang, Xuebo; Fang, Yongchun

    2015-09-01

    In this paper, a new background modeling method called the stacked multilayer self-organizing map background model (SMSOM-BM) is proposed, which offers several merits, including strong representative ability for complex scenarios and ease of use. To enhance the representative ability of the background model and to learn its parameters automatically, the recently developed idea of representation learning (or deep learning) is employed to extend the existing single-layer self-organizing map background model to a multilayer one, namely the proposed SMSOM-BM. As a consequence, the SMSOM-BM gains strong representative ability for learning background models of challenging scenarios and determines most network parameters automatically. More specifically, every pixel is modeled by a SMSOM, and spatial consistency is considered at each layer. By introducing a novel over-layer filtering process, we can train the background model layer by layer in an efficient manner. Furthermore, for real-time performance, we have implemented the proposed method on the NVIDIA CUDA platform. Comparative experimental results show the superior performance of the proposed approach.
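
    The single-layer building block that SMSOM-BM stacks can be sketched as a per-pixel SOM codebook: the incoming color is matched against the codebook and the winning node is adapted. Learning rate, codebook size and matching threshold below are illustrative assumptions.

    ```python
    import numpy as np

    class PixelSOM:
        """One pixel's single-layer SOM background model (illustrative)."""
        def __init__(self, n_nodes=9, lr=0.05, match_dist=20.0):
            self.w = np.random.rand(n_nodes, 3) * 255  # RGB codebook vectors
            self.lr, self.match_dist = lr, match_dist

        def update(self, rgb):
            """Return True if rgb matches the background, then adapt."""
            d = np.linalg.norm(self.w - rgb, axis=1)
            best = d.argmin()
            if d[best] < self.match_dist:    # background: adapt the winner
                self.w[best] += self.lr * (rgb - self.w[best])
                return True
            return False                     # foreground pixel
    ```

    A frame is classified by calling update on every pixel's model; the stacked variant additionally filters the resulting maps between layers.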

  9. Small passenger car transmission test; Ford C4 transmission

    NASA Technical Reports Server (NTRS)

    Bujold, M. P.

    1980-01-01

    A 1979 Ford C4 automatic transmission was tested per a passenger car automatic transmission test code (SAE J651b) which required drive performance, coast performance, and no load test conditions. Under these test conditions, the transmission attained maximum efficiencies in the mid-eighty percent range for both drive performance tests and coast performance tests. The major results of this test (torque, speed, and efficiency curves) are presented. Graphs map the complete performance characteristics for the Ford C4 transmission.

  10. Classifying forest and nonforest land on space photographs

    NASA Technical Reports Server (NTRS)

    Aldrich, R. C.

    1970-01-01

    Although the research reported is in its preliminary stages, results show that: (1) infrared color film is the best single multiband sensor available; (2) there is a good possibility that forest can be separated from all nonforest land uses by microimage evaluation techniques on IR color film coupled with B/W infrared and panchromatic films; and (3) discrimination of forest and nonforest classes is possible by either of two methods: interpreters with appropriate viewing and mapping instruments, or programmable automatic scanning microdensitometers and automatic data processing.

  11. Citrus Inventory

    NASA Technical Reports Server (NTRS)

    1994-01-01

    An aerial color infrared (CIR) mapping system developed by Kennedy Space Center enables Florida's Charlotte County to accurately appraise its citrus groves while reducing appraisal costs. The technology was further advanced by development of a dual video system making it possible to simultaneously view images of the same area and detect changes. An image analysis system automatically surveys and photo interprets grove images as well as automatically counts trees and reports totals. The system, which saves both time and money, has potential beyond citrus grove valuation.

  12. Use of an automatic resistivity system for detecting abandoned mine workings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, W.R.; Burdick, R.G.

    1983-01-01

    A high-resolution earth resistivity system has been designed and constructed for use as a means of detecting abandoned coal mine workings. The automatic pole-dipole earth resistivity technique has already been applied to the detection of subsurface voids for military applications. The hardware and software of the system are described, together with applications for surveying and mapping abandoned coal mine workings. Field tests are presented to illustrate the detection of both air-filled and water-filled mine workings.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Jiangye

    Up-to-date maps of installed solar photovoltaic panels are a critical input for policy and financial assessment of solar distributed generation. However, such maps for large areas are not available. With high coverage and low cost, aerial images enable large-scale mapping, but it is highly difficult to automatically identify solar panels in images: they are small objects with varying appearances, dispersed in complex scenes. We introduce a new approach based on deep convolutional networks that effectively learns to delineate solar panels in aerial scenes. The approach has successfully mapped solar panels in imagery covering 200 square kilometers in two cities, using only 12 square kilometers of manually labeled training data.
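
    As a structural illustration only (the record does not specify the network architecture), a small fully convolutional network for per-pixel panel delineation could look like the following PyTorch sketch:

    ```python
    import torch
    import torch.nn as nn

    class PanelNet(nn.Module):
        """Tiny fully convolutional net: per-pixel solar-panel logits."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            )
            self.classify = nn.Conv2d(64, 1, 1)  # 1x1 conv -> panel logit

        def forward(self, x):                    # x: (N, 3, H, W) aerial tile
            return self.classify(self.features(x))

    model = PanelNet()
    logits = model(torch.randn(1, 3, 256, 256))  # (1, 1, 256, 256) panel map
    ```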

  14. LANDSAT and radar mapping of intrusive rocks in SE-Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Dossantos, A. R.; Dosanjos, C. E.; Moreira, J. C.; Barbosa, M. P.; Veneziani, P.

    1982-01-01

    The feasibility of intrusive rock mapping was investigated and criteria for regional geological mapping at the 1:500,000 scale were established for polycyclic and polymetamorphic areas, using the logical method of photointerpretation of LANDSAT imagery and radar from the RADAMBRASIL project. The spectral behavior of intrusive rocks was evaluated using the interactive multispectral image analysis system (Image-100). The region of Campos (city) in northern Rio de Janeiro State was selected as the study area, and digital image processing and pattern recognition techniques were applied. Various maps at the 1:250,000 scale were obtained to evaluate the results of automatic data processing.

  15. Advantages of estimating rate corrections during dynamic propagation of spacecraft rates: Applications to real-time attitude determination of SAMPEX

    NASA Technical Reports Server (NTRS)

    Challa, M. S.; Natanson, G. A.; Baker, D. F.; Deutschmann, J. K.

    1994-01-01

    This paper describes real-time attitude determination results for the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX), a gyroless spacecraft, using a Kalman filter/Euler equation approach denoted the real-time sequential filter (RTSF). The RTSF is an extended Kalman filter whose state vector includes the attitude quaternion and corrections to the rates, which are modeled as Markov processes with small time constants. The rate corrections impart significant robustness to the RTSF against errors in modeling the environmental and control torques, as well as errors in the initial attitude and rates, while maintaining a small state vector. SAMPEX flight data from various mission phases are used to demonstrate the robustness of the RTSF against a priori attitude and rate errors of up to 90 deg and 0.5 deg/sec, respectively, as well as a sensitivity of 0.0003 deg/sec in estimating rate corrections in torque computations. In contrast, it is shown that RTSF attitude estimates without the rate corrections can degrade rapidly. RTSF advantages over single-frame attitude determination algorithms are also demonstrated through (1) substantial improvements in attitude solutions during sun-magnetic field coalignment and (2) magnetic-field-only attitude and rate estimation during the spacecraft's sun-acquisition mode. A robust magnetometer-only attitude-and-rate determination method is also developed for the contingency in which both sun data and a priori knowledge of the spacecraft state are unavailable. This method includes a deterministic algorithm used to initialize the RTSF with coarse estimates of the spacecraft attitude and rates. The combined algorithm has been found effective, yielding accuracies of 1.5 deg in attitude and 0.01 deg/sec in the rates, with convergence times as short as 400 sec.

  16. Calculation of three-dimensional, inviscid, supersonic, steady flows

    NASA Technical Reports Server (NTRS)

    Moretti, G.

    1981-01-01

    A detailed description of a computational program for the evaluation of three-dimensional, supersonic, inviscid, steady flow past airplanes is presented. Emphasis is placed on how a powerful, automatic mapping technique is coupled to the fluid mechanical analysis. Each of the three constituents of the analysis (body geometry, mapping technique, and gas dynamical effects) is carefully coded and described. Results of computations based on sample geometries, together with discussion, are also presented.

  17. Exploring the clinical potential of an automatic colonic polyp detection method based on the creation of energy maps.

    PubMed

    Fernández-Esparrach, Glòria; Bernal, Jorge; López-Cerón, Maria; Córdova, Henry; Sánchez-Montes, Cristina; Rodríguez de Miguel, Cristina; Sánchez, Francisco Javier

    2016-09-01

    Polyp miss-rate is a drawback of colonoscopy that increases significantly for small polyps. We explored the efficacy of an automatic computer-vision method for polyp detection. Our method relies on a model that defines polyp boundaries as valleys of image intensity. Valley information is integrated into energy maps that represent the likelihood of the presence of a polyp. In 24 videos containing polyps from routine colonoscopies, all polyps were detected in at least one frame. The mean of the maximum values on the energy map was higher for frames with polyps than without (P < 0.001). Performance improved in high quality frames (AUC = 0.79 [95 %CI 0.70 - 0.87] vs. 0.75 [95 %CI 0.66 - 0.83]). With 3.75 set as the maximum threshold value, sensitivity and specificity for the detection of polyps were 70.4 % (95 %CI 60.3 % - 80.8 %) and 72.4 % (95 %CI 61.6 % - 84.6 %), respectively. Energy maps performed well for colonic polyp detection, indicating their potential applicability in clinical practice.

  18. Automatic Pedestrian Crossing Detection and Impairment Analysis Based on Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, Y.; Li, Q.

    2017-09-01

    Pedestrian crossings, as an important part of transportation infrastructure, serve to protect pedestrians' lives and property and to keep traffic flow in order. As a prominent feature of the street scene, detection of pedestrian crossings contributes to 3D road marking reconstruction and to diminishing the adverse impact of outliers in 3D street scene reconstruction. Since pedestrian crossings are subject to wear and tear from heavy traffic flow, it is imperative to monitor their condition. On this account, an approach for automatic pedestrian crossing detection using images from a vehicle-based Mobile Mapping System is put forward, and crossing defilement and impairment are analyzed in this paper. First, a pedestrian crossing classifier is trained with a low recall rate. Initial detections are then refined using projection filtering, contour information analysis, and monocular vision. The result is a pedestrian crossing detection and analysis system with high recall, precision and robustness. The system works for pedestrian crossing detection under different situations and light conditions, and can automatically recognize defiled and impaired crossings, which facilitates the monitoring and maintenance of traffic facilities and thereby reduces potential traffic safety problems.

  19. Polarization transformation as an algorithm for automatic generalization and quality assessment

    NASA Astrophysics Data System (ADS)

    Qian, Haizhong; Meng, Liqiu

    2007-06-01

    For decades it has been a dream of cartographers to computationally mimic the generalization processes of the human brain for deriving various small-scale target maps or databases from a large-scale source map or database. This paper addresses, in a systematic way, the polarization transformation (PT), a new algorithm that serves both the automatic generalization of discrete features and quality assurance. By means of PT, two-dimensional point clusters or line networks in the Cartesian system can be transformed into a polar coordinate system and then unfolded as a single spectrum line r = f(α), where r and α stand for the polar radius and the polar angle, respectively. After the transformation, the original features correspond to nodes on the spectrum line, delimited between 0° and 360° along the horizontal axis and between the minimum and maximum polar radius along the vertical axis. Since PT is a lossless transformation, it allows a straightforward analysis and comparison of the original and generalized distributions, so both automatic generalization and quality assurance can be carried out this way. Examples illustrate that the PT algorithm meets the requirements of generalizing discrete spatial features and puts the process on a more rigorous footing.
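
    Since the transformation itself is fully specified by the abstract, a direct implementation is straightforward; only the choice of pole (here the centroid) is an assumption:

    ```python
    import numpy as np

    def polarization_transform(points, pole=None):
        """points: (N, 2) Cartesian coordinates. Returns the spectrum line
        r = f(alpha) as (alpha_deg, r), sorted by polar angle."""
        pts = np.asarray(points, dtype=float)
        c = pts.mean(axis=0) if pole is None else np.asarray(pole, dtype=float)
        dx, dy = (pts - c).T
        r = np.hypot(dx, dy)                            # polar radius
        alpha = np.degrees(np.arctan2(dy, dx)) % 360.0  # angle in [0, 360)
        order = np.argsort(alpha)
        return alpha[order], r[order]

    # Lossless: (alpha, r) plus the pole recovers the original points, so an
    # original and a generalized distribution can be compared directly.
    ```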

  20. Adaptive video-based vehicle classification technique for monitoring traffic : [executive summary].

    DOT National Transportation Integrated Search

    2015-08-01

    Federal Highway Administration (FHWA) recommends axle-based classification standards to map : passenger vehicles, single unit trucks, and multi-unit trucks, at Automatic Traffic Recorder (ATR) stations : statewide. Many state Departments of Transport...

  1. Clustering of color map pixels: an interactive approach

    NASA Astrophysics Data System (ADS)

    Moon, Yiu Sang; Luk, Franklin T.; Yuen, K. N.; Yeung, Hoi Wo

    2003-12-01

    The demand for digital maps continues to grow as mobile electronic devices become more popular. Instead of creating an entire map from scratch, we may convert a scanned paper map into a digital one. Color clustering is the very first step of this conversion. Currently, most existing clustering algorithms are fully automatic. They are fast and efficient but may not work well for map conversion because of the numerous ambiguities associated with printed maps. Here we introduce two interactive approaches to color clustering on maps: color clustering with pre-calculated index colors (PCIC) and color clustering with pre-calculated color ranges (PCCR). We also introduce a memory model that can enhance and integrate different image processing techniques for fine-tuning the clustering results. Problems and examples of the algorithms are discussed in the paper.
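
    A minimal sketch of the PCIC idea, assuming the user supplies the palette of index colors (the palette below is invented): each pixel is labeled with its nearest index color.

    ```python
    import numpy as np

    def pcic(image, index_colors):
        """image: (H, W, 3) uint8 scanned map; index_colors: (K, 3) palette.
        Returns an (H, W) array of palette indices."""
        px = image.reshape(-1, 1, 3).astype(float)
        palette = np.asarray(index_colors, dtype=float).reshape(1, -1, 3)
        dist = np.linalg.norm(px - palette, axis=2)   # (H*W, K) distances
        return dist.argmin(axis=1).reshape(image.shape[:2])

    palette = [(255, 255, 255), (0, 0, 0), (0, 100, 255), (30, 140, 30)]
    # labels = pcic(scanned_map, palette)  # e.g. paper, text, water, forest
    ```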

  2. Development of a Global Agricultural Hotspot Detection and Early Warning System

    NASA Astrophysics Data System (ADS)

    Lemoine, G.; Rembold, F.; Urbano, F.; Csak, G.

    2015-12-01

    The number of web-based platforms for crop monitoring has grown rapidly over the last years, and anomaly maps and time profiles of remote-sensing-derived indicators can be accessed online through a number of web portals. However, while these systems make a large amount of crop monitoring data available to agriculture and food security analysts, no global platform provides agricultural production hotspot warnings in a highly automatic and timely manner. A web-based system providing timely warning evidence as maps and short narratives is therefore under development at the Joint Research Centre. The system, called "HotSpot Detection System of Agriculture Production Anomalies" (HSDS), will focus on water-limited agricultural systems worldwide. The automatic analysis of relevant meteorological and vegetation indicators for selected administrative units (GAUL level 1) will trigger warning messages for areas where anomalous conditions are observed. The level of warning (ranging from "watch" to "alert") will depend on the nature and number of indicators for which an anomaly is detected. Information on the extent of the agricultural areas affected by the anomaly and the progress of the agricultural season will complement the warning label. In addition, we are testing supplementary detailed information from other sources for the areas triggering a warning: automatic, web-based, food-security-tailored media analysis (using the JRC Media Monitor semantic search engine) and automatic detection of active crop area using Sentinel-1, upcoming Sentinel-2 and Landsat 8 imagery processed in Google Earth Engine. The basic processing will be fully automated and updated every 10 days, exploiting low-resolution rainfall estimates and satellite vegetation indices. Maps, trend graphs and statistics, accompanied by short narratives edited by a team of crop monitoring experts, will be made available on the website monthly.

  3. Implementing automatic LiDAR and supervised mapping methodologies to quantify agricultural terraced landforms at landscape scale: the case of Veneto Region

    NASA Astrophysics Data System (ADS)

    Eugenio Pappalardo, Salvatore; Ferrarese, Francesco; Tarolli, Paolo; Varotto, Mauro

    2016-04-01

    Traditional agricultural terraced landscapes embody an important cultural value to be investigated in depth, both for their role in local heritage and the cultural economy and for their potential geo-hydrological hazard due to abandonment and degradation. Moreover, traditional terraced landscapes are usually based on non-intensive agro-systems and may enhance important ecosystem services such as agro-biodiversity conservation and cultural services. Owing to their unplanned genesis, mapping, quantifying and classifying agricultural terraces at the regional scale is often critical, since they are usually set on geomorphologically and historically complex landscapes. Traditional mapping methods are generally based on scientific literature and local documentation, historical and cadastral sources, technical cartography, visual interpretation of aerial images or, finally, field surveys. Consequently, limitations and uncertainty in mapping at the regional scale are mostly related to forest cover and the lack of thematic cartography. The Veneto Region (NE Italy) presents a wide heterogeneity of agricultural terraced landscapes, mainly distributed within the hilly and pre-Alpine areas. Previous studies using traditional mapping methods quantified 2,688 ha of terraced areas, with the highest values within the Prealps of Lessinia (1,013 ha, Province of Verona) and in the Brenta Valley (421 ha, Province of Vicenza); however, the terraced features of these case studies show relevant differences in fragmentation and terracing intensity, with dissimilar degrees of clustering: 1.7 ha per terraced area in the Province of Verona versus 1.2 ha in the Province of Vicenza. The aim of this paper is to implement automatic methodologies and compare them with traditional survey methodologies to map and assess agricultural terraces in two representative areas of the Veneto Region. Testing different remote sensing analyses, such as LiDAR topographic survey and visual interpretation of aerial orthophotos (RGB+NIR bands), we performed a territorial analysis in the Lessinia and Brenta Valley case studies. Preliminary results show that terraced feature extraction by automatic LiDAR analysis is more efficient both in identifying geometries (walls and terraced surfaces) and in quantifying features under the forest canopy; however, traditional mapping methodology confirms its strength by combining different methods and data such as aerial photos, visual interpretation, maps and field surveys. The two methods compared here thus cross-validate each other and allow us to better understand the complexity of this kind of landscape.

  4. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    NASA Astrophysics Data System (ADS)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

    To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is desirable owing to its balance of labor cost and 3D effect. The location of key-frames affects the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to preserve temporal continuity more reliably and to reduce the depth propagation errors caused by occlusion. Potential key-frames are localized in terms of clustered color variation and motion intensity. The key-frame interval is also taken into account to keep accumulated propagation errors under control and to guarantee minimal user interaction. Once the key-frame depth maps are aligned with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom, a bi-directional depth propagation scheme is adopted in which a non-key-frame is interpolated from two adjacent key-frames. Experimental results show that the proposed scheme performs better than existing 2D-to-3D schemes with fixed key-frame intervals.

  5. fMRat: an extension of SPM for a fully automatic analysis of rodent brain functional magnetic resonance series.

    PubMed

    Chavarrías, Cristina; García-Vázquez, Verónica; Alemán-Gómez, Yasser; Montesinos, Paula; Pascau, Javier; Desco, Manuel

    2016-05-01

    The purpose of this study was to develop a multi-platform automatic software tool for the full processing of rodent brain fMRI studies. Existing tools require the use of several different plug-ins, significant user interaction and/or programming skills. Based on a user-friendly interface, the tool provides statistical parametric brain maps (t and Z) and the percentage of signal change for user-provided regions of interest. The tool is coded in MATLAB (MathWorks®) and implemented as a plug-in for SPM (Statistical Parametric Mapping, the Wellcome Trust Centre for Neuroimaging). The automatic pipeline loads default parameters appropriate for preclinical studies and processes multiple subjects in batch mode (from images in either NIfTI or raw Bruker format). In advanced mode, all processing steps can be selected or deselected and executed independently. Processing parameters and the workflow were optimized for rat studies and assessed using 460 male-rat fMRI series, on which we tested five smoothing kernel sizes and three different hemodynamic models. A smoothing kernel of FWHM = 1.2 mm (four times the voxel size) yielded the highest t values at the primary somatosensory cortex, and a boxcar response function provided the lowest residual variance after fitting. fMRat offers the features of a thorough SPM-based analysis combined with the functionality of several SPM extensions in a single automatic pipeline with a user-friendly interface. The code and sample images can be downloaded from https://github.com/HGGM-LIM/fmrat .

  6. Weakly supervised automatic segmentation and 3D modeling of the knee joint from MR images

    NASA Astrophysics Data System (ADS)

    Amami, Amal; Ben Azouz, Zouhour

    2013-12-01

    Automatic segmentation and 3D modeling of the knee joint from MR images is a challenging task. Most existing techniques require tedious manual segmentation of a training set of MRIs. We present an approach that requires manual segmentation of only one MR image. It is based on a volumetric active appearance model (AAM). First, a dense tetrahedral mesh is automatically created on an arbitrarily selected reference MR image. Second, a pairwise non-rigid registration between each MRI in the training set and the reference MRI is computed; the registration is based on a piecewise affine deformation using the created tetrahedral mesh. The minimum description length is then used to bring all MR images into correspondence. An average image and tetrahedral mesh, as well as a set of principal modes of variation, are generated from the established correspondence. Any manual segmentation of the average MRI can then be mapped to other MR images using the AAM. The proposed approach has the advantage of simultaneously generating 3D surface reconstructions and a 3D solid model of the knee joint; the generated surfaces and tetrahedral meshes have the useful property of being in correspondence across MR images. This paper shows preliminary results, demonstrating the automatic segmentation and 3D reconstruction of a knee joint obtained by mapping a manual segmentation of a reference image.

  7. Combining Quantitative Susceptibility Mapping with Automatic Zero Reference (QSM0) and Myelin Water Fraction Imaging to Quantify Iron-Related Myelin Damage in Chronic Active MS Lesions.

    PubMed

    Yao, Y; Nguyen, T D; Pandya, S; Zhang, Y; Hurtado Rúa, S; Kovanlikaya, I; Kuceyeski, A; Liu, Z; Wang, Y; Gauthier, S A

    2018-02-01

    A hyperintense rim on susceptibility in chronic MS lesions is consistent with iron deposition, and the purpose of this study was to quantify iron-related myelin damage within these lesions as compared with those without a rim. Forty-six patients had 2 longitudinal quantitative susceptibility mapping with automatic zero reference scans with a mean interval of 28.9 ± 11.4 months. Myelin water fraction mapping using fast acquisition with spiral trajectory and T2 prep was obtained at the second time point to measure myelin damage. Mixed-effects models were used to assess lesion quantitative susceptibility mapping and myelin water fraction values. Quantitative susceptibility mapping values were on average 6.8 parts per billion higher in 116 rim-positive lesions compared with 441 rim-negative lesions (P < .001). All rim-positive lesions retained a hyperintense rim over time, with increasing quantitative susceptibility mapping values in both the rim and core regions (P < .001). Quantitative susceptibility mapping and myelin water fraction values in rim-positive lesions decreased from rim to core, which is consistent with rim iron deposition. Whole-lesion myelin water fractions for rim-positive and rim-negative lesions were 0.055 ± 0.07 and 0.066 ± 0.04, respectively. In the mixed-effects model, rim-positive lesions had on average 0.01 lower myelin water fraction compared with rim-negative lesions (P < .001). The volume of the rim at the initial quantitative susceptibility mapping scan was negatively associated with follow-up myelin water fraction (P < .01). Quantitative susceptibility mapping rim-positive lesions maintained a hyperintense rim, increased in susceptibility, and had more myelin damage compared with rim-negative lesions. Our results are consistent with the identification of chronic active MS lesions and may provide a target for therapeutic interventions to reduce myelin damage.

  8. RCrane: semi-automated RNA model building.

    PubMed

    Keating, Kevin S; Pyle, Anna Marie

    2012-08-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems.

  9. larvalign: Aligning Gene Expression Patterns from the Larval Brain of Drosophila melanogaster.

    PubMed

    Muenzing, Sascha E A; Strauch, Martin; Truman, James W; Bühler, Katja; Thum, Andreas S; Merhof, Dorit

    2018-01-01

    The larval brain of the fruit fly Drosophila melanogaster is a small, tractable model system for neuroscience. Genes for fluorescent marker proteins can be expressed in defined, spatially restricted neuron populations. Here, we introduce the methods for 1) generating a standard template of the larval central nervous system (CNS), 2) spatial mapping of expression patterns from different larvae into a reference space defined by the standard template. We provide a manually annotated gold standard that serves for evaluation of the registration framework involved in template generation and mapping. A method for registration quality assessment enables the automatic detection of registration errors, and a semi-automatic registration method allows one to correct registrations, which is a prerequisite for a high-quality, curated database of expression patterns. All computational methods are available within the larvalign software package: https://github.com/larvalign/larvalign/releases/tag/v1.0.

  10. Wheat cultivation: Identification and estimation of areas using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Mendonca, F. J.; Cottrell, D. A.; Tardin, A. T.; Lee, D. C. L.; Shimabukuro, Y. E.; Moreira, M. A.; Delimaefernandocelsosoaresmaia, A. M.

    1981-01-01

    The feasibility of using automatically processed multispectral data obtained from LANDSAT to identify wheat and estimate the areas planted with this grain was investigated. Three 20 km by 40 km segments in a wheat growing region of Rio Grande do Sul were aerially photographed using type 2443 Aerochrome film. Three maps corresponding to each segment were obtained from the analysis of the photographs, which identified wheat, barley, fallow land, prepared soil, forests, and reforested land. Using basic information about the fields and maps made from the photographed areas, an automatic classification of wheat was made using MSS data from two different periods: July to September and July to October 1979. Results show that orbital data is not only useful in characterizing the growth of wheat, but also provides information on the intensity and extent of adverse climate affecting cultivation. The temporal and spatial characteristics of LANDSAT data are also demonstrated.

  11. Different Manhattan project: automatic statistical model generation

    NASA Astrophysics Data System (ADS)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  12. Automatic analysis and classification of surface electromyography.

    PubMed

    Abou-Chadi, F E; Nashar, A; Saad, M

    2001-01-01

    In this paper, parametric modeling algorithms for surface electromyography (SEMG), which facilitate automatic feature extraction, are combined with artificial neural networks (ANN) to provide an integrated system for the automatic analysis and diagnosis of myopathic disorders. Three ANN paradigms were investigated: the multilayer backpropagation algorithm, the self-organizing feature map algorithm and a probabilistic neural network model. The performance of the three classifiers was compared with that of the traditional Fisher linear discriminant (FLD) classifier. The results show that the three ANN models give higher performance, with the percentage of correct classification reaching 90%. Poorer diagnostic performance was obtained from the FLD classifier. The system presented here indicates that surface EMG, when properly processed, can be used to provide the physician with a diagnostic assist device.
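
    A schematic of this kind of classifier comparison on pre-extracted feature vectors, with scikit-learn's MLPClassifier standing in for the backpropagation network and LinearDiscriminantAnalysis for the FLD baseline; the data are synthetic and nothing here reproduces the paper's implementation.

    ```python
    # Sketch: compare a backpropagation network against a Fisher linear
    # discriminant on SEMG-like feature vectors (synthetic stand-in data).
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))                # 200 recordings, 8 model features
    y = (X[:, 0] * X[:, 1] > 0).astype(int)      # nonlinear class boundary

    for name, clf in [("MLP", MLPClassifier(hidden_layer_sizes=(16,),
                                            max_iter=2000, random_state=0)),
                      ("FLD", LinearDiscriminantAnalysis())]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(name, scores.mean())  # the linear FLD cannot separate these classes
    ```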

  13. Management of natural resources through automatic cartographic inventory

    NASA Technical Reports Server (NTRS)

    Rey, P. (Principal Investigator); Gourinard, Y.; Cambou, F.

    1972-01-01

    The author has identified the following significant results. Return beam vidicon and multispectral band scanner imagery will be correlated with existing vegetation and geologic maps of southern France and northern Spain to develop correspondence codes between map units and space data. Microclimate data from six stations, spectral measurements from a few meters to 2 km using ERTS-type filter and spectrometers, and leaf reflectance measurements will be obtained to assist in correlation studies.

  14. Anomaly Detection for Beam Loss Maps in the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Valentino, Gianluca; Bruce, Roderik; Redaelli, Stefano; Rossi, Roberto; Theodoropoulos, Panagiotis; Jaster-Merz, Sonja

    2017-07-01

    In the LHC, beam loss maps are used to validate collimator settings for cleaning and machine protection. This is done by monitoring the loss distribution in the ring during infrequent controlled loss map campaigns, as well as in standard operation. Due to the complexity of the system, which consists of more than 50 collimators per beam, such methods make it difficult to identify small changes in the collimation hierarchy that may be due to setting errors or beam orbit drifts. A technique based on Principal Component Analysis and the Local Outlier Factor is presented to detect anomalies in the loss maps and therefore provide an automatic check of the collimation hierarchy.
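
    A minimal sketch of the described detection chain (PCA for dimensionality reduction, then Local Outlier Factor), assuming each loss map arrives as one vector of monitor readings; the data below are synthetic.

    ```python
    # Sketch: flag anomalous beam loss maps with PCA followed by Local Outlier
    # Factor. The loss-map matrix here is synthetic.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import LocalOutlierFactor

    rng = np.random.default_rng(1)
    loss_maps = rng.lognormal(size=(300, 3600))   # 300 maps x 3600 loss monitors
    loss_maps[-3:] *= 5.0                          # inject a few hierarchy changes

    X = PCA(n_components=10).fit_transform(np.log(loss_maps))
    labels = LocalOutlierFactor(n_neighbors=20).fit_predict(X)
    print(np.where(labels == -1)[0])               # indices of suspected anomalies
    ```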

  15. Automatic detection and decoding of honey bee waggle dances.

    PubMed

    Wario, Fernando; Wild, Benjamin; Rojas, Raúl; Landgraf, Tim

    2017-01-01

    The waggle dance is one of the most popular examples of animal communication. Forager bees direct their nestmates to profitable resources via a complex motor display. Essentially, the dance encodes the polar coordinates to the resource in the field. Unemployed foragers follow the dancer's movements and then search for the advertised spots in the field. Throughout the last decades, biologists have employed different techniques to measure key characteristics of the waggle dance and decode the information it conveys. Early techniques involved the use of protractors and stopwatches to measure the dance orientation and duration directly from the observation hive. Recent approaches employ digital video recordings and manual measurements on screen. However, manual approaches are very time-consuming. Most studies, therefore, regard only small numbers of animals in short periods of time. We have developed a system capable of automatically detecting, decoding and mapping communication dances in real-time. In this paper, we describe our recording setup, the image processing steps performed for dance detection and decoding and an algorithm to map dances to the field. The proposed system performs with a detection accuracy of 90.07%. The decoded waggle orientation has an average error of -2.92° (± 7.37°), well within the range of human error. To evaluate and exemplify the system's performance, a group of bees was trained to an artificial feeder, and all dances in the colony were automatically detected, decoded and mapped. The system presented here is the first of this kind made publicly available, including source code and hardware specifications. We hope this will foster quantitative analyses of the honey bee waggle dance.
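
    A toy decoding step along these lines: the dance angle relative to gravity maps to a field bearing relative to the sun's azimuth, and the waggle duration maps to distance via a calibration constant. The constant and the inputs below are illustrative assumptions, not the paper's calibration.

    ```python
    # Sketch: map a decoded waggle dance to field coordinates.
    def decode_dance(waggle_angle_deg, waggle_duration_s,
                     sun_azimuth_deg, meters_per_second=1000.0):
        """Return (bearing_deg, distance_m) for the advertised location.

        On the vertical comb the waggle angle is measured relative to gravity;
        in the field it is read relative to the sun's azimuth. The distance
        calibration constant is an illustrative assumption.
        """
        bearing = (sun_azimuth_deg + waggle_angle_deg) % 360.0
        distance = meters_per_second * waggle_duration_s
        return bearing, distance

    print(decode_dance(waggle_angle_deg=30.0, waggle_duration_s=0.5,
                       sun_azimuth_deg=180.0))   # -> (210.0, 500.0)
    ```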

  16. Project of Near-Real-Time Generation of ShakeMaps and a New Hazard Map in Austria

    NASA Astrophysics Data System (ADS)

    Jia, Yan; Weginger, Stefan; Horn, Nikolaus; Hausmann, Helmut; Lenhardt, Wolfgang

    2016-04-01

    Target-orientated prevention and effective crisis management can reduce or avoid damage and save lives in the case of a strong earthquake. To achieve this goal, a project for automatically generated ShakeMaps (maps of ground motion and shaking intensity) and for updating the Austrian hazard map was started at ZAMG (Zentralanstalt für Meteorologie und Geodynamik) in 2015. The first goal of the project is the near-real-time generation of ShakeMaps following strong earthquakes in Austria, to provide rapid, accurate and official information to support governmental crisis management. Using methods and software newly developed by SHARE (Seismic Hazard Harmonization in Europe) and GEM (Global Earthquake Model), which allow a transnational analysis at the European level, a new generation of Austrian hazard maps will ultimately be calculated. This presentation provides further information and the current status of the project.

  17. KEGGParser: parsing and editing KEGG pathway maps in Matlab.

    PubMed

    Arakelyan, Arsen; Nersisyan, Lilit

    2013-02-15

    The KEGG pathway database is a collection of manually drawn pathway maps accompanied by KGML-format files intended for use in automatic analysis. KGML files, however, do not contain the information required for complete reproduction of all the events indicated in the static image of a pathway map. Several parsers and editors of KEGG pathways exist for processing KGML files. We introduce KEGGParser, a MATLAB-based tool for KEGG pathway parsing, semiautomatic fixing, editing, visualization and analysis in the MATLAB environment. It also works with Scilab. The source code is available at http://www.mathworks.com/matlabcentral/fileexchange/37561.

  18. GenomeVx: simple web-based creation of editable circular chromosome maps.

    PubMed

    Conant, Gavin C; Wolfe, Kenneth H

    2008-03-15

    We describe GenomeVx, a web-based tool for making editable, publication-quality, maps of mitochondrial and chloroplast genomes and of large plasmids. These maps show the location of genes and chromosomal features as well as a position scale. The program takes as input either raw feature positions or GenBank records. In the latter case, features are automatically extracted and colored, an example of which is given. Output is in the Adobe Portable Document Format (PDF) and can be edited by programs such as Adobe Illustrator. GenomeVx is available at http://wolfe.gen.tcd.ie/GenomeVx

  19. Automatic intersection map generation task 10 report.

    DOT National Transportation Integrated Search

    2016-02-29

    This report describes the work conducted in Task 10 of the V2I Safety Applications Development Project. The work was performed by the University of Michigan Transportation Research Institute (UMTRI) under contract to the Crash Avoidance Metrics Partn...

  20. Mission Assurance: Issues and Challenges

    DTIC Science & Technology

    2010-07-15

    JFQ), Summer 1995. [9] Alberts, C.J. & Dorofee, A.J., "Mission Assurance Analysis Protocol (MAAP): Assessing Risk in Complex Environments... CAMUS: Automatically Mapping Cyber Assets to Missions and Users," Proc. of the 2010 Military Communications Conference (MILCOM 2009), 2009. [23

  1. TU-F-BRF-03: Effect of Radiation Therapy Planning Scan Registration On the Dose in Lung Cancer Patient CT Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunliffe, A; Contee, C; White, B

    Purpose: To characterize the effect of deformable registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60Gy, 2Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pre-therapy (4–75 days) CT scan and a treatment planning scan with an associated dose map calculated in Pinnacle were collected. To establish baseline correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pre-therapy scans were co-registered with planning scans (and associated dose maps) using the Plastimatch demons and Fraunhofer MEVIS deformable registration algorithms. Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from both registration algorithms. The absolute difference in planned dose (|ΔD|) between manually and automatically mapped landmark points was calculated. Using regression modeling, |ΔD| was modeled as a function of the distance between manually and automatically matched points (registration error, E), the dose standard deviation (SD-dose) in the eight-pixel neighborhood, and the registration algorithm used. Results: 52–92 landmark point pairs (median: 82) were identified in each patient's scans. Average |ΔD| across patients was 3.66Gy (range: 1.2–7.2Gy). |ΔD| was significantly reduced by 0.53Gy using Plastimatch demons compared with Fraunhofer MEVIS. |ΔD| increased significantly as a function of E (0.39Gy/mm) and SD-dose (2.23Gy/Gy). Conclusion: An average error of <4Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration. Dose differences following registration were significantly increased when the Fraunhofer MEVIS registration algorithm was used, spatial registration errors were larger, and dose gradient was higher (i.e., higher SD-dose). To our knowledge, this is the first study to directly compute dose errors following deformable registration of lung CT scans.
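
    A small sketch of the regression model described above, with |ΔD| as a linear function of registration error, local dose standard deviation, and an algorithm indicator; the data are simulated using the coefficients quoted in the abstract purely for illustration.

    ```python
    # Sketch: model the absolute dose-mapping error |dD| as a linear function of
    # registration error E (mm), neighborhood dose SD (Gy), and a 0/1 indicator
    # for the registration algorithm. Synthetic stand-in data.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    E = rng.exponential(2.0, n)            # registration error (mm)
    sd_dose = rng.exponential(0.5, n)      # neighborhood dose SD (Gy)
    algo = rng.integers(0, 2, n)           # 0 = one algorithm, 1 = the other
    dD = 0.39 * E + 2.23 * sd_dose + 0.53 * algo + rng.normal(0, 0.5, n)

    X = np.column_stack([np.ones(n), E, sd_dose, algo])
    coef, *_ = np.linalg.lstsq(X, dD, rcond=None)
    print(coef)  # recovers approximately [0, 0.39, 2.23, 0.53]
    ```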

  2. Observed and forecast flood-inundation mapping application-A pilot study of an eleven-mile reach of the White River, Indianapolis, Indiana

    USGS Publications Warehouse

    Kim, Moon H.; Morlock, Scott E.; Arihood, Leslie D.; Kiesler, James L.

    2011-01-01

    Near-real-time and forecast flood-inundation mapping products resulted from a pilot study for an 11-mile reach of the White River in Indianapolis. The study was done by the U.S. Geological Survey (USGS), Indiana Silver Jackets hazard mitigation taskforce members, the National Weather Service (NWS), the Polis Center, and Indiana University, in cooperation with the City of Indianapolis, the Indianapolis Museum of Art, the Indiana Department of Homeland Security, and the Indiana Department of Natural Resources, Division of Water. The pilot project showed that it is technically feasible to create a flood-inundation map library by means of a two-dimensional hydraulic model, use a map from the library to quickly complete a moderately detailed local flood-loss estimate, and automatically run the hydraulic model during a flood event to provide the maps and flood-damage information through a Web graphical user interface. A library of static digital flood-inundation maps was created by means of a calibrated two-dimensional hydraulic model. Estimated water-surface elevations were developed for a range of river stages referenced to a USGS streamgage and NWS flood forecast point colocated within the study reach. These maps were made available through the Internet in several formats, including geographic information system, Keyhole Markup Language, and Portable Document Format. A flood-loss estimate was completed for part of the study reach by using one of the flood-inundation maps from the static library. The Federal Emergency Management Agency natural disaster-loss estimation program HAZUS-MH, in conjunction with local building information, was used to complete a level 2 analysis of flood-loss estimation. A Service-Oriented Architecture-based dynamic flood-inundation application was developed and was designed to start automatically during a flood, obtain near real-time and forecast data (from the colocated USGS streamgage and NWS flood forecast point within the study reach), run the two-dimensional hydraulic model, and produce flood-inundation maps. The application used local building data and depth-damage curves to estimate flood losses based on the maps, and it served inundation maps and flood-loss estimates through a Web-based graphical user interface.

  3. Augmented paper maps: Exploring the design space of a mixed reality system

    NASA Astrophysics Data System (ADS)

    Paelke, Volker; Sester, Monika

    Paper maps and mobile electronic devices have complementary strengths and shortcomings in outdoor use. In many scenarios, like small-craft sailing or cross-country trekking, a complete replacement of maps is neither useful nor desirable. Paper maps are fail-safe, relatively cheap, offer superior resolution and provide large-scale overview. In uses like open-water sailing it is therefore mandatory to carry adequate maps/charts. GPS-based mobile devices, on the other hand, offer useful features like automatic positioning and plotting, real-time information update and dynamic adaptation to user requirements. While paper maps are now commonly used in combination with mobile GPS devices, there is no meaningful integration between the two, and the combined use leads to a number of interaction problems and potential safety issues. In this paper we explore the design space of augmented paper maps, in which maps are augmented with additional functionality through a mobile device, to achieve a meaningful integration between device and map that combines their respective strengths.

  4. Potential Energy Surface-Based Automatic Deduction of Conformational Transition Networks and Its Application on Quantum Mechanical Landscapes of d-Glucose Conformers.

    PubMed

    Satoh, Hiroko; Oda, Tomohiro; Nakakoji, Kumiyo; Uno, Takeaki; Tanaka, Hiroaki; Iwata, Satoru; Ohno, Koichi

    2016-11-08

    This paper describes our approach that is built upon the potential energy surface (PES)-based conformational analysis. This approach automatically deduces a conformational transition network, called a conformational reaction route map (r-map), by using the Scaled Hypersphere Search of the Anharmonic Downward Distortion Following method (SHS-ADDF). The PES-based conformational search has been achieved by using large ADDF, which makes it possible to trace only low transition state (TS) barriers while restraining bond lengths and structures with high free energy. It automatically samples the minima and TS structures by simply taking into account the mathematical features of the PES, without requiring any a priori specification of variable internal coordinates. An obtained r-map is composed of equilibrium (EQ) conformers connected by reaction routes via TS conformers, where all of the reaction routes are already confirmed during the deduction process using the intrinsic reaction coordinate (IRC) method. The postcalculation analysis of the deduced r-map is carried out interactively using the RMapViewer software we have developed. This paper presents computational details of the PES-based conformational analysis and its application to d-glucose. The calculations have been performed for an isolated glucose molecule in the gas phase at the RHF/6-31G level. The obtained conformational r-map for α-d-glucose is composed of 201 EQ and 435 TS conformers, and that for β-d-glucose is composed of 202 EQ and 371 TS conformers. In the postcalculation analysis of the conformational r-maps using the RMapViewer software we found multiple minimum energy paths (MEPs) between the global minima of the 1C4 and 4C1 chair conformations. The analysis using RMapViewer allows us to confirm the thermodynamic and kinetic predominance of the 4C1 conformation: the potential energy of the global minimum of 4C1 is lower than that of 1C4 (thermodynamic predominance), and the highest TS energy along a route from 4C1 to 1C4 is lower than that from 1C4 to 4C1 (kinetic predominance).
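
    To make the kinetic-predominance criterion concrete: on a route map, the kinetically preferred path between two minima is the one whose highest TS energy is lowest, which a Dijkstra-style minimax search finds. The graph and energies below are invented for illustration, not taken from the glucose r-maps.

    ```python
    # Sketch: find the route between two EQ conformers that minimizes the
    # maximum transition-state (TS) energy encountered along the way.
    import heapq

    def lowest_barrier_path(edges, start, goal):
        """edges: dict mapping node -> list of (neighbor, ts_energy)."""
        best = {start: float("-inf")}
        heap = [(float("-inf"), start, [start])]
        while heap:
            barrier, node, path = heapq.heappop(heap)
            if node == goal:
                return barrier, path
            for nbr, ts in edges.get(node, []):
                b = max(barrier, ts)
                if b < best.get(nbr, float("inf")):
                    best[nbr] = b
                    heapq.heappush(heap, (b, nbr, path + [nbr]))
        return None

    # Toy r-map: EQ conformers A..D connected via TS energies (kcal/mol).
    rmap = {"A": [("B", 8.0), ("C", 12.0)], "B": [("A", 8.0), ("D", 9.5)],
            "C": [("A", 12.0), ("D", 7.0)], "D": [("B", 9.5), ("C", 7.0)]}
    print(lowest_barrier_path(rmap, "A", "D"))  # -> (9.5, ['A', 'B', 'D'])
    ```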

  5. Toward fully automated processing of dynamic susceptibility contrast perfusion MRI for acute ischemic cerebral stroke.

    PubMed

    Kim, Jinsuh; Leira, Enrique C; Callison, Richard C; Ludwig, Bryan; Moritani, Toshio; Magnotta, Vincent A; Madsen, Mark T

    2010-05-01

    We developed fully automated software for dynamic susceptibility contrast (DSC) MR perfusion-weighted imaging (PWI) to efficiently and reliably derive critical hemodynamic information for acute stroke treatment decisions. Brain MR PWI was performed in 80 consecutive patients with acute nonlacunar ischemic stroke within 24 h after symptom onset from January 2008 to August 2009. These studies were automatically processed to generate hemodynamic parameters that included cerebral blood flow and cerebral blood volume, and the mean transit time (MTT). To develop reliable software for PWI analysis, we used computationally robust algorithms including the piecewise continuous regression method to determine bolus arrival time (BAT), log-linear curve fitting, an arrival-time-independent deconvolution method and sophisticated motion correction methods. An optimal arterial input function (AIF) search algorithm using a new artery-likelihood metric was also developed. Anatomical locations of the automatically determined AIF were reviewed and validated. The automatically computed BAT values were statistically compared with BAT estimated by a single observer. In addition, gamma-variate curve-fitting errors of the AIF and inter-subject variability of AIFs were analyzed. Lastly, two observers independently assessed the quality and area of hypoperfusion mismatched with the restricted diffusion area from motion-corrected MTT maps and compared that with time-to-peak (TTP) maps using the standard approach. The AIF was identified within an arterial branch and enhanced areas of perfusion deficit were visualized in all evaluated cases. Total processing time was 10.9 ± 2.5 s (mean ± s.d.) without motion correction and 267 ± 80 s (mean ± s.d.) with motion correction on a standard personal computer. The MTT map produced with our software adequately estimated brain areas with perfusion deficit and was significantly less affected by random noise of the PWI when compared with the TTP map. Results of image quality assessment by two observers revealed that the MTT maps exhibited superior quality over the TTP maps (88% good rating of MTT as compared to 68% of TTP). Our software allowed fully automated deconvolution analysis of DSC PWI using proven efficient algorithms that can be applied to acute stroke treatment decisions. Our streamlined method also offers promise for further development of automated quantitative analysis of the ischemic penumbra.
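
    A minimal example of the gamma-variate curve fitting mentioned above, using scipy; the model form is the standard one from the DSC literature and the data are synthetic, not the paper's.

    ```python
    # Sketch: fit a gamma-variate model to a contrast bolus concentration curve,
    # a standard step when characterizing the arterial input function in DSC PWI.
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, k, t0, alpha, beta):
        """C(t) = k * (t - t0)^alpha * exp(-(t - t0) / beta) for t > t0, else 0."""
        dt = np.clip(t - t0, 0.0, None)
        return k * dt**alpha * np.exp(-dt / beta)

    t = np.linspace(0, 60, 120)                       # seconds
    true = gamma_variate(t, 5.0, 10.0, 2.0, 3.0)
    noisy = true + np.random.default_rng(3).normal(0, 0.05, t.size)

    popt, _ = curve_fit(gamma_variate, t, noisy, p0=[1.0, 8.0, 1.5, 2.0])
    print(popt)  # approximately [5.0, 10.0, 2.0, 3.0]
    ```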

  6. TriNet "ShakeMaps": Rapid generation of peak ground motion and intensity maps for earthquakes in southern California

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Heaton, T.H.; Kanamori, H.; Scrivner, C.W.; Worden, C.B.

    1999-01-01

    Rapid (3-5 minutes) generation of maps of instrumental ground-motion and shaking intensity is accomplished through advances in real-time seismographic data acquisition combined with newly developed relationships between recorded ground-motion parameters and expected shaking intensity values. Estimation of shaking over the entire regional extent of southern California is obtained by the spatial interpolation of the measured ground motions with geologically based frequency and amplitude-dependent site corrections. Production of the maps is automatic, triggered by any significant earthquake in southern California. Maps are now made available within several minutes of the earthquake for public and scientific consumption via the World Wide Web; they will be made available with dedicated communications for emergency response agencies and critical users.

  7. A system of regional agricultural land use mapping tested against small scale Apollo 9 color infrared photography of the Imperial Valley (California)

    USGS Publications Warehouse

    Johnson, Claude W.; Browden, Leonard W.; Pease, Robert W.

    1969-01-01

    Interpretation results of the small-scale CIR photography of the Imperial Valley (California) taken on March 12, 1969 by the Apollo 9 earth-orbiting satellite have shown that worldwide agricultural land use mapping can be accomplished from satellite CIR imagery if sufficient a priori information is available for the region being mapped. Correlation of the results with actual data is encouraging, although the accuracy of identification of specific crops from the single image is poor. The poor results can be partly attributed to having only one image, taken during mid-season when the three major crops were reflecting approximately the same, so that their CIR images appear to indicate the same crop type. However, some of the inaccuracy can be attributed to a lack of understanding of the subtle variations in visual and infrared color reflectance of vegetation and the surrounding environment. Analysis of integrated color variations of the vegetation and background environment recorded on CIR imagery is discussed. Problems associated with the color variations may be overcome by the development of a semi-automatic processing system which considers individual field units or cells. Design criteria for such a semi-automatic processing system are outlined.

  8. Automated Feature Identification and Classification Using Automated Feature Weighted Self Organizing Map (FWSOM)

    NASA Astrophysics Data System (ADS)

    Starkey, Andrew; Usman Ahmad, Aliyu; Hamdoun, Hassan

    2017-10-01

    This paper investigates the application of a novel classification method called the Feature Weighted Self Organizing Map (FWSOM), which analyses the topology information of a converged standard Self Organizing Map (SOM) to automatically guide the selection of important inputs during training, for improved classification of data with redundant inputs. The method is examined against two traditional approaches, namely neural networks and Support Vector Machines (SVM), for the classification of EEG data as presented in previous work. In particular, the novel method identifies the features that are important for classification automatically, and in this way the important features can be used to improve the diagnostic ability of any of the above methods. The paper presents the results and shows how the automated identification successfully picked out the important features in the dataset, and how this improves the classification results for all methods apart from linear discriminatory methods, which cannot separate the underlying nonlinear relationship in the data. In addition to achieving higher classification accuracy, the FWSOM has given insights into which features are important in the classification of each class (left- and right-hand movements), and these are corroborated by already published work in this area.
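
    For readers unfamiliar with the underlying map, a minimal from-scratch SOM training loop is sketched below; this is a plain SOM on synthetic data, not the paper's FWSOM implementation.

    ```python
    # Sketch: a minimal Self Organizing Map (SOM) training loop; a converged
    # map's topology is what the FWSOM approach analyzes to weight features.
    import numpy as np

    rng = np.random.default_rng(4)
    data = rng.normal(size=(500, 3))            # 500 samples, 3 features
    grid = rng.normal(size=(10, 10, 3))         # 10x10 map of weight vectors
    rows, cols = np.mgrid[0:10, 0:10]

    for t in range(2000):
        lr = 0.5 * (1 - t / 2000)               # decaying learning rate
        sigma = 3.0 * (1 - t / 2000) + 0.5      # decaying neighborhood radius
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(grid - x, axis=2)    # distance of x to every unit
        br, bc = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
        h = np.exp(-((rows - br)**2 + (cols - bc)**2) / (2 * sigma**2))
        grid += lr * h[..., None] * (x - grid)  # pull neighborhood toward x

    print(grid.reshape(-1, 3).mean(axis=0))     # map now spans the data cloud
    ```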

  9. Semi-Automatic Building Models and FAÇADE Texture Mapping from Mobile Phone Images

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Kim, T.

    2016-06-01

    Research on 3D urban modelling has been carried out actively for a long time, and the need for it has recently increased rapidly due to improved geo-web services and the popularity of smart devices. 3D urban models provided by, for example, Google Earth currently use aerial photos, but there are some limitations: immediate updates for changed building models are difficult, many buildings lack a 3D model and texture, and large resources are needed for maintenance and updating. To resolve these limitations, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images, and we analyze the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated by this method were compared with measured values of real buildings by comparing the ratios of model edge lengths to measured edge lengths. The results showed an average length-ratio error of 5.8%. With this method, we could generate a simple building model with fine façade textures without expensive dedicated tools and datasets.

  10. Standardized unfold mapping: a technique to permit left atrial regional data display and analysis.

    PubMed

    Williams, Steven E; Tobon-Gomez, Catalina; Zuluaga, Maria A; Chubb, Henry; Butakoff, Constantine; Karim, Rashed; Ahmed, Elena; Camara, Oscar; Rhode, Kawal S

    2017-10-01

    Left atrial arrhythmia substrate assessment can involve multiple imaging and electrical modalities, but visual analysis of data on 3D surfaces is time-consuming and suffers from limited reproducibility. Unfold maps (e.g., the left ventricular bull's eye plot) allow 2D visualization, facilitate multimodal data representation, and provide a common reference space for inter-subject comparison. The aim of this work is to develop a method for automatic representation of multimodal information on a left atrial standardized unfold map (LA-SUM). The LA-SUM technique was developed and validated using 18 electroanatomic mapping (EAM) LA geometries before being applied to ten cardiac magnetic resonance/EAM paired geometries. The LA-SUM was defined as an unfold template of an average LA mesh, and registration of clinical data to this mesh facilitated creation of new LA-SUMs by surface parameterization. The LA-SUM represents 24 LA regions on a flattened surface. Intra-observer variability of LA-SUMs for both EAM and CMR datasets was minimal; root-mean-square differences of 0.008 ± 0.010 and 0.007 ± 0.005 ms (local activation time maps), 0.068 ± 0.063 gs (force-time integral maps), and 0.031 ± 0.026 (CMR LGE signal intensity maps). Following validation, LA-SUMs were used for automatic quantification of post-ablation scar formation using CMR imaging, demonstrating a weak but significant relationship between ablation force-time integral and scar coverage (R2 = 0.18, P < 0.0001). The proposed LA-SUM displays an integrated unfold map for multimodal information. The method is applicable to any LA surface, including those derived from imaging and EAM systems. The LA-SUM would facilitate standardization of future research studies involving segmental analysis of the LA.

  11. Fragman: an R package for fragment analysis.

    PubMed

    Covarrubias-Pazaran, Giovanny; Diaz-Garcia, Luis; Schlautman, Brandon; Salazar, Walter; Zalapa, Juan

    2016-04-21

    Determination of microsatellite lengths or other DNA fragment types is an important initial component of many genetic studies such as mutation detection, linkage and quantitative trait loci (QTL) mapping, genetic diversity, pedigree analysis, and detection of heterozygosity. A handful of commercial and freely available software programs exist for fragment analysis; however, most of them are platform dependent and lack high-throughput applicability. We present the R package Fragman to serve as a freely available and platform-independent resource for automatic scoring of DNA fragment lengths in diversity panels and biparental populations. The program analyzes DNA fragment lengths generated on Applied Biosystems® (ABI) instruments, either manually or automatically, by providing panels or bins. The package contains additional tools for converting the allele calls to the GenAlEx, JoinMap® and OneMap software formats, mainly used for genetic diversity analysis and for generating linkage maps in plant and animal populations. Easy plotting functions and multiplexing-friendly capabilities are some of the strengths of this R package. Fragment analysis using a unique set of cranberry (Vaccinium macrocarpon) genotypes based on microsatellite markers is used to highlight the capabilities of Fragman. Fragman is a valuable new tool for genetic analysis. The package produces results equivalent to other popular software for fragment analysis while possessing unique advantages and the possibility of automation for high-throughput experiments by exploiting the power of R.

  12. Road Signs Detection and Recognition Utilizing Images and 3d Point Cloud Acquired by Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.

    2016-06-01

    High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is necessary which can acquire information about all kinds of road signs automatically and efficiently. Due to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to acquire large numbers of images and 3D point clouds efficiently, with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, candidate regions and camera information, and 3) road sign recognition using a template matching method after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in the detection and recognition of road signs, even under challenging conditions such as discoloration and deformation, and in spite of partial occlusions.
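
    A toy version of stage 3 (template matching after shape normalization) using OpenCV's normalized cross-correlation; the "templates" here are synthetic arrays standing in for reference sign images.

    ```python
    # Sketch: recognize a shape-normalized sign candidate by matching it against
    # reference templates with normalized cross-correlation.
    import cv2
    import numpy as np

    rng = np.random.default_rng(9)

    circle = np.full((64, 64), 64, np.uint8)
    cv2.circle(circle, (32, 32), 24, 200, -1)           # round sign template
    square = np.full((64, 64), 64, np.uint8)
    cv2.rectangle(square, (12, 12), (52, 52), 200, -1)  # square sign template
    templates = {"round": circle, "square": square}

    # Candidate: a noisy observation of the round sign after shape normalization.
    noise = rng.integers(0, 25, (64, 64), dtype=np.uint8)
    candidate = cv2.add(circle, noise)

    # TM_CCOEFF_NORMED returns values in [-1, 1]; 1.0 is a perfect match.
    scores = {name: float(cv2.matchTemplate(candidate, tpl,
                                            cv2.TM_CCOEFF_NORMED).max())
              for name, tpl in templates.items()}
    print(max(scores, key=scores.get), scores)  # accept only above a threshold
    ```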

  13. Crowd-sourced data collection to support automatic classification of building footprint data

    NASA Astrophysics Data System (ADS)

    Hecht, Robert; Kalla, Matthias; Krüger, Tobias

    2018-05-01

    Human settlements are mainly formed by buildings, which differ in their characteristics and usage. Despite the importance of buildings for the economy and society, complete regional or even national figures on the entire building stock and its spatial distribution are still hardly available. Available digital topographic data sets created by National Mapping Agencies or mapped voluntarily by a crowd via Volunteered Geographic Information (VGI) platforms (e.g. OpenStreetMap) contain building footprint information but often lack additional information on building type, usage, age or number of floors. For this reason, predictive modeling is becoming increasingly important in this context. The capabilities of machine learning allow for the prediction of building types and other building characteristics and thus the efficient classification and description of the entire building stock of cities and regions. However, such data-driven approaches always require a sufficient amount of ground truth (reference) information for training and validation. The collection of reference data is usually cost-intensive and time-consuming. Experiences from other disciplines have shown that crowdsourcing offers the possibility to support the process of obtaining ground truth data. Therefore, this paper presents the results of an experimental study aiming at assessing the accuracy of non-expert annotations on street view images collected from an internet crowd. The findings provide the basis for a future integration of a crowdsourcing component into the process of land use mapping, particularly automatic building classification.

  14. Digital Map Requirements For Automatic Vehicle Location

    DOT National Transportation Integrated Search

    1998-12-01

    New Jersey Transit (NJT) is currently investigating acquisition of an automated vehicle locator (AVL) system. The purpose of the AVL system is to monitor the location of buses. Knowing the location of a bus enables the agency to manage the bus fleet ...

  15. The Greek National Observatory of Forest Fires (NOFFi)

    NASA Astrophysics Data System (ADS)

    Tompoulidou, Maria; Stefanidou, Alexandra; Grigoriadis, Dionysios; Dragozi, Eleni; Stavrakoudis, Dimitris; Gitas, Ioannis Z.

    2016-08-01

    Efficient forest fire management is a key element for alleviating the catastrophic impacts of wildfires. Overall, the effective response to fire events necessitates adequate planning and preparedness before the start of the fire season, as well as quantifying the environmental impacts in case of wildfires. Moreover, the estimation of fire danger provides crucial information required for the optimal allocation and distribution of the available resources. The Greek National Observatory of Forest Fires (NOFFi)—established by the Greek Forestry Service in collaboration with the Laboratory of Forest Management and Remote Sensing of the Aristotle University of Thessaloniki and the International Balkan Center—aims to develop a series of modern products and services for supporting the efficient forest fire prevention management in Greece and the Balkan region, as well as to stimulate the development of transnational fire prevention and impacts mitigation policies. More specifically, NOFFi provides three main fire-related products and services: a) a remote sensing-based fuel type mapping methodology, b) a semi-automatic burned area mapping service, and c) a dynamically updatable fire danger index providing mid- to long-term predictions. The fuel type mapping methodology was developed and applied across the country, following an object-oriented approach and using Landsat 8 OLI satellite imagery. The results showcase the effectiveness of the generated methodology in obtaining highly accurate fuel type maps on a national level. The burned area mapping methodology was developed as a semi-automatic object-based classification process, carefully crafted to minimize user interaction and, hence, be easily applicable on a near real-time operational level as well as for mapping historical events. NOFFi's products can be visualized through the interactive Fire Forest portal, which allows the involvement and awareness of the relevant stakeholders via the Public Participation GIS (PPGIS) tool.

  16. Automated analysis of individual sperm cells using stain-free interferometric phase microscopy and machine learning.

    PubMed

    Mirsky, Simcha K; Barnea, Itay; Levi, Mattan; Greenspan, Hayit; Shaked, Natan T

    2017-09-01

    Currently, the delicate process of selecting sperm cells to be used for in vitro fertilization (IVF) is still based on the subjective, qualitative analysis of experienced clinicians using non-quantitative optical microscopy techniques. In this work, a method was developed for the automated analysis of sperm cells based on quantitative phase maps acquired through interferometric phase microscopy (IPM). Over 1,400 human sperm cells from 8 donors were imaged using IPM, and an algorithm was designed to digitally isolate sperm cell heads from the quantitative phase maps, taking into consideration both the cell 3D morphology and contents, and to acquire features describing sperm head morphology. A subset of these features was used to train a support vector machine (SVM) classifier to automatically classify sperm of good and bad morphology. The SVM achieves an area under the receiver operating characteristic curve of 88.59% and an area under the precision-recall curve of 88.67%, as well as precisions of 90% or higher. We believe that our automatic analysis can become the basis for objective and automatic sperm cell selection in IVF.
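
    A schematic of the training and evaluation step, with an SVM on synthetic morphology features and the two reported figures of merit computed with scikit-learn; this is not the paper's feature set or data.

    ```python
    # Sketch: train an SVM on morphological feature vectors and report the areas
    # under the ROC and precision-recall curves. Synthetic stand-in data.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score, average_precision_score

    rng = np.random.default_rng(5)
    X = rng.normal(size=(1400, 12))                  # 12 head-morphology features
    y = (X[:, :3].sum(axis=1) + rng.normal(0, 1, 1400) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
    p = clf.predict_proba(X_te)[:, 1]
    print("ROC AUC:", roc_auc_score(y_te, p))
    print("PR  AUC:", average_precision_score(y_te, p))
    ```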

  17. Arraycount, an algorithm for automatic cell counting in microwell arrays.

    PubMed

    Kachouie, Nezamoddin; Kang, Lifeng; Khademhosseini, Ali

    2009-09-01

    Microscale technologies have emerged as a powerful tool for studying and manipulating biological systems and miniaturizing experiments. However, the lack of software complementing these techniques has made it difficult to apply them for many high-throughput experiments. This work establishes Arraycount, an approach to automatically count cells in microwell arrays. The procedure consists of fluorescent microscope imaging of cells that are seeded in microwells of a microarray system and then analyzing images via computer to recognize the array and count cells inside each microwell. To start counting, green and red fluorescent images (representing live and dead cells, respectively) are extracted from the original image and processed separately. A template-matching algorithm is proposed in which pre-defined well and cell templates are matched against the red and green images to locate microwells and cells. Subsequently, local maxima in the correlation maps are determined and local maxima maps are thresholded. At the end, the software records the cell counts for each detected microwell on the original image in high-throughput. The automated counting was shown to be accurate compared with manual counting, with a difference of approximately 1-2 cells per microwell: based on cell concentration, the absolute difference between manual and automatic counting measurements was 2.5-13%.
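
    The core counting idea (correlate the image with a cell template, then keep correlation peaks that are local maxima above a threshold) can be sketched in a few lines; the image, template, and threshold below are synthetic stand-ins.

    ```python
    # Sketch: template matching followed by local-maxima thresholding to count
    # cells in a correlation map, in the spirit of the Arraycount pipeline.
    import numpy as np
    from scipy.ndimage import maximum_filter
    from scipy.signal import correlate2d

    rng = np.random.default_rng(6)
    image = rng.normal(0, 0.1, (64, 64))
    for r, c in [(10, 12), (30, 40), (50, 22)]:     # three synthetic "cells"
        image[r-1:r+2, c-1:c+2] += 1.0

    template = np.ones((3, 3))                       # pre-defined cell template
    corr = correlate2d(image, template, mode="same")

    # Keep pixels that are the maximum of their neighborhood AND above threshold.
    peaks = (corr == maximum_filter(corr, size=5)) & (corr > 4.0)
    print(int(peaks.sum()))                          # -> 3 detected cells
    ```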

  18. Optimal guidance with obstacle avoidance for nap-of-the-earth flight

    NASA Technical Reports Server (NTRS)

    Pekelsma, Nicholas J.

    1988-01-01

    The development of automatic guidance is discussed for helicopter Nap-of-the-Earth (NOE) and near-NOE flight. It deals with algorithm refinements relating to automated real-time flight path planning and to mission planning. With regard to path planning, it relates rotorcraft trajectory characteristics to the NOE computation scheme and addresses real-time computing issues as well as ride quality and pilot-vehicle interface issues. The automated mission planning algorithm refinements include route optimization, automatic waypoint generation, interactive applications, and provisions for integrating the results into the real-time path planning software. A microcomputer-based mission planning workstation was developed and is described. Further, the application of Defense Mapping Agency (DMA) digital terrain data to both the mission planning workstation and automatic guidance is discussed and illustrated.

  19. Interoperability of Medication Classification Systems: Lessons Learned Mapping Established Pharmacologic Classes (EPCs) to SNOMED CT

    PubMed Central

    Nelson, Scott D; Parker, Jaqui; Lario, Robert; Winnenburg, Rainer; Erlbaum, Mark S.; Lincoln, Michael J.; Bodenreider, Olivier

    2018-01-01

    Interoperability among medication classification systems is known to be limited. We investigated the mapping of the Established Pharmacologic Classes (EPCs) to SNOMED CT. We compared lexical and instance-based methods to an expert-reviewed reference standard to evaluate contributions of these methods. Of the 543 EPCs, 284 had an equivalent SNOMED CT class, 205 were more specific, and 54 could not be mapped. Precision, recall, and F1 score were 0.416, 0.620, and 0.498 for lexical mapping and 0.616, 0.504, and 0.554 for instance-based mapping. Each automatic method has strengths, weaknesses, and unique contributions in mapping between medication classification systems. In our experience, it was beneficial to consider the mapping provided by both automated methods for identifying potential matches, gaps, inconsistencies, and opportunities for quality improvement between classifications. However, manual review by subject matter experts is still needed to select the most relevant mappings. PMID:29295234
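
    The reported F1 scores follow directly from the quoted precision and recall as their harmonic mean; a quick check with the values copied from the abstract:

    ```python
    # Sketch: verify the reported F1 scores from precision and recall.
    def f1(precision, recall):
        return 2 * precision * recall / (precision + recall)

    print(round(f1(0.416, 0.620), 3))  # lexical mapping        -> 0.498
    print(round(f1(0.616, 0.504), 3))  # instance-based mapping -> 0.554
    ```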

  20. Characterizing semantic mappings adaptation via biomedical KOS evolution: a case study investigating SNOMED CT and ICD.

    PubMed

    Dos Reis, Julio Cesar; Pruski, Cédric; Da Silveira, Marcos; Reynaud-Delaître, Chantal

    2013-01-01

    Mappings established between Knowledge Organization Systems (KOS) increase semantic interoperability between biomedical information systems. However, biomedical knowledge is highly dynamic and changes affecting KOS entities can potentially invalidate part or the totality of existing mappings. Understanding how mappings evolve and what the impacts of KOS evolution on mappings are is therefore crucial for the definition of an automatic approach to maintain mappings valid and up-to-date over time. In this article, we study variations of a specific KOS complex change (split) for two biomedical KOS (SNOMED CT and ICD-9-CM) through a rigorous method of investigation for identifying and refining complex changes, and for selecting representative cases. We empirically analyze and explain their influence on the evolution of associated mappings. Results point out the importance of considering various dimensions of the information described in KOS, like the semantic structure of concepts, the set of relevant information used to define the mappings and the change operations interfering with this set of information.

  1. Characterizing Semantic Mappings Adaptation via Biomedical KOS Evolution: A Case Study Investigating SNOMED CT and ICD

    PubMed Central

    Reis, Julio Cesar Dos; Pruski, Cédric; Da Silveira, Marcos; Reynaud-Delaître, Chantal

    2013-01-01

    Mappings established between Knowledge Organization Systems (KOS) increase semantic interoperability between biomedical information systems. However, biomedical knowledge is highly dynamic and changes affecting KOS entities can potentially invalidate part or the totality of existing mappings. Understanding how mappings evolve and what the impacts of KOS evolution on mappings are is therefore crucial for the definition of an automatic approach to maintain mappings valid and up-to-date over time. In this article, we study variations of a specific KOS complex change (split) for two biomedical KOS (SNOMED CT and ICD-9-CM) through a rigorous method of investigation for identifying and refining complex changes, and for selecting representative cases. We empirically analyze and explain their influence on the evolution of associated mappings. Results point out the importance of considering various dimensions of the information described in KOS, like the semantic structure of concepts, the set of relevant information used to define the mappings and the change operations interfering with this set of information. PMID:24551341

  2. Real-time mapping of the corneal sub-basal nerve plexus by in vivo laser scanning confocal microscopy

    NASA Astrophysics Data System (ADS)

    Guthoff, Rudolf F.; Zhivov, Andrey; Stachs, Oliver

    2010-02-01

    The aim of the study was to produce two-dimensional reconstruction maps of the living corneal sub-basal nerve plexus by in vivo laser scanning confocal microscopy in real time. CLSM source data (frame rate 30 Hz, 384 x 384 pixels) were used to create large-scale maps of the scanned area by selecting the Automatic Real Time (ART) composite mode. The mapping algorithm is based on an affine transformation. Microscopy of the sub-basal nerve plexus was performed on normal and LASIK eyes as well as on rabbit eyes. Real-time mapping of the sub-basal nerve plexus was performed at large scale, up to a size of 3.2 mm x 3.2 mm. The developed method enables real-time in vivo mapping of the sub-basal nerve plexus, which is strictly necessary for statistically sound conclusions about morphometric plexus alterations.
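
    A sketch of the kind of affine estimation such mosaicking rests on: solving for the 2 x 2 matrix and translation that best map matched point pairs in a least-squares sense. The correspondences below are synthetic.

    ```python
    # Sketch: least-squares estimation of the affine transform mapping frame
    # coordinates into mosaic coordinates from matched point pairs.
    import numpy as np

    def fit_affine(src, dst):
        """Solve dst ~ A @ src + t for a 2x2 matrix A and translation t."""
        n = len(src)
        X = np.hstack([src, np.ones((n, 1))])        # [x, y, 1] design matrix
        params, *_ = np.linalg.lstsq(X, dst, rcond=None)
        A, t = params[:2].T, params[2]
        return A, t

    src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
    A_true = np.array([[0.9, -0.1], [0.1, 0.9]])
    dst = src @ A_true.T + np.array([5.0, 2.0])

    A, t = fit_affine(src, dst)
    print(np.round(A, 3), np.round(t, 3))            # recovers A_true and (5, 2)
    ```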

  3. Experiments to Distribute Map Generalization Processes

    NASA Astrophysics Data System (ADS)

    Berli, Justin; Touya, Guillaume; Lokhat, Imran; Regnauld, Nicolas

    2018-05-01

    Automatic map generalization requires the use of computationally intensive processes often unable to deal with large datasets. Distributing the generalization process is the only way to make them scalable and usable in practice. But map generalization is a highly contextual process: the surroundings of a generalized map feature need to be known to generalize the feature, which is a problem because distribution may partition the dataset and parallelize the processing of each part. This paper proposes experiments to evaluate past proposals for distributing map generalization and to identify the main remaining issues. The past proposals are first discussed, and then the experiment hypotheses and apparatus are described. The experiments confirmed that regular partitioning was the quickest strategy, but also the least effective in taking context into account. The geographical partitioning, though less effective for now, is quite promising regarding the quality of the results, as it better integrates the geographical context.

  4. Metadata mapping and reuse in caBIG.

    PubMed

    Kunz, Isaac; Lin, Ming-Chin; Frey, Lewis

    2009-02-05

    This paper proposes that interoperability across biomedical databases can be improved by utilizing a repository of Common Data Elements (CDEs), UML model class-attributes and simple lexical algorithms to facilitate the building of domain models. This is examined in the context of an existing system, the National Cancer Institute (NCI)'s cancer Biomedical Informatics Grid (caBIG). The goal is to demonstrate the deployment of open source tools that can be used to effectively map models and enable the reuse of existing information objects and CDEs in the development of new models for translational research applications. This effort is intended to help developers reuse appropriate CDEs to enable interoperability of their systems when developing within the caBIG framework or other frameworks that use metadata repositories. The Dice (di-gram) and Dynamic algorithms are compared, and both algorithms have similar performance in matching UML model class-attributes to CDE class object-property pairs. With the algorithms used, the baselines for automatically finding the matches are reasonable for the data models examined. This suggests that automatic mapping of UML models and CDEs is feasible within the caBIG framework and potentially any framework that uses a metadata repository. This work opens up the possibility of using mapping algorithms to reduce the cost and time required to map local data models to a reference data model such as those used within caBIG. This effort contributes to facilitating the development of interoperable systems within caBIG as well as other metadata frameworks. Such efforts are critical to address the need to develop systems to handle enormous amounts of diverse data that can be leveraged from new biomedical methodologies.
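
    A minimal version of di-gram (bigram) Dice matching for pairing element names; the example strings are hypothetical, not taken from caBIG.

    ```python
    # Sketch: Dice similarity over character bigrams, the flavor of lexical
    # matching used to pair UML class-attributes with CDEs.
    def bigrams(s):
        s = s.lower()
        return {s[i:i+2] for i in range(len(s) - 1)}

    def dice(a, b):
        A, B = bigrams(a), bigrams(b)
        if not A and not B:
            return 1.0
        return 2 * len(A & B) / (len(A) + len(B))

    print(dice("patientAge", "patient_age_years"))  # high lexical overlap
    print(dice("patientAge", "tumorGrade"))         # no overlap -> 0.0
    ```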

  5. Mapping and localization for extraterrestrial robotic explorations

    NASA Astrophysics Data System (ADS)

    Xu, Fengliang

    In the exploration of an extraterrestrial environment such as Mars, orbital data, such as high-resolution Mars Orbital Camera-Narrow Angle (MOC-NA) imagery, Mars Orbital Laser Altimeter (MOLA) laser ranging data, and Thermal Emission Imaging System (THEMIS) multi-spectral imagery, play more and more important roles. However, these remote sensing techniques can never replace the role of landers and rovers, which can provide a close-up and inside view. Similarly, orbital mapping cannot compete with ground-level close-range mapping in resolution, precision, and speed. This dissertation addresses two tasks related to robotic extraterrestrial exploration: mapping and rover localization. Image registration is also discussed as an important aspect of both. Techniques from computer vision and photogrammetry are applied for automation and precision. Image registration is classified into three sub-categories: intra-stereo, inter-stereo, and cross-site, according to the relationship between stereo images. In intra-stereo registration, which is the most fundamental sub-category, interest point-based registration and verification by parallax continuity in the principal direction are proposed. Two other techniques, inter-scanline search with constrained dynamic programming for far-range matching and Markov Random Field (MRF) based registration for large terrain variation, are explored as possible improvements. Mapping using rover ground images mainly involves the generation of a Digital Terrain Model (DTM) and an ortho-rectified map (orthomap). The first task is to derive spatial distribution statistics from the first panorama and model the DTM with a dual polynomial model. This model is used for interpolation of the DTM, using Kriging in the close range and a Triangular Irregular Network (TIN) in the far range. To generate a uniformly illuminated orthomap from the DTM, a least-squares-based automatic intensity balancing method is proposed. Finally, a seamless orthomap is constructed by a split-and-merge technique: the mapped area is split or subdivided into small regions of image overlap, each small map piece is processed, and all of the pieces are merged together to form a seamless map. Rover localization has three stages, all of which use a least-squares adjustment procedure: (1) an initial localization accomplished by adjustment over features common to rover images and orbital images, (2) an adjustment of image pointing angles at a single site through inter- and intra-stereo tie points, and (3) an adjustment of the rover traverse through manual cross-site tie points. The first stage is based on adjustment of observation angles of features; the second and third stages are based on bundle adjustment. In the third stage, an incremental adjustment method is proposed. Automation in rover localization includes automatic intra/inter-stereo tie point selection, computer-assisted cross-site tie point selection, and automatic verification of accuracy. (Abstract shortened by UMI.)

  6. On the automaticity of response inhibition in individuals with alcoholism.

    PubMed

    Noël, Xavier; Brevers, Damien; Hanak, Catherine; Kornreich, Charles; Verbanck, Paul; Verbruggen, Frederick

    2016-06-01

    Response inhibition is usually considered a hallmark of executive control. However, recent work indicates that stop performance can become associatively mediated ('automatic') with practice. This study investigated automatic response inhibition in sober and recently detoxified individuals with alcoholism. We administered a modified stop-signal task to forty recently detoxified alcoholics and forty healthy participants; the task consisted of a training phase in which a subset of the stimuli was consistently associated with stopping or going, and a test phase in which this mapping was reversed. In the training phase, stop performance improved for the consistent stop stimuli compared with control stimuli that were not associated with going or stopping. In the test phase, go performance tended to be impaired for old stop stimuli. Combined, these findings support the automatic inhibition hypothesis. Importantly, performance was similar in both groups, which indicates that automatic inhibitory control develops normally in individuals with alcoholism. This finding is specific to individuals with alcoholism without other psychiatric disorders, which is rather atypical and prevents generalization; personalized stimuli with stronger affective content should be used in future studies. These results advance our understanding of behavioral inhibition in individuals with alcoholism. Furthermore, intact automatic inhibitory control may be an important element of successful cognitive remediation of addictive behaviors.

  7. Cadastral Map Assembling Using Generalized Hough Transformation

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka

    There are numerous cadastral maps generated by past land surveying. The raster digitization of these paper maps is in progress. For effective and efficient use of these maps, we have to assemble the set of maps to make them superimposable on other geographic information in a GIS. The problem can be seen as a complex jigsaw puzzle where the pieces are the cadastral sections extracted from the map. We present an automatic solution to this geographic jigsaw puzzle, based on the generalized Hough transformation, which detects the longest common boundary between every piece and its neighbors. The experiments were conducted using the map of Mie Prefecture, Japan, and the French cadastral map. The results of the experiments with the French cadastral maps showed that the proposed method, which consists of a flood-filling procedure for the internal area and the detection and normalization of the north arrow direction, is suitable for assembling the cadastral map. The final goal of the process is to integrate every piece of the puzzle into a national geographic reference frame and database.

  8. Self-Organizing Hidden Markov Model Map (SOHMMM): Biological Sequence Clustering and Cluster Visualization.

    PubMed

    Ferles, Christos; Beaufort, William-Scott; Ferle, Vanessa

    2017-01-01

    The present study devises mapping methodologies and projection techniques that visualize and demonstrate biological sequence data clustering results. The Sequence Data Density Display (SDDD) and Sequence Likelihood Projection (SLP) visualizations represent the input symbolical sequences in a lower-dimensional space in such a way that the clusters and relations of data elements are depicted graphically. Both operate in combination/synergy with the Self-Organizing Hidden Markov Model Map (SOHMMM). The resulting unified framework is in position to analyze automatically and directly raw sequence data. This analysis is carried out with little, or even complete absence of, prior information/domain knowledge.

  9. Real-time forecasts of tomorrow's earthquakes in California: a new mapping tool

    USGS Publications Warehouse

    Gerstenberger, Matt; Wiemer, Stefan; Jones, Lucy

    2004-01-01

    We have derived a multi-model approach to calculate time-dependent earthquake hazard resulting from earthquake clustering. This open-file report explains the theoretical background behind the approach, the specific details that are used in applying the method to California, as well as the statistical testing used to validate the technique. We have implemented our algorithm as a real-time tool that has been automatically generating short-term hazard maps for California since May of 2002, at http://step.wr.usgs.gov
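
    The report's statistical machinery is not reproduced in the abstract, but short-term clustering forecasts of this kind are conventionally built on the modified Omori law for aftershock rate decay; for orientation only:

      \[
        \lambda(t) = \frac{K}{(t + c)^{p}}, \qquad
        N(t_1, t_2) = \int_{t_1}^{t_2} \lambda(t)\, dt,
      \]

    where \lambda(t) is the aftershock rate at elapsed time t after a mainshock, K, c and p are empirically fitted constants, and N is the expected event count from which a time-dependent hazard map can be formed.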

  10. Wetlands delineation by spectral signature analysis and legal implications

    NASA Technical Reports Server (NTRS)

    Anderson, R. R.; Carter, V.

    1972-01-01

    High-altitude analysis of wetland resources and the use of such information in an operational mode to address specific problems of wetland preservation at the state level are discussed. Work efforts were directed toward: (1) developing techniques for using large-scale color IR photography in a state wetlands mapping program, (2) developing methods for obtaining wetlands ecology information from high-altitude photography, (3) developing means by which spectral data can be more accurately analyzed visually, and (4) developing spectral data for automatic mapping of wetlands.

  11. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.
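
    For reference, the L2 Monge-Kantorovich problem that such a gradient descent scheme addresses can be stated as

      \[
        \min_{u} \int_{\Omega} \lVert x - u(x) \rVert^{2} \, \mu_{0}(x)\, dx
        \quad \text{subject to} \quad u_{\#}\mu_{0} = \mu_{1},
      \]

    where the push-forward constraint (for densities, \mu_{0}(x) = \mu_{1}(u(x)) \, |\det \nabla u(x)|) enforces mass preservation; the descent is carried out over mappings u satisfying this constraint, coarse-to-fine in a multiresolution scheme.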

  12. Mapping soil types from multispectral scanner data.

    NASA Technical Reports Server (NTRS)

    Kristof, S. J.; Zachary, A. L.

    1971-01-01

    Multispectral remote sensing and computer-implemented pattern recognition techniques were used for automatic 'mapping' of soil types. This approach involves subjective selection of a set of reference samples from a gray-level display of spectral variations which was generated by a computer. Each resolution element is then classified using a maximum likelihood ratio. Output is a computer printout on which the researcher assigns a different symbol to each class. Four soil test areas in Indiana were experimentally examined using this approach, and partially successful results were obtained.
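
    The decision rule is the standard Gaussian maximum likelihood classifier, with class statistics estimated from the analyst-selected reference samples. A minimal sketch on synthetic two-band data (class names and numbers are invented):

      import numpy as np

      def train(classes):
          """Estimate mean and covariance per class from reference samples."""
          stats = {}
          for name, samples in classes.items():
              x = np.asarray(samples, dtype=float)
              stats[name] = (x.mean(axis=0), np.cov(x, rowvar=False))
          return stats

      def classify(pixel, stats):
          """Assign the resolution element to the class with the highest
          Gaussian log-likelihood."""
          best, best_ll = None, -np.inf
          for name, (mu, cov) in stats.items():
              d = pixel - mu
              ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))
              if ll > best_ll:
                  best, best_ll = name, ll
          return best

      # Synthetic two-band reference samples for two soil classes.
      stats = train({
          "silt loam":  [[40, 62], [42, 60], [39, 64], [41, 61]],
          "sandy loam": [[70, 30], [72, 33], [69, 31], [71, 29]],
      })
      print(classify(np.array([43.0, 59.0]), stats))  # -> silt loam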

  13. Image Processing

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A new spinoff product, the ATOM (Automatic Topographic Mapping) software package, was derived from Geospectra Corporation's expertise in processing LANDSAT data. It is capable of digitally extracting elevation information from stereo photos taken by spaceborne cameras. ATOM offers a new dimension of realism in applications involving terrain simulations, producing extremely precise maps of an area's elevations at a lower cost than traditional methods. ATOM has a number of applications involving defense training simulations and offers utility in architecture, urban planning, forestry, and petroleum and mineral exploration.

  14. Department of Defense Strategy to Support Multi-Agency Bat Conservation Initiative within the State of Utah

    DTIC Science & Technology

    2008-02-28

    Datum: geometric reference surface. The Original Site Location datum is defined by the user's map datum, e.g. NAD27 Conus or NAD83. It is calculated and recorded automatically if the fields UTM_N and UTM_E or Township, Range, and Section are entered.

  15. Application of LANDSAT images in the Minas Gerais tectonic division

    NASA Technical Reports Server (NTRS)

    Dacunha, R. P.; Demattos, J. T.

    1978-01-01

    The interpretation of LANDSAT data for a regional geological investigation of Brazil is provided. Radar imagery, aerial photographs and aeromagnetic maps were also used. Automatic interpretation using LANDSAT CCTs was carried out with the I-100 equipment. As a primary result a tectonic map was obtained, at 1:1,000,000 scale, of an area of about 143,000 square kilometers in the central portion of Minas Gerais and eastern Goias States, known as regions potentially rich in mineral resources.

  16. Comparison of SAM and OBIA as Tools for Lava Morphology Classification - A Case Study in Krafla, NE Iceland

    NASA Astrophysics Data System (ADS)

    Aufaristama, Muhammad; Hölbling, Daniel; Höskuldsson, Ármann; Jónsdóttir, Ingibjörg

    2017-04-01

    The Krafla volcanic system is part of the Icelandic North Volcanic Zone (NVZ). During the Holocene, two eruptive events occurred in Krafla, in 1724-1729 and 1975-1984. The last eruptive episode (1975-1984), known as the "Krafla Fires", consisted of nine volcanic eruption episodes. The total area covered by the lavas from this eruptive episode is 36 km² and the volume is about 0.25-0.3 km³. Lava morphology relates to the characteristics of the surface of a lava flow after solidification. The typical morphology of lava can be used as a primary basis for the classification of lava flows when rheological properties cannot be directly observed during emplacement, and also for better understanding the behavior of lava flow models. Although mapping of lava flows in the field is relatively accurate, such traditional methods are time-consuming, especially when the lava covers large areas, as is the case in Krafla. Semi-automatic mapping methods that make use of satellite remote sensing data allow for an efficient and fast mapping of lava morphology. In this study, two semi-automatic methods for lava morphology classification are presented and compared using Landsat 8 (30 m spatial resolution) and SPOT-5 (10 m spatial resolution) satellite images. For assessing the classification accuracy, the results from semi-automatic mapping were compared to the respective results from visual interpretation. On the one hand, the Spectral Angle Mapper (SAM) classification method was used. With this method an image is classified according to the spectral similarity between the image reflectance spectra and the reference reflectance spectra. SAM successfully produced detailed lava surface morphology maps. However, the pixel-based approach partly leads to a salt-and-pepper effect. On the other hand, we applied the Random Forest (RF) classification method within an object-based image analysis (OBIA) framework. This statistical classifier uses a randomly selected subset of training samples to produce multiple decision trees. For the final classification of pixels or, in the present case, image objects, the average of the class-assignment probabilities predicted by the different decision trees is used. While the resulting OBIA classification of lava morphology types shows a high coincidence with the reference data, the approach is sensitive to the segmentation-derived image objects that constitute the base units for classification. Both semi-automatic methods produce reasonable results in the Krafla lava field, even if the identification of different pahoehoe and aa lava types proved difficult. The use of satellite remote sensing data shows a high potential for fast and efficient classification of lava morphology, particularly over large and inaccessible areas.
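
    The SAM rule itself is compact: a pixel is assigned to the reference class whose spectrum forms the smallest angle with the pixel's reflectance vector. A minimal sketch with invented spectra (the study's actual endmembers are not given in the abstract):

      import numpy as np

      def spectral_angle(s, r):
          """Angle (radians) between pixel spectrum s and reference r."""
          cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
          return np.arccos(np.clip(cos, -1.0, 1.0))

      def sam_classify(pixel, references):
          """Label of the reference spectrum closest in spectral angle."""
          return min(references, key=lambda k: spectral_angle(pixel, references[k]))

      # Invented four-band reflectance spectra per lava morphology class.
      refs = {"pahoehoe": np.array([0.10, 0.12, 0.15, 0.18]),
              "aa":       np.array([0.05, 0.06, 0.07, 0.08])}
      print(sam_classify(np.array([0.09, 0.11, 0.14, 0.19]), refs))  # pahoehoe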

  17. Hierarchical layered and semantic-based image segmentation using ergodicity map

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

    Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space-filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogenous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate that the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.

  18. Bridging data models and terminologies to support adverse drug event reporting using EHR data.

    PubMed

    Declerck, G; Hussain, S; Daniel, C; Yuksel, M; Laleci, G B; Twagirumukiza, M; Jaulent, M-C

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". The SALUS project aims at building an interoperability platform and a dedicated toolkit to enable secondary use of electronic health record (EHR) data for post-marketing drug surveillance. An important component of this toolkit is a drug-related adverse event (AE) reporting system designed to facilitate and accelerate the reporting process using automatic prepopulation mechanisms. Objective: to demonstrate the SALUS approach for establishing syntactic and semantic interoperability for AE reporting. Standard (e.g. HL7 CDA-CCD) and proprietary EHR data models are mapped to the E2B(R2) data model via the SALUS Common Information Model. Terminology mapping and terminology reasoning services are designed to ensure the automatic conversion of source EHR terminologies (e.g. ICD-9-CM, ICD-10, LOINC or SNOMED-CT) to the target terminology MedDRA, which is expected in AE reporting forms. A validated set of terminology mappings is used to ensure the reliability of the reasoning mechanisms. The percentage of data elements of a standard E2B report that can be completed automatically has been estimated for two pilot sites. In the best scenario (i.e. when the available fields in the EHR have actually been filled), only 36% (pilot site 1) and 38% (pilot site 2) of E2B data elements remain to be filled manually. In addition, most of these data elements need not be filled in every report. SALUS platform's interoperability solutions enable partial automation of the AE reporting process, which could contribute to improving current spontaneous reporting practices and to reducing under-reporting, currently one major obstacle in the acquisition of pharmacovigilance data.
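
    The terminology-conversion step can be pictured as a validated lookup table consulted at report-prepopulation time; the sketch below is illustrative only, with placeholder codes, whereas the real services query curated UMLS-based mappings and apply reasoning for indirect matches.

      # Toy source-to-MedDRA lookup; the codes below are placeholders,
      # not a validated mapping set.
      ICD10_TO_MEDDRA = {
          "R51":   ("10019211", "Headache"),
          "T78.4": ("10020751", "Hypersensitivity"),
      }

      def to_meddra(source_code):
          """Convert a source EHR code to a (MedDRA code, preferred term) pair."""
          try:
              return ICD10_TO_MEDDRA[source_code]
          except KeyError:
              raise LookupError(f"no validated mapping for {source_code!r}")

      print(to_meddra("R51"))  # ('10019211', 'Headache')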

  19. Automatic detection and decoding of honey bee waggle dances

    PubMed Central

    Wild, Benjamin; Rojas, Raúl; Landgraf, Tim

    2017-01-01

    The waggle dance is one of the most popular examples of animal communication. Forager bees direct their nestmates to profitable resources via a complex motor display. Essentially, the dance encodes the polar coordinates to the resource in the field. Unemployed foragers follow the dancer’s movements and then search for the advertised spots in the field. Throughout the last decades, biologists have employed different techniques to measure key characteristics of the waggle dance and decode the information it conveys. Early techniques involved the use of protractors and stopwatches to measure the dance orientation and duration directly from the observation hive. Recent approaches employ digital video recordings and manual measurements on screen. However, manual approaches are very time-consuming. Most studies, therefore, regard only small numbers of animals in short periods of time. We have developed a system capable of automatically detecting, decoding and mapping communication dances in real-time. In this paper, we describe our recording setup, the image processing steps performed for dance detection and decoding and an algorithm to map dances to the field. The proposed system performs with a detection accuracy of 90.07%. The decoded waggle orientation has an average error of -2.92° (± 7.37°), well within the range of human error. To evaluate and exemplify the system’s performance, a group of bees was trained to an artificial feeder, and all dances in the colony were automatically detected, decoded and mapped. The system presented here is the first of this kind made publicly available, including source code and hardware specifications. We hope this will foster quantitative analyses of the honey bee waggle dance. PMID:29236712
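
    The final mapping step follows from the dance's polar encoding: the waggle run's angle from vertical gives the bearing relative to the sun's azimuth, and the run's duration scales with distance. A hedged sketch (the duration-to-distance calibration varies between colonies and studies; the constant below is a placeholder):

      import math

      def dance_to_field(waggle_angle_deg, waggle_duration_s,
                         sun_azimuth_deg, metres_per_second=1000.0):
          """Convert a decoded waggle run to field coordinates.

          waggle_angle_deg: run orientation relative to vertical on the comb.
          sun_azimuth_deg: sun azimuth at the time of the dance.
          metres_per_second: placeholder duration-to-distance calibration.
          Returns the bearing, distance and east/north offsets from the hive.
          """
          bearing = (sun_azimuth_deg + waggle_angle_deg) % 360.0
          distance = metres_per_second * waggle_duration_s
          east = distance * math.sin(math.radians(bearing))
          north = distance * math.cos(math.radians(bearing))
          return bearing, distance, east, north

      # A 0.8 s waggle run oriented 30 deg right of vertical, sun at 180 deg.
      print(dance_to_field(30.0, 0.8, 180.0))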

  20. Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations.

    PubMed

    Meyer, C R; Boes, J L; Kim, B; Bland, P H; Zasadny, K R; Kison, P V; Koral, K; Frey, K A; Wahl, R L

    1997-04-01

    This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing and minimal user input, and easily implements either affine (i.e. linear) or thin-plate spline (TPS) warped registrations. We have evaluated the algorithm in phantom studies as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets were demonstrated, including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT and rotate-translate mapping of abdominal SPECT/CT. A five-point TPS-warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly, the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnoses and post-therapeutic assessment in the near future.
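
    The registration criterion itself is easy to state: the mutual information of the joint gray-level histogram over the overlapping voxels, maximized over the affine or TPS parameters. A minimal sketch of the metric (the optimizer wrapped around it is omitted):

      import numpy as np

      def mutual_information(a, b, bins=32):
          """Mutual information between two equally shaped gray-scale volumes."""
          joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0  # avoid log(0)
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

      # MI of an image with itself exceeds MI with an unrelated image.
      rng = np.random.default_rng(0)
      img = rng.random((64, 64))
      print(mutual_information(img, img),
            mutual_information(img, rng.random((64, 64))))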

  1. Comparison of landmark-based and automatic methods for cortical surface registration

    PubMed Central

    Pantazis, Dimitrios; Joshi, Anand; Jiang, Jintao; Shattuck, David; Bernstein, Lynne E.; Damasio, Hanna; Leahy, Richard M.

    2009-01-01

    Group analysis of structure or function in cerebral cortex typically involves as a first step the alignment of the cortices. A surface based approach to this problem treats the cortex as a convoluted surface and coregisters across subjects so that cortical landmarks or features are aligned. This registration can be performed using curves representing sulcal fundi and gyral crowns to constrain the mapping. Alternatively, registration can be based on the alignment of curvature metrics computed over the entire cortical surface. The former approach typically involves some degree of user interaction in defining the sulcal and gyral landmarks while the latter methods can be completely automated. Here we introduce a cortical delineation protocol consisting of 26 consistent landmarks spanning the entire cortical surface. We then compare the performance of a landmark-based registration method that uses this protocol with that of two automatic methods implemented in the software packages FreeSurfer and BrainVoyager. We compare performance in terms of discrepancy maps between the different methods, the accuracy with which regions of interest are aligned, and the ability of the automated methods to correctly align standard cortical landmarks. Our results show similar performance for ROIs in the perisylvian region for the landmark based method and FreeSurfer. However, the discrepancy maps showed larger variability between methods in occipital and frontal cortex and also that automated methods often produce misalignment of standard cortical landmarks. Consequently, selection of the registration approach should consider the importance of accurate sulcal alignment for the specific task for which coregistration is being performed. When automatic methods are used, the users should ensure that sulci in regions of interest in their studies are adequately aligned before proceeding with subsequent analysis. PMID:19796696

  2. Georectification and snow classification of webcam images: potential for complementing satellite-derived snow maps over Switzerland

    NASA Astrophysics Data System (ADS)

    Dizerens, Céline; Hüsler, Fabia; Wunderle, Stefan

    2016-04-01

    The spatial and temporal variability of snow cover has a significant impact on climate and environment and is of great socio-economic importance for the European Alps. Satellite remote sensing data is widely used to study snow cover variability and can provide spatially comprehensive information on snow cover extent. However, cloud cover strongly impedes the surface view and hence limits the number of useful snow observations. Outdoor webcam images not only offer unique potential for complementing satellite-derived snow retrieval under cloudy conditions but could also serve as a reference for improved validation of satellite-based approaches. Thousands of webcams are currently connected to the Internet and deliver freely available images with high temporal and spatial resolutions. To exploit the untapped potential of these webcams, a semi-automatic procedure was developed to generate snow cover maps based on webcam images. We used daily webcam images of the Swiss alpine region to apply, improve, and extend existing approaches dealing with the positioning of photographs within a terrain model, appropriate georectification, and the automatic snow classification of such photographs. In this presentation, we provide an overview of the implemented procedure and demonstrate how our registration approach automatically resolves the orientation of a webcam by using a high-resolution digital elevation model and the webcam's position. This allows snow-classified pixels of webcam images to be related to their real-world coordinates. We present several examples of resulting snow cover maps, which have the same resolution as the digital elevation model and indicate whether each grid cell is snow-covered, snow-free, or not visible from webcams' positions. The procedure is expected to work under almost any weather condition and demonstrates the feasibility of using webcams for the retrieval of high-resolution snow cover information.

  3. Automatic violence detection in digital movies

    NASA Astrophysics Data System (ADS)

    Fischer, Stephan

    1996-11-01

    Research on computer-based recognition of violence is scant. We are working on the automatic recognition of violence in digital movies, a first step towards the goal of a computer-assisted system capable of protecting children against TV programs containing a great deal of violence. In the video domain, collision detection and a model-mapping to locate human figures are run, while the creation and comparison of fingerprints to find certain events are run in the audio domain. This article centers on the recognition of fist-fights in the video domain and on the recognition of shots, explosions and cries in the audio domain.

  4. Using novel acoustic and visual mapping tools to predict the small-scale spatial distribution of live biogenic reef framework in cold-water coral habitats

    NASA Astrophysics Data System (ADS)

    De Clippele, L. H.; Gafeira, J.; Robert, K.; Hennige, S.; Lavaleye, M. S.; Duineveld, G. C. A.; Huvenne, V. A. I.; Roberts, J. M.

    2017-03-01

    Cold-water corals form substantial biogenic habitats on continental shelves and in deep-sea areas with topographic highs, such as banks and seamounts. In the Atlantic, many reef and mound complexes are engineered by Lophelia pertusa, the dominant framework-forming coral. In this study, a variety of mapping approaches were used at a range of scales to map the distribution of both cold-water coral habitats and individual coral colonies at the Mingulay Reef Complex (west Scotland). The new ArcGIS-based British Geological Survey (BGS) seabed mapping toolbox semi-automatically delineated over 500 Lophelia reef 'mini-mounds' from bathymetry data with 2-m resolution. The morphometric and acoustic characteristics of the mini-mounds were also automatically quantified and captured using this toolbox. Coral presence data were derived from high-definition remotely operated vehicle (ROV) records and high-resolution microbathymetry collected by an ROV-mounted multibeam echosounder. With a resolution of 0.35 × 0.35 m, the microbathymetry covers 0.6 km² in the centre of the study area and allowed identification of individual live coral colonies in acoustic data for the first time. Maximum water depth, maximum rugosity, mean rugosity, bathymetric positioning index and maximum current speed were identified as the environmental variables that contributed most to the prediction of live coral presence. These variables were used to create a predictive map of the likelihood of presence of live cold-water coral colonies in the area of the Mingulay Reef Complex covered by the 2-m resolution data set. Predictive maps of live corals across the reef will be especially valuable for future long-term monitoring surveys, including those needed to understand the impacts of global climate change. This is the first study using the newly developed BGS seabed mapping toolbox and an ROV-based microbathymetric grid to explore the environmental variables that control coral growth on cold-water coral reefs.
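
    The abstract names the predictors but not the modelling algorithm; as a generic illustration, a presence-likelihood map could be produced from those terrain variables with any probabilistic classifier, e.g. (synthetic data, invented decision rule):

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(1)
      n = 500
      # Synthetic stand-ins for the variables named in the study: water depth,
      # rugosity, bathymetric positioning index and maximum current speed.
      X = np.column_stack([
          rng.uniform(120, 190, n),  # depth (m)
          rng.uniform(1.0, 3.0, n),  # rugosity
          rng.normal(0.0, 1.0, n),   # bathymetric positioning index
          rng.uniform(0.1, 0.8, n),  # max current speed (m/s)
      ])
      # Invented rule: live coral favours rugged, current-swept terrain.
      y = ((X[:, 1] > 1.8) & (X[:, 3] > 0.3)).astype(int)

      model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
      # The likelihood-of-presence map is the predicted probability per cell.
      print(model.predict_proba(X[:5])[:, 1])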

  5. Techniques for automatic large scale change analysis of temporal multispectral imagery

    NASA Astrophysics Data System (ADS)

    Mercovich, Ryan A.

    Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large area and high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large scale analysis of change.

  6. WE-AB-BRA-05: Fully Automatic Segmentation of Male Pelvic Organs On CT Without Manual Intervention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Y; Lian, J; Chen, R

    Purpose: We aim to develop a fully automatic tool for accurate contouring of major male pelvic organs in CT images for radiotherapy without any manual initialization, yet still achieving superior performance compared to existing tools. Methods: A learning-based 3D deformable shape model was developed for automatic contouring. Specifically, we utilized a recent machine learning method, random forest, to jointly learn both an image regressor and a classifier for each organ. In particular, the image regressor is trained to predict the 3D displacement from each vertex of the 3D shape model towards the organ boundary based on the local image appearance around the location of this vertex. The predicted 3D displacements are then used to drive the 3D shape model towards the target organ. Once the shape model is deformed close to the target organ, it is further refined by an organ likelihood map estimated by the learned classifier. As the organ likelihood map provides a good guideline for the organ boundary, a precise contouring result can be achieved by deforming the 3D shape model locally to fit boundaries in the organ likelihood map. Results: We applied our method to 29 previously-treated prostate cancer patients, each with one planning CT scan. Compared with manually delineated pelvic organs, our method obtains overlap ratios of 85.2%±3.74% for the prostate, 94.9%±1.62% for the bladder, and 84.7%±1.97% for the rectum. Conclusion: This work demonstrated the feasibility of a novel machine-learning based approach for accurate and automatic contouring of major male pelvic organs. It shows the potential to replace the time-consuming and inconsistent manual contouring in the clinic. Also, compared with existing works, our method is more accurate and also efficient since it does not require any manual intervention, such as manual landmark placement. Moreover, our method obtained contouring results very similar to those of clinical experts. The project is partially supported by a grant from NCI (1R01CA140413).

  7. A Marker-Based Approach for the Automated Selection of a Single Segmentation from a Hierarchical Set of Image Segmentations

    NASA Technical Reports Server (NTRS)

    Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.

    2012-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracy are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.

  8. Semantic Segmentation and Unregistered Building Detection from UAV Images Using a Deconvolutional Network

    NASA Astrophysics Data System (ADS)

    Ham, S.; Oh, Y.; Choi, K.; Lee, I.

    2018-05-01

    Detecting unregistered buildings from aerial images is an important task for urban management, such as the inspection of illegal buildings in green-belt areas or the updating of GIS databases. Moreover, the data acquisition platform of photogrammetry is evolving from manned aircraft to UAVs (Unmanned Aerial Vehicles). However, it is very costly and time-consuming to detect unregistered buildings from UAV images since the interpretation of aerial images still relies on manual effort. To overcome this problem, we propose a system which automatically detects unregistered buildings from UAV images based on deep learning methods. Specifically, we train a deconvolutional network with publicly available geospatial data, semantically segment a given UAV image into a building probability map, and compare the building map with existing GIS data. Through this procedure, we can detect unregistered buildings from UAV images automatically and efficiently. We expect that the proposed system can be applied to various urban management tasks, such as monitoring illegal buildings or illegal land-use change.
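
    The comparison step reduces to a set difference between the network's building mask and the registered-building footprints; a minimal sketch with toy rasters (threshold and data invented):

      import numpy as np

      def unregistered_buildings(building_prob, registered_mask, threshold=0.5):
          """Pixels the network labels building but the GIS layer does not cover."""
          predicted = building_prob >= threshold
          return predicted & ~registered_mask

      prob = np.array([[0.9, 0.8, 0.1],
                       [0.7, 0.2, 0.1],
                       [0.1, 0.1, 0.6]])
      registered = np.array([[True,  True,  False],
                             [False, False, False],
                             [False, False, False]])
      print(unregistered_buildings(prob, registered).astype(int))
      # Candidate unregistered-building pixels remain at (1, 0) and (2, 2).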

  9. The role of surgery in the treatment of post-infarction ventricular tachycardia. A 5 year experience.

    PubMed

    Martinelli, L; Goggi, C; Graffigna, A; Salerno, J A; Chimienti, M; Klersy, C; Viganò, M

    1987-01-01

    The purpose of this report is to present a 5-year experience in the electrophysiologically guided surgical treatment of post-infarction ventricular tachycardia (VT) in a consecutive series of 39 patients. In every case the arrhythmia was not responsive to pluripharmacological therapy. The diagnostic steps included preoperative endocardial and intraoperative epi- and endocardial mapping, automatically carried out when possible. The surgical techniques were: classic Guiraudon's encircling endocardial ventriculotomy (EEV), partial EEV, endocardial resection (ER), cryoablation, or combined procedures. Hospital mortality was 4 patients (10%). During the follow-up period (1-68 months), 4 patients (11%) died of cardiac, non-VT-related causes. Among the survivors, 90% are in sinus rhythm. The authors consider electrophysiologically guided surgery a safe and reliable method for the treatment of post-infarction VT and suggest more extensive indications. They stress the importance of automatic mapping in pleomorphic and non-sustained VT, and the necessity of tailoring the surgical technique to the characteristics of each case.

  10. Ventricular tachycardia in post-myocardial infarction patients. Results of surgical therapy.

    PubMed

    Viganò, M; Martinelli, L; Salerno, J A; Minzioni, G; Chimienti, M; Graffigna, A; Goggi, C; Klersy, C; Montemartini, C

    1986-05-01

    This report addresses the problems related to surgical treatment of post-infarction ventricular tachycardia (VT) and is based on a 5 year experience of 36 consecutive patients. In every case the arrhythmia was unresponsive to pharmacological therapy. All patients were operated on after the completion of a diagnostic protocol including preoperative endocardial, intra-operative epi-endocardial mapping, the latter performed automatically when possible. Surgical techniques were: classical Guiraudon's encircling endocardial ventriculotomy (EEV); partial EEV, endocardial resection (ER); cryoablation or a combination of these procedures. The in-hospital mortality (30 days) was 8.3% (3 patients). During the follow-up period (1-68 months), 3 patients (9%) died of cardiac but not VT related causes. Of the survivors, 92% are VT-free. We consider electrophysiologically guided surgery a safe and reliable method for the treatment of post-infarction VT and suggest its more extensive use. We stress the importance of automatic mapping in pleomorphic and non-sustained VT, and the necessity of tailoring the surgical technique to the characteristics of each case.

  11. Glacier Frontal Line Extraction from SENTINEL-1 SAR Imagery in Prydz Area

    NASA Astrophysics Data System (ADS)

    Li, F.; Wang, Z.; Zhang, S.; Zhang, Y.

    2018-04-01

    Synthetic Aperture Radar (SAR) can provide all-day and all-night observation of the earth in all-weather conditions with high resolution, and it is widely used in polar research, including studies of sea ice, ice shelves, and glaciers. For glacier monitoring, the frontal position of a calving glacier at different moments in time is of great importance, as it underpins estimates of the calving rate and flux of the glaciers. In this work, an automatic algorithm for glacier front extraction using time-series Sentinel-1 SAR imagery is proposed. The technique transforms the amplitude imagery of Sentinel-1 SAR into a binary map using the SO-CFAR method; frontal points are then extracted using a profile method, which reduces the 2D binary map to 1D binary profiles, and the final frontal position of a calving glacier is the optimal profile selected from the different average segmented profiles. The experiment shows that the detection algorithm can automatically extract the frontal position of the glacier with high efficiency.
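
    The profile step can be pictured as follows, with the SO-CFAR binarization and the optimal-profile selection replaced by a fixed binary map and a plain per-column scan (toy data; the orientation assumption is ours):

      import numpy as np

      def frontal_points(binary_map):
          """First ice pixel along each column of a binary ice(1)/ocean(0) map.

          Each column is treated as a 1-D profile; assuming the glacier fills
          the bottom of the scene, the topmost ice pixel marks the front.
          """
          points = []
          for col in range(binary_map.shape[1]):
              ice = np.flatnonzero(binary_map[:, col])
              if ice.size:
                  points.append((int(ice.min()), col))
          return points

      scene = np.array([[0, 0, 0, 0],
                        [0, 1, 1, 0],
                        [1, 1, 1, 1],
                        [1, 1, 1, 1]])
      print(frontal_points(scene))  # [(2, 0), (1, 1), (1, 2), (2, 3)]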

  12. Geometrical and topological issues in octree based automatic meshing

    NASA Technical Reports Server (NTRS)

    Saxena, Mukul; Perucchio, Renato

    1987-01-01

    Finite element meshes derived automatically from solid models through recursive spatial subdivision schemes (octrees) can be made to inherit the hierarchical structure and the spatial addressability intrinsic to the underlying grid. These two properties, together with the geometric regularity that can also be built into the mesh, make octree based meshes ideally suited for efficient analysis and self-adaptive remeshing and reanalysis. The element decomposition of the octal cells that intersect the boundary of the domain is discussed. The problem, central to octree based meshing, is solved by combining template mapping and element extraction into a procedure that utilizes both constructive solid geometry and boundary representation techniques. Boundary cells that are not intersected by the edge of the domain boundary are easily mapped to predefined element topology. Cells containing edges (and vertices) are first transformed into a planar polyhedron and then triangulated via element extractor. The modeling environments required for the derivation of planar polyhedra and for element extraction are analyzed.
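
    The underlying recursion is easy to sketch: cells wholly inside or outside the solid become leaves, while boundary cells are subdivided to a depth limit and then handed to the template-mapping/element-extraction stage. A minimal sketch, assuming the solid is available as a point-membership test (corner sampling here is a deliberately crude in/out classifier):

      def classify(cell, inside):
          """Sample the 8 corners of cell = (x, y, z, size) against the solid."""
          x, y, z, s = cell
          corners = [inside((x + i * s, y + j * s, z + k * s))
                     for i in (0, 1) for j in (0, 1) for k in (0, 1)]
          if all(corners):
              return "interior"
          if not any(corners):
              return "exterior"
          return "boundary"

      def build_octree(cell, inside, depth, max_depth):
          kind = classify(cell, inside)
          if kind != "boundary" or depth == max_depth:
              return {"cell": cell, "kind": kind}  # leaf: meshed later
          x, y, z, s = cell
          h = s / 2.0
          children = [build_octree((x + i * h, y + j * h, z + k * h, h),
                                   inside, depth + 1, max_depth)
                      for i in (0, 1) for j in (0, 1) for k in (0, 1)]
          return {"cell": cell, "kind": "boundary", "children": children}

      # Octant of the unit sphere inside the unit cube, subdivided to depth 3.
      sphere = lambda p: p[0] ** 2 + p[1] ** 2 + p[2] ** 2 <= 1.0
      tree = build_octree((0.0, 0.0, 0.0, 1.0), sphere, 0, 3)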

  13. Octree based automatic meshing from CSG models

    NASA Technical Reports Server (NTRS)

    Perucchio, Renato

    1987-01-01

    Finite element meshes derived automatically from solid models through recursive spatial subdivision schemes (octrees) can be made to inherit the hierarchical structure and the spatial addressability intrinsic to the underlying grid. These two properties, together with the geometric regularity that can also be built into the mesh, make octree based meshes ideally suited for efficient analysis and self-adaptive remeshing and reanalysis. The element decomposition of the octal cells that intersect the boundary of the domain is emphasized. The problem, central to octree based meshing, is solved by combining template mapping and element extraction into a procedure that utilizes both constructive solid geometry and boundary representation techniques. Boundary cells that are not intersected by the edge of the domain boundary are easily mapped to predefined element topology. Cells containing edges (and vertices) are first transformed into a planar polyhedron and then triangulated via element extractors. The modeling environments required for the derivation of planar polyhedra and for element extraction are analyzed.

  14. High-order space charge effects using automatic differentiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reusch, Michael F.; Bruhwiler, David L.

    1997-02-01

    The Northrop Grumman TOPKARK code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used either to track an array of particles or to construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past, and we present a new method for modeling space charge forces to high order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor series times a Gaussian. The variables in the argument of the Gaussian are normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free-space boundary conditions. An example problem is presented to illustrate our approach.
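
    In symbols, the distribution described above takes the form (our paraphrase of the abstract)

      \[
        \rho(\mathbf{x}) = P(\tilde{x}_1, \dots, \tilde{x}_6)\,
        \exp\!\Big(-\tfrac{1}{2} \sum_{i=1}^{6} \tilde{x}_i^{2}\Big),
        \qquad \tilde{x}_i = \frac{x_i}{\sigma_i},
      \]

    where P is a truncated Taylor (polynomial) series and \sigma_i is the RMS width of the distribution in the i-th phase-space coordinate, so that asymmetries are carried by P while the Gaussian factor permits rapid evaluation of the space charge fields under free-space boundary conditions.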

  15. Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui

    PubMed Central

    2012-01-01

    Background: The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. Methods: This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Results: Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. Conclusions: This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications. PMID:22998945

  16. Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui.

    PubMed

    Newton, Richard; Deonarine, Andrew; Wernisch, Lorenz

    2012-09-24

    The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications.

  17. Updating National Topographic Data Base Using Change Detection Methods

    NASA Astrophysics Data System (ADS)

    Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.

    2016-06-01

    The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time and the development of specialized procedures. In many National Mapping and Cadastre Agencies (NMCAs), the updating cycle takes a few years. Today, the reality is dynamic and changes occur every day; therefore, users expect that the existing database will portray the current reality. Global mapping projects which are based on community volunteers, such as OSM, update their database every day based on crowdsourcing. In order to fulfil users' requirements for rapid updating, a new methodology that maps major interest areas while preserving associated decoding information should be developed. Until recently, automated processes did not yield satisfactory results, and a typical process included comparing images from different periods. The success rates in identifying objects were low, and most were accompanied by a high percentage of false alarms. As a result, the automatic process required significant editorial work that made it uneconomical. In recent years, the development of mapping technologies, advances in image processing algorithms and computer vision, together with the development of digital aerial cameras with a NIR band and very high resolution satellites, allow the implementation of a cost-effective automated process. The automatic process is based on high-resolution Digital Surface Model analysis, multispectral (MS) classification, MS segmentation, object analysis and shape-forming algorithms. This article reviews the results of a novel change detection methodology as a first step for updating the NTDB at the Survey of Israel.

  18. A new tool for rapid and automatic estimation of earthquake source parameters and generation of seismic bulletins

    NASA Astrophysics Data System (ADS)

    Zollo, Aldo

    2016-04-01

    RISS S.r.l. is a spin-off company recently born from the initiative of the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, based on the decade-long experience in earthquake monitoring systems and seismic data analysis of its members, and has the major goal of transforming the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software, which is an elegant solution to manage and analyse seismic data and to create automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the 1980, November 23, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of different modules, each of them aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time data stream, and then the software performs phase association and earthquake binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated using a probabilistic, non-linear exploration algorithm. The software is then able to automatically provide three different magnitude estimates. First, the local magnitude (Ml) is computed, using the peak-to-peak amplitude of the equivalent Wood-Anderson displacement recordings. The moment magnitude (Mw) is then estimated from the inversion of displacement spectra, and the duration magnitude (Md) is rapidly computed from a simple and automatic measurement of the seismic wave coda duration. Starting from the magnitude estimates, other relevant pieces of information are also computed, such as the corner frequency, the seismic moment, the source radius and the seismic energy. Ground-shaking maps on a Google map are produced for peak ground acceleration (PGA), peak ground velocity (PGV) and instrumental intensity (in SHAKEMAP® format), as well as a plot of the measured peak ground values. Furthermore, based on a specific decisional scheme, the automatic discrimination between local earthquakes occurring within the network and regional/teleseismic events occurring outside the network is performed. Finally, for the largest events, if a sufficient number of P-wave polarity readings are available, the focal mechanism is also computed. For each event, all of the available pieces of information are stored in a local database, and the results of the automatic analyses are published on an interactive web page. "The Bulletin" shows a map with the event location and stations, as well as a table listing all the events with the associated parameters. The catalogue fields are the event ID, the origin date and time, latitude, longitude, depth, Ml, Mw, Md, the number of triggered stations, the S-displacement spectra, and shaking maps. Some of these entries also provide additional information, such as the focal mechanism (when available). The picked traces are uploaded to the database, and from the web interface of the Bulletin the traces can be downloaded for more specific analysis. This innovative software represents a smart solution, with a friendly and interactive interface, for high-level analysis of seismic data, and it may represent a relevant tool not only for seismologists but also for non-expert external users who are interested in seismological data. The software is a valid tool for the automatic analysis of background seismicity at different time scales and can be a relevant tool for the monitoring of both natural and induced seismicity.
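
    As a concrete example of one link in the magnitude chain, the local magnitude step can be written with the widely used Hutton and Boore (1987) attenuation calibration; this is a generic illustration, not the package's actual station corrections:

      import math

      def local_magnitude(amp_nm, hypo_dist_km):
          """Ml with the Hutton & Boore (1987) calibration.

          amp_nm: peak amplitude (nm) on a simulated Wood-Anderson record
          with static magnification 1; hypo_dist_km: hypocentral distance.
          """
          return (math.log10(amp_nm) + 1.11 * math.log10(hypo_dist_km)
                  + 0.00189 * hypo_dist_km - 2.09)

      # A network Ml is typically an average of single-station estimates.
      readings = [(45000.0, 35.0), (21000.0, 52.0), (60000.0, 28.0)]
      ml = sum(local_magnitude(a, r) for a, r in readings) / len(readings)
      print(round(ml, 2))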

  19. Biomedical Terminology Mapper for UML projects.

    PubMed

    Thibault, Julien C; Frey, Lewis

    2013-01-01

    As the biomedical community collects and generates more and more data, the need to describe these datasets for exchange and interoperability becomes crucial. This paper presents a mapping algorithm that can help developers expose local implementations described with UML through standard terminologies. The input UML class or attribute name is first normalized and tokenized, then lookups in a UMLS-based dictionary are performed. For the evaluation of the algorithm 142 UML projects were extracted from caGrid and automatically mapped to National Cancer Institute (NCI) terminology concepts. Resulting mappings at the UML class and attribute levels were compared to the manually curated annotations provided in caGrid. Results are promising and show that this type of algorithm could speed-up the tedious process of mapping local implementations to standard biomedical terminologies.
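
    The normalization and lookup steps might look like the following; the dictionary is a stand-in for the UMLS-based resource, and the concept identifiers are placeholders:

      import re

      # Stand-in for the UMLS-based dictionary of normalized terms -> concepts.
      DICTIONARY = {
          "patient": "C0030705",        # placeholder identifiers
          "adverse event": "C0877248",
      }

      def normalize(uml_name):
          """Split camelCase and underscores, lowercase, collapse whitespace."""
          spaced = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", uml_name)
          return re.sub(r"[_\s]+", " ", spaced).strip().lower()

      def map_to_terminology(uml_name):
          """Normalize a UML class/attribute name and look it up."""
          return DICTIONARY.get(normalize(uml_name))

      print(normalize("AdverseEvent"))           # 'adverse event'
      print(map_to_terminology("AdverseEvent"))  # 'C0877248'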

  20. Biomedical Terminology Mapper for UML projects

    PubMed Central

    Thibault, Julien C.; Frey, Lewis

    As the biomedical community collects and generates more and more data, the need to describe these datasets for exchange and interoperability becomes crucial. This paper presents a mapping algorithm that can help developers expose local implementations described with UML through standard terminologies. The input UML class or attribute name is first normalized and tokenized, then lookups in a UMLS-based dictionary are performed. For the evaluation of the algorithm 142 UML projects were extracted from caGrid and automatically mapped to National Cancer Institute (NCI) terminology concepts. Resulting mappings at the UML class and attribute levels were compared to the manually curated annotations provided in caGrid. Results are promising and show that this type of algorithm could speed-up the tedious process of mapping local implementations to standard biomedical terminologies. PMID:24303278

  1. Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map.

    PubMed

    Tan, Yihua; Li, Qingyun; Li, Yansheng; Tian, Jinwen

    2015-09-11

    This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map under the contextual cues of the apron area. Firstly, candidate regions in which aircraft may exist are detected within the apron area. Secondly, a directional local gradient distribution detector is used to obtain a gradient textural saliency map favoring the candidate regions. The final targets are then detected by segmenting the saliency map with a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that this algorithm can detect aircraft targets quickly and accurately and decrease the false alarm rate.
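
    The segmentation stage follows the usual CFAR logic: a cell is declared a target when it exceeds a threshold estimated from the surrounding clutter. A one-dimensional cell-averaging sketch (window sizes and scale factor are illustrative, not the paper's parameters):

      import numpy as np

      def ca_cfar(signal, guard=2, train=8, scale=6.0):
          """1-D cell-averaging CFAR: flag cells well above the local clutter mean."""
          detections = []
          n = len(signal)
          for i in range(n):
              lo = max(0, i - guard - train)
              hi = min(n, i + guard + train + 1)
              window = np.r_[signal[lo:max(0, i - guard)],
                             signal[min(n, i + guard + 1):hi]]
              if window.size and signal[i] > scale * window.mean():
                  detections.append(i)
          return detections

      rng = np.random.default_rng(3)
      clutter = rng.exponential(1.0, 200)
      clutter[120] += 40.0  # a bright scatterer
      print(ca_cfar(clutter))  # index 120 should be among the detections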

  2. Automatic Text Decomposition and Structuring.

    ERIC Educational Resources Information Center

    Salton, Gerard; And Others

    1996-01-01

    Text similarity measurements are used to determine relationships between natural-language texts and text excerpts. The resulting linked hypertext maps can be broken down into text segments and themes used to identify different text types and structures, leading to improved information access and utilization. Examples are provided for text…

  3. Pavement type and wear condition classification from tire cavity acoustic measurements with artificial neural networks.

    PubMed

    Masino, Johannes; Foitzik, Michael-Jan; Frey, Michael; Gauterin, Frank

    2017-06-01

    Tire road noise is the major contributor to traffic noise, which leads to general annoyance, speech interference, and sleep disturbances. Standardized methods to measure tire road noise are expensive, sophisticated to use, and cannot be applied comprehensively. This paper presents a method to automatically classify different types of pavement and their wear condition in order to identify noisy road surfaces. The method is based on spectra of time-series data of the tire cavity sound, acquired under normal vehicle operation. The classifier, an artificial neural network, correctly predicts three pavement types, with only a few bidirectional misclassifications between two pavements that have similar physical characteristics. The classifier's performance measures for predicting a new or worn-out condition exceed 94.6%. One could create a digital map with the output of the presented method. On the basis of such digital maps, road segments with a strong impact on tire road noise could be automatically identified. Furthermore, the method can estimate the road macro-texture, which has an impact on tire road friction, especially in wet conditions. Overall, this digital map would have a great benefit for civil engineering departments, road infrastructure operators, and advanced driver assistance systems.
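
    As a generic illustration of the classification pipeline (not the authors' network or data), cavity-sound spectra can be fed to a small feed-forward classifier:

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)

      def synth_spectrum(kind, n_bins=64):
          """Toy cavity-sound spectrum: each pavement shifts the spectral peak."""
          peak = {"asphalt": 12, "concrete": 24, "cobblestone": 40}[kind]
          bins = np.arange(n_bins)
          return np.exp(-0.5 * ((bins - peak) / 4.0) ** 2) + 0.05 * rng.random(n_bins)

      X, y = [], []
      for kind in ("asphalt", "concrete", "cobblestone"):
          for _ in range(60):
              X.append(synth_spectrum(kind))
              y.append(kind)

      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                          random_state=0).fit(np.array(X), y)
      print(clf.predict([synth_spectrum("concrete")])[0])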

  4. Visual UAV Trajectory Plan System Based on Network Map

    NASA Astrophysics Data System (ADS)

    Li, X. L.; Lin, Z. J.; Su, G. Z.; Wu, B. Y.

    2012-07-01

    The base map currently used by UP-30, a software package for Unmanned Aerial Vehicle (UAV) trajectory planning, is a vector diagram, and UP-30 requires navigation points to be drawn manually. During field operation, the efficiency and quality of this work suffer from insufficient information, screen reflection, inconvenient calculation and other factors. If the work is done indoors, the effect of such external factors on the results is eliminated. Users of network earth services can browse free, high-definition satellite images of the world through a client application and can export high-resolution imagery in standard file formats, which makes trajectory planning far more convenient. The images, however, must first be processed by coordinate transformation and geometric correction. In addition, according to the required mapping scale, camera parameters and overlap degree, the exposure interval and the distance between adjacent trajectories can be calculated automatically, which improves the degree of automation of data collection. The software determines the position of the next point from the intersection of the trajectory with the survey area and fixes point positions according to the trajectory spacing; points can also be adjusted manually, so trajectory planning is both automatic and flexible. For safety, the data can be used in flight only after a simulated flight, and finally all of the data can be exported with a single key.

  5. Structural knowledge learning from maps for supervised land cover/use classification: Application to the monitoring of land cover/use maps in French Guiana

    NASA Astrophysics Data System (ADS)

    Bayoudh, Meriam; Roux, Emmanuel; Richard, Gilles; Nock, Richard

    2015-03-01

    The number of satellites and sensors devoted to Earth observation has grown considerably, delivering extensive data, especially images. At the same time, access to such data and to the tools needed to process them has improved markedly. In the presence of such a data flow, automatic image interpretation methods are needed, especially for monitoring and predicting environmental and societal changes in highly dynamic socio-environmental contexts. This could be accomplished via artificial intelligence. The concept described here relies on the induction of classification rules that explicitly take into account structural knowledge, using Aleph, an Inductive Logic Programming (ILP) system, combined with a multi-class classification procedure. This methodology was used to monitor changes in land cover/use of the French Guiana coastline. One hundred and fifty-eight classification rules were induced from 3 diachronic land cover/use maps comprising 38 classes. These rules are expressed in first-order logic, which makes them easily understandable by non-experts. A 10-fold cross-validation gave significant average values of 84.62%, 99.57% and 77.22% for classification accuracy, specificity and sensitivity, respectively. Our methodology could be beneficial for automatically classifying new objects and for facilitating object-based classification procedures.

  6. Semi-automatic mapping for identifying complex geobodies in seismic images

    NASA Astrophysics Data System (ADS)

    Domínguez-C, Raymundo; Romero-Salcedo, Manuel; Velasquillo-Martínez, Luis G.; Shemeretov, Leonid

    2017-03-01

    Seismic images are composed of positive and negative seismic wave traces with different amplitudes (Robein 2010 Seismic Imaging: A Review of the Techniques, their Principles, Merits and Limitations (Houten: EAGE)). The association of these amplitudes together with a color palette forms complex visual patterns. The color intensity of such patterns is directly related to impedance contrasts: the higher the contrast, the higher the color intensity. Generally speaking, low impedance contrasts are depicted with low tone colors, creating zones with different patterns whose features are not evident for a 3D automated mapping option available on commercial software. In this work, a workflow for a semi-automatic mapping of seismic images focused on those areas with low-intensity colored zones that may be associated with geobodies of petroleum interest is proposed. The CIE L*a*b* color space was used to perform the seismic image processing, which helped find small but significant differences between pixel tones. This process generated binary masks that bound color regions to low-intensity colors. The three-dimensional mask projection allowed the construction of 3D structures for such zones (geobodies). The proposed method was applied to a set of digital images from a seismic cube and tested on four representative study cases. The obtained results are encouraging because interesting geobodies are obtained with a minimum of information.
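
    A toy sketch of the colour-space step under stated assumptions (an RGB section image with values in [0, 1]; the L* threshold and minimum region size are invented for illustration, and the workflow's further mask refinement is omitted):

```python
import numpy as np
from skimage import color, morphology

def low_intensity_mask(rgb_slice, l_max=45.0, min_size=64):
    """Bound low-intensity colour regions of one seismic section in CIE L*a*b*."""
    lab = color.rgb2lab(rgb_slice)           # rgb_slice: float image in [0, 1]
    mask = lab[..., 0] < l_max               # keep pixels with low lightness L*
    return morphology.remove_small_objects(mask, min_size=min_size)

# Stacking the per-section masks along a third axis gives the 3D volume from
# which candidate geobodies can be extracted as connected components.
rgb = np.random.rand(128, 128, 3)
print(low_intensity_mask(rgb).sum(), "low-intensity pixels")
```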

  7. An overview of animal science research 1945-2011 through science mapping analysis.

    PubMed

    Rodriguez-Ledesma, A; Cobo, M J; Lopez-Pujalte, C; Herrera-Viedma, E

    2015-12-01

    The conceptual structure of the field of Animal Science (AS) research is examined by means of a longitudinal science mapping analysis. The whole of the AS research field is analysed, revealing its conceptual evolution. To this end, an automatic approach to detecting and visualizing hidden themes or topics and their evolution across consecutive spans of years was applied to AS publications of the JCR category 'Agriculture, Dairy & Animal Science' during the period 1945-2011. This automatic approach was based on a co-word analysis and combines performance analysis and science mapping. To observe the conceptual evolution of AS, six consecutive periods were defined: 1945-1969, 1970-1979, 1980-1989, 1990-1999, 2000-2005 and 2006-2011. Research in AS was identified as having focused on ten main thematic areas: ANIMAL-FEEDING, SMALL-RUMINANTS, ANIMAL-REPRODUCTION, DAIRY-PRODUCTION, MEAT-QUALITY, SWINE-PRODUCTION, GENETICS-AND-ANIMAL-BREEDING, POULTRY, ANIMAL-WELFARE and GROWTH-FACTORS-AND-FATTY-ACIDS. The results show how genomic studies gained weight over time and became integrated with other thematic areas. AS research as a whole has become oriented towards an overall framework in which animal welfare, sustainable management and human health play a major role. All this would affect the future structure and management of livestock farming. © 2014 Blackwell Verlag GmbH.

  8. Change detection and classification in brain MR images using change vector analysis.

    PubMed

    Simões, Rita; Slump, Cornelis

    2011-01-01

    The automatic detection of longitudinal changes in brain images is valuable in the assessment of disease evolution and treatment efficacy. Most existing change detection methods that are currently used in clinical research to monitor patients suffering from neurodegenerative diseases--such as Alzheimer's--focus on large-scale brain deformations. However, such patients often have other brain impairments, such as infarcts, white matter lesions and hemorrhages, which are typically overlooked by the deformation-based methods. Other unsupervised change detection algorithms have been proposed to detect tissue intensity changes. The outcome of these methods is typically a binary change map, which identifies changed brain regions. However, understanding what types of changes these regions underwent is likely to provide equally important information about lesion evolution. In this paper, we present an unsupervised 3D change detection method based on Change Vector Analysis. We compute and automatically threshold the Generalized Likelihood Ratio map to obtain a binary change map. Subsequently, we perform histogram-based clustering to classify the change vectors. We obtain a Kappa Index of 0.82 using various types of simulated lesions. The classification error is 2%. Finally, we are able to detect and discriminate both small changes and ventricle expansions in datasets from Mild Cognitive Impairment patients.
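
    A toy illustration of Change Vector Analysis on synthetic data; the paper thresholds a Generalized Likelihood Ratio map and classifies vectors by histogram-based clustering, whereas this sketch substitutes a plain magnitude cut and k-means for brevity:

```python
import numpy as np
from sklearn.cluster import KMeans

def change_vector_analysis(img_t1, img_t2, magnitude_thresh=0.5, n_classes=3):
    """Threshold change-vector magnitude to get a binary change map, then
    cluster the vectors of changed voxels to label the type of change."""
    diff = (img_t2 - img_t1).reshape(-1, img_t1.shape[-1])   # change vectors
    magnitude = np.linalg.norm(diff, axis=1)
    changed = magnitude > magnitude_thresh                   # binary change map
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(diff[changed])
    return changed, labels

t1 = np.random.rand(64, 64, 2)                               # two "modalities"
t2 = t1 + 0.05 * np.random.randn(64, 64, 2)
t2[20:30, 20:30] += 1.0                                      # simulated lesion change
changed, labels = change_vector_analysis(t1, t2)
print(changed.sum(), "changed voxels in", len(set(labels)), "classes")
```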

  9. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    PubMed

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume averages 125 s. The achieved accuracy compares well to state-of-the-art methods, with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrate its potential for clinical use, with high effectiveness, robustness and efficiency.
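
    The reported accuracies are Dice overlap ratios; a minimal reference implementation for binary masks (the toy masks are invented for the example):

```python
import numpy as np

def dice(seg, gt):
    """Dice overlap ratio between a binary segmentation and ground truth."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

seg = np.zeros((32, 32), bool); seg[8:24, 8:24] = True
gt = np.zeros((32, 32), bool);  gt[10:26, 8:24] = True
print(f"Dice = {dice(seg, gt):.3f}")
```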

  10. An eigenvalue approach for the automatic scaling of unknowns in model-based reconstructions: Application to real-time phase-contrast flow MRI.

    PubMed

    Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens

    2017-12-01

    The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
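
    The maximal eigenvalue of the scaling matrix is found by power iterations; a generic sketch of that building block (the operator here is a random SPD test matrix, not the MRI signal model):

```python
import numpy as np

def max_eigenvalue(apply_A, x0, n_iter=50):
    """Power iteration for the maximal eigenvalue of a self-adjoint,
    positive-definite operator, given only as a matrix-vector product."""
    x = x0 / np.linalg.norm(x0)
    lam = 0.0
    for _ in range(n_iter):
        y = apply_A(x)
        lam = x @ y                      # Rayleigh quotient estimate
        x = y / np.linalg.norm(y)
    return lam

A = np.random.rand(50, 50); A = A @ A.T  # SPD test matrix
lam = max_eigenvalue(lambda v: A @ v, np.random.rand(50))
print(lam, np.linalg.eigvalsh(A).max())  # the two values should agree closely
```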

  11. Non-rigid registration of 3D ultrasound for neurosurgery using automatic feature detection and matching.

    PubMed

    Machado, Inês; Toews, Matthew; Luo, Jie; Unadkat, Prashin; Essayed, Walid; George, Elizabeth; Teodoro, Pedro; Carvalho, Herculano; Martins, Jorge; Golland, Polina; Pieper, Steve; Frisken, Sarah; Golby, Alexandra; Wells, William

    2018-06-04

    The brain undergoes significant structural change over the course of neurosurgery, including highly nonlinear deformation and resection. It can be informative to recover the spatial mapping between structures identified in preoperative surgical planning and the intraoperative state of the brain. We present a novel feature-based method for achieving robust, fully automatic deformable registration of intraoperative neurosurgical ultrasound images. A sparse set of local image feature correspondences is first estimated between ultrasound image pairs, after which rigid, affine and thin-plate spline models are used to estimate dense mappings throughout the image. Correspondences are derived from 3D features, distinctive generic image patterns that are automatically extracted from 3D ultrasound images and characterized in terms of their geometry (i.e., location, scale, and orientation) and a descriptor of local image appearance. Feature correspondences between ultrasound images are achieved based on a nearest-neighbor descriptor matching and probabilistic voting model similar to the Hough transform. Experiments demonstrate our method on intraoperative ultrasound images acquired before and after opening of the dura mater, during resection and after resection in nine clinical cases. A total of 1620 automatically extracted 3D feature correspondences were manually validated by eleven experts and used to guide the registration. Then, using manually labeled corresponding landmarks in the pre- and post-resection ultrasound images, we show that our feature-based registration reduces the mean target registration error from an initial value of 3.3 to 1.5 mm. This result demonstrates that the 3D features promise to offer a robust and accurate solution for 3D ultrasound registration and to correct for brain shift in image-guided neurosurgery.
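
    A sketch of the nearest-neighbour descriptor-matching step on synthetic descriptors, using a Lowe-style ratio test as an illustrative ambiguity filter; the paper's Hough-like probabilistic voting and the rigid/affine/thin-plate spline fitting stages are omitted:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching of feature descriptors with a ratio test:
    keep a match only when it is clearly better than the runner-up."""
    tree = cKDTree(desc_b)
    dists, idx = tree.query(desc_a, k=2)          # two nearest neighbours each
    keep = dists[:, 0] < ratio * dists[:, 1]      # unambiguous matches only
    return np.flatnonzero(keep), idx[keep, 0]

a = np.random.rand(200, 64)                       # descriptors in image A
b = np.vstack([a[:50] + 0.01 * np.random.randn(50, 64),
               np.random.rand(150, 64)])          # 50 true correspondences
ia, ib = match_descriptors(a, b)
print(len(ia), "matches")
```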

  12. Route learning in Korsakoff's syndrome: Residual acquisition of spatial memory despite profound amnesia.

    PubMed

    Oudman, Erik; Van der Stigchel, Stefan; Nijboer, Tanja C W; Wijnia, Jan W; Seekles, Maaike L; Postma, Albert

    2016-03-01

    Korsakoff's syndrome (KS) is characterized by explicit amnesia, but relatively spared implicit memory. The aim of this study was to assess to what extent KS patients can acquire spatial information while performing a spatial navigation task. Furthermore, we examined whether residual spatial acquisition in KS was based on automatic or effortful coding processes. Twenty KS patients and 20 matched healthy controls therefore performed six spatial navigation tasks after navigating through a residential area. Ten participants per group were instructed to pay close attention (intentional condition), while 10 received mock instructions (incidental condition). KS patients showed hampered performance on the majority of tasks, yet their performance was superior to chance level on route time and distance estimation tasks, a map drawing task and a route walking task. Performance was relatively spared on the route distance estimation task, but there were large variations between participants. Acquisition in KS was automatic rather than effortful, since no significant differences were obtained between the intentional and incidental conditions on any task, whereas for the healthy controls the intention to learn was beneficial for the map drawing task and the route walking task. The results of this study suggest that KS patients are still able to acquire spatial information across multiple domains during navigation despite their explicit amnesia. Residual acquisition is most likely based on automatic coding processes. © 2014 The British Psychological Society.

  13. Water Mapping Using Multispectral Airborne LIDAR Data

    NASA Astrophysics Data System (ADS)

    Yan, W. Y.; Shaker, A.; LaRocque, P. E.

    2018-04-01

    This study investigates the use of the world's first multispectral airborne LiDAR sensor, Optech Titan, manufactured by Teledyne Optech, for automatic land-water classification with a particular focus on near-shore regions and river environments. Although recent studies have utilized airborne LiDAR data for shoreline detection and water surface mapping, the majority of them only perform experimental testing on clipped data subsets or rely on data fusion with aerial/satellite images. In addition, most of the existing approaches require manual intervention or existing tidal/datum data for the collection of training samples. To tackle the drawbacks of previous approaches, we propose and develop an automatic data processing workflow for land-water classification using multispectral airborne LiDAR data. Depending on the nature of the study scene, two methods are proposed for automatic training data selection. The first method utilizes the elevation/intensity histogram fitted with a Gaussian mixture model (GMM) to preliminarily split the land and water bodies. The second method relies mainly on a newly developed scan line elevation intensity ratio (SLIER) to estimate the water surface data points. Regardless of the training method being used, feature spaces can be constructed using the multispectral LiDAR intensity, elevation and other features derived from these parameters. The comprehensive workflow was tested with two datasets collected for different near-shore regions and river environments, yielding overall accuracies better than 96%.
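
    A sketch of the first training-data selection idea (an elevation histogram fitted with a Gaussian mixture model) on synthetic returns; the data, the two-component choice and the "lower mean = water" rule are assumptions made for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical per-point LiDAR elevations: water returns cluster tightly
# around the flat surface, land returns spread over higher elevations.
rng = np.random.default_rng(1)
elev = np.concatenate([rng.normal(0.0, 0.05, 4000),    # water surface
                       rng.normal(3.0, 1.50, 6000)])   # shore / land

gmm = GaussianMixture(n_components=2, random_state=0).fit(elev.reshape(-1, 1))
water_comp = np.argmin(gmm.means_.ravel())             # lower mean = water
labels = gmm.predict(elev.reshape(-1, 1)) == water_comp
print(f"{labels.mean():.1%} of returns labelled water")
# These preliminary labels would then seed the supervised classifier.
```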

  14. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model

    PubMed Central

    Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for the HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of the OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically more suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods demonstrating the efficacy and robustness of the proposed method. PMID:27170866

  15. A new automatic SAR-based flood mapping application hosted on the European Space Agency's grid processing on demand fast access to imagery environment

    NASA Astrophysics Data System (ADS)

    Hostache, Renaud; Chini, Marco; Matgen, Patrick; Giustarini, Laura

    2013-04-01

    There is a clear need for developing innovative processing chains based on earth observation (EO) data to generate products supporting emergency response and flood management at a global scale. Here an automatic flood mapping application is introduced, currently hosted on the Grid Processing on Demand (G-POD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to deliver flooded areas using both recent and historical acquisitions of SAR data in an operational framework. The method can be applied to both medium and high resolution SAR images. The flood mapping application consists of two main blocks: 1) a set of query tools for selecting the "crisis image" and the optimal corresponding pre-flood "reference image" from the G-POD archive; 2) an algorithm for extracting flooded areas using the previously selected "crisis image" and "reference image". The proposed method is a hybrid methodology combining histogram thresholding, region growing and change detection, enabling automatic, objective and reliable flood extent extraction from SAR images. The method is based on the calibration of a statistical distribution of "open water" backscatter values inferred from SAR images of floods. Change detection with respect to a pre-flood reference image helps reduce over-detection of inundated areas. The algorithms are computationally efficient and operate with minimum data requirements, requiring only a flood image and a reference image as input. Stakeholders in flood management and service providers are able to log onto the flood mapping application to get support for the retrieval, from the rolling archive, of the most appropriate pre-flood reference image. Potential users will also be able to apply the implemented flood delineation algorithm. Case studies of several recent high-magnitude flooding events (e.g. the July 2007 Severn River flood, UK, and the March 2010 Red River flood, US) observed by high-resolution SAR sensors as well as airborne photography highlight the advantages and limitations of the online application. A mid-term target is the exploitation of ESA SENTINEL-1 SAR data streams. In the long term, a potential extension of the application is foreseen for systematically extracting flooded areas from all SAR images acquired on a daily, weekly or monthly basis. On-going research activities investigate the usefulness of the method for mapping flood hazard at global scale using databases of historic SAR remote sensing-derived flood inundation maps.
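
    A toy version of the hybrid idea, omitting region growing and the paper's statistical calibration of the open-water backscatter distribution; the thresholds are illustrative, not calibrated:

```python
import numpy as np

def flood_mask(crisis_db, reference_db, water_thresh_db=-15.0, drop_db=3.0):
    """Flag pixels whose crisis backscatter is consistent with open water AND
    has dropped markedly relative to the pre-flood reference image; the
    change-detection condition curbs over-detection of inundated areas."""
    looks_like_water = crisis_db < water_thresh_db
    has_darkened = (reference_db - crisis_db) > drop_db
    return looks_like_water & has_darkened

ref = np.random.normal(-8, 2, (256, 256))      # synthetic pre-flood sigma0 (dB)
crisis = ref.copy()
crisis[100:180, 50:200] = np.random.normal(-18, 1, (80, 150))  # inundated patch
print(f"{flood_mask(crisis, ref).mean():.1%} of pixels flagged as flooded")
```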

  16. Metadata mapping and reuse in caBIG™

    PubMed Central

    Kunz, Isaac; Lin, Ming-Chin; Frey, Lewis

    2009-01-01

    Background This paper proposes that interoperability across biomedical databases can be improved by utilizing a repository of Common Data Elements (CDEs), UML model class-attributes and simple lexical algorithms to facilitate the building of domain models. This is examined in the context of an existing system, the National Cancer Institute (NCI)'s cancer Biomedical Informatics Grid (caBIG™). The goal is to demonstrate the deployment of open source tools that can be used to effectively map models and enable the reuse of existing information objects and CDEs in the development of new models for translational research applications. This effort is intended to help developers reuse appropriate CDEs to enable interoperability of their systems when developing within the caBIG™ framework or other frameworks that use metadata repositories. Results The Dice (di-grams) and Dynamic algorithms are compared, and both algorithms show similar performance in matching UML model class-attributes to CDE class object-property pairs. With the algorithms used, the baselines for automatically finding the matches are reasonable for the data models examined. This suggests that automatic mapping of UML models and CDEs is feasible within the caBIG™ framework and potentially any framework that uses a metadata repository. Conclusion This work opens up the possibility of using mapping algorithms to reduce the cost and time required to map local data models to a reference data model such as those used within caBIG™. This effort contributes to facilitating the development of interoperable systems within caBIG™ as well as other metadata frameworks. Such efforts are critical to address the need to develop systems to handle enormous amounts of diverse data that can be leveraged from new biomedical methodologies. PMID:19208192
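
    A minimal sketch of a character-bigram Dice similarity of the kind used for such lexical matching; the set-based variant and the example strings are assumptions, not the paper's exact implementation:

```python
def dice_bigrams(a: str, b: str) -> float:
    """Dice similarity over character bigrams, usable for scoring candidate
    matches between UML class-attributes and CDE object-property pairs."""
    bigrams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
    x, y = bigrams(a.lower()), bigrams(b.lower())
    return 2.0 * len(x & y) / (len(x) + len(y)) if x and y else 0.0

print(dice_bigrams("patientAge", "Patient.ageAtDiagnosis"))
```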

  17. Advanced Ecosystem Mapping Techniques for Large Arctic Study Domains Using Calibrated High-Resolution Imagery

    NASA Astrophysics Data System (ADS)

    Macander, M. J.; Frost, G. V., Jr.

    2015-12-01

    Regional-scale mapping of vegetation and other ecosystem properties has traditionally relied on medium-resolution remote sensing such as Landsat (30 m) and MODIS (250 m). Yet the burgeoning availability of high-resolution (<=2 m) imagery and ongoing advances in computing power and analysis tools raise the prospect of performing ecosystem mapping at fine spatial scales over large study domains. Here we demonstrate cutting-edge mapping approaches over a ~35,000 km² study area on Alaska's North Slope using calibrated and atmospherically-corrected mosaics of high-resolution WorldView-2 and GeoEye-1 imagery: (1) an a priori spectral approach incorporating the Satellite Imagery Automatic Mapper (SIAM) algorithms; (2) image segmentation techniques; and (3) texture metrics. The SIAM spectral approach classifies radiometrically-calibrated imagery into general vegetation density categories and non-vegetated classes. The SIAM classes were developed globally, and their applicability in arctic tundra environments has not previously been evaluated. Image segmentation, or object-based image analysis, automatically partitions high-resolution imagery into homogeneous image regions that can then be analyzed based on spectral, textural, and contextual information. We applied eCognition software to delineate waterbodies and vegetation classes, in combination with other techniques. Texture metrics were evaluated to determine the feasibility of using high-resolution imagery to algorithmically characterize periglacial surface forms (e.g., ice-wedge polygons), which are an important physical characteristic of permafrost-dominated regions but cannot be distinguished by medium-resolution remote sensing. These advanced mapping techniques yield products which can supply essential information supporting a broad range of ecosystem science and land-use planning applications in northern Alaska and elsewhere in the circumpolar Arctic.

  18. Flood hazards studies in the Mississippi River basin using remote sensing

    NASA Technical Reports Server (NTRS)

    Rango, A.; Anderson, A. T.

    1974-01-01

    The Spring 1973 Mississippi River flood was investigated using remotely sensed data from ERTS-1. Both manual and automatic analyses of the data indicated that ERTS-1 is extremely useful as a regional tool for flood management. Quantitative estimates of the area flooded were made in St. Charles County, Missouri, and in Arkansas. Flood hazard mapping was conducted in three study areas along the Mississippi River using pre-flood ERTS-1 imagery enlarged to 1:250,000 and 1:100,000 scale. Initial results indicate that ERTS-1 digital mapping of flood-prone areas can be performed at 1:62,500, which is comparable to some conventional flood hazard map scales.

  19. Simultaneous orientation and thickness mapping in transmission electron microscopy

    DOE PAGES

    Tyutyunnikov, Dmitry; Özdöl, V. Burak; Koch, Christoph T.

    2014-12-04

    In this paper we introduce an approach for simultaneous thickness and orientation mapping of crystalline samples by means of transmission electron microscopy. We show that local thickness and orientation values can be extracted from experimental dark-field (DF) image data acquired at different specimen tilts. The method has been implemented to automatically acquire the necessary data and then map thickness and crystal orientation for a given region of interest. We have applied this technique to a specimen prepared from a commercial semiconductor device, containing multiple 22 nm technology transistor structures. The performance and limitations of our method are discussed and compared to those of other techniques available.

  20. Application of remote sensing to the photogeologic mapping of the region of the Itatiaia alkaline complex. M.S. Thesis; [Minas Gerais, Rio De Janeiro, Sao Paulo, and Itatiaia, Brazil

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Rodrigues, J. E.

    1981-01-01

    Remote sensing methods applied to geologically complex areas, through the interaction of ground truth and information obtained from multispectral LANDSAT images and radar mosaics, were evaluated. The test area covers parts of the states of Minas Gerais, Rio de Janeiro and Sao Paulo and contains the alkaline complex of Itatiaia and the surrounding Precambrian terrains. Geological and structural mapping was satisfactory; however, the lithological varieties which form the massifs could not be identified. Photogeological lineaments were mapped, some of which represent the boundaries of stratigraphic units. Automatic processing was used to classify sedimentary areas, which include the talus deposits of the alkaline massifs.

  1. Spectral mapping of soil organic matter

    NASA Technical Reports Server (NTRS)

    Kristof, S. J.; Baumgardner, M. F.; Johannsen, C. J.

    1974-01-01

    Multispectral remote sensing data were examined for use in mapping soil organic matter content. Computer-implemented pattern recognition techniques were used to analyze data collected in May 1969 and May 1970 by an airborne multispectral scanner over a 40-km flightline. Two fields within the flightline were selected for intensive study, and approximately 400 surface soil samples from these fields were obtained for organic matter analysis. The analytical data were used as training sets for computer-implemented analysis of the spectral data. It was found that, within the geographical limitations of this study, multispectral data and automatic data processing techniques could be used very effectively to delineate and map surface soil areas containing different levels of soil organic matter.

  2. Sentinel-2 for rapid operational landslide inventory mapping

    NASA Astrophysics Data System (ADS)

    Stumpf, André; Marc, Odin; Malet, Jean-Philippe; Michea, David

    2017-04-01

    Landslide inventory mapping after major triggering events such as heavy rainfall or earthquakes is crucial for disaster response, the assessment of hazards, and the quantification of sediment budgets and empirical scaling laws. Numerous studies have already demonstrated the utility of very-high-resolution satellite and aerial images for the elaboration of inventories based on semi-automatic methods or visual image interpretation. Nevertheless, such semi-automatic methods are rarely used in an operational context after major triggering events; this is partly due to access limitations on the required input datasets (i.e. VHR satellite images) and to the absence of dedicated services (i.e. processing chains) available to the landslide community. Several on-going initiatives help overcome these limitations. First, from a data perspective, the launch of the Sentinel-2 mission offers opportunities for the design of an operational service that can be deployed for landslide inventory mapping at any time and everywhere on the globe. Second, from an implementation perspective, the Geohazards Exploitation Platform (GEP) of the European Space Agency (ESA) allows the integration and diffusion of on-line processing algorithms in a high-performance computing environment. Third, from a community perspective, the recently launched Landslide Pilot of the Committee on Earth Observation Satellites (CEOS) has targeted the take-off of such a service as a main objective for the landslide community. Within this context, this study targets the development of a largely automatic, supervised image processing chain for landslide inventory mapping from bi-temporal (before and after a given event) Sentinel-2 optical images. The processing chain combines change detection methods, image segmentation, higher-level image features (e.g. texture, shape) and topographic variables. Based on a few representative examples provided by a human operator, a machine learning model is trained and subsequently used to distinguish newly triggered landslides from other landscape elements. The final map product is provided along with an uncertainty map that allows identifying areas which might require further consideration. The processing chain is tested on two recent and contrasting triggering events in New Zealand and Taiwan. A Mw 7.8 earthquake in New Zealand in November 2016 triggered tens of thousands of landslides in a complex environment, with important textural variations with elevation due to vegetation change and snow cover. In contrast, a large but unexceptional typhoon in July 2016 in Taiwan triggered a moderate number of relatively small landslides in lushly vegetated, more homogeneous terrain. Based on the obtained results we discuss the potential and limitations of Sentinel-2 bi-temporal images and time series for operational landslide inventory mapping. This work is part of the General Studies Program (GSP) ALCANTARA of ESA.

  3. Multiresolution saliency map based object segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Wang, Xin; Dai, ZhenYou

    2015-11-01

    Salient object detection and segmentation have gained increasing research interest in recent years. A saliency map can be obtained from different models presented in previous studies, and from it the most salient region (MSR) in an image can be extracted. This MSR, generally a rectangle, can be used as the initial parameters for object segmentation algorithms. However, to our knowledge, all of those saliency maps are represented at a single resolution, although some models introduce multiscale principles in the calculation process. Furthermore, some segmentation methods, such as the well-known GrabCut algorithm, need more iteration time or additional interactions to get precise results when pixel types are not predefined. A concept of a multiresolution saliency map is introduced. This saliency map is provided in a multiresolution format, which naturally follows the principle of the human visual mechanism. Moreover, the points in this map can be utilized to initialize parameters for GrabCut segmentation by labeling the feature pixels automatically. Both the computing speed and segmentation precision are evaluated. The results imply that this multiresolution saliency map-based object segmentation method is simple and efficient.
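
    A sketch of using a most-salient-region rectangle to initialize GrabCut via OpenCV; the synthetic scene and rectangle are placeholders, and the seeding comment describes the general idea rather than the paper's exact labeling scheme:

```python
import cv2
import numpy as np

# Synthetic scene: bright background with a dark object as the salient region.
img = np.full((240, 320, 3), 200, np.uint8)
cv2.circle(img, (160, 120), 50, (40, 40, 180), -1)
msr = (90, 50, 140, 140)          # (x, y, w, h) rectangle from the saliency map

mask = np.zeros(img.shape[:2], np.uint8)
bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, msr, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

# In the paper's spirit, salient points from coarser map levels could also be
# written into `mask` as cv2.GC_FGD / cv2.GC_BGD seeds before running grabCut,
# so that no user interaction is needed.
obj = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
print(obj.sum(), "foreground pixels")
```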

  4. Markov random field based automatic image alignment for electron tomography.

    PubMed

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark

    2008-03-01

    We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.

  5. NREL Transportation Project to Reduce Fuel Usage

    Science.gov Websites

    Vehicle location and communication software was developed by NREL researchers to automatically display a vehicle's location and transmit a map of that location over the Internet. The vehicle location and communication technology is used to track and direct vehicle fleet movements.

  6. Wireless tracking of cotton modules Part II: automatic machine identification and system testing

    USDA-ARS?s Scientific Manuscript database

    Mapping the harvest location of cotton modules is essential to practical understanding and utilization of spatial-variability information in fiber quality. A wireless module-tracking system was recently developed, but automation of the system is required before it will find practical use on the far...

  7. Enabling the Interoperability of Large-Scale Legacy Systems

    DTIC Science & Technology

    2008-01-01

    …information retrieval systems (Salton and McGill 1983). We use this method because, in the schema mapping task, only one instance per class is… (2001). A survey of approaches to automatic schema matching. The VLDB Journal, 10, 334-350. Salton, G., & McGill, M.J. (1983). Introduction to…

  8. HelpfulMed: Intelligent Searching for Medical Information over the Internet.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Lally, Ann M.; Zhu, Bin; Chau, Michael

    2003-01-01

    Discussion of the information needs of medical professionals and researchers focuses on the architecture of a Web portal designed to integrate advanced searching and indexing algorithms, an automatic thesaurus, and self-organizing map technologies to provide searchers with fine-grained results. Reports results of evaluation of spider algorithms…

  9. Automatic Generation of Issue Maps: Structured, Interactive Outputs for Complex Information Needs

    DTIC Science & Technology

    2012-09-01

    …much can result in behaviour similar to the shortest-path chains. Connecting the Dots has also been explored in non-textual domains: the authors of [Heath et al., 2010] propose building graphs, called Image Webs, to represent connections among images. One could imagine a metro map summarizing a dataset of medical records.

  10. Analysis of 2D Phase Contrast MRI in Renal Arteries by Self Organizing Maps

    NASA Astrophysics Data System (ADS)

    Zöllner, Frank G.; Schad, Lothar R.

    We present an approach based on self-organizing maps to segment the renal arteries from 2D PC cine MR images in order to measure blood velocity and flow. Such information is important in grading renal artery stenosis and supports decisions on surgical interventions such as percutaneous transluminal angioplasty. Results show that the renal arteries could be extracted automatically. The corresponding velocity profiles show high correlation (r=0.99) compared to those from manually delineated vessels. Furthermore, the method could detect possible blood flow patterns within the vessel.
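
    A sketch of clustering per-pixel velocity time-courses with a self-organizing map, using the third-party MiniSom package on synthetic data; the grid size, training length and the idea of keeping "pulsatile" nodes are illustrative assumptions:

```python
import numpy as np
from minisom import MiniSom   # third-party package: pip install minisom

# Hypothetical data: one temporal velocity curve per pixel of the PC-MRI slice.
rng = np.random.default_rng(0)
curves = rng.random((1000, 30))                   # 1000 pixels x 30 cardiac phases

som = MiniSom(4, 4, input_len=30, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(curves)
som.train_random(curves, num_iteration=2000)

# Pixels mapping to the same SOM node share a velocity pattern; nodes whose
# prototype curves show pulsatile flow would be kept as the arterial segment.
nodes = [som.winner(c) for c in curves]
print(len(set(nodes)), "clusters of pixel time-courses")
```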

  11. Skylab-EREP studies in computer mapping of terrain in the Cripple Creek-Canon City area of Colorado

    NASA Technical Reports Server (NTRS)

    Smedes, H. W.; Ranson, K. J.; Holstrom, R. L.

    1975-01-01

    Multispectral-scanner data from satellites are used as input to computers for automatically mapping terrain classes of ground cover. Some major problems faced in this remote-sensing task include: (1) the effect of mixtures of classes and, primarily because of mixtures, the problem of what constitutes accurate control data, and (2) effects of the atmosphere on spectral responses. The fundamental principles of these problems are presented along with results of studies of them for a test site of Colorado, using LANDSAT-1 data.

  12. Ecoregions and ecodistricts: Ecological regionalizations for the Netherlands' environmental policy

    NASA Astrophysics Data System (ADS)

    Klijn, Frans; de Waal, Rein W.; Oude Voshaar, Jan H.

    1995-11-01

    For communicating data on the state of the environment to policy makers, various integrative frameworks are used, including regional integration. For this kind of integration we have developed two related ecological regionalizations, ecoregions and ecodistricts, which are two levels in a series of classifications for hierarchically nested ecosystems at different spatial scale levels. We explain the compilation of the maps from existing geographical data, demonstrating the relatively holistic, a priori integrated approach. The resulting maps are submitted to discriminant analysis to test the consistency of the use of mapping characteristics, using data on individual abiotic ecosystem components from a national database on a 1-km2 grid. This reveals that the spatial patterns of soil, groundwater, and geomorphology correspond with the ecoregion and ecodistrict maps. Differences between the original maps and maps formed by automatically reclassifying 1-km2 cells with these discriminant components are found to be few. These differences are discussed against the background of the principal dilemma between deductive, a priori integrated, and inductive, a posteriori, classification.

  13. Automated strip-mine and reclamation mapping from ERTS

    NASA Technical Reports Server (NTRS)

    Rogers, R. H. (Principal Investigator); Reed, L. E.; Pettyjohn, W. A.

    1974-01-01

    The author has identified the following significant results. Computer processing techniques were applied to ERTS-1 computer-compatible tape (CCT) data acquired in August 1972 on the Ohio Power Company's coal mining operation in Muskingum County, Ohio. Processing results succeeded in automatically classifying, with an accuracy greater than 90%: (1) stripped earth and major sources of erosion; (2) partially reclaimed areas and minor sources of erosion; (3) water with sedimentation; (4) water without sedimentation; and (5) vegetation. Computer-generated tables listing the area in acres and square kilometers were produced for each target category. Processing results also included geometrically corrected map overlays, one for each target category, drawn on a transparent material by a pen under computer control. Each target category is assigned a distinctive color on the overlay to facilitate interpretation. The overlays, drawn at a scale of 1:250,000 when placed over an AMS map of the same area, immediately provided map locations for each target. These mapping products were generated at a tenth of the cost of conventional mapping techniques.

  14. Automated Robot Movement in the Mapped Area Using Fuzzy Logic for Wheel Chair Application

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Efendi, S.; Ramadhana, H.; Andayani, U.; Fahmi, F.

    2018-03-01

    The difficulty that disabled people have in moving makes it hard for them to live independently, and they need a supporting device to move from place to place. We therefore propose a solution that can help people with disabilities move from one room to another automatically. This study aims to create a wheelchair prototype in the form of a wheeled robot as a platform for studying automatic mobilization. A fuzzy logic algorithm was used to determine the motion direction based on the initial position, with ultrasonic sensor readings used for obstacle avoidance, infrared sensor readings used as a black-line reader so that the wheeled robot moves smoothly, and a smartphone acting as the mobile controller. As a result, a smartphone with the Android operating system can control the robot using Bluetooth, over a maximum distance of 15 meters. The proposed algorithm worked stably for automatic motion determination based on the initial position, and was able to automate wheelchair movement from one room to another.
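
    A minimal fuzzy-control sketch of the obstacle-avoidance idea; the membership functions, rule base and thresholds are invented for illustration and are not the paper's controller:

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(left_cm, right_cm):
    """Two illustrative rules: left near -> turn right (+1);
    right near -> turn left (-1). Defuzzify by weighted average."""
    near_l, near_r = tri(left_cm, 0, 0, 40), tri(right_cm, 0, 0, 40)
    w = near_l + near_r
    return (near_l * 1.0 + near_r * -1.0) / w if w else 0.0  # 0 = straight

print(steer(15, 120))   # obstacle close on the left -> positive (turn right)
```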

  15. NASA automatic subject analysis technique for extracting retrievable multi-terms (NASA TERM) system

    NASA Technical Reports Server (NTRS)

    Kirschbaum, J.; Williamson, R. E.

    1978-01-01

    Current methods for information processing and retrieval used at the NASA Scientific and Technical Information Facility are reviewed. A more cost effective computer aided indexing system is proposed which automatically generates print terms (phrases) from the natural text. Satisfactory print terms can be generated in a primarily automatic manner to produce a thesaurus (NASA TERMS) which extends all the mappings presently applied by indexers, specifies the worth of each posting term in the thesaurus, and indicates the areas of use of the thesaurus entry phrase. These print terms enable the computer to determine which of several terms in a hierarchy is desirable and to differentiate ambiguous terms. Steps in the NASA TERMS algorithm are discussed and the processing of surrogate entry phrases is demonstrated using four previously manually indexed STAR abstracts for comparison. The simulation shows phrase isolation, text phrase reduction, NASA terms selection, and RECON display.

  16. Diffraction phase microscopy realized with an automatic digital pinhole

    NASA Astrophysics Data System (ADS)

    Zheng, Cheng; Zhou, Renjie; Kuang, Cuifang; Zhao, Guangyuan; Zhang, Zhimin; Liu, Xu

    2017-12-01

    We report a novel approach to diffraction phase microscopy (DPM) with automatic pinhole alignment. The pinhole, which serves as a spatial low-pass filter to generate a uniform reference beam, is made out of a liquid crystal display (LCD) device that allows for electrical control. We have made DPM more accessible to users, while maintaining high phase measurement sensitivity and accuracy, by exploring low-cost optical components and replacing the tedious pinhole alignment process with an automatic pinhole optical alignment procedure. Due to its flexibility in modifying the size and shape of the filter, this LCD device serves as a universal filter requiring no future replacement. Moreover, a graphical user interface for real-time phase imaging has also been developed using a USB CMOS camera. Experimental results of height maps of a bead sample and of live red blood cell (RBC) dynamics are also presented, making this system ready for broad adoption in biological imaging and material metrology.

  17. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    PubMed

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki

    2014-09-01

    Mathematical modeling has become a standard technique for understanding the dynamics of complex biochemical systems. To promote such modeling, we had developed the CADLIVE dynamic simulator, which automatically converts a biochemical map into its associated mathematical model, simulates its dynamic behaviors and analyzes its robustness. To enhance the usability of CADLIVE and extend its functions, we propose the CADLIVE toolbox for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator but also the latest tools, including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to research in systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with instructions.

  18. Automatic limb identification and sleeping parameters assessment for pressure ulcer prevention.

    PubMed

    Baran Pouyan, Maziyar; Birjandtalab, Javad; Nourani, Mehrdad; Matthew Pompeo, M D

    2016-08-01

    Pressure ulcers (PUs) are common among vulnerable patients such as the elderly, the bedridden and diabetics. PUs are very painful for patients and costly for hospitals and nursing homes. Assessment of sleeping parameters on at-risk limbs is critical for ulcer prevention, and an effective assessment depends on automatic identification and tracking of at-risk limbs. Accurate limb identification can be used to analyze the pressure distribution and assess the risk for each limb. In this paper, we propose a graph-based clustering approach to extract the body limbs from pressure data collected by a commercial pressure map system. A robust signature-based technique is employed to automatically label each limb. Finally, an assessment technique is applied to evaluate the stress experienced by each limb over time. The experimental results indicate high performance, with more than 94% average accuracy for the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Natural language processing of spoken diet records (SDRs).

    PubMed

    Lacson, Ronilda; Long, William

    2006-01-01

    Dietary assessment is a fundamental aspect of nutritional evaluation that is essential for management of obesity as well as for assessing dietary impact on chronic diseases. Various methods have been used for dietary assessment including written records, 24-hour recalls, and food frequency questionnaires. The use of mobile phones to provide real-time dietary records provides potential advantages for accessibility, ease of use and automated documentation. However, understanding even a perfect transcript of spoken dietary records (SDRs) is challenging for people. This work presents a first step towards automatic analysis of SDRs. Our approach consists of four steps - identification of food items, identification of food quantifiers, classification of food quantifiers and temporal annotation. Our method enables automatic extraction of dietary information from SDRs, which in turn allows automated mapping to a Diet History Questionnaire dietary database. Our model has an accuracy of 90%. This work demonstrates the feasibility of automatically processing SDRs.

  20. SA-SOM algorithm for detecting communities in complex networks

    NASA Astrophysics Data System (ADS)

    Chen, Luogeng; Wang, Yanran; Huang, Xiaoming; Hu, Mengyu; Hu, Fang

    2017-10-01

    Community detection is currently a hot topic. Based on the self-organizing map (SOM) algorithm, this paper introduces the idea of self-adaptation (SA), whereby the number of communities can be identified automatically, and proposes a novel algorithm, SA-SOM, for detecting communities in complex networks. Several representative real-world networks and a set of computer-generated networks produced by the LFR benchmark are utilized to verify the accuracy and efficiency of this algorithm. The experimental findings demonstrate that the algorithm can identify communities automatically, accurately and efficiently, and can also achieve higher values of modularity, NMI and density than the SOM algorithm does.

  1. MICRA: an automatic pipeline for fast characterization of microbial genomes from high-throughput sequencing data.

    PubMed

    Caboche, Ségolène; Even, Gaël; Loywick, Alexandre; Audebert, Christophe; Hot, David

    2017-12-19

    The increase in available sequence data has advanced the field of microbiology; however, making sense of these data without bioinformatics skills is still problematic. We describe MICRA, an automatic pipeline, available as a web interface, for microbial identification and characterization through reads analysis. MICRA uses iterative mapping against reference genomes to identify genes and variations. Additional modules allow prediction of antibiotic susceptibility and resistance and comparing the results of several samples. MICRA is fast, producing few false-positive annotations and variant calls compared to current methods, making it a tool of great interest for fully exploiting sequencing data.

  2. Automatic Estimation of Volcanic Ash Plume Height using WorldView-2 Imagery

    NASA Technical Reports Server (NTRS)

    McLaren, David; Thompson, David R.; Davies, Ashley G.; Gudmundsson, Magnus T.; Chien, Steve

    2012-01-01

    We explore the use of machine learning, computer vision, and pattern recognition techniques to automatically identify volcanic ash plumes and plume shadows in WorldView-2 imagery. Using information on the relative positions of the sun and spacecraft, together with terrain information in the form of a digital elevation map, the height of the ash plume can also be inferred from the classification. We present the results of applying this approach to six scenes acquired on two separate days in April and May of 2010 during the Eyjafjallajokull eruption in Iceland. These results show rough agreement with ash plume height estimates from visual and radar-based measurements.
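
    The underlying shadow geometry is simple; a sketch for flat terrain (the 12 km shadow length and 25-degree sun elevation are made-up values, and over real terrain the DEM correction would apply):

```python
import math

def plume_height(shadow_length_m, sun_elevation_deg):
    """Height of a plume top from the length of its shadow on flat terrain:
    h = L * tan(sun elevation)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

print(f"{plume_height(12_000, 25):.0f} m above the shadowed surface")
```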

  3. Improved predictive mapping of indoor radon concentrations using ensemble regression trees based on automatic clustering of geological units.

    PubMed

    Kropat, Georg; Bochud, Francois; Jaboyedoff, Michel; Laedermann, Jean-Pascal; Murith, Christophe; Palacios Gruson, Martha; Baechler, Sébastien

    2015-09-01

    According to estimates, around 230 people die as a result of radon exposure in Switzerland. This public health concern makes reliable indoor radon prediction and mapping methods necessary in order to improve risk communication to the public. The aim of this study was to develop an automated method to classify lithological units according to their radon characteristics and to develop mapping and predictive tools in order to improve local radon prediction. About 240 000 indoor radon concentration (IRC) measurements in about 150 000 buildings were available for our analysis. The automated classification of lithological units was based on k-medoids clustering via pair-wise Kolmogorov distances between the IRC distributions of lithological units. For IRC mapping and prediction we used random forests and Bayesian additive regression trees (BART). The automated classification groups lithological units well in terms of their IRC characteristics; the IRC differences in metamorphic rocks such as gneiss, in particular, are well revealed by this method. The maps produced by random forests soundly represent the regional differences of IRCs in Switzerland and improve the spatial detail compared to existing approaches. We could explain 33% of the variation in IRC data with random forests. Additionally, variable importances evaluated by random forests show that building characteristics are less important predictors of IRCs than spatial/geological influences. BART could explain 29% of IRC variability and produced maps that indicate the prediction uncertainty. Ensemble regression trees are a powerful tool for modelling and understanding the multidimensional influences on IRCs. Automatic clustering of lithological units complements this method by facilitating the interpretation of the radon properties of rock types. This study provides an important element for radon risk communication. Future approaches should consider taking into account further variables such as soil gas radon measurements as well as more detailed geological information. Copyright © 2015 Elsevier Ltd. All rights reserved.
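
    A sketch of the variable-importance reading on synthetic data; the predictor names, data-generating weights and model settings are invented for illustration and do not reproduce the study's models:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical predictor table: spatial/geological vs building covariates.
rng = np.random.default_rng(0)
X = rng.random((5000, 4))                 # [x, y, lithology_class, floor_level]
log_irc = 2.0 * X[:, 2] + 0.5 * X[:, 0] + 0.1 * X[:, 3] + rng.normal(0, 0.3, 5000)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, log_irc)
for name, imp in zip(["x", "y", "lithology", "floor"], rf.feature_importances_):
    print(f"{name:10s} {imp:.2f}")        # geology should dominate, as in the study
print("R^2 =", round(rf.score(X, log_irc), 2))
```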

  4. Improving fMRI reliability in presurgical mapping for brain tumours.

    PubMed

    Stevens, M Tynan R; Clarke, David B; Stroink, Gerhard; Beyea, Steven D; D'Arcy, Ryan Cn

    2016-03-01

    Functional MRI (fMRI) is becoming increasingly integrated into clinical practice for presurgical mapping. Current efforts are focused on validating data quality, with reliability being a major factor. In this paper, we demonstrate the utility of a recently developed approach that uses receiver operating characteristic-reliability (ROC-r) to: (1) identify reliable versus unreliable data sets; (2) automatically select processing options to enhance data quality; and (3) automatically select individualised thresholds for activation maps. Presurgical fMRI was conducted in 16 patients undergoing surgical treatment for brain tumours. Within-session test-retest fMRI was conducted, and ROC-reliability of the patient group was compared to a previous healthy control cohort. Individually optimised preprocessing pipelines were determined to improve reliability. Spatial correspondence was assessed by comparing the fMRI results to intraoperative cortical stimulation mapping, in terms of the distance to the nearest active fMRI voxel. The average ROC-r reliability for the patients was 0.58±0.03, as compared to 0.72±0.02 in healthy controls. For the patient group, this increased significantly to 0.65±0.02 by adopting optimised preprocessing pipelines. Co-localisation of the fMRI maps with cortical stimulation was significantly better for more reliable versus less reliable data sets (8.3±0.9 vs 29±3 mm, respectively). We demonstrated ROC-r analysis for identifying reliable fMRI data sets, choosing optimal postprocessing pipelines, and selecting patient-specific thresholds. Data sets with higher reliability also showed closer spatial correspondence to cortical stimulation. ROC-r can thus identify poor fMRI data at time of scanning, allowing for repeat scans when necessary. ROC-r analysis provides optimised and automated fMRI processing for improved presurgical mapping. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  5. Osm Poi Analyzer: a Platform for Assessing Position of POIs in Openstreetmap

    NASA Astrophysics Data System (ADS)

    Kashian, A.; Rajabifard, A.; Chen, Y.; Richter, K. F.

    2017-09-01

    In recent years, growing participation in Volunteered Geographical Information (VGI) projects has provided enough data coverage for most places around the world for ordinary mapping and navigation purposes; however, the positional credibility of contributed data becomes more and more important for building long-term trust in VGI data. Today it is hard to draw a definite traditional boundary between authoritative map producers and public map consumers, and more and more volunteers are joining crowdsourcing activities to collect geodata, which may result in higher rates of man-made mistakes in open map projects such as OpenStreetMap. While there are some methods for monitoring the accuracy and consistency of the created data, there is still a lack of advanced systems to automatically discover misplaced objects on the map. One feature type contributed daily to OSM is the Point of Interest (POI). In order to understand how likely it is that a newly added POI represents a genuine real-world feature, a scientific means of calculating the probability of such a POI existing at that specific position is needed. This paper reports on a new analytic tool which dives into OSM data and finds co-existence patterns between a specific POI and its surrounding objects such as roads, parks and buildings. The platform uses a distance-based classification technique to find relationships among objects and tries to identify the high-frequency association patterns within each category of objects. Using this method, each newly added POI receives a probabilistic score, and low-scoring POIs can be highlighted for manual checking by editors. The same scoring method can be used to check whether existing registered POIs are located correctly. As a sample study, this paper reports on the evaluation of 800 pre-registered ATMs in Paris with associated scores, to understand how outliers and fake entries could be detected automatically.

  6. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS) and is accessible through the web using different GIS software clients. We used the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings, and in most cases the two models provided very similar results. The non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models gave very similar results for large and very large datasets (> 150 samples), whereas differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
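
    The tool itself is exposed as an R-based WPS; purely for illustration, here is a Python sketch of two of the estimator families it implements (KDE, and a parametric MLE fit using the Inverse Gamma model, which scipy provides; the Double Pareto model has no off-the-shelf scipy equivalent):

```python
import numpy as np
from scipy import stats

# Synthetic landslide areas (m^2), heavy-tailed like real inventories.
rng = np.random.default_rng(1)
areas = stats.invgamma.rvs(a=1.4, scale=800.0, size=500, random_state=rng)

# Non-parametric estimate: KDE of the density of log10(area).
kde = stats.gaussian_kde(np.log10(areas))

# Parametric estimate: Inverse Gamma fitted by maximum likelihood.
a_hat, loc_hat, scale_hat = stats.invgamma.fit(areas, floc=0.0)

q = 1000.0  # evaluate both estimates around 10^3 m^2
print("KDE density (per log10 area):", kde(np.log10([q]))[0])
print(f"InvGamma MLE: shape={a_hat:.2f}, scale={scale_hat:.0f}")
```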

  7. Projection Mapping User Interface for Disabled People

    PubMed Central

    Gelšvartas, Julius; Simutis, Rimvydas; Maskeliūnas, Rytis

    2018-01-01

    Difficulty in communicating is one of the key challenges for people suffering from severe motor and speech disabilities. Often such a person can communicate and interact with the environment only using assistive technologies. This paper presents a multifunctional user interface designed to improve communication efficiency and personal independence. The main component of this interface is a projection mapping technique used to highlight objects in the environment. Projection mapping makes it possible to create a natural augmented-reality information presentation method. The user interface combines a depth sensor and a projector to create a camera-projector system. We provide a detailed description of the camera-projector system calibration procedure. The described system performs tabletop object detection and automatic projection mapping. Multiple user input modalities have been integrated into the multifunctional user interface. Such a system can be adapted to the needs of people with various disabilities. PMID:29686827
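
    For a planar tabletop, the camera-to-projector mapping can be approximated by a homography estimated from a few projected calibration markers. The paper's full calibration involves a depth sensor, so the sketch below is a simplified, hypothetical variant (all coordinates invented):

```python
import numpy as np

def fit_homography(src, dst):
    """DLT estimate of the 3x3 homography H with dst ~ H @ src
    (points given as (x, y) pairs; needs >= 4 correspondences)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 3)

# Calibration targets: camera pixels of projected markers vs projector pixels.
cam = [(102, 80), (530, 95), (515, 410), (90, 400)]
proj = [(0, 0), (1280, 0), (1280, 800), (0, 800)]
H = fit_homography(cam, proj)

p = np.array([300.0, 250.0, 1.0])     # object centroid in the camera image
q = H @ p
print(q[:2] / q[2])                   # projector pixel to highlight
```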

  8. Application of LANDSAT and Skylab data for land use mapping in Italy. [emphasizing the Alps Mountains

    NASA Technical Reports Server (NTRS)

    Bodechtel, J.; Nithack, J.; Dibernardo, G.; Hiller, K.; Jaskolla, F.; Smolka, A.

    1975-01-01

    Utilizing LANDSAT and Skylab multispectral imagery from 1972 and 1973, a land use map of the mountainous regions of Italy was produced at a scale of 1:250,000. Seven Level I categories were identified by conventional methods of photointerpretation. Images of multispectral scanner (MSS) bands 5 and 7, or their equivalents, were mainly used. Areas of less than 200 by 200 m were classified, and standard procedures were established for the interpretation of multispectral satellite imagery. Land use maps were produced for central and southern Europe, indicating that existing land use maps could be updated and optimized. The complexity of European land use patterns, the intensive morphology of young mountain ranges, and time-cost calculations are the reasons why the conventional techniques applied remain superior to automatic evaluation.

  9. Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map

    PubMed Central

    Tan, Yihua; Li, Qingyun; Li, Yansheng; Tian, Jinwen

    2015-01-01

    This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map under the contextual cues of the apron area. Firstly, candidate regions that may contain targets are detected within the apron area. Secondly, a directional local gradient distribution detector is used to obtain a gradient textural saliency map over the candidate regions. Finally, targets are detected by segmenting the saliency map with a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that the algorithm can detect aircraft targets quickly and accurately while decreasing the false alarm rate. PMID:26378543
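
    CFAR-type segmentation compares each saliency cell against a local noise estimate. A minimal cell-averaging CFAR sketch in Python (window sizes and the scale factor are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar_2d(saliency, guard=2, train=8, scale=3.5):
    """Cell-averaging CFAR: compare each cell with the mean of a training
    ring (full window minus guard area) and keep cells above scale*mean."""
    big = 2 * (guard + train) + 1
    small = 2 * guard + 1
    sum_big = uniform_filter(saliency, big) * big**2      # local sums
    sum_small = uniform_filter(saliency, small) * small**2
    noise = (sum_big - sum_small) / (big**2 - small**2)   # training-ring mean
    return saliency > scale * noise

rng = np.random.default_rng(2)
img = rng.exponential(1.0, (128, 128))    # speckle-like background
img[60:64, 60:64] += 25.0                 # synthetic bright target
print(int(ca_cfar_2d(img).sum()), "detected cells")
```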

  10. Projection Mapping User Interface for Disabled People.

    PubMed

    Gelšvartas, Julius; Simutis, Rimvydas; Maskeliūnas, Rytis

    2018-01-01

    Difficulty in communicating is one of the key challenges for people suffering from severe motor and speech disabilities. Often such a person can communicate and interact with the environment only using assistive technologies. This paper presents a multifunctional user interface designed to improve communication efficiency and personal independence. The main component of this interface is a projection mapping technique used to highlight objects in the environment. Projection mapping makes it possible to create a natural augmented-reality information presentation method. The user interface combines a depth sensor and a projector to create a camera-projector system. We provide a detailed description of the camera-projector system calibration procedure. The described system performs tabletop object detection and automatic projection mapping. Multiple user input modalities have been integrated into the multifunctional user interface. Such a system can be adapted to the needs of people with various disabilities.

  11. Improved spatial coverage for brain 3D PRESS MRSI by automatic placement of outer-volume suppression saturation bands.

    PubMed

    Ozhinsky, Eugene; Vigneron, Daniel B; Nelson, Sarah J

    2011-04-01

    To develop a technique for optimizing the coverage of brain 3D ¹H magnetic resonance spectroscopic imaging (MRSI) by automatic placement of outer-volume suppression (OVS) saturation bands (sat bands), and to compare the performance of point-resolved spectroscopic sequence (PRESS) MRSI protocols with manual and automatic sat band placement. The automated OVS procedure includes acquiring anatomic images of the head, obtaining brain and lipid tissue maps, calculating optimal sat band placement, and then using those optimized parameters during the MRSI acquisition. The data were analyzed to quantify brain coverage volume and data quality. 3D PRESS MRSI data were acquired from three healthy volunteers and 29 patients using protocols with either manual or automatic sat band placement. On average, automatic sat band placement allowed the acquisition of PRESS MRSI data from 2.7 times larger brain volumes than the conventional method while maintaining data quality. The technique helps solve two of the most significant problems with brain PRESS MRSI acquisitions: limited brain coverage and difficulty in prescription. This new method will facilitate routine clinical brain 3D MRSI exams and will be important for the serial evaluation of response to therapy in patients with brain tumors and other neurological diseases.

  12. Translation from the collaborative OSM database to cartography

    NASA Astrophysics Data System (ADS)

    Hayat, Flora

    2018-05-01

    The OpenStreetMap (OSM) database includes original items that are very useful for geographical analysis and for creating thematic maps. Contributors record various themes in the open database, covering amenities, leisure, transport, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. A research project is under way to translate the OSM database structure into a database structure that fits Michelin graphic guidelines; it aims at defining the right structure for Michelin's uses. The project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on the processed data, paper and Web maps can be displayed. Two prototypes are described in this article: a vector tile web map and a mapping method to produce paper maps on a regional scale. The vector tile mapping method offers easy navigation within the map and within graphic and thematic guidelines. Paper maps can be drawn partly automatically; drawing automation and data management are part of the map creation process, as is the final hand-drawing phase. Both prototypes have been set up using the OSM technical ecosystem.
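
    The kind of schema translation described can be sketched as a rule table mapping OSM key/value pairs to cartographic classes. The class names below are invented for illustration; Michelin's actual schema is not given in the abstract.

```python
# Illustrative translation table from OSM tags to hypothetical
# cartographic classes (names invented for the example).
STYLE_RULES = [
    ({"highway": "motorway"}, "road_major"),
    ({"highway": "residential"}, "road_minor"),
    ({"tourism": "museum"}, "poi_tourism"),
    ({"leisure": "park"}, "area_green"),
]

def translate_tags(tags):
    """Return the first cartographic class whose rule is a subset of the
    feature's OSM tags, or None if the feature is not drawn."""
    for rule, style in STYLE_RULES:
        if all(tags.get(k) == v for k, v in rule.items()):
            return style
    return None

print(translate_tags({"highway": "motorway", "ref": "A7"}))  # road_major
print(translate_tags({"building": "yes"}))                   # None (not drawn)
```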

  13. ShakeMap manual: technical manual, user's guide, and software guide

    USGS Publications Warehouse

    Wald, David J.; Worden, Bruce C.; Quitoriano, Vincent; Pankow, Kris L.

    2005-01-01

    ShakeMap (http://earthquake.usgs.gov/shakemap) --rapidly, automatically generated shaking and intensity maps--combines instrumental measurements of shaking with information about local geology and earthquake location and magnitude to estimate shaking variations throughout a geographic area. The results are rapidly available via the Web through a variety of map formats, including Geographic Information System (GIS) coverages. These maps have become a valuable tool for emergency response, public information, loss estimation, earthquake planning, and post-earthquake engineering and scientific analyses. With the adoption of ShakeMap as a standard tool for a wide array of users and uses came an impressive demand for up-to-date technical documentation and more general guidelines for users and software developers. This manual is meant to address this need. ShakeMap, and associated Web and data products, are rapidly evolving as new advances in communications, earthquake science, and user needs drive improvements. As such, this documentation is organic in nature. We will make every effort to keep it current, but undoubtedly necessary changes in operational systems take precedence over producing and making documentation publishable.

  14. Testing interactive effects of automatic and conflict control processes during response inhibition - A system neurophysiological study.

    PubMed

    Chmielewski, Witold X; Beste, Christian

    2017-02-01

    In everyday life, successful acting often requires inhibiting automatic responses that might not be appropriate in the current situation. These response inhibition processes have been shown to become aggravated with increasing automaticity of pre-potent response tendencies. Likewise, it has been shown that inhibitory processes are complicated by concurrent engagement in additional cognitive control processes (e.g. conflict monitoring). Opposing processes (i.e. automaticity and cognitive control) therefore seem to strongly impact response inhibition. However, possible interactive effects of automaticity and cognitive control on the modulation of response inhibition processes have not yet been examined. In the current study we examine this question using a novel experimental paradigm combining a Go/NoGo task with a Simon task, in a system neurophysiological approach that combines EEG recordings with source localization analyses. The results show that response inhibition is less accurate in non-conflicting than in conflicting stimulus-response mappings. It thus seems that conflicts, and the resulting engagement in conflict monitoring processes as reflected in the N2 amplitude, may foster response inhibition. This engagement in conflict monitoring leads to an increase in cognitive control, as reflected by increased activity in the anterior and posterior cingulate areas, while the automaticity of response tendencies is simultaneously decreased. Most importantly, this study suggests that the quality of conflict processing in anterior cingulate areas, and especially the resulting interaction of cognitive control and the automaticity of pre-potent response tendencies, are important factors to consider when it comes to the modulation of response inhibition processes.

  15. Semi-automatic knee cartilage segmentation

    NASA Astrophysics Data System (ADS)

    Dam, Erik B.; Folkesson, Jenny; Pettersen, Paola C.; Christiansen, Claus

    2006-03-01

    Osteoarthritis (OA) is a very common age-related cause of pain and reduced range of motion. A central effect of OA is wear-down of the articular cartilage that otherwise ensures smooth joint motion. Quantification of cartilage breakdown is central to monitoring disease progression, and cartilage segmentation is therefore required. Recent advances allow automatic cartilage segmentation with high accuracy in most cases. However, automatic methods still fail in some problematic cases. For clinical studies, even if a few failing cases are averaged out in the overall results, they reduce the mean accuracy and precision and thereby necessitate larger or longer studies. Since severe OA cases are often the most problematic for automatic methods, there is even a risk that the quantification will introduce a bias into the results. Interactive inspection and correction of these problematic cases is therefore desirable. For diagnosis of individuals this is even more crucial, since the diagnosis will otherwise simply fail. We introduce and evaluate a semi-automatic cartilage segmentation method combining an automatic pre-segmentation with an interactive step that allows inspection and correction. The automatic step consists of voxel classification based on supervised learning. The interactive step combines a watershed transformation of the original scan with the posterior probability map from the classification step at sub-voxel precision. We evaluate the method on the task of segmenting the tibial cartilage sheet in low-field magnetic resonance imaging (MRI) of knees. The evaluation shows that the combined method allows accurate and highly reproducible correction of the segmentation of even the worst cases in approximately ten minutes of interaction.
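
    The interactive step rests on a standard construction: seeds placed on the classifier's posterior probability map steer a watershed of the scan. A small sketch with scikit-image, assuming a synthetic probability map in place of the trained voxel classifier:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Stand-in for the classifier's posterior probability map (high inside
# cartilage); in the paper this comes from supervised voxel classification.
rng = np.random.default_rng(3)
prob = ndi.gaussian_filter(rng.random((64, 64)), 4)
prob = (prob - prob.min()) / np.ptp(prob)

# Seeds: confident foreground/background. The interactive step would let
# a user add or move such seeds where the automatic result fails.
markers = np.zeros_like(prob, dtype=int)
markers[prob > 0.9] = 2   # cartilage seeds
markers[prob < 0.1] = 1   # background seeds

# Flooding the landscape of -prob makes watershed lines follow
# probability ridges, i.e. the classifier's object boundaries.
labels = watershed(-prob, markers)
print(int((labels == 2).sum()), "pixels labelled as cartilage")
```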

  16. The challenge of forecasting impacts of flash floods: test of a simplified hydraulic approach and validation based on insurance claim data

    NASA Astrophysics Data System (ADS)

    Le Bihan, Guillaume; Payrastre, Olivier; Gaume, Eric; Moncoulon, David; Pons, Frédéric

    2017-11-01

    Up to now, flash flood monitoring and forecasting systems, based on rainfall radar measurements and distributed rainfall-runoff models, have generally aimed at estimating flood magnitudes - typically discharges or return periods - at selected river cross sections. The approach presented here goes one step further by proposing an integrated forecasting chain for the direct assessment of possible flash flood impacts on inhabited areas (the number of buildings at risk in the presented case studies). The proposed approach includes, in addition to a distributed rainfall-runoff model, an automatic hydraulic method suited to the computation of flood extent maps on a dense river network and over large territories. The resulting catalogue of flood extent maps is then combined with land use data to build a flood impact curve for each considered river reach, i.e. the number of inundated buildings versus discharge. These curves are finally used to compute estimated impacts from forecasted discharges. The approach has been extensively tested in the regions of Alès and Draguignan, in the south of France, where well-documented major flash floods have recently occurred. The article presents two types of validation results. First, the automatically computed flood extent maps and corresponding water levels are tested against rating curves at available river gauging stations, as well as against local reference or observed flood extent maps. Second, a rich and comprehensive insurance claim database is used to evaluate the relevance of the estimated impacts for some recent major floods.
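
    Once an impact curve has been tabulated for a reach, forecasting impacts reduces to interpolating it at the forecasted discharge. A minimal sketch (all numbers invented):

```python
import numpy as np

# Hypothetical impact curve for one river reach, built offline by
# intersecting pre-computed flood extent maps with building footprints:
# discharge (m^3/s) -> number of inundated buildings.
discharge_pts = np.array([50, 100, 200, 400, 800])
buildings_pts = np.array([0, 3, 25, 140, 520])

def forecast_impact(q_forecast):
    """Interpolate the reach's impact curve at the forecasted discharge."""
    return np.interp(q_forecast, discharge_pts, buildings_pts)

print(forecast_impact(300.0))  # ~82 buildings at 300 m^3/s
```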

  17. Automatic Depth Extraction from 2D Images Using a Cluster-Based Learning Framework.

    PubMed

    Herrera, Jose L; Del-Blanco, Carlos R; Garcia, Narciso

    2018-07-01

    There has been a significant increase in the availability of 3D players and displays in recent years. Nonetheless, the amount of 3D content has not grown at the same pace. To alleviate this problem, many algorithms for converting images and videos from 2D to 3D have been proposed. Here, we present an automatic learning-based 2D-3D image conversion approach, based on the key hypothesis that color images with similar structure likely present a similar depth structure. The presented algorithm estimates the depth of a color query image using the prior knowledge provided by a repository of color + depth images. The algorithm clusters this database according to structural similarity, and then creates a representative of each color-depth image cluster that is used as a prior depth map. The appropriate prior depth map for a given color query image is selected by comparing the structural similarity in the color domain between the query image and the database. The comparison is based on a K-Nearest Neighbor framework that uses a learning procedure to build an adaptive combination of image feature descriptors. The best correspondences determine the cluster, and in turn the associated prior depth map. Finally, this prior estimation is enhanced through a segmentation-guided filtering that yields the final depth map estimation. The approach has been tested on two publicly available databases and compared with several state-of-the-art algorithms to demonstrate its efficiency.
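
    The retrieval step can be sketched with an off-the-shelf K-NN index: descriptors stand in for the learned feature combination, and the prior depth is a distance-weighted average of the nearest cluster representatives. Shapes and data below are synthetic placeholders:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical repository: one structural descriptor per color+depth
# cluster representative, with its prior depth map (tiny 8x8 maps here).
rng = np.random.default_rng(4)
descriptors = rng.random((20, 64))      # e.g. pooled gradient histograms
prior_depths = rng.random((20, 8, 8))   # representative depth per cluster

knn = NearestNeighbors(n_neighbors=3).fit(descriptors)

def estimate_depth(query_descriptor):
    """Average the prior depth maps of the K structurally closest
    representatives; a segmentation-guided filter would refine this."""
    dist, idx = knn.kneighbors(query_descriptor[None, :])
    weights = 1.0 / (dist[0] + 1e-6)
    return np.tensordot(weights / weights.sum(), prior_depths[idx[0]], axes=1)

print(estimate_depth(rng.random(64)).shape)  # (8, 8) prior depth map
```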

  18. Automated segmentation of the prostate in 3D MR images using a probabilistic atlas and a spatially constrained deformable model.

    PubMed

    Martin, Sébastien; Troccaz, Jocelyne; Daanen, Vincent

    2010-04-01

    The authors present a fully automatic algorithm for the segmentation of the prostate in three-dimensional magnetic resonance (MR) images. The approach requires an anatomical atlas, which is built by computing transformation fields that map a set of manually segmented images to a common reference. These transformation fields are then applied to the manually segmented structures of the training set in order to obtain a probabilistic map on the atlas. The segmentation is then realized through a two-stage procedure. In the first stage, the processed image is registered to the probabilistic atlas. Subsequently, a probabilistic segmentation is obtained by mapping the probabilistic map of the atlas to the patient's anatomy. In the second stage, a deformable surface evolves toward the prostate boundaries by merging information from the probabilistic segmentation, an image feature model and a statistical shape model. During the evolution of the surface, the probabilistic segmentation introduces a spatial constraint that prevents the deformable surface from leaking into an unlikely configuration. The proposed method is evaluated on 36 exams that were manually segmented by a single expert. A median Dice similarity coefficient of 0.86 and an average surface error of 2.41 mm are achieved. By merging prior knowledge, the presented method achieves a robust and completely automatic segmentation of the prostate in MR images. Results show that the use of a spatial constraint usefully increases the robustness of the deformable model compared to a deformable surface driven only by an image appearance model.

  19. MODSNOW-Tool: an operational tool for daily snow cover monitoring using MODIS data

    NASA Astrophysics Data System (ADS)

    Gafurov, Abror; Lüdtke, Stefan; Unger-Shayesteh, Katy; Vorogushyn, Sergiy; Schöne, Tilo; Schmidt, Sebastian; Kalashnikova, Olga; Merz, Bruno

    2017-04-01

    Spatially distributed snow cover information in mountain areas is extremely important for water storage estimation, seasonal water availability forecasting, and the assessment of snow-related hazards (e.g. enhanced snowmelt following intensive rains, or avalanche events). Moreover, spatially distributed snow cover information can be used to calibrate and/or validate hydrological models. We present the MODSNOW-Tool, an operational monitoring tool that offers a user-friendly application for catchment-based operational snow cover monitoring. The application automatically downloads and processes freely available daily Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover data. The MODSNOW-Tool uses a step-wise approach for cloud removal and delivers cloud-free snow cover maps for the selected river basins, including basin-specific snow cover extent statistics. The accuracy of cloud-eliminated MODSNOW snow cover maps was validated on 84 almost cloud-free days in the Karadarya river basin in Central Asia, and an average accuracy of 94 % was achieved. The MODSNOW-Tool can be used in operational and non-operational mode. In the operational mode, the tool is set up as a scheduled task on a local computer, runs automatically without user interaction, and delivers snow cover maps on a daily basis. In the non-operational mode, the tool can be used to process historical time series of snow cover maps. The MODSNOW-Tool is currently implemented and in use at the national hydrometeorological services of four Central Asian states - Kazakhstan, Kyrgyzstan, Uzbekistan and Turkmenistan - where it supports seasonal water availability forecasting.
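
    One ingredient of step-wise cloud removal is temporal gap-filling: a cloudy pixel inherits its most recent cloud-free observation. A toy sketch of that single step (the actual MODSNOW procedure combines several spatial and temporal steps):

```python
import numpy as np

# Stack of daily snow maps: 1 = snow, 0 = land, -1 = cloud.
maps = np.array([
    [[1, -1], [0, 0]],
    [[-1, -1], [0, -1]],
    [[1, 1], [-1, 0]],
])

def fill_clouds_temporal(stack):
    """One cloud-removal step: replace a cloudy pixel with its most
    recent cloud-free observation (temporal forward fill)."""
    filled = stack.copy()
    for t in range(1, len(filled)):
        cloudy = filled[t] == -1
        filled[t][cloudy] = filled[t - 1][cloudy]
    return filled

# Day 2 inherits day-1 values; one pixel stays -1 (cloudy on both days).
print(fill_clouds_temporal(maps)[1])
```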

  20. Wireless tracking of cotton modules. Part I: Automatic message triggering

    USDA-ARS?s Scientific Manuscript database

    The ability to map profit across a cotton field would enable producers to see where money is being made or lost on their farms and to implement precise field management practices to ensure the highest return possible on each portion of a field. To this end, a wireless module-tracking system was rec...

  1. CAMUS: Automatically Mapping Cyber Assets to Mission and Users (PREPRINT)

    DTIC Science & Technology

    2009-10-01

    which machines regularly use a particular mail server. Armed with these basic data sources – LDAP, NetFlow traffic and user logs – fuselets were created... NetFlow traffic used in the demonstration has over ten thousand unique IP Addresses and is over one gigabyte in size. A number of high performance

  2. Resolving carbonate platform geometries on the Island of Bonaire, Caribbean Netherlands through semi-automatic GPR facies classification

    NASA Astrophysics Data System (ADS)

    Bowling, R. D.; Laya, J. C.; Everett, M. E.

    2018-07-01

    The study of exposed carbonate platforms provides observational constraints on regional tectonics and sea-level history. In this work Miocene-aged carbonate platform units of the Seroe Domi Formation are investigated on the island of Bonaire, located in the Southern Caribbean. Ground penetrating radar (GPR) was used to probe near-surface structural geometries associated with these lithologies. The single cross-island transect described herein allowed for continuous mapping of geologic structures on kilometre length scales. Numerical analysis was applied to the data in the form of k-means clustering of structure-parallel vectors derived from image structure tensors. This methodology enables radar facies along the survey transect to be semi-automatically mapped. The results provide subsurface evidence to support previous surficial and outcrop observations, and reveal complex stratigraphy within the platform. From the GPR data analysis, progradational clinoform geometries were observed on the northeast side of the island which support the tectonics and depositional trends of the region. Furthermore, several leeward-side radar facies are identified which correlate to environments of deposition conducive to dolomitization via reflux mechanisms.
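
    The clustering step can be reproduced in outline: compute per-pixel structure tensors, treat their components as feature vectors, and let k-means group pixels into radar facies. The section below is synthetic and the parameters are illustrative:

```python
import numpy as np
from skimage.feature import structure_tensor
from sklearn.cluster import KMeans

# Synthetic 'radargram' with two dip domains standing in for GPR facies.
y, x = np.mgrid[0:128, 0:256]
section = np.sin(0.2 * (y + 0.5 * x)) + np.sin(0.2 * y) * (x > 128)

# Structure tensor per pixel; its components encode local reflector dip.
Axx, Axy, Ayy = structure_tensor(section, sigma=3)
features = np.stack([Axx, Axy, Ayy], axis=-1).reshape(-1, 3)

# k-means groups pixels with similar local orientation into facies.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))   # pixels per facies cluster
```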

  3. Discrete range clustering using Monte Carlo methods

    NASA Technical Reports Server (NTRS)

    Chatterji, G. B.; Sridhar, B.

    1993-01-01

    For automatic obstacle avoidance guidance during rotorcraft low-altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range measurements only at sparse locations, and therefore surface fitting techniques to fill the gaps in the range measurements are not directly applicable. Both for automatic guidance and for displays aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer a significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present the results of applying these algorithms to a laboratory image sequence and a helicopter flight sequence.

  4. Automatized Photometric Monitoring of Active Galactic Nuclei with the 46cm Telescope of the Wise Observatory

    NASA Astrophysics Data System (ADS)

    Pozo Nuñez, Francisco; Chelouche, Doron; Kaspi, Shai; Niv, Saar

    2017-09-01

    We present the first results of an ongoing variability monitoring program of active galactic nuclei (AGNs) using the 46 cm telescope of the Wise Observatory in Israel. The telescope has a field of view of 1.25° × 0.84° and is specially equipped with five narrowband filters at 4300, 5200, 5700, 6200, and 7000 Å to perform photometric reverberation mapping studies of the central engine of AGNs. The program aims to observe a sample of 27 AGNs (V < 17 mag) selected according to tentative continuum and line time delay measurements obtained in previous works. We describe the autonomous operation of the telescope together with the fully automatic pipeline used to achieve high-performance unassisted observations, data reduction, and light curve extraction using different photometric methods. The science verification data presented here demonstrate the performance of the monitoring program, in particular for efficient photometric reverberation mapping of AGNs, with additional capabilities for complementary studies of other transient and variable phenomena such as variable stars.

  5. Man-Made Object Extraction from Remote Sensing Imagery by Graph-Based Manifold Ranking

    NASA Astrophysics Data System (ADS)

    He, Y.; Wang, X.; Hu, X. Y.; Liu, S. H.

    2018-04-01

    The automatic extraction of man-made objects from remote sensing imagery is useful in many applications. This paper proposes an algorithm for extracting man-made objects automatically by integrating a graph model with the manifold ranking algorithm. Initially, we estimate an a priori value for man-made objects using symmetry and contrast features. A graph model is established to represent the spatial relationships among pre-segmented superpixels, which are used as the graph nodes. Multiple characteristics, namely colour, texture and main direction, are used to compute the weights of adjacent nodes. Manifold ranking effectively explores the relationships among all nodes in the feature space, as well as the initial query assignment; thus, it is applied to generate a ranking map, which indicates the scores of the man-made objects. The man-made objects are then segmented on the basis of the ranking map. Two typical segmentation algorithms are compared with the proposed algorithm. Experimental results show that the proposed algorithm can extract man-made objects with a high recognition rate and a low omission rate.
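
    Manifold ranking itself has a closed form, f = (I - alpha*S)^{-1} y, with S the symmetrically normalised graph affinity (Zhou et al.). A toy sketch on a five-node superpixel graph (weights and prior invented):

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    """Closed-form manifold ranking: f = (I - alpha*S)^-1 y, where
    S = D^-1/2 W D^-1/2 is the normalised affinity matrix."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    return np.linalg.solve(np.eye(len(W)) - alpha * S, y)

# Toy graph of 5 superpixels: nodes 0-2 form one object, 3-4 another.
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, .1, 0],
              [0, 0, .1, 0, 1],
              [0, 0, 0, 1, 0]], float)
y = np.array([1.0, 0, 0, 0, 0])   # prior: superpixel 0 looks man-made
print(manifold_ranking(W, y).round(3))  # scores decay across the weak link
```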

  6. Generalized Self-Organizing Maps for Automatic Determination of the Number of Clusters and Their Multiprototypes in Cluster Analysis.

    PubMed

    Gorzalczany, Marian B; Rudzinski, Filip

    2017-06-07

    This paper presents a generalization of self-organizing maps with 1-D neighborhoods (neuron chains) that can be effectively applied to complex cluster analysis problems. The essence of the generalization is the introduction of mechanisms that allow the neuron chain, during learning, to disconnect into subchains, to reconnect some of the subchains again, and to dynamically regulate the overall number of neurons in the system. These features enable the network, working in a fully unsupervised way (i.e., using unlabeled data without a predefined number of clusters), to automatically generate collections of multiprototypes that are able to represent a broad range of clusters in data sets. First, the operation of the proposed approach is illustrated on some synthetic data sets. Then, the technique is tested using several real-life, complex, and multidimensional benchmark data sets available from the University of California at Irvine (UCI) Machine Learning repository and the Knowledge Extraction based on Evolutionary Learning (KEEL) data set repository. A sensitivity analysis of our approach to changes in control parameters and a comparative analysis with an alternative approach are also performed.

  7. Resolving Carbonate Platform Geometries on the Island of Bonaire, Caribbean Netherlands through Semi-Automatic GPR Facies Classification

    NASA Astrophysics Data System (ADS)

    Bowling, R. D.; Laya, J. C.; Everett, M. E.

    2018-05-01

    The study of exposed carbonate platforms provides observational constraints on regional tectonics and sea-level history. In this work, Miocene-aged carbonate platform units of the Seroe Domi Formation are investigated on the island of Bonaire, located in the Southern Caribbean. Ground penetrating radar (GPR) was used to probe near-surface structural geometries associated with these lithologies. The single cross-island transect described herein allowed for continuous mapping of geologic structures on kilometer length scales. Numerical analysis was applied to the data in the form of k-means clustering of structure-parallel vectors derived from image structure tensors. This methodology enables radar facies along the survey transect to be semi-automatically mapped. The results provide subsurface evidence to support previous surficial and outcrop observations, and reveal complex stratigraphy within the platform. From the GPR data analysis, progradational clinoform geometries were observed on the northeast side of the island, which support the tectonic and depositional trends of the region. Furthermore, several leeward-side radar facies are identified which correlate to environments of deposition conducive to dolomitization via reflux mechanisms.

  8. High-order space charge effects using automatic differentiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reusch, M.F.; Bruhwiler, D.L.

    1997-02-01

    The Northrop Grumman TOPKARK code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used either to track an array of particles or to construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past, and we present a new method for modeling space charge forces to high order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor series times a Gaussian. The variables in the argument of the Gaussian are normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free space boundary conditions. An example problem is presented to illustrate our approach.
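
    The operator-overloading idea is easiest to see with first-order dual numbers: overloaded arithmetic lets unmodified tracking code carry derivatives along, which is the same mechanism extended to much higher order to build Taylor maps. A toy Python sketch (TOPKARK itself works in Fortran 90; the beamline element here is invented):

```python
class Dual:
    """First-order dual number: value + eps*deriv. Overloading arithmetic
    lets unmodified tracking code propagate derivatives automatically."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def thin_lens_kick(x, xp, k=0.3):
    """Toy beamline element: position unchanged, angle kicked by -k*x."""
    return x, xp + (-k) * x

x0 = Dual(1.0, 1.0)                 # seed the derivative d/dx0
x1, xp1 = thin_lens_kick(x0, Dual(0.5))
print(x1.der, xp1.der)              # first column of the map: 1.0, -0.3
```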

  9. Global Rapid Flood Mapping System with Spaceborne SAR Data

    NASA Astrophysics Data System (ADS)

    Yun, S. H.; Owen, S. E.; Hua, H.; Agram, P. S.; Fattahi, H.; Liang, C.; Manipon, G.; Fielding, E. J.; Rosen, P. A.; Webb, F.; Simons, M.

    2017-12-01

    As part of the Advanced Rapid Imaging and Analysis (ARIA) project for Natural Hazards at NASA's Jet Propulsion Laboratory and the California Institute of Technology, we have developed an automated system that produces derived products for flood extent map generation using spaceborne SAR data. The system takes as user input area-of-interest polygons and a time window for the SAR data search (pre- and post-event). The system then automatically searches for and downloads SAR data, processes the data to produce coregistered SAR image pairs, and generates log amplitude ratio images from each pair. Currently the system is automated to support SAR data from the European Space Agency's Sentinel-1A/B satellites. We have used the system to produce flood extent maps from Sentinel-1 SAR data for the May 2017 Sri Lanka floods, which killed more than 200 people and displaced about 600,000. Our flood extent maps were delivered to the Red Cross to support response efforts. Earlier, we also responded to the historic August 2016 Louisiana floods in the United States, which claimed 13 lives and caused over $10 billion in property damage. For this event, we made synchronized observations from space, air, and ground in close collaboration with USGS and NOAA. The USGS field crews acquired ground observation data, and NOAA acquired high-resolution airborne optical imagery within a window of +/-2 hours of the SAR data acquisition by JAXA's ALOS-2 satellite. The USGS coordinates of flood water boundaries were used to calibrate our flood extent map derived from the ALOS-2 SAR data, and the map was delivered to FEMA for estimating the number of households affected. Based on the lessons learned from this response effort, we customized the ARIA system automation for rapid flood mapping and developed a mobile-friendly web app that can easily be used in the field for data collection. Rapid automatic generation of SAR-based global flood maps, calibrated with independent observations from ground, air, and space, will provide reliable snapshots of the extent of many flooding events. SAR missions with easy data access, such as Sentinel-1 and NASA's upcoming NISAR mission, combined with the ARIA system, will enable the creation of a library of flood extent maps that can soon support the flood modeling community by providing observation-based constraints.
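
    The core derived product is simple to sketch: the log amplitude ratio of a coregistered pre/post pair, thresholded where backscatter drops over newly open water. Data and the threshold below are synthetic placeholders:

```python
import numpy as np

def flood_mask(pre_amp, post_amp, threshold_db=-6.0):
    """Log amplitude ratio of a coregistered SAR pair: smooth open water
    backscatters little, so newly flooded pixels show a strong amplitude
    drop, i.e. a large negative ratio (threshold is illustrative)."""
    eps = 1e-6
    ratio_db = 20.0 * np.log10((post_amp + eps) / (pre_amp + eps))
    return ratio_db < threshold_db

rng = np.random.default_rng(5)
pre = rng.rayleigh(1.0, (100, 100))   # speckle-like pre-event amplitudes
post = pre.copy()
post[40:60, 40:60] *= 0.1             # synthetic inundated patch
print(int(flood_mask(pre, post).sum()), "flooded pixels")
```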

  10. Use of remote sensing for land use policy formulation

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Multispectral scanning, infrared imagery, thematic mapping, and spectroradiometry from LANDSAT, GOES, and ground-based instruments are being used to determine conifer distribution, maximum and minimum temperatures, topography, and crop diseases in Michigan's Lower Peninsula. Image interpretation and automatic digital processing of LANDSAT data are employed to classify and map the coniferous forests. Radiant temperature data from GOES were compared to temperature readings from the climatological station network. Digital data from LANDSAT are being used to develop techniques for detecting, monitoring, and modeling land surface change. Improved reflectance signatures from spectroradiometry aided the detection of viral diseases in blueberry fields and vineyards. Soil survey maps from aerial reconnaissance are included, as well as information on education, conferences, and awards.

  11. AIRS Maps from Space Processing Software

    NASA Technical Reports Server (NTRS)

    Thompson, Charles K.; Licata, Stephen J.

    2012-01-01

    This software package processes Atmospheric Infrared Sounder (AIRS) Level 2 swath standard product geophysical parameters, and generates global, colorized, annotated maps. It automatically generates daily and multi-day averaged colorized and annotated maps of various AIRS Level 2 swath geophysical parameters. It also generates AIRS input data sets for Eyes on Earth, Puffer-sphere, and Magic Planet. This program is tailored to AIRS Level 2 data products. It re-projects data into 1/4-degree grids that can be combined and averaged for any number of days. The software scales and colorizes global grids utilizing AIRS-specific color tables, and annotates images with title and color bar. This software can be tailored for use with other swath data products for the purposes of visualization.

  12. Dynamic Quantitative Trait Locus Analysis of Plant Phenomic Data.

    PubMed

    Li, Zitong; Sillanpää, Mikko J

    2015-12-01

    Advanced platforms have recently become available for automatic and systematic quantification of plant growth and development. These new techniques can efficiently produce multiple measurements of phenotypes over time, and introduce time as an extra dimension to quantitative trait locus (QTL) studies. Functional mapping utilizes a class of statistical models for identifying QTLs associated with the growth characteristics of interest. A major benefit of functional mapping is that it integrates information over multiple timepoints, and therefore could increase the statistical power for QTL detection. We review the current development of computationally efficient functional mapping methods which provide invaluable tools for analyzing large-scale timecourse data that are readily available in our post-genome era.

  13. An adaptive spatio-temporal Gaussian filter for processing cardiac optical mapping data.

    PubMed

    Pollnow, S; Pilia, N; Schwaderlapp, G; Loewe, A; Dössel, O; Lenis, G

    2018-06-04

    Optical mapping is widely used as a tool to investigate cardiac electrophysiology in ex vivo preparations. Digital filtering of fluorescence-optical data is an important requirement for robust subsequent data analysis, and it remains a challenge when processing data acquired from thin mammalian myocardium. We therefore propose and investigate an adaptive spatio-temporal Gaussian filter for processing optical mapping signals from such tissue, which usually has a low signal-to-noise ratio (SNR). We demonstrate how the filtering parameters can be chosen automatically without additional user input. For a systematic comparison of this filter with standard filtering methods from the literature, we generated synthetic signals representing optical recordings from the atrial myocardium of a rat heart with varying SNR. Furthermore, all filter methods were applied to experimental data from an ex vivo setup. Our filter outperformed the other methods in local activation time detection at SNRs below 3 dB, which are typical noise levels in these signals. At higher SNRs, the proposed filter performed slightly worse than the methods from the literature. In conclusion, the proposed adaptive spatio-temporal Gaussian filter is an appropriate tool for investigating fluorescence-optical data with low SNR. The spatio-temporal filter parameters are adapted automatically, in contrast to the other investigated filters.
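
    The filter family is standard: a Gaussian kernel over (time, row, column) whose widths grow as the SNR falls. The sigma/SNR relation below is an illustrative stand-in for the paper's automatic parameter choice, not the published rule:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_st_gauss(stack, snr_db):
    """Spatio-temporal Gaussian smoothing of optical mapping data
    (frames, rows, cols): widen the kernel as the estimated SNR drops.
    The sigma/SNR mapping here is an invented placeholder."""
    sigma_t = np.clip(6.0 - 0.5 * snr_db, 0.5, 6.0)   # temporal sigma
    sigma_s = np.clip(3.0 - 0.25 * snr_db, 0.5, 3.0)  # spatial sigma
    return gaussian_filter(stack, sigma=(sigma_t, sigma_s, sigma_s))

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 5 * t)[:, None, None] * np.ones((200, 16, 16))
noisy = signal + rng.normal(0, 1.0, signal.shape)     # low-SNR recording
print(adaptive_st_gauss(noisy, snr_db=-3.0).shape)
```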

  14. Tier-scalable reconnaissance: the challenge of sensor optimization, sensor deployment, sensor fusion, and sensor interoperability

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; George, Thomas; Tarbell, Mark A.

    2007-04-01

    Robotic reconnaissance operations are called for in extreme environments, not only those such as space, including planetary atmospheres, surfaces, and subsurfaces, but also in potentially hazardous or inaccessible operational areas on Earth, such as mine fields, battlefield environments, enemy-occupied territories, terrorist-infiltrated environments, or areas that have been exposed to biochemical agents or radiation. Real-time reconnaissance enables the identification and characterization of transient events. A fundamentally new mission concept for tier-scalable reconnaissance of operational areas, originated by Fink et al., is aimed at replacing the engineering- and safety-constrained mission designs of the past. The tier-scalable paradigm integrates multi-tier (orbit, atmosphere, surface/subsurface) and multi-agent (satellite, UAV/blimp, surface/subsurface sensing platforms) hierarchical mission architectures, introducing not only mission redundancy and safety, but also enabling and optimizing intelligent, less constrained, and distributed real-time reconnaissance. Given the mass, size, and power constraints faced by such a multi-platform approach, this is an ideal application scenario for a diverse set of MEMS sensors. To support such mission architectures, a high degree of operational autonomy is required. Essential elements of such operational autonomy are: (1) automatic mapping of an operational area from different vantage points (including vehicle health monitoring); (2) automatic feature extraction and target/region-of-interest identification within the mapped operational area; and (3) automatic target prioritization for close-up examination. These requirements imply the optimal deployment of MEMS sensors and sensor platforms, sensor fusion, and sensor interoperability.

  15. A prostate CAD system based on multiparametric analysis of DCE T1-w, and DW automatically registered images

    NASA Astrophysics Data System (ADS)

    Giannini, Valentina; Vignati, Anna; Mazzetti, Simone; De Luca, Massimo; Bracco, Christian; Stasi, Michele; Russo, Filippo; Armando, Enrico; Regge, Daniele

    2013-02-01

    Prostate-specific antigen (PSA)-based screening reduces the rate of death from prostate cancer (PCa) by 31%, but this benefit is associated with a high risk of overdiagnosis and overtreatment. As transrectal ultrasound-guided prostate biopsy, the standard procedure for prostate histological sampling, has a sensitivity of 77% with a considerable false-negative rate, more accurate methods are needed to detect or rule out significant disease. Prostate magnetic resonance imaging has the potential to improve the specificity of PSA-based screening scenarios as a non-invasive detection tool, in particular by exploiting the combination of anatomical and functional information in a multiparametric framework. The purpose of this study was to describe a computer-aided diagnosis (CAD) method that automatically produces a malignancy likelihood map by combining information from dynamic contrast-enhanced MR images and diffusion-weighted images. The CAD system consists of multiple sequential stages, from a preliminary registration of images of different sequences, to correct for susceptibility deformation and/or movement artifacts, to a Bayesian classifier, which fuses all the extracted features into a probability map. The promising results (AUROC = 0.87) should be validated on a larger dataset, but they suggest that voxel-wise discrimination between benign and malignant tissues is feasible with good performance. The method can help improve the diagnostic accuracy of the radiologist, reduce reader variability and speed up reading time by automatically highlighting regions suspicious for cancer.

  16. Neural Network Classification of Receiver Functions as a Step Towards Automatic Crustal Parameter Determination

    NASA Astrophysics Data System (ADS)

    Jemberie, A.; Dugda, M. T.; Reusch, D.; Nyblade, A.

    2006-12-01

    Neural networks are decision-making mathematical/engineering tools which, if trained properly, can automatically (and objectively) do jobs that normally require particular expertise and/or tedious repetition. Here we explore two techniques from the field of artificial neural networks (ANNs) that seek to reduce the time requirements and increase the objectivity of quality control (QC) and event identification (EI) on seismic datasets. We apply multilayer feed-forward (FF) artificial neural networks and self-organizing maps (SOMs) in combination with Hk stacking of receiver functions, in an attempt to test the usefulness of automatic classification of receiver functions for crustal parameter determination. Feed-forward ANNs (FFNNs) are a supervised classification tool, while self-organizing maps (SOMs) provide unsupervised classification of large, complex geophysical data sets into a fixed number of distinct generalized patterns or modes. Hk stacking is a methodology for stacking receiver functions based on the relative arrival times of the P-to-S converted phase and the next two reverberations, to determine crustal thickness H and the Vp-to-Vs ratio (k). We use receiver functions from teleseismic events recorded by the 2000-2002 Ethiopia Broadband Seismic Experiment. Preliminary results of applying FFNNs and Hk stacking of receiver functions for automatic receiver function classification, as a step towards automatic crustal parameter determination, look encouraging. After training, the FFNN could separate the best receiver functions from bad ones with a success rate of about 75 to 95%. Applying Hk stacking to the receiver functions classified by the FFNN as the best set, we obtained a crustal thickness and Vp/Vs ratio of 31±4 km and 1.75±0.05, respectively, for the crust beneath station ARBA in the Main Ethiopian Rift. For comparison, we applied Hk stacking to the receiver functions that we ourselves classified as the best set and found a crustal thickness and Vp/Vs ratio of 31±2 km and 1.75±0.02, respectively.

  17. Semi-automatic Data Integration using Karma

    NASA Astrophysics Data System (ADS)

    Garijo, D.; Kejriwal, M.; Pierce, S. A.; Houser, P. I. Q.; Peckham, S. D.; Stanko, Z.; Hardesty Lewis, D.; Gil, Y.; Pennington, D. D.; Knoblock, C.

    2017-12-01

    Data integration applications are ubiquitous in scientific disciplines. A state-of-the-art data integration system accepts both a set of data sources and a target ontology as input, and semi-automatically maps the data sources in terms of concepts and relationships in the target ontology. Mappings can be both complex and highly domain-specific. Once such a semantic model, expressing the mapping using community-wide standard, is acquired, the source data can be stored in a single repository or database using the semantics of the target ontology. However, acquiring the mapping is a labor-prone process, and state-of-the-art artificial intelligence systems are unable to fully automate the process using heuristics and algorithms alone. Instead, a more realistic goal is to develop adaptive tools that minimize user feedback (e.g., by offering good mapping recommendations), while at the same time making it intuitive and easy for the user to both correct errors and to define complex mappings. We present Karma, a data integration system that has been developed over multiple years in the information integration group at the Information Sciences Institute, a research institute at the University of Southern California's Viterbi School of Engineering. Karma is a state-of-the-art data integration tool that supports an interactive graphical user interface, and has been featured in multiple domains over the last five years, including geospatial, biological, humanities and bibliographic applications. Karma allows a user to import their own ontology and datasets using widely used formats such as RDF, XML, CSV and JSON, can be set up either locally or on a server, supports a native backend database for prototyping queries, and can even be seamlessly integrated into external computational pipelines, including those ingesting data via streaming data sources, Web APIs and SQL databases. We illustrate a Karma workflow at a conceptual level, along with a live demo, and show use cases of Karma specifically for the geosciences. In particular, we show how Karma can be used intuitively to obtain the mapping model between case study data sources and a publicly available and expressive target ontology that has been designed to capture a broad set of concepts in geoscience with standardized, easily searchable names.

  18. A new gradient shimming method based on undistorted field map of B0 inhomogeneity.

    PubMed

    Bao, Qingjia; Chen, Fang; Chen, Li; Song, Kan; Liu, Zao; Liu, Chaoyang

    2016-04-01

    Most existing gradient shimming methods for NMR spectrometers estimate field maps that resolve the B0 inhomogeneity spatially from dual gradient-echo (GRE) images acquired at different echo times. However, the distortions induced by B0 inhomogeneity, which always exist in the GRE images, can result in estimated field maps that are distorted in both geometry and intensity, leading to inaccurate shimming. This work proposes a new gradient shimming method based on an undistorted field map of B0 inhomogeneity obtained by a more accurate field map estimation technique. Compared to the traditional field map estimation method, the new method exploits both the positive and negative polarities of the frequency-encoding gradient to eliminate the distortions caused by B0 inhomogeneity in the field map. The corresponding automatic post-processing procedure is then introduced to obtain an undistorted B0 field map, based on knowledge of the invariant characteristics of the B0 inhomogeneity and the variant polarity of the encoding gradient. Experimental results on both simulated and real gradient shimming tests demonstrate the high performance of the new method.

  19. Effects of partitioning and scheduling sparse matrix factorization on communication and load balance

    NASA Technical Reports Server (NTRS)

    Venugopal, Sesh; Naik, Vijay K.

    1991-01-01

    A block based, automatic partitioning and scheduling methodology is presented for sparse matrix factorization on distributed memory systems. Using experimental results, this technique is analyzed for communication and load imbalance overhead. To study the performance effects, these overheads were compared with those obtained from a straightforward 'wrap mapped' column assignment scheme. All experimental results were obtained using test sparse matrices from the Harwell-Boeing data set. The results show that there is a communication and load balance tradeoff. The block based method results in lower communication cost whereas the wrap mapped scheme gives better load balance.

  20. Calculus of nonrigid surfaces for geometry and texture manipulation.

    PubMed

    Bronstein, Alexander; Bronstein, Michael; Kimmel, Ron

    2007-01-01

    We present a geometric framework for automatically finding intrinsic correspondence between three-dimensional nonrigid objects. We model object deformation as near isometries and find the correspondence as the minimum-distortion mapping. A generalization of multidimensional scaling is used as the numerical core of our approach. As a result, we obtain the possibility to manipulate the extrinsic geometry and the texture of the objects as vectors in a linear space. We demonstrate our method on the problems of expression-invariant texture mapping onto an animated three-dimensional face, expression exaggeration, morphing between faces, and virtual body painting.

  1. KML Super Overlay to WMS Translator

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2007-01-01

    This translator is a server-based application that automatically generates KML super overlay configuration files required by Google Earth for map data access via the Open Geospatial Consortium WMS (Web Map Service) standard. The translator uses a set of URL parameters that mirror the WMS parameters as much as possible, and it also can generate a super overlay subdivision of any given area that is only loaded when needed, enabling very large areas of coverage at very high resolutions. It can make almost any dataset available as a WMS service visible and usable in any KML application, without the need to reformat the data.

  2. Automated Transformation of CDISC ODM to OpenClinica.

    PubMed

    Gessner, Sophia; Storck, Michael; Hegselmann, Stefan; Dugas, Martin; Soto-Rey, Iñaki

    2017-01-01

    Due to the increasing use of electronic data capture systems for clinical research, the interest in saving resources by automatically generating and reusing case report forms in clinical studies is growing. OpenClinica, an open-source electronic data capture system enables the reuse of metadata in its own Excel import template, hampering the reuse of metadata defined in other standard formats. One of these standard formats is the Operational Data Model for metadata, administrative and clinical data in clinical studies. This work suggests a mapping from Operational Data Model to OpenClinica and describes the implementation of a converter to automatically generate OpenClinica conform case report forms based upon metadata in the Operational Data Model.

  3. Automatic airline baggage counting using 3D image segmentation

    NASA Astrophysics Data System (ADS)

    Yin, Deyu; Gao, Qingji; Luo, Qijun

    2017-06-01

    The number of bags needs to be checked automatically during baggage self-check-in. A fast airline baggage counting method is proposed in this paper, using image segmentation based on a height map projected from the scanned baggage 3D point cloud. There is a height drop at the actual edges of the baggage, so edges can be detected with an edge detection operator. Closed edge chains are then formed by linking edge lines through morphological processing. Finally, the number of connected regions segmented by the closed chains is taken as the baggage count. A multi-bag experiment performed with different placement modes demonstrates the validity of the method.
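
    The pipeline (edge detection on the height map, morphological linking, connected-component counting) can be sketched with scipy.ndimage on a synthetic height map; operators and thresholds below are illustrative, not the paper's:

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic height map (mm) projected from a 3D point cloud: flat belt,
# two bags as raised plateaus.
height = np.zeros((100, 160))
height[20:50, 15:70] = 250.0
height[55:90, 90:150] = 180.0

# Height drops at bag edges -> strong gradient magnitude.
edges = ndi.sobel(height, 0)**2 + ndi.sobel(height, 1)**2 > 1.0

# Morphological closing links broken edge chains into closed contours.
closed = ndi.binary_closing(edges, structure=np.ones((5, 5)))

# Count the connected interior regions enclosed by the edge chains.
interiors = ndi.binary_fill_holes(closed) & ~closed
_, n_bags = ndi.label(interiors)
print("bags counted:", n_bags)
```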

  4. Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics

    NASA Astrophysics Data System (ADS)

    Kohira, K.; Masuda, H.

    2017-09-01

    A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for infrastructure maintenance, map creation, and automated driving. However, the data size of point-clouds measured over large areas is enormous. A large storage capacity is required to store such point-clouds, and networks are heavily loaded when point-clouds are transferred. It is therefore desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. The images are then encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
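
    The essence of the method - ordering points into a 2D grid so that an image codec can exploit neighbour similarity - can be emulated with delta coding plus DEFLATE, the compression inside PNG. The sketch below uses synthetic ranges and emulates the codec rather than writing actual PNG files as the paper does:

```python
import zlib
import numpy as np

# Synthetic scan: range (mm) per (scan line, point index). On a real MMS,
# GPS time and the scanner mirror angle give the 2D pixel coordinates.
rng = np.random.default_rng(7)
ranges = (5000 + 50 * rng.standard_normal((1000, 720))).astype(np.uint16)

# Neighbouring pixels are similar, so delta coding plus DEFLATE shrinks
# the image well; PNG's scanline filters play the role of the manual
# delta step here.
delta = np.diff(ranges.astype(np.int32), axis=1, prepend=0).astype(np.int16)
raw, packed = ranges.tobytes(), zlib.compress(delta.tobytes(), 9)
print(f"raw {len(raw)} bytes -> compressed {len(packed)} bytes")
```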

  5. Hyperbolic Harmonic Mapping for Surface Registration

    PubMed Central

    Shi, Rui; Zeng, Wei; Su, Zhengyu; Jiang, Jian; Damasio, Hanna; Lu, Zhonglin; Wang, Yalin; Yau, Shing-Tung; Gu, Xianfeng

    2016-01-01

    Automatic computation of surface correspondence via harmonic map is an active research field in computer vision, computer graphics, and computational geometry. It may help document and understand physical and biological phenomena, and it also has broad applications in the biometrics, medical imaging, and motion capture industries. Although numerous studies have been devoted to harmonic map research, limited progress has been made in computing a diffeomorphic harmonic map on general topology surfaces with landmark constraints. This work addresses the problem by changing the Riemannian metric on the target surface to a hyperbolic metric, so that the harmonic mapping is guaranteed to be a diffeomorphism under landmark constraints. The computational algorithms are based on Ricci flow and nonlinear heat diffusion methods. The approach is general and robust. We employ our algorithm to study the constrained surface registration problem, which applies to both computer vision and medical imaging applications. Experimental results demonstrate that, by changing the Riemannian metric, the registrations are always diffeomorphic and achieve relatively high performance when evaluated with popular surface registration evaluation standards. PMID:27187948

  6. OZONE MONITORING, MAPPING, AND PUBLIC OUTREACH ...

    EPA Pesticide Factsheets

    The U.S. EPA has developed a handbook to help state and local government officials implement ozone monitoring, mapping, and outreach programs. The handbook, called Ozone Monitoring, Mapping, and Public Outreach: Delivering Real-Time Ozone Information to Your Community, provides step-by-step instructions on how to: design, site, operate, and maintain an ozone monitoring network; install, configure, and operate the Automatic Data Transfer System; use MapGen software to create still-frame and animated ozone maps; and develop an outreach plan to communicate information about real-time ozone levels and their health effects to the public. This handbook was developed by EPA's EMPACT program. The program takes advantage of new technologies that make it possible to provide environmental information to the public in near real time. EMPACT is working with the 86 largest metropolitan areas of the country to help communities in these areas collect, manage, and distribute time-relevant environmental information, and provide their residents with easy-to-understand information they can use in making informed, day-to-day decisions.

  7. Satellite freeze forecast system: Executive summary

    NASA Technical Reports Server (NTRS)

    Martsolf, J. D. (Principal Investigator)

    1983-01-01

    A satellite-based temperature monitoring and prediction system, consisting of a computer-controlled acquisition, processing, and display system and the ten automated weather stations called by that computer, was developed and transferred to the National Weather Service. This satellite freeze forecasting system (SFFS) acquires satellite data from either of two sources and surface data from ten sites, displays the observed data as color-coded thermal maps and as tables of automated-weather-station temperatures, computes predicted thermal maps on request and displays such maps either automatically or manually, archives the acquired data, and makes comparisons with historical data. Except for the last function, SFFS handles these tasks in a highly automated fashion if the user so directs. The predicted thermal maps are the result of two models: a physical energy budget of the soil-atmosphere interface, and a statistical relationship between the sites at which the physical model predicts temperatures and each pixel of the satellite thermal map.

  8. Urban forest topographical mapping using UAV LIDAR

    NASA Astrophysics Data System (ADS)

    Putut Ash Shidiq, Iqbal; Wibowo, Adi; Kusratmoko, Eko; Indratmoko, Satria; Ardhianto, Ronni; Prasetyo Nugroho, Budi

    2017-12-01

    Topographical data are highly needed by many parties, such as government institutions, mining companies, and agricultural sectors. Precision is not the only concern; acquisition time and data processing are also carefully considered. In relation to forest management, a high-accuracy topographic map is necessary for planning, close monitoring, and evaluating forest changes. One way to map topography quickly and precisely is to use a remote sensing system. In this study, we test high-resolution Light Detection and Ranging (LiDAR) data collected from an unmanned aerial vehicle (UAV) to map topography and differentiate vegetation classes by height in the urban forest area of the University of Indonesia (UI). Semi-automatic and manual classifications were applied to divide the point clouds into two main classes, namely ground and vegetation. Post-processing yielded 15,806,380 points, of which 2.39% were classified as ground.

  9. Segmentation algorithm on smartphone dual camera: application to plant organs in the wild

    NASA Astrophysics Data System (ADS)

    Bertrand, Sarah; Cerutti, Guillaume; Tougne, Laure

    2018-04-01

    In order to identify the species of a tree, botanists inspect its different organs: the leaves, the bark, the flowers, and the fruits. To develop an algorithm that identifies the species automatically, these objects of interest must be extracted from their complex natural environment. In this article, we focus on the segmentation of flowers and fruits and present a new segmentation method based on an active contour algorithm driven by two probability maps. The first map is constructed via the dual camera found on the back of the latest smartphones. The second map is produced with the help of a multilayer perceptron (MLP). Combining these two maps to drive the evolution of the object contour allows efficient segmentation of the organ from a natural background.
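
    A compact sketch of the map-fusion step is given below; morphological Chan-Vese from scikit-image is swapped in for the paper's active contour model, and the weight w is an assumed parameter.

        import numpy as np
        from skimage.segmentation import morphological_chan_vese

        def segment_organ(p_depth, p_mlp, w=0.5, n_iter=100):
            # Fuse the depth-based and MLP-based probability maps, then let a
            # region-based contour evolve on the fused evidence map
            fused = w * p_depth + (1.0 - w) * p_mlp
            return morphological_chan_vese(fused, n_iter).astype(bool)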

  10. Consistency functional map propagation for repetitive patterns

    NASA Astrophysics Data System (ADS)

    Wang, Hao

    2017-09-01

    Repetitive patterns appear frequently in both man-made and natural environments. Automatically and robustly detecting such patterns in an image is a challenging problem. We study repetitive pattern alignment by embedding a segmentation cue in a functional map model. However, this model cannot handle repetitive patterns directly because of their large photometric and geometric variations. We therefore propose a consistency functional map propagation (CFMP) algorithm that extends the functional map with dynamic propagation. The propagation model works in two steps. The first aligns the patterns within a local region, transferring segmentation functions among patterns; it can be cast as an L1-norm optimization problem. The second updates the template segmentation for the next round of pattern discovery by merging the transferred segmentation functions. Extensive experiments and comparative analyses demonstrate the encouraging performance of the proposed algorithm in detecting and segmenting repetitive patterns.

  11. Automatic mapping of strip mine operations from spacecraft data. [Ohio

    NASA Technical Reports Server (NTRS)

    Rogers, R. H. (Principal Investigator); Reed, L. E.; Pettyjohn, W. A.

    1974-01-01

    The author has identified the following significant results. Computer techniques were applied to process ERTS tapes acquired over coal mining operations in southeastern Ohio on 21 August 1972 and 3 September 1973. ERTS products obtained included geometrically correct map overlays, at scales from 1:24,000 to 1:250,000, showing stripped earth, partially reclaimed earth, water, and natural vegetation. Computer-generated tables listing the area covered by each land-water category in square kilometers were also produced. By comparing these mapping products, the study demonstrates the capability of ERTS to monitor changes in the extent of stripping and reclamation. NASA C-130 photography acquired on 7 September 1973, when compared with the ERTS products generated from the 3 September 1973 tape, established the categorization accuracy to be better than 90%. It is estimated that the stripping and reclamation maps and data were produced from the ERTS CCTs at a tenth of the cost of conventional techniques.

  12. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    NASA Astrophysics Data System (ADS)

    Vho, Alice; Bistacchi, Andrea

    2015-04-01

    A quantitative analysis of fault-rock distribution is of paramount importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation along faults at depth. Here we present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM). This workflow has been developed on a real case study: the strike-slip Gole Larghe Fault Zone (GLFZ), a fault zone exhumed from ca. 10 km depth, hosted in granitoid rocks of the Adamello batholith (Italian Southern Alps). Individual seismogenic slip surfaces generally show green cataclasites (cemented by the precipitation of epidote and K-feldspar from hydrothermal fluids) and more or less well preserved pseudotachylytes (black when well preserved, greenish to white when altered). First, a digital model of the outcrop is reconstructed with photogrammetric techniques, using a large number of high-resolution digital photographs processed with the VisualSFM software. By using high-resolution photographs, the DOM can have a much higher resolution than with LIDAR surveys, up to 0.2 mm/pixel. Image processing is then performed to map the fault-rock distribution with the ImageJ-Fiji package. Green cataclasites and epidote/K-feldspar veins can be separated from the host rock (tonalite) quite easily using spectral analysis; in particular, band ratios and principal component analysis have been tested successfully. Mapping the black pseudotachylyte veins is trickier because the differences between the pseudotachylyte and biotite spectral signatures are not appreciable. For this reason we tested different morphological processing tools aimed at identifying (and subtracting) the tiny biotite grains. We propose a solution based on binary images involving a combination of size and circularity thresholds. Comparing the results with manually segmented images, we noticed that major problems occur only when pseudotachylyte veins are very thin and discontinuous. After testing and refining the image-analysis processing on some typical images, we recorded a macro with ImageJ-Fiji that processes all the images for a given DOM. As a result, the three different types of rock can be semi-automatically mapped on large DOMs with a simple and efficient procedure. This enables quantitative analyses of fault-rock distribution and thickness, fault-trace roughness/curvature and length, fault-zone architecture, and alteration halos due to hydrothermal fluid-rock interaction. To improve our workflow, additional or different morphological operators could be integrated into our procedure to yield better resolution of small and thin pseudotachylyte veins (e.g. perimeter/area ratio).
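
    The two filtering ideas in the abstract, a band ratio for the greenish fault rocks and size/circularity thresholds of the kind used to reject tiny biotite grains, can be combined in a few lines of Python; all threshold values below are assumptions.

        import numpy as np
        from skimage import measure

        def map_green_fault_rocks(rgb, ratio_thresh=1.15,
                                  min_area_px=50, max_circularity=0.8):
            r = rgb[..., 0].astype(float)
            g = rgb[..., 1].astype(float)
            mask = (g / (r + 1e-6)) > ratio_thresh   # green/red band ratio
            labels = measure.label(mask)
            keep = np.zeros_like(mask, dtype=bool)
            for region in measure.regionprops(labels):
                circ = 4.0 * np.pi * region.area / (region.perimeter ** 2 + 1e-6)
                # Reject small, rounded blobs (biotite-like grains); keep
                # larger, elongated vein-like regions
                if region.area >= min_area_px and circ <= max_circularity:
                    keep[labels == region.label] = True
            return keep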

  13. Automatic identification of watercourses in flat and engineered landscapes by computing the skeleton of a LiDAR point cloud

    NASA Astrophysics Data System (ADS)

    Broersen, Tom; Peters, Ravi; Ledoux, Hugo

    2017-09-01

    Drainage networks play a crucial role in protecting land against floods. It is therefore important to have an accurate map of the watercourses that form the drainage network. Previous work on the automatic identification of watercourses was typically grid-based, focused on natural landscapes, and relied mostly on the slope and curvature of the terrain. In this paper we focus on areas characterised by low-lying, flat, and engineered landscapes, such as those typical of the Netherlands. We propose a new methodology to identify watercourses automatically from elevation data, using solely a raw classified LiDAR point cloud as input. We show that by computing a skeleton of the point cloud twice, once in 2D and once in 3D, and by using the properties of the skeletons, we can identify most of the watercourses. We implemented our methodology and tested it on three different soil types around Utrecht, the Netherlands. Compared to a reference dataset that was obtained semi-automatically, we were able to detect 98% of the watercourses for one soil type, and around 75% in the worst case.

  14. Automatic depth grading tool to successfully adapt stereoscopic 3D content to digital cinema and home viewing environments

    NASA Astrophysics Data System (ADS)

    Thébault, Cédric; Doyen, Didier; Routhier, Pierre; Borel, Thierry

    2013-03-01

    To ensure an immersive yet comfortable experience, significant work is required during post-production to adapt stereoscopic 3D (S3D) content to the targeted display and its environment. On the one hand, the content needs to be reconverged using horizontal image translation (HIT) so as to harmonize depth across shots. On the other hand, specific re-convergence is required to prevent edge violation, and floating windows need to be positioned depending on the viewing conditions. To simplify this time-consuming work, we propose a depth grading tool that automatically adapts S3D content to digital cinema or home viewing environments. Based on a disparity map, a stereo point of interest in each shot is evaluated automatically. This point of interest is used for depth matching, i.e. positioning the objects of interest of consecutive shots in the same plane so as to reduce visual fatigue. The tool adapts the re-convergence to avoid edge violation, hyper-convergence, and hyper-divergence. Floating windows are also positioned automatically. The method has been tested on various types of S3D content, and the results have been validated by a stereographer.

  15. Model-Based Reasoning: Using Visual Tools to Reveal Student Learning

    ERIC Educational Resources Information Center

    Luckie, Douglas; Harrison, Scott H.; Ebert-May, Diane

    2011-01-01

    Using visual models is common in science and should become more common in classrooms. Our research group has developed and completed studies on the use of a visual modeling tool, the Concept Connector. This modeling tool consists of an online concept mapping Java applet that has automatic scoring functions we refer to as Robograder. The Concept…

  16. Showing Automatically Generated Students' Conceptual Models to Students and Teachers

    ERIC Educational Resources Information Center

    Perez-Marin, Diana; Pascual-Nieto, Ismael

    2010-01-01

    A student conceptual model can be defined as a set of interconnected concepts associated with an estimation value that indicates how well these concepts are used by the students. It can model just one student or a group of students, and can be represented as a concept map, conceptual diagram or one of several other knowledge representation…

  17. Semi-Automatic Modelling of Building Façades with Shape Grammars Using Historic Building Information Modelling

    NASA Astrophysics Data System (ADS)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can subsequently be refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed, conservation documents such as plans, sections, elevations, and 3D views can be generated automatically for conservation projects.

  18. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones.

    PubMed

    Sohn, Bong-Soo

    2017-03-11

    This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing.

  19. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones

    PubMed Central

    Sohn, Bong-Soo

    2017-01-01

    This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing. PMID:28287487
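
    A toy version of the depth pipeline described above is sketched below; the weights, kernel sizes, and focal-plane parameters are all assumed values.

        import numpy as np
        from scipy import ndimage as ndi

        def bas_relief_depth(depth, rgb, k_base=0.2, k_detail=0.05,
                             focus=0.5, dof=0.2):
            # Base map: depth normalised and compressed to a shallow range
            d = (depth - depth.min()) / (np.ptp(depth) + 1e-6)
            base = d * k_base
            # Detail map: high-pass of image intensities carries scene detail
            gray = rgb.mean(axis=-1) / 255.0          # assumes 8-bit RGB input
            detail = (gray - ndi.gaussian_filter(gray, 3.0)) * k_detail
            relief = base + detail                    # blend shape and detail
            # Depth-of-field: blur more, the farther from the focal plane
            blurred = ndi.gaussian_filter(relief, 2.0)
            w = np.clip(np.abs(d - focus) / dof, 0.0, 1.0)
            return (1.0 - w) * relief + w * blurred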

  20. Symbolic, Nonsymbolic and Conceptual: An Across-Notation Study on the Space Mapping of Numerals.

    PubMed

    Zhang, Yu; You, Xuqun; Zhu, Rongjuan

    2016-07-01

    Previous studies suggested that there are interconnections between the two numeral modalities of symbolic notation and nonsymbolic notation (arrays of dots); both differences and similarities in the processing and representation of the two modalities have been found in previous research. However, whether there are differences between the spatial representation and numeral-space mapping of the two numeral modalities has remained uninvestigated. The present study examines whether such differences exist; in particular, how zero, as both a symbolic magnitude numeral and a nonsymbolic conceptual numeral, maps onto space, and whether the mapping happens automatically at an early stage of numeral information processing. The results of the two experiments demonstrate that the low-level processing of symbolic numerals including zero, and of nonsymbolic numerals except zero, can map onto space, whereas the low-level processing of nonsymbolic zero as a semantic conceptual numeral cannot, indicating the special status of zero in the numeral domain. The present study indicates that the processing of non-semantic numerals can map onto space, whereas semantic conceptual numerals cannot. © The Author(s) 2016.

  1. Detection of Thermal Erosion Gullies from High-Resolution Images Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Huang, L.; Liu, L.; Jiang, L.; Zhang, T.; Sun, Y.

    2017-12-01

    Thermal erosion gullies, one type of thermokarst landform, develop due to thawing of ice-rich permafrost. Mapping the location and extent of thermal erosion gullies can help in understanding the spatial distribution of thermokarst landforms and their temporal evolution. Remote sensing images provide an effective way to map thermokarst landforms, especially thermokarst lakes. However, thermal erosion gullies are challenging to map from remote sensing images due to their small sizes and significant variations in geometric and radiometric properties. It is feasible to identify these features manually, as a few previous studies have done, but manual methods are labor-intensive and therefore cannot be used for a large study area. In this work, we conduct automatic mapping of thermal erosion gullies from high-resolution images using deep learning. Our study area is located on Eboling Mountain (Qinghai, China), where at least 20 thermal erosion gullies are well developed within a 6 km2 peatland area underlain by ice-rich permafrost. The image used is a 15-cm-resolution Digital Orthophoto Map (DOM) generated in July 2016. First, we extracted 14 gully patches and ten non-gully patches as training data and performed image augmentation. Next, we fine-tuned the pre-trained model of DeepLab, a deep-learning algorithm for semantic image segmentation based on Deep Convolutional Neural Networks. Then we performed inference on the whole DOM and obtained intermediate results as polygons for all identified gullies. Finally, we removed misidentified polygons based on a few pre-set criteria on the size and shape of each polygon. Our final results include 42 polygons. Validated against field measurements using GPS, most of the gullies are detected correctly; there are 20 false detections due to the small number and low quality of the training images. We also found three new gullies that were missed in the field observations. This study shows that (1) despite a challenging mapping task, DeepLab can detect small, irregularly shaped thermal erosion gullies with high accuracy; (2) automatic detection is critical for mapping thermal erosion gullies, since manual mapping or field work may miss some targets even in a relatively small region; and (3) the quantity and quality of the training data are crucial for detection accuracy.
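
    A sketch of such a fine-tuning setup follows, using torchvision's DeepLabV3 as a stand-in for the paper's DeepLab configuration; the backbone choice and the two-class head are assumptions.

        import torch
        import torchvision

        def build_gully_segmenter(num_classes=2):
            # Start from a model pre-trained for semantic segmentation and
            # replace the final classifier so it predicts gully / non-gully
            model = torchvision.models.segmentation.deeplabv3_resnet50(
                weights="DEFAULT")
            model.classifier[4] = torch.nn.Conv2d(256, num_classes,
                                                  kernel_size=1)
            return model  # fine-tune on augmented gully / non-gully patches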

  2. Automated map sharpening by maximization of detail and connectivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.

    An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map: a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces: a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole, or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map-model correlation that can reproduce visual choices of optimally sharpened maps was used; the map-model correlation is calculated using a model with B factors (atomic displacement parameters; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.

  3. Automated map sharpening by maximization of detail and connectivity

    DOE PAGES

    Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.; ...

    2018-05-18

    An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map: a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces: a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole, or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map-model correlation that can reproduce visual choices of optimally sharpened maps was used; the map-model correlation is calculated using a model with B factors (atomic displacement parameters; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
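
    The two ingredients of the metric are easy to prototype: the surface area of an iso-contour enclosing a fixed volume fraction (detail) and the number of connected regions above that level (connectivity). How the paper combines them is not reproduced here; the ratio in the sketch below is an assumption.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage import measure

        def adjusted_surface_area_proxy(density_map, volume_fraction=0.2):
            # Iso-level chosen so the enclosed volume is a fixed fraction
            level = np.quantile(density_map, 1.0 - volume_fraction)
            # Detail: surface area of the iso-contour surface
            verts, faces, _, _ = measure.marching_cubes(density_map, level=level)
            area = measure.mesh_surface_area(verts, faces)
            # Connectivity: number of connected regions above the same level
            _, n_regions = ndi.label(density_map >= level)
            # The paper's exact combination rule differs; this ratio simply
            # rewards high detail and penalises fragmentation
            return area / max(n_regions, 1)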

  4. Automatic estimation of retinal nerve fiber bundle orientation in SD-OCT images using a structure-oriented smoothing filter

    NASA Astrophysics Data System (ADS)

    Ghafaryasl, Babak; Baart, Robert; de Boer, Johannes F.; Vermeer, Koenraad A.; van Vliet, Lucas J.

    2017-02-01

    Optical coherence tomography (OCT) yields high-resolution, three-dimensional images of the retina. A better understanding of retinal nerve fiber bundle (RNFB) trajectories, in combination with visual field data, may be used for future diagnosis and monitoring of glaucoma. However, manual tracing of these bundles is a tedious task. In this work, we present an automatic technique to estimate the orientation of RNFBs from volumetric OCT scans. Our method consists of several steps, starting with automatic segmentation of the RNFL. A stack of en face images around the posterior nerve fiber layer interface was then extracted, and the image showing the best visibility of RNFB trajectories was selected for further processing. After denoising the selected en face image, a semblance structure-oriented filter was applied to probe the strength of local linear structure in a discrete set of orientations, creating an orientation space. Gaussian filtering along the orientation axis in this space is used to find the dominant orientation. Next, a confidence map was created to supplement the estimated orientation. This confidence map was used as a pixel weight in normalized convolution to regularize the semblance filter response, after which a new orientation estimate can be obtained. Finally, after several iterations, an orientation field corresponding to the strongest local orientation was obtained. The RNFB orientations of six macular scans from three subjects were estimated. For all scans, visual inspection shows good agreement between the estimated orientation fields and the RNFB trajectories in the en face images. Additionally, a good correlation between the orientation fields of two scans of the same subject was observed. Our method was also applied to a larger field of view around the macula. Manual tracing of the RNFB trajectories shows good agreement with the streamlines obtained automatically by fiber tracking.

  5. MO-G-BRE-04: Automatic Verification of Daily Treatment Deliveries and Generation of Daily Treatment Reports for a MR Image-Guided Treatment Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, D; Li, X; Li, H

    2014-06-15

    Purpose: Two aims of this work were to develop a method to automatically verify treatment delivery accuracy immediately after patient treatment and to develop a comprehensive daily treatment report providing all information required for daily MR-IGRT review. Methods: After systematically analyzing the requirements for treatment delivery verification and understanding the information available from a novel MR-IGRT treatment machine, we designed a method that uses 1) treatment plan files, 2) delivery log files, and 3) dosimetric calibration information to verify the accuracy and completeness of daily treatment deliveries. The method verifies the correctness of the delivered treatment plans and beams, the beam segments, and, for each segment, the beam-on time and MLC leaf positions. Composite primary fluence maps are calculated from the MLC leaf positions and the beam-on time. Error statistics are calculated on the fluence difference maps between the plan and the delivery. We also designed the daily treatment delivery report to include all information required for MR-IGRT and weekly physics review: the plan and treatment fraction information, dose verification information, daily patient setup screen captures, and the treatment delivery verification results. Results: The parameters in the log files (e.g. MLC positions) were independently verified and deemed accurate and trustworthy. A computer program was developed to implement the automatic delivery verification and daily report generation. The program was tested and clinically commissioned with sufficient IMRT and 3D treatment delivery data, and the final version has been integrated into a commercial MR-IGRT treatment delivery system. Conclusion: A method was developed to automatically verify MR-IGRT treatment deliveries and generate daily treatment reports. Already in clinical use since December 2013, the system is able to facilitate delivery error detection, and to expedite physician daily IGRT review and physicist weekly chart review.
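
    A simplified model of the verification step follows: composite fluence maps accumulated from per-segment MLC apertures and beam-on times, then error statistics on the difference map. The log-file layout assumed here (dicts with 'mu', 'left', 'right') is illustrative, not the machine's actual format.

        import numpy as np

        def composite_fluence(segments, width_cm=40.0, res_cm=0.1):
            # Accumulate beam-on time over the open part of each leaf pair
            n_cols = int(width_cm / res_cm)
            x = (np.arange(n_cols) + 0.5) * res_cm - width_cm / 2.0
            n_rows = len(segments[0]["left"])
            fluence = np.zeros((n_rows, n_cols))
            for seg in segments:
                for row, (left, right) in enumerate(zip(seg["left"],
                                                        seg["right"])):
                    fluence[row, (x >= left) & (x <= right)] += seg["mu"]
            return fluence

        def fluence_error_stats(planned, delivered):
            # Error statistics on the plan-vs-delivery difference map
            diff = delivered - planned
            return {"max_abs": float(np.abs(diff).max()),
                    "rms": float(np.sqrt(np.mean(diff ** 2)))}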

  6. Evaluation of Pan-Sharpening Methods for Automatic Shadow Detection in High Resolution Images of Urban Areas

    NASA Astrophysics Data System (ADS)

    de Azevedo, Samara C.; Singh, Ramesh P.; da Silva, Erivaldo A.

    2017-04-01

    The finer spatial resolution of areas with tall objects within urban environments causes intense shadows that lead to incorrect information in urban mapping. Because of these shadows, it is difficult to automatically detect objects (such as buildings, trees, structures, and towers) and to estimate surface coverage from high-spatial-resolution imagery. Automatic shadow detection is therefore the first necessary preprocessing step for improving the outcome of many remote sensing applications, particularly for high-spatial-resolution images. Efforts have been made to exploit spatial and spectral information to evaluate such shadows. In this paper, we use morphological attribute filtering to extract contextual relations in an efficient multilevel approach for high-resolution images. The attribute selected for the filtering was the area estimated from the shadow spectral feature using the Normalized Saturation-Value Difference Index (NSVDI) derived from pan-sharpened images. In order to assess the quality of the fusion products and their influence on the shadow detection algorithm, we evaluated three pan-sharpening methods, Intensity-Hue-Saturation (IHS), Principal Components (PC), and Gram-Schmidt (GS), using the image quality measures Correlation Coefficient (CC), Root Mean Square Error (RMSE), Relative Dimensionless Global Error in Synthesis (ERGAS), and Universal Image Quality Index (UIQI). Experimental results on a WorldView II scene of São Paulo city (Brazil) show that the GS method provides good correlation with the original multispectral bands and no radiometric or contrast distortion. The automatic method using GS pan-sharpening for NSVDI generation clearly distinguishes shadow from non-shadow pixels, with an overall accuracy of more than 90%. The experimental results confirm the effectiveness of the proposed approach, which could be used for subsequent shadow removal and is reliable for object recognition, land-cover mapping, 3D reconstruction, etc., especially in developing countries where land use and land cover are changing rapidly and urban areas contain tall objects.
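
    The NSVDI itself is a one-liner once the image is in HSV space; shadows combine high saturation with low value. The threshold below is an assumed value, not the paper's.

        import numpy as np
        from skimage import color

        def nsvdi_shadow_mask(rgb, thresh=0.5):
            # NSVDI = (S - V) / (S + V), computed per pixel in HSV space
            hsv = color.rgb2hsv(rgb)
            s, v = hsv[..., 1], hsv[..., 2]
            nsvdi = (s - v) / (s + v + 1e-6)
            return nsvdi > thresh   # True = shadow candidate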

  7. A software tool to automatically assure and report daily treatment deliveries by a cobalt‐60 radiation therapy device

    PubMed Central

    Wooten, H. Omar; Green, Olga; Li, Harold H.; Liu, Shi; Li, Xiaoling; Rodriguez, Vivian; Mutic, Sasa; Kashani, Rojano

    2016-01-01

    The aims of this study were to develop a method for automatic and immediate verification of treatment delivery after each treatment fraction in order to detect and correct errors, and to develop a comprehensive daily report which includes delivery verification results, daily image‐guided radiation therapy (IGRT) review, and information for weekly physics reviews. After systematically analyzing the requirements for treatment delivery verification and understanding the available information from a commercial MRI‐guided radiotherapy treatment machine, we designed a procedure to use 1) treatment plan files, 2) delivery log files, and 3) beam output information to verify the accuracy and completeness of each daily treatment delivery. The procedure verifies the correctness of delivered treatment plan parameters including beams, beam segments and, for each segment, the beam‐on time and MLC leaf positions. For each beam, composite primary fluence maps are calculated from the MLC leaf positions and segment beam‐on time. Error statistics are calculated on the fluence difference maps between the plan and the delivery. A daily treatment delivery report is designed to include all required information for IGRT and weekly physics reviews including the plan and treatment fraction information, daily beam output information, and the treatment delivery verification results. A computer program was developed to implement the proposed procedure of the automatic delivery verification and daily report generation for an MRI guided radiation therapy system. The program was clinically commissioned. Sensitivity was measured with simulated errors. The final version has been integrated into the commercial version of the treatment delivery system. The method automatically verifies the EBRT treatment deliveries and generates the daily treatment reports. Already in clinical use for over one year, it is useful to facilitate delivery error detection, and to expedite physician daily IGRT review and physicist weekly chart review. PACS number(s): 87.55.km PMID:27167269

  8. Mapping of the WHO-ART terminology on Snomed CT to improve grouping of related adverse drug reactions.

    PubMed

    Alecu, Iulian; Bousquet, Cedric; Mougin, Fleur; Jaulent, Marie-Christine

    2006-01-01

    The WHO-ART and MedDRA terminologies used for coding adverse drug reactions (ADRs) do not provide formal definitions of terms. In order to improve groupings, we propose to map ADR terms to equivalent Snomed CT concepts through the UMLS Metathesaurus. We performed such mappings on WHO-ART terms and can classify them automatically using a description logic definition expressing their synonymies. Our gold standard was a set of 13 MedDRA special search categories restricted to ADR terms available in WHO-ART. The overlap of the groupings in the new structure of WHO-ART with the manually built MedDRA search categories showed a 71% success rate. We plan to improve our method in order to retrieve associative relations between WHO-ART terms.

  9. Automatic Perceptual Color Map Generation for Realistic Volume Visualization

    PubMed Central

    Silverstein, Jonathan C.; Parsad, Nigel M.; Tsirline, Victor

    2008-01-01

    Advances in computed tomography imaging technology and inexpensive high-performance computer graphics hardware are making high-resolution, full-color (24-bit) volume visualizations commonplace. However, many of the color maps used in volume rendering provide questionable value in knowledge representation, and non-perceptual maps can bias data analysis or even obscure information. These drawbacks, coupled with our need for realistic anatomical volume rendering for teaching and surgical planning, have motivated us to explore the auto-generation of color maps that combine natural colorization with the perceptual discriminating capacity of grayscale. As evidenced by the examples shown, which were created by the algorithm described, the merging of perceptually accurate and realistically colorized virtual anatomy appears to interpret volume-rendered patient data insightfully and to enhance it impartially. PMID:18430609

  10. Path planning for planetary rover using extended elevation map

    NASA Technical Reports Server (NTRS)

    Nakatani, Ichiro; Kubota, Takashi; Yoshimitsu, Tetsuo

    1994-01-01

    This paper describes a path planning method for planetary rovers to search for paths on planetary surfaces. A planetary rover is required to travel safely over long distances, for many days, over unfamiliar terrain. Hence, it is very important how planetary rovers process sensory information in order to understand the planetary environment and make decisions based on that information. As a new data structure for informational mapping, an extended elevation map (EEM) is introduced, which includes the effect of the size of the rover. The proposed path planning can then be conducted as if the rover were a point, while the size of the rover is automatically taken into account. The validity of the proposed method is verified by computer simulations.
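
    The EEM idea, expanding the terrain by the rover footprint so the planner can treat the rover as a point, corresponds to a grey-scale dilation; the circular footprint in this sketch is an assumption.

        import numpy as np
        from scipy import ndimage as ndi

        def extended_elevation_map(elevation, rover_radius_cells=3):
            # Circular footprint approximating the rover's ground contact
            r = rover_radius_cells
            yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
            footprint = (xx ** 2 + yy ** 2) <= r ** 2
            # Max elevation under the footprint at each cell, so a point
            # planner sees the worst case the whole rover body would meet
            return ndi.grey_dilation(elevation, footprint=footprint)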

  11. Model-based position correlation between breast images

    NASA Astrophysics Data System (ADS)

    Georgii, J.; Zöhrer, F.; Hahn, H. K.

    2013-02-01

    Nowadays, breast diagnosis is based on images of different projections and modalities, so that the sensitivity and specificity of the diagnosis can be improved. However, this burdens radiologists with finding corresponding locations in these data sets, a time-consuming task, especially since image resolution is increasing and thus more and more data have to be considered in the diagnosis. We therefore aim to support radiologists by automatically synchronizing cursor positions between different views of the breast. Specifically, we present an automatic approach to compute the spatial correlation between MLO and CC mammogram or tomosynthesis projections of the breast. It is based on pre-computed finite element simulations of generic breast models, which are adapted to the patient-specific breast using a contour mapping approach. Our approach is designed to be fully automatic and efficient, so that it can be implemented directly in existing multimodal breast workstations, and it is extendable to support other breast modalities in the future.

  12. Automatic Assembly of Combined Checking Fixture for Auto-Body Components Based on Fixture Elements Libraries

    NASA Astrophysics Data System (ADS)

    Jiang, Jingtao; Sui, Rendong; Shi, Yan; Li, Furong; Hu, Caiqi

    In this paper, 3-D models of combined fixture elements are designed, classified by function, and saved as supporting-element, jointing-element, basic-element, localization-element, clamping-element, and adjusting-element libraries. Automatic assembly of a 3-D combined checking fixture for an auto-body part is then presented based on modularization theory. In the virtual auto-body assembly space, locating-constraint mapping and assembly rule-based reasoning are used to calculate the positions of the modular elements according to the localization points and clamp points of the auto-body part. The auto-body part model is transformed from its own coordinate system to the virtual assembly space by a homogeneous transformation matrix. Automatic assembly of the different functional fixture elements and the auto-body part is implemented with API functions based on secondary development of UG. Practice has proven that the method in this paper is feasible and highly efficient.

  13. Automatic Recognition of Fetal Facial Standard Plane in Ultrasound Image via Fisher Vector.

    PubMed

    Lei, Baiying; Tan, Ee-Leng; Chen, Siping; Zhuo, Liu; Li, Shengli; Ni, Dong; Wang, Tianfu

    2015-01-01

    Acquisition of the standard plane is the prerequisite of biometric measurement and diagnosis during the ultrasound (US) examination. In this paper, a new algorithm is developed for the automatic recognition of the fetal facial standard planes (FFSPs) such as the axial, coronal, and sagittal planes. Specifically, densely sampled root scale invariant feature transform (RootSIFT) features are extracted and then encoded by Fisher vector (FV). The Fisher network with multi-layer design is also developed to extract spatial information to boost the classification performance. Finally, automatic recognition of the FFSPs is implemented by support vector machine (SVM) classifier based on the stochastic dual coordinate ascent (SDCA) algorithm. Experimental results using our dataset demonstrate that the proposed method achieves an accuracy of 93.27% and a mean average precision (mAP) of 99.19% in recognizing different FFSPs. Furthermore, the comparative analyses reveal the superiority of the proposed method based on FV over the traditional methods.

  14. A corticostriatal deficit promotes temporal distortion of automatic action in ageing

    PubMed Central

    Matamales, Miriam; Skrbis, Zala; Bailey, Matthew R; Balsam, Peter D; Balleine, Bernard W; Götz, Jürgen

    2017-01-01

    The acquisition of motor skills involves implementing action sequences that increase task efficiency while reducing cognitive loads. This learning capacity depends on specific cortico-basal ganglia circuits that are affected by normal ageing. Here, combining a series of novel behavioural tasks with extensive neuronal mapping and targeted cell manipulations in mice, we explored how ageing of cortico-basal ganglia networks alters the microstructure of action throughout sequence learning. We found that, after extended training, aged mice produced shorter actions and displayed squeezed automatic behaviours characterised by ultrafast oligomeric action chunks that correlated with deficient reorganisation of corticostriatal activity. Chemogenetic disruption of a striatal subcircuit in young mice reproduced age-related within-sequence features, and the introduction of an action-related feedback cue temporarily restored normal sequence structure in aged mice. Our results reveal static properties of aged cortico-basal ganglia networks that introduce temporal limits to action automaticity, something that can compromise procedural learning in ageing. PMID:29058672

  15. Using Crowdsourced Trajectories for Automated OSM Data Entry Approach

    PubMed Central

    Basiri, Anahid; Amirian, Pouria; Mooney, Peter

    2016-01-01

    The concept of crowdsourcing is nowadays used extensively to refer to the collection of data and the generation of information by large groups of users/contributors. OpenStreetMap (OSM) is a very successful example of a crowd-sourced geospatial data project. Unfortunately, OSM contributor inputs (including geometry and attribute inserts, deletions, and updates) have often been found to be inaccurate, incomplete, inconsistent, or vague. This is due to several reasons, which include: (1) many contributors with little experience or training in mapping and Geographic Information Systems (GIS); (2) not enough contributors familiar with the areas being mapped; (3) contributors having different interpretations of the attributes (tags) for specific features; (4) different levels of enthusiasm between mappers, resulting in different numbers of tags for similar features; and (5) the user-friendliness of the online user interface where the underlying map can be viewed and edited. This paper suggests an automatic mechanism that uses raw spatial data (trajectories of movement contributed to OSM) to minimise the uncertainty and the impact of the above-mentioned issues. The approach takes the raw trajectory datasets as input and analyses them using data mining techniques. In addition, we extract patterns and rules about the geometry and attributes of the recognised features for the purpose of inserting or editing features in the OSM database. The underlying idea is that certain characteristics of user trajectories are directly linked to the geometry and the attributes of geographic features. Successfully using these rules results in the generation of new features with higher spatial quality, which are subsequently inserted automatically into the OSM database. PMID:27649192

  16. Automated Snow Extent Mapping Based on Orthophoto Images from Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Niedzielski, Tomasz; Spallek, Waldemar; Witek-Kasprzak, Matylda

    2018-04-01

    The paper presents the application of k-means clustering to automated snow extent mapping using orthophoto images generated with the Structure-from-Motion (SfM) algorithm from oblique aerial photographs taken by an unmanned aerial vehicle (UAV). A simple classification approach has been implemented to discriminate between snow-free and snow-covered terrain. The procedure uses k-means clustering to classify orthophoto images based on the three-dimensional space of red-green-blue (RGB), near-infrared-red-green (NIRRG), or near-infrared-green-blue (NIRGB) bands. To test the method, several field experiments were carried out, both when snow cover was continuous and when it was patchy. The experiments were conducted with three fixed-wing UAVs (swinglet CAM by senseFly, eBee by senseFly, and Birdie by FlyTech UAV) on 10/04/2015, 23/03/2016, and 16/03/2017 at three test sites in the Izerskie Mountains in southwestern Poland. The resulting snow extent maps, produced automatically by the classification method, were validated against real snow extents delineated through visual analysis and interpretation by human analysts. For the simplest classification setup, which assumes two classes in the k-means clustering, the extent of snow patches was estimated accurately, with an areal underestimation of 4.6% (RGB) and an overestimation of 5.5% (NIRGB). For continuous snow cover with sparse discontinuities where trees or bushes protruded from the snow, the agreement between the automatically produced snow extent maps and the observations was better, i.e. 1.5% (underestimation with RGB) and 0.7-0.9% (overestimation, with either RGB or NIRRG). Shadows on snow were found to be mainly responsible for the misclassification.
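
    A minimal version of the two-class setup can be written with scikit-learn's KMeans on the RGB pixels, taking the brightest cluster as snow; the brightest-centroid rule is an assumption for this sketch.

        import numpy as np
        from sklearn.cluster import KMeans

        def snow_mask_kmeans(rgb, n_clusters=2, seed=0):
            # Cluster pixels in 3D colour space, then call the cluster with
            # the brightest centroid "snow"
            pixels = rgb.reshape(-1, 3).astype(float)
            km = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed).fit(pixels)
            snow_label = km.cluster_centers_.sum(axis=1).argmax()
            return (km.labels_ == snow_label).reshape(rgb.shape[:2])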

  17. An assessment of the Height Above Nearest Drainage terrain descriptor for the thematic enhancement of automatic SAR-based flood monitoring services

    NASA Astrophysics Data System (ADS)

    Chow, Candace; Twele, André; Martinis, Sandro

    2016-10-01

    Flood extent maps derived from Synthetic Aperture Radar (SAR) data can communicate spatially explicit information in a timely and cost-effective manner to support disaster management. Automated processing chains for SAR-based flood mapping have the potential to substantially reduce the critical time delay between the delivery of post-event satellite data and the subsequent provision of satellite-derived crisis information to emergency management authorities. However, the accuracy of SAR-based flood mapping can vary drastically with the prevalent land cover and topography of a given scene. While expert-based image interpretation with consideration of contextual information can effectively isolate flood surface features, a fully automated feature-differentiation algorithm based mainly on the grey levels of given pixels is comparatively more limited for features with similar SAR backscattering characteristics. Including ancillary data in the automatic classification procedure can effectively reduce instances of misclassification. In this work, a near-global 'Height Above Nearest Drainage' (HAND) index [10] was calculated with digital elevation data and drainage directions from the HydroSHEDS mapping project [2]. The index can be used to separate flood-prone regions from areas with a low probability of flood occurrence. Based on the HAND index, an exclusion mask was computed to reduce water look-alikes with respect to the hydrologic-topographic setting. The applicability of this near-global ancillary data set for the thematic improvement of Sentinel-1- and TerraSAR-X-based services for flood and surface-water monitoring has been validated both qualitatively and quantitatively. Applying a HAND-based exclusion mask improved the classification accuracy of SAR scenes with large numbers of water look-alikes and considerable elevation differences.
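
    The exclusion mask itself reduces to a threshold on the HAND raster; the 15 m value below is illustrative, not the paper's calibrated threshold.

        import numpy as np

        def mask_water_lookalikes(sar_water_mask, hand, hand_threshold_m=15.0):
            # Pixels far above the nearest drainage channel are unlikely to
            # flood, so water-like SAR returns there are treated as
            # look-alikes and removed from the flood class
            return sar_water_mask & (hand <= hand_threshold_m)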

  18. Crystal identification for a dual-layer-offset LYSO based PET system via Lu-176 background radiation and mean shift algorithm

    NASA Astrophysics Data System (ADS)

    Wei, Qingyang; Ma, Tianyu; Xu, Tianpeng; Zeng, Ming; Gu, Yu; Dai, Tiantian; Liu, Yaqiang

    2018-01-01

    Modern positron emission tomography (PET) detectors are made of pixelated scintillation crystal arrays and read out by Anger logic. The interaction position of a gamma-ray must be assigned to a crystal using a crystal position map or look-up table, so crystal identification is a critical procedure for pixelated PET systems. In this paper, we propose a novel crystal identification method for a dual-layer-offset LYSO-based animal PET system that uses the Lu-176 background radiation and the mean shift algorithm. Single-photon event data from the Lu-176 background radiation are acquired in list mode for 3 h to generate a single photon flood map (SPFM). Coincidence events are obtained from the same data using timing information to generate a coincidence flood map (CFM). The CFM is used to identify the peaks of the inner layer with the mean shift algorithm. The response of the inner layer is then deducted from the SPFM by subtracting the CFM, and the peaks of the outer layer are likewise identified with the mean shift algorithm. The automatically identified peaks are manually inspected with a graphical user interface program. Finally, a crystal position map is generated from these peaks using a distance criterion. The proposed method was verified on the animal PET system, with 48 detector blocks, on a laptop with an Intel i7-5500U processor. The total runtime for whole-system peak identification is 67.9 s. Results show that the automatic crystal identification has 99.98% and 99.09% accuracy for the peaks of the inner and outer layers of the whole system, respectively. In conclusion, the proposed method is suitable for performing crystal identification on dual-layer-offset lutetium-based PET systems without external radiation sources.
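
    Peak finding on a flood map can be prototyped with scikit-learn's MeanShift applied to the event coordinates; the bandwidth is an assumed tuning parameter, expressed in flood-map pixels.

        import numpy as np
        from sklearn.cluster import MeanShift

        def find_crystal_peaks(event_xy, bandwidth=2.0):
            # Each mean-shift cluster centre in the flood map corresponds to
            # one crystal peak
            ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(event_xy)
            return ms.cluster_centers_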

  19. Compensation of missing wedge effects with sequential statistical reconstruction in electron tomography.

    PubMed

    Paavolainen, Lassi; Acar, Erman; Tuna, Uygar; Peltonen, Sari; Moriya, Toshio; Soonsawad, Pan; Marjomäki, Varpu; Cheng, R Holland; Ruotsalainen, Ulla

    2014-01-01

    Electron tomography (ET) of biological samples is used to study the organization and the structure of the whole cell and subcellular complexes in great detail. However, projections cannot be acquired over the full tilt-angle range for biological samples in electron microscopy, and ET image reconstruction can therefore be considered an ill-posed problem because of this missing information. The result is artifacts, seen as a loss of three-dimensional (3D) resolution in the reconstructed images. The goal of this study was to achieve isotropic resolution with a statistical reconstruction method, sequential maximum a posteriori expectation maximization (sMAP-EM), using no prior morphological knowledge about the specimen. The missing-wedge effects on sMAP-EM were examined with a synthetic cell phantom to assess the effects of noise. An experimental dataset of a multivesicular body was evaluated with a number of gold particles. An ellipsoid-fitting-based method was developed to quantify elongation and contrast in an automated, objective, and reliable way. The method statistically evaluates sub-volumes containing gold particles randomly located in various parts of the whole volume, thus giving information about the robustness of the volume reconstruction. The quantitative results were also compared with reconstructions made with the widely used weighted backprojection and simultaneous iterative reconstruction technique methods. The results showed that the proposed sMAP-EM method significantly suppresses the effects of the missing information, producing isotropic resolution. Furthermore, the method improves the contrast ratio, enhancing the applicability of further automatic and semi-automatic analysis. These improvements in ET reconstruction by sMAP-EM enable the analysis of subcellular structures with higher three-dimensional resolution and contrast than conventional methods.

  20. Automatic drawing for traffic marking with MMS LIDAR intensity

    NASA Astrophysics Data System (ADS)

    Takahashi, G.; Takeda, H.; Shimano, Y.

    2014-05-01

    Upgrading the database of CYBER JAPAN has been strategically promoted since the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is high demand for the road information that forms a framework in this database. Road inventory mapping therefore has to be accurate and free of the variation caused by individual human operators. Furthermore, the large number of traffic markings that are periodically maintained, and possibly changed, requires an efficient method for updating spatial data. Currently, we map traffic markings by manual photogrammetric drawing. However, this method is not sufficiently productive, and data variation can arise from individual operators. Meanwhile, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings from MMS LIDAR data. The key idea is to extract lines using a Hough transform strategically focused on changes in local reflection intensity along scan lines; note that this method processes every traffic marking. In this paper, we discuss a highly accurate method, independent of individual human operators, that applies the following steps: (1) binarize the LIDAR points by intensity and extract the higher-intensity points; (2) generate a Triangulated Irregular Network (TIN) from the higher-intensity points; (3) delete arcs by length and generate outline polygons on the TIN; (4) generate buffers from the outline polygons; (5) extract the original LIDAR points that fall within the buffers; (6) extract local-intensity-change points along scan lines from the extracted points; (7) extract lines from the intensity-change points with a Hough transform; and (8) connect the lines to generate automated traffic-marking mapping data.
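
    Step (7) above can be prototyped with OpenCV's probabilistic Hough transform on a raster of the intensity-change points; all Hough thresholds in this sketch are assumed values.

        import cv2
        import numpy as np

        def marking_lines_from_change_points(change_points_xy, image_shape):
            # Rasterise the intensity-change points, then fit line segments
            # with a probabilistic Hough transform
            raster = np.zeros(image_shape, dtype=np.uint8)
            raster[change_points_xy[:, 1], change_points_xy[:, 0]] = 255
            lines = cv2.HoughLinesP(raster, rho=1, theta=np.pi / 180,
                                    threshold=30, minLineLength=20,
                                    maxLineGap=5)
            return [] if lines is None else [tuple(l[0]) for l in lines]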

  1. "Clinical brain profiling": a neuroscientific diagnostic approach for mental disorders.

    PubMed

    Peled, Abraham; Geva, Amir B

    2014-10-01

    Clinical brain profiling is an attempt to map a descriptive nosology in psychiatry to underlying constructs in neurobiology and brain dynamics. This paper briefly reviews the motivation behind clinical brain profiling (CBP) and presents some provisional validation using clinical assessments and meta-analyses of neuroscientific publications. The paper has four sections. In the first, we review the nature and motivation of clinical brain profiling. This involves a description of the key aspects of functional anatomy that can lead to psychopathology. These features constitute the dimensions or categories for a profile of brain disorders based upon pathophysiology. The second section describes a mapping or translation matrix that maps from symptoms and signs, of a descriptive sort, to the CBP dimensions that provide a more mechanistic explanation. We describe how this mapping engenders archetypal diagnoses, referring readers to tables and figures. The third section addresses the construct validity of clinical brain profiling by establishing correlations between profiles based on clinical ratings of symptoms and signs under classical diagnostic categories and the corresponding profiles generated automatically using archetypal diagnoses. We then provide further validation by performing a cluster analysis on the symptoms and signs and showing how the clusters correspond to the equivalent brain profiles based upon clinical and automatic diagnosis. In the fourth section, we address the construct validity of clinical brain profiling by looking for associations between pathophysiological mechanisms (such as connectivity and plasticity) and nosological diagnoses (such as schizophrenia and depression). Based upon the mechanistic perspective offered in the first section, we test some particular hypotheses about double dissociations using a meta-analysis of PubMed searches. The final section concludes with perspectives for the future and outstanding validation issues for clinical brain profiling. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Hotspot detection in pancreatic neuroendocrine tumors: density approximation by α-shape maps

    NASA Astrophysics Data System (ADS)

    Niazi, M. K. K.; Hartman, Douglas J.; Pantanowitz, Liron; Gurcan, Metin N.

    2016-03-01

    The grading of neuroendocrine tumors of the digestive system depends on accurate and reproducible assessment of proliferation within the tumor, either by counting mitotic figures or by counting Ki-67 positive nuclei. At the moment, most pathologists identify the hotspots manually, a practice which is tedious and irreproducible. To assist pathologists, we present an automatic method to detect all potential hotspots in neuroendocrine tumors of the digestive system. The method starts by segmenting Ki-67 positive nuclei by entropy-based thresholding, followed by detection of the centroids of all Ki-67 positive nuclei. Based on geodesic distances approximated from the nuclei centroids, we compute two maps: an amoeba map and a weighted amoeba map. These maps are later combined to generate a heat map, the segmentation of which yields the hotspots. The method was trained on three and tested on nine whole slide images of neuroendocrine tumors. When evaluated by two expert pathologists, the method reached an accuracy of 92.6%. The current method does not discriminate between tumor, stromal, and inflammatory nuclei. The results show that α-shape maps may represent how hotspots are perceived.
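
    The entropy-based thresholding in the first step can be illustrated with a Kapur-style maximum-entropy threshold, which picks the gray level that maximizes the summed entropies of the two resulting classes. This is one common realization of entropy thresholding, not necessarily the exact variant used in the paper:

        import numpy as np

        def max_entropy_threshold(gray):
            """Kapur-style maximum-entropy threshold, one way to realize the
            'entropy based thresholding' used to segment Ki-67 positive nuclei.
            gray: 2D uint8 image; returns a threshold in [0, 255]."""
            hist, _ = np.histogram(gray, bins=256, range=(0, 256))
            p = hist / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, 255):
                p0, p1 = p[:t].sum(), p[t:].sum()
                if p0 < 1e-12 or p1 < 1e-12:
                    continue
                q0, q1 = p[:t] / p0, p[t:] / p1
                h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
                    - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
                if h > best_h:
                    best_t, best_h = t, h
            return best_t

        # nuclei_mask = gray < max_entropy_threshold(gray)
        # (whether positive nuclei are darker or brighter depends on the stain)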

  3. Comparison of traditional burn wound mapping with a computerized program.

    PubMed

    Williams, James F; King, Booker T; Aden, James K; Serio-Melvin, Maria; Chung, Kevin K; Fenrich, Craig A; Salinas, José; Renz, Evan M; Wolf, Steven E; Blackbourne, Lorne H; Cancio, Leopoldo C

    2013-01-01

    Accurate burn estimation affects the use of burn resuscitation formulas and treatment strategies, and thus can affect patient outcomes. The objective of this process-improvement project was to compare the accuracy of a computer-based burn mapping program, WoundFlow (WF), with the widely used hand-mapped Lund-Browder (LB) diagram. Manikins with various burn representations (from 1% to more than 60% TBSA) were used for comparison of the WF system and LB diagrams. Burns were depicted on the manikins using red vinyl adhesive. Healthcare providers responsible for mapping of burn patients were asked to perform burn mapping of the manikins. Providers were randomized to either an LB or a WF group. Differences in the total map area between groups were analyzed. Also, direct measurements of the burn representations were taken and compared with LB and WF results. The results of 100 samples, compared using Bland-Altman analysis, showed no difference between the two methods. WF was as accurate as LB mapping for all burn surface areas. WF may be additionally beneficial in that it can track daily progress until complete wound closure, and can automatically calculate burn size, thus decreasing the chances of mathematical errors.

  4. A data fusion framework for floodplain analysis using GIS and remotely sensed data

    NASA Astrophysics Data System (ADS)

    Necsoiu, Dorel Marius

    Throughout history, floods have been part of the human experience. They are recurring phenomena that form a necessary and enduring feature of all river basin and lowland coastal systems. In an average year, they benefit millions of people who depend on them. In the more developed countries, major floods can be the largest cause of economic losses from natural disasters; they are also a major cause of disaster-related deaths in the less developed countries. Flood disaster mitigation research was conducted to determine how remotely sensed data can effectively be used to produce accurate floodplain maps (FPMs), and to identify and quantify the sources of error associated with such data. Differences were analyzed between flood maps produced by an automated remote sensing analysis tailored to the available satellite remote sensing datasets (rFPM), the 100-year flooded areas "predicted" by the Flood Insurance Rate Maps, and FPMs based on DEM and hydrological data (aFPM). Land use/land cover was also examined to determine its influence on rFPM errors. These errors were identified, and the results were integrated in a GIS to minimize land use/land cover effects. Two substantial flood events were analyzed. These events were selected because of their similar characteristics (i.e., the existence of FIRM or Q3 data; flood data including flood peaks, rating curves, and flood profiles; and DEM and remote sensing imagery). Automatic feature extraction was determined to be an important component of successful flood analysis. A process network, in conjunction with domain-specific information, was used to map raw remotely sensed data onto a representation that is more compatible with a GIS data model. From a practical point of view, rFPM provides a way to automatically match existing data models to the type of remote sensing data available for each event under investigation. Overall, results showed how remote sensing can contribute to the complex problem of flood management by providing an efficient way to revise the National Flood Insurance Program maps.

  5. keep your models up-to-date: connecting community mapping data to complex urban flood modelling

    NASA Astrophysics Data System (ADS)

    Winsemius, Hessel; Eilander, Dirk; Ward, Philip; Diaz Loaiza, Andres; Iliffe, Mark; Mawanda, Shaban; Luo, Tianyi; Kimacha, Nyambiri; Chen, Jorik

    2017-04-01

    The world is urbanizing rapidly. According to the United Nations' World Urbanization Prospects, 50% of the global population already lives in urban areas today, and this number is expected to grow to 66% by 2050. The rapid changes in these urban environments go hand in hand with rapid changes in natural hazard risks, in particular in informal, unplanned neighbourhoods. In Dar es Salaam, Tanzania, flood risk dominates, and given the rapid changes in the city, continuous updates of detailed street-level hazard and risk mapping are needed to adequately support decision making for urban planning, infrastructure design, and disaster response. Over the past years, the Ramani Huria and Zuia Mafuriko projects have mapped the most flood-prone neighbourhoods, including roads, buildings, drainage, and land use, and contributed the data to the open-source OpenStreetMap database. In this contribution, we demonstrate how we mobilize these contributed data to establish dynamic flood models for Dar es Salaam and keep them up-to-date by making a direct link between the data and the model schematization. The tools automatically establish a sound 1D drainage network as well as a high-resolution terrain dataset by fusing the OpenStreetMap data with existing lower-resolution terrain data such as the globally available satellite-based SRTM 30 (a simple fusion rule is sketched below). They then translate these fully automatically into the inputs required for the D-HYDRO modelling suite. Our tools are built such that community and stakeholder knowledge can be incorporated into the model details through workshops, so that missing essential information about the city can be augmented on the fly. This process creates a continuous dialogue between members of the community who collect data and stakeholders requiring data for flood models. Moreover, the taxonomy and data filtering used can be configured to conditions in other cities, making the tools generic and scalable. The tools are made available open-source.
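
    The terrain-fusion step is described only at a high level; a minimal sketch of one plausible fusion rule (community-derived high-resolution elevations override the upsampled SRTM grid) is shown below. The function name, the NaN convention for missing local data, and the simple override rule are all assumptions:

        import numpy as np

        def fuse_terrain(srtm_coarse, local_fine, scale):
            """Fuse a coarse SRTM grid with a finer, partially covered terrain
            grid derived from community mapping (NaN where no local data).
            scale: integer ratio of fine to coarse resolution."""
            # nearest-neighbour upsample of the coarse grid to the fine grid
            fused = np.repeat(np.repeat(srtm_coarse, scale, axis=0), scale, axis=1)
            have_local = ~np.isnan(local_fine)
            fused[have_local] = local_fine[have_local]   # local data wins
            return fused

        srtm = np.random.uniform(0, 50, (10, 10))         # coarse cells (toy values)
        local = np.full((100, 100), np.nan)
        local[40:60, 40:60] = 12.0                        # a surveyed neighbourhood
        dem = fuse_terrain(srtm, local, scale=10)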

  6. Environmental mapping and monitoring of Iceland by remote sensing (EMMIRS)

    NASA Astrophysics Data System (ADS)

    Pedersen, Gro B. M.; Vilmundardóttir, Olga K.; Falco, Nicola; Sigurmundsson, Friðþór S.; Rustowicz, Rose; Belart, Joaquin M.-C.; Gísladóttir, Gudrun; Benediktsson, Jón A.

    2016-04-01

    Iceland is exposed to rapid and dynamic landscape changes caused by natural processes and man-made activities, which impact and challenge the country. Fast and reliable mapping and monitoring techniques are needed at a large spatial scale. However, there is currently a lack of the operational, advanced information processing techniques that end-users need to incorporate remote sensing (RS) data from multiple data sources. Hence, the potential of the recent RS data explosion is not being fully exploited. The project Environmental Mapping and Monitoring of Iceland by Remote Sensing (EMMIRS) bridges the gap between advanced information processing capabilities and end-user mapping of the Icelandic environment. This is done through a multidisciplinary assessment of two selected remote sensing supersites, Hekla and Öræfajökull, which encompass many of the rapid natural and man-made landscape changes to which Iceland is exposed. An open-access benchmark repository for the two supersites is under construction, providing high-resolution LIDAR topography and hyperspectral data for land-cover and landform classification. Furthermore, a multi-temporal and multi-source archive stretching back to 1945 allows a decadal evaluation of landscape and ecological changes for the two supersites through the development of automated change detection techniques. The development of innovative pattern recognition and machine learning-based approaches to image classification and change detection is one of the main tasks of the EMMIRS project, aiming to extract and compute earth observation variables as automatically as possible. Ground reference data collected through a field campaign will be used to validate the implemented methods, whose outputs are then combined with geological and vegetation models. Here, preliminary results of an automatic land-cover classification based on hyperspectral image analysis are reported. Furthermore, the EMMIRS project investigates the complex landscape dynamics between geological and ecological processes. This is done through cross-correlation of mapping results and implementation of modelling techniques that simulate geological and ecological processes in order to extrapolate the landscape evolution.

  7. Automatic mapping of urban areas from Landsat data using impervious surface fraction algorithm

    NASA Astrophysics Data System (ADS)

    Nguyen, S. T.; Chen, C. F.; Chen, C. R.

    2014-12-01

    Urbanization is the aggregation of people in urban areas; it can help advance socioeconomic development and lift people out of poverty. However, if not monitored well, it can also lead to loss of farmlands and natural forests, as well as to societal impacts including the burgeoning growth of slums, pollution, and crime. Spatiotemporal information on urbanization is thus critical to the process of urban planning. The overall objective of this study is to develop an impervious surface fraction algorithm (ISFA) for automatically mapping urban areas from Landsat data. We processed the data for 1986, 2001, and 2014 to trace the multi-decadal spatiotemporal change of the Honduran capital city using a three-step procedure: (1) data pre-processing to perform image normalization and to produce the difference (DVSS) between the simple ratio (SR) of the green and shortwave bands and the soil-adjusted vegetation index (SAVI), as sketched below; (2) quantification of urban areas using ISFA; and (3) accuracy assessment of the mapping results using ground reference data constructed from land-cover maps and FORMOSAT-2 imagery. The accuracy assessment, performed for 2001 and 2014 against the ground reference data, indicated satisfactory results, with overall accuracies and kappa coefficients generally higher than 90% and 0.8, respectively. When examining the urbanization between these years, the urban area expanded significantly from 1986 to 2014, driven mainly by rapid population growth and socioeconomic development. This study demonstrates the merit of using ISFA for multi-decadal monitoring of the urbanization of the Honduran capital city from Landsat data. Results from this research can be used by urban planners as a general indicator to quantify urban change and environmental impacts. The method is thus transferable to monitoring urban growth in cities and their peri-urban areas around the world.
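
    A small sketch of the DVSS feature from step (1), assuming the simple ratio pairs the green band with the shortwave-infrared band as the abstract's wording suggests; the exact Landsat band choice and the L parameter are assumptions:

        import numpy as np

        def dvss(green, swir, nir, red, L=0.5):
            """Compute the DVSS feature: the difference between the simple
            ratio (SR) of the green and shortwave-infrared bands and the
            soil-adjusted vegetation index (SAVI)."""
            sr = green / np.maximum(swir, 1e-6)              # simple ratio
            savi = (1 + L) * (nir - red) / (nir + red + L)   # SAVI, L=0.5 is typical
            return sr - savi

        # impervious surfaces tend to score differently from vegetation and
        # bare soil, so thresholding dvss(...) is one way to isolate built-up
        # pixels before the fraction estimation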

  8. Tinnitus, anxiety and automatic processing of affective information: an explorative study.

    PubMed

    Ooms, Els; Vanheule, Stijn; Meganck, Reitske; Vinck, Bart; Watelet, Jean-Baptiste; Dhooge, Ingeborg

    2013-03-01

    Anxiety is found to play an important role in the severity complaint of tinnitus patients. However, when investigating anxiety in tinnitus patients, most studies make use of verbal reports of affect (e.g., self-report questionnaires and/or interviews). These methods reflect conscious appraisals of anxiety but do not map the underlying processing mechanisms. Nonetheless, such mechanisms, like the automatic processing of affective information, are important, as they modulate emotional experience and emotion-related behaviour. Research has shown that highly anxious people process threatening information (e.g., fearful and angry faces) faster than non-anxious people. Therefore, this study investigates whether tinnitus patients process affective stimuli (happy, sad, fearful, and angry faces) in the same way as highly anxious people do. Our sample consisted of 67 consecutive tinnitus patients. Relationships between tinnitus severity, pitch, loudness, hearing loss, and the automatic processing of affective information were explored. Results indicate that, especially in severely distressed tinnitus patients, the severity complaint is highly related to the automatic processing of fearful (r = 0.37, p < 0.05), angry (r = 0.44, p < 0.00), and happy (r = -0.44, p < 0.00) faces, and these relationships became even stronger after controlling for hearing loss. Furthermore, in contrast with findings on the relation between audiological characteristics (pitch and loudness) and conscious reports of anxiety, we did find that loudness tends to be somewhat related to the automatic processing of fearful faces (r = 0.25, p = 0.08). We conclude that tinnitus is an anxiety-related problem at an automatic processing level.

  9. USGS ShakeMap Developments, Implementation, and Derivative Tools

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Lin, K.; Quitoriano, V.; Worden, B.

    2007-12-01

    We discuss ongoing development and enhancements of ShakeMap, a system for automatically generating maps of ground shaking and intensity in the minutes following an earthquake. The rapid availability of these maps is of particular value to emergency response organizations, utilities, insurance companies, government decision-makers, the media, and the general public. ShakeMap Version 3.2 was released in March 2007 on a download site that allows ShakeMap developers to track operators' updates and provide follow-up information; V3.2 has now been downloaded in 15 countries. The V3.2 release supports LINUX in addition to other UNIX operating systems and adds enhancements to XML, KML, metadata, and other products. We have also added an uncertainty measure, quantified as a function of spatial location. Uncertainty is essential for evaluating the range of possible losses. Though not released in V3.2, we will describe a new quantitative uncertainty letter grading for each ShakeMap produced, allowing users to gauge the appropriate level of confidence when using rapidly produced ShakeMaps as part of their post-earthquake critical decision-making process. Since the V3.2 release, several new ground motion prediction equations have also been added to the prediction equation modules. ShakeMap is implemented in several new regions as reported in this Session. Within the U.S., robust systems serve California, Nevada, Utah, Washington and Oregon, Hawaii, and Anchorage. Additional systems are in development, and efforts to provide backup capabilities for all Advanced National Seismic System (ANSS) regions at the National Earthquake Information Center are underway. Outside the U.S., this Session has descriptions of ShakeMap systems in Italy, Switzerland, Romania, and Turkey, among other countries. We also describe our predictive global ShakeMap system for the rapid evaluation of significant earthquakes globally for the Prompt Assessment of Global Earthquakes for Response (PAGER) system. These global ShakeMaps are constrained by rapidly gathered intensity data via the Internet and by finite fault and aftershock analyses for portraying fault rupture dimensions. As part of the PAGER loss calibration process we have produced an Atlas of ShakeMaps for significant earthquakes around the globe since 1973 (Allen and others, this Session); these Atlas events have additional constraints provided by archival strong motion, faulting dimensions, and macroseismic intensity data. We also describe derivative tools for further utilizing ShakeMap, including ShakeCast, a fully automated system for delivering specific ShakeMap products to critical users and triggering established post-earthquake response protocols. We have released ShakeCast Version 2.0 (Lin and others, this Session), which allows RSS feeds for automatically receiving ShakeMap files, auto-launching of post-download processing scripts, and delivery of notifications based on users' likely facility damage states derived from ShakeMap shaking parameters. As part of our efforts to produce estimated ShakeMaps globally, we have developed a procedure for deriving Vs30 estimates from correlations with topographic slope, and we have now implemented a global Vs30 server, allowing users to generate Vs30 maps for custom user-selected regions around the globe (Allen and Wald, this Session). Finally, as a further derivative product of the ShakeMap Atlas project, we will present a shaking-hazard map for the past 30 years based on approximately 3,900 ShakeMaps of historic earthquakes.

  10. Linking quality indicators to clinical trials: an automated approach

    PubMed Central

    Coiera, Enrico; Choong, Miew Keen; Tsafnat, Guy; Hibbert, Peter; Runciman, William B.

    2017-01-01

    Objective: Quality improvement of health care requires robust, measurable indicators to track performance. However, identifying which indicators are supported by strong clinical evidence, typically from clinical trials, is often laborious. This study tests a novel method for automatically linking indicators to clinical trial registrations. Design: A set of 522 quality-of-care indicators for 22 common conditions drawn from the CareTrack study were automatically mapped to outcome measures reported in 13 971 trials from ClinicalTrials.gov. Intervention: Text mining methods extracted phrases mentioning indicators and outcome phrases, and these were compared using the Levenshtein edit distance ratio to measure similarity (a sketch follows below). Main outcome measure: Number of care indicators that mapped to outcome measures in clinical trials. Results: While only 13% of the 522 CareTrack indicators were thought to have Level I or II evidence behind them, 353 (68%) could be directly linked to randomized controlled trials. Within these 522, 50 of 70 (71%) Level I and II evidence-based indicators and 268 of 370 (72%) Level V (consensus-based) indicators could be linked to evidence. Of the indicators known to have evidence behind them, only 5.7% (4 of 70) were mentioned in the trial reports but missed by our method. Conclusions: We automatically linked indicators to clinical trial registrations with high precision. Whilst the majority of quality indicators studied could be directly linked to research evidence, a small portion could not, and these require closer scrutiny. It is feasible to support the process of indicator development using automated methods to identify research evidence. PMID:28651340
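
    The phrase-similarity step can be sketched with a normalized edit-similarity ratio; the snippet below uses Python's standard-library difflib ratio as a stand-in for a true Levenshtein ratio, and the cutoff value is illustrative:

        from difflib import SequenceMatcher

        def best_outcome_match(indicator, outcomes, cutoff=0.8):
            """Link a care-indicator phrase to the most similar trial outcome
            measure using a normalized edit-similarity ratio, in the spirit
            of the Levenshtein edit distance ratio used in the study."""
            scored = [(SequenceMatcher(None, indicator.lower(), o.lower()).ratio(), o)
                      for o in outcomes]
            score, match = max(scored)
            return (match, score) if score >= cutoff else (None, score)

        outcomes = ["HbA1c level at 12 months", "systolic blood pressure change"]
        print(best_outcome_match("hba1c measured at 12 months", outcomes))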

  11. Behavioral and electrophysiological evidence for early and automatic detection of phonological equivalence in variable speech inputs.

    PubMed

    Kharlamov, Viktor; Campbell, Kenneth; Kazanina, Nina

    2011-11-01

    Speech sounds are not always perceived in accordance with their acoustic-phonetic content. For example, an early and automatic process of perceptual repair, which ensures conformity of speech inputs to the listener's native language phonology, applies to individual input segments that do not exist in the native inventory or to sound sequences that are illicit according to the native phonotactic restrictions on sound co-occurrences. The present study with Russian and Canadian English speakers shows that listeners may perceive phonetically distinct and licit sound sequences as equivalent when the native language system provides robust evidence for mapping multiple phonetic forms onto a single phonological representation. In Russian, due to an optional but productive t-deletion process that affects /stn/ clusters, the surface forms [sn] and [stn] may be phonologically equivalent and map to a single phonological form /stn/. In contrast, [sn] and [stn] clusters are usually phonologically distinct in (Canadian) English. Behavioral data from identification and discrimination tasks indicated that [sn] and [stn] clusters were more confusable for Russian than for English speakers. The EEG experiment employed an oddball paradigm with nonwords [asna] and [astna] used as the standard and deviant stimuli. A reliable mismatch negativity response was elicited approximately 100 msec postchange in the English group but not in the Russian group. These findings point to a perceptual repair mechanism that is engaged automatically at a prelexical level to ensure immediate encoding of speech inputs in phonological terms, which in turn enables efficient access to the meaning of a spoken utterance.

  12. Automated encoding of clinical documents based on natural language processing.

    PubMed

    Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George

    2004-01-01

    The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers, and to evaluate the method quantitatively. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching structured output generated by MedLEE, consisting of findings and modifiers, to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI .72-.81), and for coding terms that had corresponding UMLS codes, recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and the precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.

  13. Study of Automatic Image Rectification and Registration of Scanned Historical Aerial Photographs

    NASA Astrophysics Data System (ADS)

    Chen, H. R.; Tseng, Y. H.

    2016-06-01

    Historical aerial photographs directly provide good evidence of past times. The Research Center for Humanities and Social Sciences (RCHSS) of Taiwan Academia Sinica has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps or images have been geo-referenced manually, but most historical aerial images have not been registered, since no GPS or IMU data were available in the past to assist with orientation. In our research, we developed an automatic process for matching historical aerial images with SIFT (Scale Invariant Feature Transform), allowing computer vision to handle the great quantity of images. SIFT is one of the most popular methods for image feature extraction and matching. This algorithm turns extreme values in scale space into invariant image features, which are robust to changes in rotation, scale, noise, and illumination. We also use RANSAC (random sample consensus) to remove outliers and obtain good conjugate points between photographs. Finally, we manually add control points for registration through least-squares adjustment based on the collinearity equations. In the future, we can use the image feature points of more photographs to build a control image database. Every new image will be treated as a query image. If the feature points of a query image match features in the database, the query image probably overlaps the control images. As the database grows, more and more query images can be matched and aligned automatically. Multi-temporal environmental changes can then be investigated with the geo-referenced spatiotemporal data.
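
    The SIFT matching and RANSAC outlier removal described above can be sketched with OpenCV; file names and thresholds below are illustrative:

        import cv2
        import numpy as np

        img1 = cv2.imread("historical_1945.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("historical_1956.png", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Lowe's ratio test on k-nearest-neighbour matches
        matcher = cv2.BFMatcher()
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < 0.75 * n.distance]

        # RANSAC rejects outliers while estimating a homography
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        conjugate_points = [good[i] for i in range(len(good)) if inliers[i]]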

  14. Uav Borne Low Altitude Photogrammetry System

    NASA Astrophysics Data System (ADS)

    Lin, Z.; Su, G.; Xie, F.

    2012-07-01

    In this paper, the three major aspects of an Unmanned Aerial Vehicle (UAV) system for low-altitude aerial photogrammetry, i.e., the flying platform, the imaging sensor system, and the data processing software, are discussed. First, the performance and suitability of the available UAV platforms (e.g., fixed-wing UAVs, unmanned helicopters, and unmanned airships) are compared and analyzed against technical requirements for minimum cruising speed, shortest taxiing distance, level of flight control, and performance in turbulence. Second, considering the restrictions on platform payload and sensor resolution, together with the exposure equation and optical information theory, emphasis is placed on the principles of designing self-calibrating and self-stabilizing combined wide-angle digital cameras (e.g., double-camera and four-camera combinations). Finally, a software package named MAP-AT, designed around the specifics of UAV platforms and sensors, is developed and introduced. Apart from the common functions of aerial image processing, MAP-AT emphasizes automatic extraction, automatic checking, and manually assisted addition of tie points for images with large tilt angles. Based on the process for low-altitude photogrammetry with UAVs recommended in this paper, more than ten aerial photogrammetry missions have been accomplished; the accuracies of the aerial triangulation, digital orthophotos (DOM), and digital line graphs (DLG) meet the standard requirements of 1:2000, 1:1000, and 1:500 mapping.

  15. Reconstruction of metabolic pathways by combining probabilistic graphical model-based and knowledge-based methods

    PubMed Central

    2014-01-01

    Automatic reconstruction of metabolic pathways for an organism from genomics and transcriptomics data has been a challenging and important problem in bioinformatics. Traditionally, known reference pathways can be mapped onto organism-specific ones based on genome annotation and protein homology. However, this simple knowledge-based mapping can produce incomplete pathways and generally cannot predict unknown new relations and reactions. In contrast, ab initio metabolic network construction methods can predict novel reactions and interactions, but their accuracy tends to be low, leading to many false positives. Here we combine existing pathway knowledge and a new ab initio Bayesian probabilistic graphical model in a novel fashion to improve automatic reconstruction of metabolic networks. Specifically, we built a knowledge database containing known, individual gene/protein interactions and metabolic reactions extracted from existing reference pathways. Known reactions and interactions were then used as constraints for Bayesian network learning methods to predict metabolic pathways. Using individual reactions and interactions extracted from different pathways of many organisms to guide pathway construction is new and improves both the coverage and accuracy of metabolic pathway construction. We applied this probabilistic knowledge-based approach to construct metabolic networks from yeast gene expression data and compared its results with 62 known metabolic networks in the KEGG database. The experiment showed that the method improved the coverage of metabolic network construction over the traditional reference pathway mapping method and was more accurate than pure ab initio methods. PMID:25374614

  16. Aural mapping of STEM concepts using literature mining

    NASA Astrophysics Data System (ADS)

    Bharadwaj, Venkatesh

    Recent technological applications have made people's lives heavily dependent on science, technology, engineering, and mathematics (STEM) and their applications. Understanding basic science is a must in order to use and contribute to this technological revolution. Science education at the middle and high school levels, however, depends heavily on visual representations such as models, diagrams, figures, animations, and presentations. This leaves visually impaired students with very few options to learn science and secure a career in STEM-related areas. Recent experiments have shown that small aural cues called audemes are helpful for the understanding and memorization of science concepts among visually impaired students. Audemes are non-verbal sound translations of a science concept. In order to present science concepts as audemes for visually impaired students, this thesis presents an automatic system for audeme generation from STEM textbooks. The thesis describes the systematic application of multiple natural language processing tools and techniques, such as dependency parsing, POS tagging, information retrieval, semantic mapping of aural words, and machine learning, to transform a science concept into a combination of atomic sounds, thus forming an audeme. We present a rule-based classification method for all STEM-related concepts. This work also presents a novel way of mapping and extracting the most related sounds for the words used in a textbook. Additionally, machine learning methods are used in the system to guarantee the customization of output according to a user's perception. The system presented is robust, scalable, fully automatic, and dynamically adaptable for audeme generation.

  17. Hypotheses generation as supervised link discovery with automated class labeling on large-scale biomedical concept networks

    PubMed Central

    2012-01-01

    Computational approaches to generating hypotheses from the biomedical literature have been studied intensively in recent years. Nevertheless, it remains a challenge to automatically discover novel, cross-silo biomedical hypotheses from large-scale literature repositories. To address this challenge, we first model a biomedical literature repository as a comprehensive network of biomedical concepts and formulate hypothesis generation as a process of link discovery on the concept network. We extract the relevant information from the biomedical literature corpus and generate a concept network and a concept-author map on a cluster using the MapReduce framework. We extract a set of heterogeneous features such as random-walk-based features, neighborhood features, and common-author features (illustrated below). The number of potential links to consider for link discovery is large in our concept network, so to address this scalability problem the features are likewise extracted on a cluster with the MapReduce framework. We further model link discovery as a classification problem, carried out on a training data set automatically extracted from two network snapshots taken at two consecutive time periods. A set of heterogeneous features, covering both topological and semantic features derived from the concept network, have been studied with respect to their impact on the accuracy of the proposed supervised link discovery process. A case study of hypothesis generation based on the proposed method is presented in the paper. PMID:22759614
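
    The topological features mentioned above can be illustrated on a toy concept network with NetworkX; the graph, the personalized-PageRank stand-in for random-walk proximity, and the exact feature set are illustrative. In the paper, such features feed a supervised classifier:

        import networkx as nx

        G = nx.Graph()
        G.add_edges_from([("insulin", "diabetes"), ("diabetes", "retinopathy"),
                          ("insulin", "glucose"), ("glucose", "retinopathy")])

        def link_features(G, u, v):
            """Neighborhood and random-walk style features for a candidate link."""
            common = len(list(nx.common_neighbors(G, u, v)))
            jaccard = next(nx.jaccard_coefficient(G, [(u, v)]))[2]
            # personalized PageRank from u approximates random-walk proximity
            walk = nx.pagerank(G, personalization={n: float(n == u) for n in G})[v]
            return {"common_neighbors": common, "jaccard": jaccard, "walk_prox": walk}

        print(link_features(G, "insulin", "retinopathy"))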

  18. Attenuation-emission alignment in cardiac PET∕CT based on consistency conditions

    PubMed Central

    Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.

    2010-01-01

    Purpose: In cardiac PET and PET∕CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation-corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. [“Attenuation correction in PET using consistency information,” IEEE Trans. Nucl. Sci. 45, 3134–3141 (1998)] for stand-alone PET imaging. The process was evaluated with simulated data and measured patient data from multiple cardiac ammonia PET∕CT exams. The alignment procedure was applied to simulations at five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment by a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approached global minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and reduced the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations. The side-by-side review of the proposed aligned attenuation-emission maps and the initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET∕CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256
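
    The objective here is built on the Radon consistency conditions following Welch et al.; the simplest such condition states that, for ideal parallel-beam data, the zeroth moment of the sinogram is identical at every angle. The toy check below illustrates only that zeroth-order condition, not the full objective used in the study:

        import numpy as np
        from skimage.transform import radon

        def zeroth_moment_inconsistency(sinogram):
            """For ideal parallel-beam data, the total activity seen at each
            angle (the zeroth Radon moment) is constant. Its relative spread
            is a simple consistency objective: misaligned attenuation maps
            increase it."""
            m0 = sinogram.sum(axis=0)            # one value per projection angle
            return m0.std() / m0.mean()

        # toy demonstration with a digital phantom
        phantom = np.zeros((128, 128))
        phantom[40:80, 50:90] = 1.0
        sino = radon(phantom, theta=np.linspace(0.0, 180.0, 60, endpoint=False))
        print(zeroth_moment_inconsistency(sino))   # near zero for consistent data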

  19. Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search

    PubMed Central

    Fang, Leyuan; Cunefare, David; Wang, Chong; Guymer, Robyn H.; Li, Shutao; Farsiu, Sina

    2017-01-01

    We present a novel framework combining convolutional neural networks (CNN) and graph search methods (termed CNN-GS) for the automatic segmentation of nine layer boundaries on retinal optical coherence tomography (OCT) images. CNN-GS first utilizes a CNN to extract features of specific retinal layer boundaries and trains a corresponding classifier to delineate a pilot estimate of the eight layers. Next, a graph search method uses the probability maps created from the CNN to find the final boundaries. We validated our proposed method on 60 volumes (2915 B-scans) from 20 human eyes with non-exudative age-related macular degeneration (AMD), which attested to the effectiveness of our proposed technique. PMID:28663902

  20. Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search.

    PubMed

    Fang, Leyuan; Cunefare, David; Wang, Chong; Guymer, Robyn H; Li, Shutao; Farsiu, Sina

    2017-05-01

    We present a novel framework combining convolutional neural networks (CNN) and graph search methods (termed CNN-GS) for the automatic segmentation of nine layer boundaries on retinal optical coherence tomography (OCT) images. CNN-GS first utilizes a CNN to extract features of specific retinal layer boundaries and trains a corresponding classifier to delineate a pilot estimate of the eight layers. Next, a graph search method uses the probability maps created from the CNN to find the final boundaries. We validated our proposed method on 60 volumes (2915 B-scans) from 20 human eyes with non-exudative age-related macular degeneration (AMD), which attested to the effectiveness of our proposed technique.
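
    The graph-search stage can be sketched as a minimum-cost path through the CNN's boundary probability map; the snippet below uses scikit-image's route_through_array as a simple stand-in for the paper's graph search, with an illustrative toy map:

        import numpy as np
        from skimage.graph import route_through_array

        def trace_boundary(prob_map):
            """Given a probability map for one retinal layer boundary
            (rows = depth, cols = A-scans), find the minimum-cost
            left-to-right path."""
            cost = 1.0 - prob_map                       # high probability -> low cost
            start = (int(prob_map[:, 0].argmax()), 0)   # best pixel in first column
            end = (int(prob_map[:, -1].argmax()), prob_map.shape[1] - 1)
            path, _ = route_through_array(cost, start, end, fully_connected=True)
            return np.array(path)                       # (row, col) per step

        # toy probability map with a bright horizontal ridge at row 20
        pm = np.full((64, 128), 0.01)
        pm[20, :] = 0.99
        print(trace_boundary(pm)[:3])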

  1. Automatic segmentation of equine larynx for diagnosis of laryngeal hemiplegia

    NASA Astrophysics Data System (ADS)

    Salehin, Md. Musfequs; Zheng, Lihong; Gao, Junbin

    2013-10-01

    This paper presents an automatic segmentation method for delineating the clinically significant contours of the equine larynx from an endoscopic image. These contours are used to diagnose the most common disease of the equine larynx, laryngeal hemiplegia. In this study, a hierarchically structured contour map is obtained with the state-of-the-art segmentation algorithm gPb-OWT-UCM. The conic-shaped outer boundary of the equine larynx is extracted based on Pascal's theorem. Lastly, the Hough transform is applied to detect lines related to the edges of the vocal folds. The experimental results show that the proposed approach extracts the targeted contours of the equine larynx better than the gPb-OWT-UCM method alone.

  2. Small passenger car transmission test-Chevrolet 200 transmission

    NASA Technical Reports Server (NTRS)

    Bujold, M. P.

    1980-01-01

    The small passenger car transmission was tested to supply electric vehicle manufacturers with technical information regarding the performance of commercially available transmissions, which would enable them to design a more energy-efficient vehicle. With this information the manufacturers could estimate vehicle driving range as well as speed and torque requirements for specific road-load performance characteristics. A 1979 Chevrolet Model 200 automatic transmission was tested per a passenger car automatic transmission test code (SAE J651b), which required drive performance, coast performance, and no-load test conditions. The transmission attained maximum efficiencies in the mid-eighty-percent range for both the drive performance tests and the coast performance tests. Torque, speed, and efficiency curves map the complete performance characteristics of the Chevrolet Model 200 transmission.

  3. Increasing situation awareness of the CBRNE robot operators

    NASA Astrophysics Data System (ADS)

    Jasiobedzki, Piotr; Ng, Ho-Kong; Bondy, Michel; McDiarmid, Carl H.

    2010-04-01

    Situational awareness of CBRN robot operators is quite limited, as they rely on images and measurements from on-board detectors. This paper describes a novel framework that enables a uniform and intuitive access to live and recent data via 2D and 3D representations of visited sites. These representations are created automatically and augmented with images, models and CBRNE measurements. This framework has been developed for CBRNE Crime Scene Modeler (C2SM), a mobile CBRNE mapping system. The system creates representations (2D floor plans and 3D photorealistic models) of the visited sites, which are then automatically augmented with CBRNE detector measurements. The data stored in a database is accessed using a variety of user interfaces providing different perspectives and increasing operators' situational awareness.

  4. Automatic detection, tracking and sensor integration

    NASA Astrophysics Data System (ADS)

    Trunk, G. V.

    1988-06-01

    This report surveys the state of the art of automatic detection, tracking, and sensor integration. In the area of detection, various noncoherent integrators such as the moving window integrator, feedback integrator, two-pole filter, binary integrator, and batch processor are discussed. Next, three techniques for controlling false alarms are presented: adaptive thresholds, nonparametric detectors, and clutter maps. In the area of tracking, a general outline is given of a track-while-scan system, followed by a discussion of the file system, contact-entry logic, coordinate systems, tracking filters, maneuver-following logic, track initiation, track-drop logic, and correlation procedures. Finally, in the area of multisensor integration, the problems of colocated-radar integration, multisite-radar integration, radar-IFF integration, and radar-DF bearing strobe integration are treated.

  5. Singing numbers…in cognitive space--a dual-task study of the link between pitch, space, and numbers.

    PubMed

    Fischer, Martin H; Riello, Marianna; Giordano, Bruno L; Rusconi, Elena

    2013-04-01

    We assessed the automaticity of spatial-numerical and spatial-musical associations by testing their intentionality and load sensitivity in a dual-task paradigm. In separate sessions, 16 healthy adults performed magnitude and pitch comparisons on sung numbers with variable pitch. Stimuli and response alternatives were identical, but the relevant stimulus attribute (pitch or number) differed between tasks. Concomitant tasks required retention of either color or location information. Results show that spatial associations of both magnitude and pitch are load sensitive and that the spatial association for pitch is more powerful than that for magnitude. These findings argue against the automaticity of spatial mappings in either stimulus dimension. Copyright © 2013 Cognitive Science Society, Inc.

  6. Theoretical Accuracy of Global Snow-Cover Mapping Using Satellite Data in the Earth Observing System (EOS) Era

    NASA Technical Reports Server (NTRS)

    Hall, D. K.; Foster, J. L.; Salomonson, V. V.; Klein, A. G.; Chien, J. Y. L.

    1998-01-01

    Following the launch of the Earth Observing System first morning (EOS-AM1) satellite, daily, global snow-cover mapping will be performed automatically at a spatial resolution of 500 m, cloud-cover permitting, using Moderate Resolution Imaging Spectroradiometer (MODIS) data. A technique to calculate theoretical accuracy of the MODIS-derived snow maps is presented. Field studies demonstrate that under cloud-free conditions when snow cover is complete, snow-mapping errors are small (less than 1%) in all land covers studied except forests where errors are greater and more variable. The theoretical accuracy of MODIS snow-cover maps is largely determined by percent forest cover north of the snowline. Using the 17-class International Geosphere-Biosphere Program (IGBP) land-cover maps of North America and Eurasia, the Northern Hemisphere is classified into seven land-cover classes and water. Snow-mapping errors estimated for each of the seven land-cover classes are extrapolated to the entire Northern Hemisphere for areas north of the average continental snowline for each month. Average monthly errors for the Northern Hemisphere are expected to range from 5 - 10%, and the theoretical accuracy of the future global snow-cover maps is 92% or higher. Error estimates will be refined after the first full year that MODIS data are available.
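
    The hemispheric error estimate is, in essence, an area-weighted average of per-class snow-mapping errors north of the snowline. A toy illustration with invented class errors and area fractions (not the paper's values):

        # per-class snow-mapping error rates and area fractions (invented)
        class_error = {"forest": 0.15, "tundra": 0.01, "grassland": 0.01,
                       "cropland": 0.02, "barren": 0.01, "shrubland": 0.02,
                       "wetland": 0.03}
        area_fraction = {"forest": 0.35, "tundra": 0.20, "grassland": 0.12,
                         "cropland": 0.12, "barren": 0.09, "shrubland": 0.07,
                         "wetland": 0.05}
        error = sum(class_error[c] * area_fraction[c] for c in class_error)
        print(f"estimated snow-map error: {error:.1%}")   # ~6%, i.e., accuracy ~94%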

  7. Automated Plantation Mapping in Indonesia Using Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Karpatne, A.; Jia, X.; Khandelwal, A.; Kumar, V.

    2017-12-01

    Plantation mapping is critical for understanding and addressing deforestation, a key driver of climate change and ecosystem degradation. Unfortunately, most plantation maps are limited to small areas for specific years because they rely on visual inspection of imagery. In this work, we propose a data-driven approach which automatically generates yearly plantation maps for large regions using MODIS multi-spectral data. Traditional machine learning algorithms face many challenges in this task (imperfect training labels, spatio-temporal data heterogeneity, noisy and high-dimensional data, and a lack of evaluation data), so we introduce a novel deep learning-based framework that combines existing imperfect plantation products as training labels and models the spatio-temporal relationships of land covers. We also explore post-processing steps based on a hidden Markov model (HMM) that further improve the detection accuracy (a sketch follows below). We then conduct an extensive evaluation of the generated plantation maps. Specifically, by randomly sampling and comparing with high-resolution Digital Globe imagery, we demonstrate that the generated plantation maps achieve both high precision and high recall. When compared with existing plantation mapping products, our detection can avoid both the false positives and false negatives of those products. Finally, we utilize the generated plantation maps in analyzing the relationship between forest fires and the growth of plantations, which assists in better understanding the causes of deforestation in Indonesia.
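
    The HMM post-processing can be sketched as Viterbi smoothing of each pixel's yearly class probabilities, where the transition matrix penalizes implausible land-cover flips. All numbers below are illustrative, not the paper's parameters:

        import numpy as np

        def viterbi_smooth(yearly_probs, transition, prior):
            """HMM-style temporal smoothing of per-year class probabilities
            for one pixel. yearly_probs: (T, K) per-year class likelihoods;
            transition: (K, K) matrix penalizing implausible flips."""
            T, K = yearly_probs.shape
            log_p = np.log(yearly_probs + 1e-12)
            log_a = np.log(transition + 1e-12)
            score = np.log(prior + 1e-12) + log_p[0]
            back = np.zeros((T, K), dtype=int)
            for t in range(1, T):
                cand = score[:, None] + log_a          # (from-state, to-state)
                back[t] = cand.argmax(axis=0)
                score = cand.max(axis=0) + log_p[t]
            path = [int(score.argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t][path[-1]]))
            return path[::-1]                          # most probable label sequence

        # classes: 0 = forest, 1 = plantation; flip-flopping is unlikely
        probs = np.array([[0.9, 0.1], [0.4, 0.6], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])
        A = np.array([[0.95, 0.05], [0.02, 0.98]])
        print(viterbi_smooth(probs, A, prior=np.array([0.5, 0.5])))
        # -> [0, 0, 0, 1, 1]: the noisy year-2 'plantation' flip is smoothed away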

  8. Reading Guided by Automated Graphical Representations: How Model-Based Text Visualizations Facilitate Learning in Reading Comprehension Tasks

    ERIC Educational Resources Information Center

    Pirnay-Dummer, Pablo; Ifenthaler, Dirk

    2011-01-01

    Our study integrates automated natural language-oriented assessment and analysis methodologies into feasible reading comprehension tasks. With the newly developed T-MITOCAR toolset, prose text can be automatically converted into an association net which has similarities to a concept map. The "text to graph" feature of the software is based on…

  9. Impact of Machine-Translated Text on Entity and Relationship Extraction

    DTIC Science & Technology

    2014-12-01

    Using social network analysis tools is an important asset in... semantic modeling software to automatically build detailed network models from unstructured text. Contour imports unstructured text and then maps the text onto an existing ontology of frames at the sentence level, using FrameNet, a structured language model, and through Semantic Role Labeling (SRL

  10. The Development of Automaticity in Short-Term Memory Search: Item-Response Learning and Category Learning

    ERIC Educational Resources Information Center

    Cao, Rui; Nosofsky, Robert M.; Shiffrin, Richard M.

    2017-01-01

    In short-term-memory (STM)-search tasks, observers judge whether a test probe was present in a short list of study items. Here we investigated the long-term learning mechanisms that lead to the highly efficient STM-search performance observed under conditions of consistent-mapping (CM) training, in which targets and foils never switch roles across…

  11. Mapping Educational Equity and Reform Policy in the Borderlands: LatCrit Spatial Analysis of Grade Retention

    ERIC Educational Resources Information Center

    Rodríguez, Cristóbal; Amador, Adam; Tarango, B. Abigail

    2016-01-01

    The purpose of our study is to investigate reform policy, specifically a proposed third grade reading retention policy within the Borderlands. Under this policy, students not performing proficiently on the third grade reading standardized exam will be automatically retained in the third grade. The research methods and approach used in this study…

  12. Semi-automatic 10/20 Identification Method for MRI-Free Probe Placement in Transcranial Brain Mapping Techniques.

    PubMed

    Xiao, Xiang; Zhu, Hao; Liu, Wei-Jie; Yu, Xiao-Ting; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe

    2017-01-01

    The International 10/20 system is an important head-surface-based positioning system for transcranial brain mapping techniques, e.g., fNIRS and TMS. As guidance for probe placement, the 10/20 system permits both proper ROI coverage and spatial consistency among multiple subjects and experiments in an MRI-free context. However, the traditional manual approach to identifying 10/20 landmarks has problems with reliability and time cost. In this study, we propose a semi-automatic method to address these problems. First, a novel head-surface reconstruction algorithm reconstructs head geometry from a set of points uniformly and sparsely sampled on the subject's head. Second, virtual 10/20 landmarks are determined on the reconstructed head surface in computational space (the fractional-arc-length rule behind this is sketched below). Finally, a visually guided real-time navigation system guides the experimenter to each of the identified 10/20 landmarks on the physical head of the subject. Compared with the traditional manual approach, our proposed method provides a significant improvement in both reliability and time cost and thus could improve both the effectiveness and efficiency of 10/20-guided MRI-free probe placement.
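
    The defining rule of the 10/20 system is that landmarks sit at fixed 10%/20% fractions of arc length between anatomical reference points. A minimal sketch of placing virtual landmarks along a sampled nasion-inion arc follows; the sampling and linear interpolation are illustrative, not the paper's reconstruction algorithm:

        import numpy as np

        def ten_twenty_positions(arc_points, fractions=(0.1, 0.3, 0.5, 0.7, 0.9)):
            """Place virtual 10/20 landmarks at fixed fractions of cumulative
            arc length along a sampled head-surface arc (e.g., nasion to inion).
            arc_points: (N, 3) ordered points on the reconstructed surface."""
            seg = np.linalg.norm(np.diff(arc_points, axis=0), axis=1)
            s = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative length
            s /= s[-1]                                        # normalize to [0, 1]
            out = []
            for f in fractions:
                i = np.searchsorted(s, f) - 1
                w = (f - s[i]) / (s[i + 1] - s[i])            # linear interpolation
                out.append((1 - w) * arc_points[i] + w * arc_points[i + 1])
            return np.array(out)   # Fpz, Fz, Cz, Pz, Oz for a midline arc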

  13. Evaluation of satellite remote sensing and automatic data techniques for characterization of wetlands and coastal marshlands. [Atchafalaya River Basin, Louisiana

    NASA Technical Reports Server (NTRS)

    Cartmill, R. H. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. The evaluation was conducted in a humid swamp and marsh area of southern Louisiana. ERTS digital multispectral scanner data was compared with similar data gathered by intermediate altitude aircraft. Automatic data processing was applied to several data sets to produce simulated color infrared images, analysis of single bands, thematic maps, and surface classifications. These products were used to determine the effectiveness of satellites to monitor accretion of land, locate aquatic plants, determine water characteristics, and identify marsh and forest species. The results show that to some extent all of these can be done with satellite data. It is most effective for monitoring accretion and least effective in locating aquatic plants. The data sets used show that the ERTS data is superior in mapping quality and accuracy to the aircraft data. However, in some applications requiring high resolution or maximum use of intermittent clear weather conditions, data gathering by aircraft is preferable. Data processing costs for equivalent areas are about three times greater for aircraft data than ERTS data. This is primarily because of the larger volume of data generated by the high resolution aircraft system.

  14. Behavioural and psychophysiological correlates of athletic performance: a test of the multi-action plan model.

    PubMed

    Bertollo, Maurizio; Bortoli, Laura; Gramaccioni, Gianfranco; Hanin, Yuri; Comani, Silvia; Robazza, Claudio

    2013-06-01

    The main purposes of the present study were to substantiate the existence of the four performance categories (i.e., optimal-automatic, optimal-controlled, suboptimal-controlled, and suboptimal-automatic) hypothesised in the multi-action plan (MAP) model, and to investigate whether specific affective, behavioural, psychophysiological, and postural trends typify each type of performance. A 20-year-old athlete of the Italian shooting team and a 46-year-old athlete of the Italian dart-throwing team participated in the study. The athletes were asked to identify the core components of the action and then to execute a large number of shots/flights. A 2 × 2 (optimal/suboptimal × automated/controlled) within-subjects multivariate analysis of variance was performed to test the differences among the four types of performance. Findings provided preliminary evidence of psychophysiological and postural differences among the four performance categories as conceptualized within the MAP model. Monitoring the entire spectrum of psychophysiological and behavioural features related to the different types of performance is important for developing and implementing biofeedback and neurofeedback techniques aimed at helping athletes to identify individual zones of optimal functioning and to enhance their performance.

  15. Automatic structural matching of 3D image data

    NASA Astrophysics Data System (ADS)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as aerospace photographs, and it turned out to be sufficiently robust and reliable for successfully matching pictures of natural landscapes taken in differing seasons, from differing aspect angles, and by differing sensors (visible optical, IR, and SAR pictures, as well as depth maps and geographical vector-type maps). In the version reported here, the algorithm is enhanced by the additional use of information on the third spatial coordinate of observed points on object surfaces. Thus, it is now capable of matching images of 3D scenes in tasks such as the automatic navigation of extremely low-flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and examples of image matching are presented.

  16. Automatic delineation of tumor volumes by co-segmentation of combined PET/MR data

    NASA Astrophysics Data System (ADS)

    Leibfarth, S.; Eckert, F.; Welz, S.; Siegel, C.; Schmidt, H.; Schwenzer, N.; Zips, D.; Thorwarth, D.

    2015-07-01

    Combined PET/MRI may be highly beneficial for radiotherapy treatment planning in terms of tumor delineation and characterization. To standardize tumor volume delineation, an automatic algorithm for the co-segmentation of head and neck (HN) tumors based on PET/MR data was developed. Ten HN patient datasets acquired in a combined PET/MR system were available for this study. The proposed algorithm uses both the anatomical T2-weighted MR and FDG-PET data. For both imaging modalities tumor probability maps were derived, assigning each voxel a probability of being cancerous based on its signal intensity. A combination of these maps was subsequently segmented using a threshold level set algorithm. To validate the method, tumor delineations from three radiation oncologists were available. Inter-observer variabilities and variabilities between the algorithm and each observer were quantified by means of the Dice similarity index and a distance measure. Inter-observer variabilities and variabilities between observers and algorithm were found to be comparable, suggesting that the proposed algorithm is adequate for PET/MR co-segmentation. Moreover, taking into account combined PET/MR data resulted in more consistent tumor delineations compared to MR information only.
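
    A simplified sketch of the co-segmentation idea: combine the per-voxel PET and MR tumor-probability maps, threshold the combined map, and score agreement with the Dice similarity index. The weighted-average combination rule and threshold are assumptions standing in for the paper's threshold level set algorithm:

        import numpy as np

        def co_segment(pet_prob, mr_prob, w_pet=0.5, threshold=0.5):
            """Combine per-voxel PET and MR tumor-probability maps and
            threshold the result (a stand-in for the level-set step)."""
            combined = w_pet * pet_prob + (1.0 - w_pet) * mr_prob
            return combined >= threshold

        def dice(a, b):
            """Dice similarity index between two binary masks."""
            inter = np.logical_and(a, b).sum()
            return 2.0 * inter / (a.sum() + b.sum())

        # toy volumes: two overlapping blobs of high probability
        pet = np.zeros((32, 32, 32)); pet[10:20, 10:20, 10:20] = 0.9
        mr = np.zeros((32, 32, 32)); mr[12:22, 10:20, 10:20] = 0.8
        seg = co_segment(pet, mr)
        print(dice(seg, mr >= 0.5))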

  17. Mapping AIS coverage for trusted surveillance

    NASA Astrophysics Data System (ADS)

    Lapinski, Anna-Liesa S.; Isenor, Anthony W.

    2010-10-01

    Automatic Identification System (AIS) is an unattended vessel reporting system developed for collision avoidance. Shipboard AIS equipment automatically broadcasts vessel positional data at regular intervals. The real-time position and identity data from a vessel are received by other vessels in the area, thereby assisting with local navigation. As well, AIS broadcasts are beneficial to those concerned with coastal and harbour security. Land-based AIS receiving stations can also collect the AIS broadcasts. However, reception at the land station is dependent upon the ship's position relative to the receiving station. For AIS to be used as a trusted surveillance system, the characteristics of the AIS coverage area in the vicinity of the station (or stations) should be understood. This paper presents some results of a method being investigated at DRDC Atlantic (Canada) to map the AIS coverage characteristics of a dynamic AIS reception network. The method is shown to clearly distinguish AIS reception edges from those edges caused by vessel traffic patterns. The method can also be used to identify temporal changes in the coverage area, an important characteristic for local maritime security surveillance activities. Future research using the coverage estimate technique is also proposed to support surveillance activities.

  18. Privacy-Aware Image Encryption Based on Logistic Map and Data Hiding

    NASA Astrophysics Data System (ADS)

    Sun, Jianglin; Liao, Xiaofeng; Chen, Xin; Guo, Shangwei

    The increasing need for image communication and storage has created a great necessity for securely transmitting and storing images over a network. Whereas traditional image encryption algorithms usually consider the security of the whole plain image, region of interest (ROI) encryption schemes, which are of great importance in practical applications, protect the privacy regions of plain images. Existing ROI encryption schemes usually adopt approximate techniques to detect the privacy region and measure the quality of encrypted images; however, their performance is usually inconsistent with the human visual system (HVS) and is sensitive to statistical attacks. In this paper, we propose a novel privacy-aware ROI image encryption (PRIE) scheme based on logistic mapping and data hiding. The proposed scheme utilizes salient object detection to automatically, adaptively and accurately detect the privacy region of a given plain image. After the private pixels have been encrypted using chaotic cryptography, their significant bits are embedded into the nonprivacy region of the plain image using data hiding. Extensive experiments are conducted to illustrate the consistency between our automatic ROI detection and the HVS. Our experimental results also demonstrate that the proposed scheme exhibits satisfactory security performance.
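
    As a rough illustration of the chaotic component, the sketch below derives a keystream from the logistic map and XORs it with the ROI pixels; the paper's full scheme (salient-object detection, data hiding of the significant bits) is considerably more elaborate, and all parameter values here are illustrative:

    ```python
    import numpy as np

    def logistic_keystream(x0: float, r: float, n: int, burn_in: int = 100) -> np.ndarray:
        """Generate n keystream bytes from the logistic map x <- r*x*(1-x)."""
        x = x0
        for _ in range(burn_in):            # discard the transient
            x = r * x * (1.0 - x)
        out = np.empty(n, dtype=np.uint8)
        for i in range(n):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) % 256     # quantize the chaotic state to a byte
        return out

    def encrypt_roi(pixels: np.ndarray, x0: float = 0.3141592, r: float = 3.99) -> np.ndarray:
        """XOR ROI pixels (uint8) with the keystream; decryption is identical."""
        ks = logistic_keystream(x0, r, pixels.size).reshape(pixels.shape)
        return pixels ^ ks
    ```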

  19. Semi-Automatic Terminology Generation for Information Extraction from German Chest X-Ray Reports.

    PubMed

    Krebs, Jonathan; Corovic, Hamo; Dietrich, Georg; Ertl, Max; Fette, Georg; Kaspar, Mathias; Krug, Markus; Stoerk, Stefan; Puppe, Frank

    2017-01-01

    Extraction of structured data from textual reports is an important subtask for building medical data warehouses for research and care. Many medical and most radiology reports are written in a telegraphic style, with a concatenation of noun phrases describing the presence or absence of findings. Therefore a lexico-syntactical approach is promising, in which key terms and their relations are recognized and mapped onto a predefined standard terminology (ontology). We propose a two-phase algorithm for terminology matching: in the first pass, a local terminology for recognition is derived as close as possible to the terms used in the radiology reports. In the second pass, the local terminology is mapped to a standard terminology. In this paper, we report on an algorithm for the first step of semi-automatic generation of the local terminology and evaluate the algorithm with radiology reports of chest X-ray examinations from Würzburg University Hospital. With an effort of about 20 hours of work by a radiologist as domain expert and 10 hours for meetings, a local terminology with about 250 attributes and various value patterns was built. In an evaluation with 100 randomly chosen reports, it achieved an F1-score of about 95% for information extraction.

  20. Learning a detection map for a network of unattended ground sensors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Hung D.; Koch, Mark William

    2010-03-01

    We have developed algorithms to automatically learn a detection map of a deployed sensor field for a virtual presence and extended defense (VPED) system without a priori knowledge of the local terrain. The VPED system is an unattended network of sensor pods, with each pod containing acoustic and seismic sensors. Each pod has the ability to detect and classify moving targets at a limited range. By using a network of pods we can form a virtual perimeter with each pod responsible for a certain section of the perimeter. The site's geography and soil conditions can affect the detection performance of the pods. Thus, a network in the field may not have the same performance as a network designed in the lab. To solve this problem we automatically estimate a network's detection performance as it is being installed at a site by a mobile deployment unit (MDU). The MDU wears a GPS unit, so the system not only knows when it can detect the MDU, but also the MDU's location. In this paper, we demonstrate how to handle anisotropic sensor configurations, geography, and soil conditions.
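
    The core idea, learning a gridded detection probability from the MDU's GPS track, can be sketched in a few lines (grid geometry, coordinate conventions and names are assumptions, not from the report):

    ```python
    import numpy as np

    def detection_map(mdu_xy: np.ndarray, detected: np.ndarray,
                      cell: float = 10.0, extent: float = 500.0) -> np.ndarray:
        """Per-cell detection probability from MDU walk data.

        mdu_xy   : (N, 2) GPS positions of the mobile deployment unit, in metres
        detected : (N,) booleans, True where the sensor network detected the MDU
        """
        n = int(extent // cell)
        hits = np.zeros((n, n))
        visits = np.zeros((n, n))
        ij = np.clip((mdu_xy // cell).astype(int), 0, n - 1)
        for (i, j), d in zip(ij, detected):
            visits[i, j] += 1
            hits[i, j] += d
        with np.errstate(invalid="ignore"):
            return hits / visits            # NaN where the MDU never walked
    ```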

  1. Rapid performance modeling and parameter regression of geodynamic models

    NASA Astrophysics Data System (ADS)

    Brown, J.; Duplyakin, D.

    2016-12-01

    Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and scientifically-relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian process regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
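
    A minimal sketch of such an uncertainty-driven active learning loop, using scikit-learn's Gaussian process regressor on an illustrative one-dimensional parameter (the paper's parameter space and acquisition criterion are richer):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)   # parameter grid

    def run_experiment(x: float) -> float:
        """Stand-in for running and timing the geodynamic model."""
        return np.sin(6.0 * x) + 0.1 * np.random.randn()

    X = candidates[[0, -1]]                     # seed with two experiments
    y = np.array([run_experiment(v[0]) for v in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

    for _ in range(10):
        gp.fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(sigma)]   # probe the least certain point
        X = np.vstack([X, x_next])
        y = np.append(y, run_experiment(x_next[0]))
    ```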

  2. GUMAP: A GUPIXWIN-compatible code for extracting regional spectra from nuclear microbeam list mode files

    NASA Astrophysics Data System (ADS)

    Russell, John L.; Campbell, John L.; Boyd, Nicholas I.; Dias, Johnny F.

    2018-02-01

    The newly developed GUMAP software creates element maps from OMDAQ list mode files, displays these maps individually or collectively, and facilitates on-screen definitions of specified regions from which a PIXE spectrum can be built. These include a free-hand region defined by moving the cursor. The regional charge is entered automatically into the spectrum file in a new GUPIXWIN-compatible format, enabling a GUPIXWIN analysis of the spectrum. The code defaults to the OMDAQ dead time treatment but also facilitates two other methods for dead time correction in sample regions with count rates different from the average.

  3. Mapping cumulative noise from shipping to inform marine spatial planning.

    PubMed

    Erbe, Christine; MacGillivray, Alexander; Williams, Rob

    2012-11-01

    Including ocean noise in marine spatial planning requires predictions of noise levels on large spatiotemporal scales. Based on a simple sound transmission model and ship track data (Automatic Identification System, AIS), cumulative underwater acoustic energy from shipping was mapped throughout 2008 in the west Canadian Exclusive Economic Zone, showing high noise levels in critical habitats for endangered resident killer whales, exceeding limits of "good conservation status" under the EU Marine Strategy Framework Directive. Error analysis proved that rough calculations of noise occurrence and propagation can form a basis for management processes, because spending resources on unnecessary detail is wasteful and delays remedial action.
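
    The computation reduces to an energy sum of per-ship received levels over a grid of map points; the spherical-spreading transmission loss below stands in for the paper's propagation model, and all names and units are illustrative:

    ```python
    import numpy as np

    def cumulative_noise_db(ship_xy: np.ndarray, source_levels_db: np.ndarray,
                            grid_xy: np.ndarray) -> np.ndarray:
        """Cumulative received level (dB) at each grid point from all ships,
        using the simple transmission loss TL = 20*log10(r)."""
        levels = np.zeros(len(grid_xy))
        for k, g in enumerate(grid_xy):
            r = np.linalg.norm(ship_xy - g, axis=1).clip(min=1.0)  # metres
            received = source_levels_db - 20.0 * np.log10(r)
            levels[k] = 10.0 * np.log10(np.sum(10.0 ** (received / 10.0)))
        return levels
    ```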

  4. Schlumberger soundings near Medicine Lake, California

    USGS Publications Warehouse

    Zohdy, A.A.R.; Bisdorf, R.J.

    1990-01-01

    The use of direct current resistivity soundings to explore the geothermal potential of the Medicine Lake area in northern California proved to be challenging because of high contact resistances and winding roads. Deep Schlumberger soundings were made by expanding current electrode spacings along the winding roads. Corrected sounding data were interpreted using an automatic interpretation method. Forty-two maps of interpreted resistivity were calculated for depths extending from 20 to 1000 m. Computer animation of these 42 maps revealed that: 1) certain subtle anomalies migrate laterally with depth and can be traced to their origin, 2) an extensive volume of low-resistivity material underlies the survey area, and 3) three areas (east of Bullseye Lake, southwest of Glass Mountain, and northwest of Medicine Lake) may be favorable geothermal targets. Six interpreted resistivity maps and three cross-sections illustrate the above findings. -from Authors

  5. Generalized vegetation map of north Merritt Island based on a simplified multispectral analysis

    NASA Technical Reports Server (NTRS)

    Poonai, P.; Floyd, W. J.; Rahmani, M. A.

    1977-01-01

    A simplified system for classification of multispectral data was used for making a generalized map of ground features of North Merritt Island. Subclassification of vegetation within broad categories yielded promising results, which led to a completely automatic method and to the production of satisfactory detailed maps. Changes in an area north of Happy Hammocks are evidently related to the water relations of the soil and are not associated with the last winter's freeze damage, which mainly affected the mangrove species, likely to reestablish themselves by natural processes. A supplementary investigation involving reflectance studies in the laboratory has shown that the reflectance by detached citrus leaves, at wavelengths between 400 and 700 nanometers, showed some variation over a period of seven days during which the leaves were kept in a laboratory atmosphere.

  6. Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.

    PubMed

    Rad, Kamiar Rahnama; Paninski, Liam

    2010-01-01

    Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in the hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural error bars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
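
    A minimal sketch of the idea, substituting a Gaussian likelihood for the point-process likelihoods treated in the paper: scikit-learn's fit() chooses the kernel hyperparameters (prior smoothness, noise level) by maximizing the marginal likelihood, and the posterior over a lattice yields the rate map with error bars (all data here are synthetic):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    pos = np.random.rand(500, 2)                  # visited (x, y) positions
    counts = np.random.poisson(5.0, size=500)     # spike counts per time bin

    kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=1.0)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(pos, counts)

    gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    rate, rate_sd = gp.predict(grid, return_std=True)   # mean map and error bars
    rate_map = rate.reshape(50, 50)
    ```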

  7. Pathview: an R/Bioconductor package for pathway-based data integration and visualization.

    PubMed

    Luo, Weijun; Brouwer, Cory

    2013-07-15

    Pathview is a novel tool set for pathway-based data integration and visualization. It maps and renders user data on relevant pathway graphs. Users only need to supply their data and specify the target pathway. Pathview automatically downloads the pathway graph data, parses the data file, maps and integrates user data onto the pathway and renders pathway graphs with the mapped data. Although built as a stand-alone program, Pathview may seamlessly integrate with pathway and functional analysis tools for large-scale and fully automated analysis pipelines. The package is freely available under the GPLv3 license through Bioconductor and R-Forge, at http://bioconductor.org/packages/release/bioc/html/pathview.html and at http://Pathview.r-forge.r-project.org/. Contact: luo_weijun@yahoo.com. Supplementary data are available at Bioinformatics online.

  8. Identification and Mapping of Tree Species in Urban Areas Using WORLDVIEW-2 Imagery

    NASA Astrophysics Data System (ADS)

    Mustafa, Y. T.; Habeeb, H. N.; Stein, A.; Sulaiman, F. Y.

    2015-10-01

    Monitoring and mapping of urban trees are essential to provide urban forestry authorities with timely and consistent information. Modern techniques increasingly facilitate these tasks, but require the development of semi-automatic tree detection and classification methods. In this article, we propose an approach to delineate and map the crowns of 15 tree species in the city of Duhok, Kurdistan Region of Iraq, using WorldView-2 (WV-2) imagery. A tree crown object is identified first and is subsequently delineated as an image object (IO) using vegetation indices and texture measurements. Next, three classification methods, Maximum Likelihood, Neural Network, and Support Vector Machine, were used to classify IOs using selected IO features. The best results are obtained with Support Vector Machine classification, which gives the best map of urban tree species in Duhok. The overall accuracy was between 60.93% and 88.92%, and the κ-coefficient was between 0.57 and 0.75. We conclude that fifteen tree species were identified and mapped at a satisfactory accuracy in the urban areas of this study.

  9. Evaluation of satellite remote sensing and automatic data techniques for characterization of wetlands and marshlands

    NASA Technical Reports Server (NTRS)

    Cartmill, R. H. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Using the 12S Digicol color additive viewer, an eight-color classification map has been produced of a portion of the study area. Channel 3 of the MSS produced the best map. Enlargements of the MSS data have been accomplished using the Data Analysis Station. The attached film recorder has three color guns which are capable of placing 2400 square elements across a 9 inch film. It has been found that by repeating each ERTS element 9 times and each scan line 13 times, a map at a scale of approximately 1:62,000 can be produced as a color negative film strip. This can be contact printed to produce a color map at that scale. As yet this procedure does not correct for image skew caused by rotation, which is believed to be the major source of distortion and blockiness in the image. However, the final product, which has not undergone any photographic enlargement, is superior to photographically enlarged maps of the same scale.

  10. SModelS v1.1 user manual: Improving simplified model constraints with efficiency maps

    NASA Astrophysics Data System (ADS)

    Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Magerl, Veronika; Sonneveld, Jory; Traub, Michael; Waltenberger, Wolfgang

    2018-06-01

    SModelS is an automated tool for the interpretation of simplified model results from the LHC. It decomposes models of new physics obeying a Z2 symmetry into simplified model components, and compares these against a large database of experimental results. The first release of SModelS, v1.0, used only cross section upper limit maps provided by the experimental collaborations. In this new release, v1.1, we extend the functionality of SModelS to efficiency maps. This increases the constraining power of the software, as efficiency maps allow contributions to the same signal region from different simplified models to be combined. Other new features of version 1.1 include likelihood and χ2 calculations, extended information on the topology coverage, an extended database of experimental results, as well as major speed upgrades for both the code and the database. We describe in detail the concepts and procedures used in SModelS v1.1, explaining in particular how upper limits and efficiency map results are dealt with in parallel. Detailed instructions for code usage are also provided.

  11. Mineral Potential in India Using Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) Data

    NASA Astrophysics Data System (ADS)

    Oommen, T.; Chatterjee, S.

    2017-12-01

    NASA and the Indian Space Research Organization (ISRO) are generating Earth surface feature data using the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) within the 380-2500 nm spectral range. This research focuses on the utilization of such data to better understand the mineral potential of India and to demonstrate the application of spectral data to rock type discrimination and mapping for mineral exploration using automated mapping techniques. The primary focus area of this research is the Hutti-Maski greenstone belt, located in Karnataka, India. The AVIRIS-NG data were integrated with field data (laboratory-scale compositional analysis, mineralogy, and a spectral library) to characterize minerals and rock types. An expert system was developed to produce mineral maps from AVIRIS-NG data automatically. The ground truth data for the study areas were obtained from the existing literature and collaborators in India. A Bayesian spectral unmixing algorithm was used on the AVIRIS-NG data for endmember selection. The classification maps of the minerals and rock types were developed using a support vector machine algorithm. The ground truth data were used to verify the mineral maps.

  12. Estimation of urban surface water at subpixel level from neighborhood pixels using multispectral remote sensing image (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie

    2016-10-01

    Water bodies are fundamental elements in urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas remains challenging, because urban water bodies are mostly small and spectral confusion is widespread between water and the complex features of the urban environment. The water index is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed in analyzing urban environments at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) developing an automatic technique for extracting land-water mixed pixels by water index; (2) deriving the most representative endmembers of water and land by utilizing neighboring water pixels and an adaptive iterative optimal neighboring land pixel, respectively; (3) applying a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatter plot smoothing is first applied to the original histogram curve of the water index (WI) image. The Otsu threshold is then used as the starting point for selecting land-water pixels, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis is then applied to the mixed land-water pixels for water fraction estimation at the subpixel level. Under the assumption that the endmember signature of a target pixel should be similar to that of adjacent pixels due to spatial dependence, the water and land endmembers are determined from neighboring pure water or pure land pixels within a given distance. To obtain the most representative endmembers in SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels. According to the spectral similarity within a spatially adjacent region, the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is determined by averaging the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) is applied to urban areas for reliability evaluation using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel and subpixel levels were chosen. Results indicate that the water maps generated by the proposed method correspond closely with the reference water maps at subpixel precision, and that WISMA achieved the best performance in water mapping under a comprehensive analysis of different accuracy evaluation indexes (RMSE and SE).
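
    Two of the building blocks are easy to sketch: a normalized-difference water index at the pixel level, and the closed-form two-endmember linear unmixing used at the subpixel level (band choice and names are illustrative; the paper's WI and endmember selection are more elaborate):

    ```python
    import numpy as np

    def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
        """A common normalized-difference water index (band choice illustrative)."""
        return (green - nir) / (green + nir + 1e-12)

    def water_fraction(pixel: np.ndarray, land_em: np.ndarray,
                       water_em: np.ndarray) -> float:
        """Two-endmember linear unmixing, pixel ≈ f*water_em + (1-f)*land_em;
        least-squares estimate of the water fraction f, clipped to [0, 1]."""
        d = water_em - land_em
        f = float(np.dot(pixel - land_em, d) / np.dot(d, d))
        return min(max(f, 0.0), 1.0)
    ```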

  13. Semi-automatic handling of meteorological ground measurements using WeatherProg: prospects and practical implications

    NASA Astrophysics Data System (ADS)

    Langella, Giuliano; Basile, Angelo; Bonfante, Antonello; De Mascellis, Roberto; Manna, Piero; Terribile, Fabio

    2016-04-01

    WeatherProg is a computer program for the semi-automatic handling of data measured at ground stations within a climatic network. The program performs a set of tasks ranging from gathering raw point-based sensor measurements to the production of digital climatic maps. Originally the program was developed as the baseline asynchronous engine for weather records management within the SOILCONSWEB Project (LIFE08 ENV/IT/000408), in which daily and hourly data were used to run water balance models of the soil-plant-atmosphere continuum or pest simulation models. WeatherProg can be configured to automatically perform the following main operations: 1) data retrieval; 2) data decoding and ingestion into a database (e.g. SQL based); 3) data checking to recognize missing and anomalous values (using a set of differently combined checks, including logical, climatological, spatial, temporal and persistence checks); 4) infilling of data flagged as missing or anomalous (deterministic or statistical methods); 5) spatial interpolation based on alternative/comparative methods such as inverse distance weighting, iterative regression kriging, and weighted least squares regression based on physiography, using an approach similar to PRISM; 6) data ingestion into a geodatabase (e.g. PostgreSQL+PostGIS or rasdaman). There is an increasing demand for digital climatic maps, both for research and development (there is a gap between the many scientific modelling approaches that require digital climate maps and the available gauged measurements) and for practical applications (e.g. the need to improve the management of weather records, which in turn improves the support provided to farmers). The demand is particularly burdensome considering the requirement to handle climatic data at the daily time step (e.g. in soil hydrological modelling) or even at the hourly time step (e.g. risk modelling in phytopathology). The key advantage of WeatherProg is its ability to perform all the required operations and calculations automatically, except for human interaction on specific issues (such as the decision whether a measurement is an anomaly or not, according to the detected temporal and spatial variations with respect to contiguous points). The program runs from the command line and is well suited to cascade modelling in different contexts belonging to agriculture, phytopathology and the environment. In particular, it can be a powerful tool to set up cutting-edge regional web services based on weather information. Indeed, it can support territorial agencies in charge of meteorological and phytopathological bulletins.
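
    As an example of interpolation step 5, a minimal inverse distance weighting sketch (the program's regression-kriging and PRISM-like options are considerably more involved):

    ```python
    import numpy as np

    def idw(station_xy: np.ndarray, values: np.ndarray,
            query_xy: np.ndarray, power: float = 2.0) -> np.ndarray:
        """Inverse-distance-weighted interpolation of station measurements."""
        out = np.empty(len(query_xy))
        for k, q in enumerate(query_xy):
            d = np.linalg.norm(station_xy - q, axis=1)
            if np.any(d < 1e-9):            # query point sits on a station
                out[k] = values[np.argmin(d)]
                continue
            w = 1.0 / d ** power
            out[k] = np.sum(w * values) / np.sum(w)
        return out
    ```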

  14. Venus Quadrangle Geological Mapping: Use of Geoscience Data Visualization Systems in Mapping and Training

    NASA Technical Reports Server (NTRS)

    Head, James W.; Huffman, J. N.; Forsberg, A. S.; Hurwitz, D. M.; Basilevsky, A. T.; Ivanov, M. A.; Dickson, J. L.; Kumar, P. Senthil

    2008-01-01

    We are currently investigating new technological developments in computer visualization and analysis in order to assess their importance and utility in planetary geological analysis and mapping [1,2]. Last year we reported on the range of technologies available and on our application of these to various problems in planetary mapping [3]. In this contribution we focus on the application of these techniques and tools to Venus geological mapping at the 1:5M quadrangle scale. In our current Venus mapping projects we have utilized and tested the various platforms to understand their capabilities and assess their usefulness in defining units, establishing stratigraphic relationships, mapping structures, reaching consensus on interpretations and producing map products. We are specifically assessing how computer visualization display qualities (e.g., level of immersion, stereoscopic vs. monoscopic viewing, field of view, large vs. small display size, etc.) influence performance on scientific analysis and geological mapping. We have been exploring four different environments: 1) conventional desktops (DT), 2) semi-immersive Fishtank VR (FT) (i.e., a conventional desktop with head-tracked stereo and 6DOF input), 3) tiled wall displays (TW), and 4) fully immersive virtual reality (IVR) (e.g., "Cave Automatic Virtual Environment," or Cave system). Formal studies demonstrate that fully immersive Cave environments are superior to desktop systems for many tasks [e.g., 4].

  15. Fast and Accurate Construction of Ultra-Dense Consensus Genetic Maps Using Evolution Strategy Optimization

    PubMed Central

    Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham

    2015-01-01

    Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and in turn enabling larger mapping datasets to be analyzed using global optimization criteria instead of local ones. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and verifying it using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high quality, ultra-dense consensus maps with many thousands of markers per genome. This algorithm utilizes "potentially good orders" in the initial solution and in the new mutation procedures that generate trial solutions, enabling a consensus order to be obtained in reasonable time. The developed algorithm, tested on a wide range of simulated data and real world data (Arabidopsis), outperformed two tested state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943

  16. Farm-specific economic value of automatic lameness detection systems in dairy cattle: From concepts to operational simulations.

    PubMed

    Van De Gucht, Tim; Saeys, Wouter; Van Meensel, Jef; Van Nuffel, Annelies; Vangeyte, Jurgen; Lauwers, Ludwig

    2018-01-01

    Although prototypes of automatic lameness detection systems for dairy cattle exist, information about their economic value is lacking. In this paper, a conceptual and operational framework for simulating the farm-specific economic value of automatic lameness detection systems was developed and tested on 4 system types: walkover pressure plates, walkover pressure mats, camera systems, and accelerometers. The conceptual framework maps the essential factors that determine economic value (e.g., lameness prevalence, incidence and duration, lameness costs, detection performance, and their relationships). The operational simulation model links treatment costs and avoided losses with detection results and farm-specific information, such as herd size and lameness status. Results show that detection performance, herd size, discount rate, and system lifespan have a large influence on economic value. In addition, lameness prevalence influences the economic value, stressing the importance of an adequate prior estimation of the on-farm prevalence. The simulations provide first estimates of the upper limits on purchase prices of automatic detection systems. The framework also allowed identification of knowledge gaps obstructing more accurate economic value estimation. These include insights into cost reductions due to early detection and treatment, and into links between specific lameness causes and their related losses. Because this model provides insight into the trade-offs between automatic detection systems' performance and investment price, it is a valuable tool to guide future research and developments. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
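
    The economic core reduces to a discounted cash-flow comparison; a toy sketch under assumed numbers (none of them from the paper):

    ```python
    def npv_of_system(annual_net_benefit: float, discount_rate: float,
                      lifespan_years: int, price: float) -> float:
        """Net present value of a detection system: discounted stream of
        (avoided lameness losses - treatment and running costs) minus price.
        The break-even purchase price is the NPV of the benefit stream."""
        stream = sum(annual_net_benefit / (1.0 + discount_rate) ** t
                     for t in range(1, lifespan_years + 1))
        return stream - price

    # e.g. 1500 EUR/yr net benefit, 5% discount rate, 10-year lifespan
    print(npv_of_system(1500.0, 0.05, 10, price=8000.0))
    ```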

  17. Multiseasonal Tree Crown Structure Mapping with Point Clouds from OTS Quadrocopter Systems

    NASA Astrophysics Data System (ADS)

    Hese, S.; Behrendt, F.

    2017-08-01

    OTS (off-the-shelf) quadrocopter systems provide a cost-effective (below 2000 Euro), flexible and mobile platform for high resolution point cloud mapping. Various studies have shown the full potential of these small and flexible platforms. Especially in very tight and complex 3D environments, the automatic obstacle avoidance, low copter weight, long flight times and precise maneuvering are important advantages of these small OTS systems in comparison with larger octocopter systems. This study examines the potential of the DJI Phantom 4 Pro series and the Phantom 3A series for within-stand and forest tree crown 3D point cloud mapping, using both within-stand oblique imaging at different altitude levels and data captured from a nadir perspective. On a test site in Brandenburg/Germany, a beech crown was selected and measured at 3 different altitude levels in Point Of Interest (POI) mode with oblique data capture, and one nadir mosaic was derived with 85/85% overlap using the Drone Deploy automatic mapping software. Three flight campaigns were performed: one in September 2016 (leaf-on), one in March 2017 (leaf-off) and one in May 2017 (leaf-on), to derive point clouds from different crown structure and phenological situations, covering the leaf-on and leaf-off status of the tree crown. After height correction, the point clouds were used with GPS georeferencing to calculate voxel-based densities on 50 × 10 × 10 cm voxel definitions, using a topological network of chessboard image objects in 0.5 m height steps in an object-based image processing environment. Comparison between leaf-off and leaf-on status was done on volume pixel definitions, comparing the attributed point densities per volume and plotting the resulting values as a function of distance to the crown center. In the leaf-off status, SFM (structure from motion) algorithms clearly identified the central stem and also secondary branch systems. While penetration into the crown structure is limited in the leaf-on status (the point cloud is mainly a description of the interpolated crown surface), the visibility of the internal crown structure in the leaf-off status allows the internal tree structure to be mapped down to the secondary branch level. When combined, the leaf-on and leaf-off point clouds generate a comprehensive tree crown structure description that allows low cost and detailed 3D crown structure mapping, and potentially precise biomass mapping and/or internal structural differentiation of deciduous tree species types. Compared to TLS (Terrestrial Laser Scanning) based measurements, the costs are negligible, in the range of 1500-2500 €. This makes the approach suitable for low cost but fine scale in-situ applications and/or projects where TLS measurements cannot be obtained, and for less dense forest stands where POI flights can be performed. This study used the in-copter GPS measurements for georeferencing; better absolute georeferencing results would be obtained with DGPS reference points. The study nevertheless clearly demonstrates the potential of very low cost OTS copter systems, and of the image-attributed GPS measurements of the copter, for the automatic calculation of complex 3D point clouds in a multitemporal tree crown mapping context.
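
    The voxel-density step can be sketched directly: bin the SfM points into the stated cells and count points per cell (coordinate conventions and names are assumptions):

    ```python
    import numpy as np

    def voxel_densities(points: np.ndarray, origin: np.ndarray,
                        size=(0.5, 0.1, 0.1)) -> dict:
        """Count points per voxel; `size` is (height step, y, x) in metres,
        matching the 0.5 m x 10 cm x 10 cm voxels used in the study."""
        idx = np.floor((points - origin) / np.asarray(size)).astype(int)
        keys, counts = np.unique(idx, axis=0, return_counts=True)
        return {tuple(k): int(c) for k, c in zip(keys, counts)}
    ```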

  18. Comparison of Absolute Apparent Diffusion Coefficient (ADC) Values in ADC Maps Generated Across Different Postprocessing Software: Reproducibility in Endometrial Carcinoma.

    PubMed

    Ghosh, Adarsh; Singh, Tulika; Singla, Veenu; Bagga, Rashmi; Khandelwal, Niranjan

    2017-12-01

    Apparent diffusion coefficient (ADC) maps are usually generated by the built-in software provided by MRI scanner vendors; however, various open-source postprocessing software packages are available for image manipulation and parametric map generation. The purpose of this study is to establish the reproducibility of absolute ADC values obtained using different postprocessing software programs. DW images with three b values were acquired with a 1.5-T MRI scanner, and the trace images were obtained. ADC maps were automatically generated by the in-line software provided by the vendor during image generation and were also separately generated with postprocessing software. These ADC maps were compared on the basis of ROIs using a paired t test, Bland-Altman plots, mountain plots, and Passing-Bablok regression plots. There was a statistically significant difference in the mean ADC values obtained from the different postprocessing software programs when the same baseline trace DW images were used for ADC map generation. For using ADC values as a quantitative cutoff for the histologic characterization of tissues, standardization of the postprocessing algorithm across processing software packages is essential, especially in view of the implementation of vendor-neutral archiving.
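
    For reference, the ADC computation itself is a per-voxel log-linear fit of signal against b value; a minimal sketch below (vendor and open-source implementations differ in details such as noise floors and clipping, which is precisely what the study probes):

    ```python
    import numpy as np

    def adc_map(dwi: np.ndarray, b_values) -> np.ndarray:
        """Least-squares fit of ln(S) = ln(S0) - b*ADC per voxel.

        dwi: (n_b, ny, nx) trace-weighted images; b_values: length n_b."""
        b = np.asarray(b_values, dtype=float)
        logs = np.log(np.clip(dwi, 1e-6, None)).reshape(len(b), -1)
        slope = np.polyfit(b, logs, 1)[0]     # one slope per voxel column
        return (-slope).reshape(dwi.shape[1:])
    ```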

  19. Automatic learning rate adjustment for self-supervising autonomous robot control

    NASA Technical Reports Server (NTRS)

    Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.

    1992-01-01

    Described is an application in which an Artificial Neural Network (ANN) controls the positioning of a robot arm with five degrees of freedom by using visual feedback provided by two cameras. This application and the specific ANN model, local linear maps, are based on the work of Ritter, Martinetz, and Schulten. We extended their approach by generating a filtered, average positioning error from the continuous camera feedback and by coupling the learning rate to this error. When the network learns to position the arm, the positioning error decreases and so does the learning rate, until the system stabilizes at a minimum error and learning rate. This eliminates the need for a predetermined cooling schedule. The automatic cooling procedure results in a closed loop control with no distinction between a learning phase and a production phase. If the positioning error suddenly starts to increase due to an internal failure, such as a broken joint, or an environmental change, such as a camera moving, the learning rate increases accordingly. Thus, learning is automatically activated and the network adapts to the new condition, after which the error decreases again and learning is 'shut off'. The automatic cooling is therefore a prerequisite for the autonomy and the fault tolerance of the system.
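
    The error-coupled learning rate amounts to a few lines: exponentially filter the positioning error and scale the rate with its trend (gains and bounds below are illustrative, not from the paper):

    ```python
    def filter_error(avg: float, new_error: float, alpha: float = 0.05) -> float:
        """Exponentially averaged positioning error from the camera feedback."""
        return (1.0 - alpha) * avg + alpha * new_error

    def adapt_learning_rate(lr: float, err: float, prev_err: float,
                            gain: float = 0.5, lr_min: float = 1e-4,
                            lr_max: float = 1.0) -> float:
        """Grow the rate when the filtered error rises (broken joint, moved
        camera) and shrink it as the error falls, so learning switches on
        and off by itself."""
        lr *= 1.0 + gain * (err - prev_err) / max(prev_err, 1e-9)
        return min(max(lr, lr_min), lr_max)
    ```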

  20. FOCIH: Form-Based Ontology Creation and Information Harvesting

    NASA Astrophysics Data System (ADS)

    Tao, Cui; Embley, David W.; Liddle, Stephen W.

    Creating an ontology and populating it with data are both labor-intensive tasks requiring a high degree of expertise. Thus, scaling ontology creation and population to the size of the web in an effort to create a web of data—which some see as Web 3.0—is prohibitive. Can we find ways to streamline these tasks and lower the barrier enough to enable Web 3.0? Toward this end we offer a form-based approach to ontology creation that provides a way to create Web 3.0 ontologies without the need for specialized training. And we offer a way to semi-automatically harvest data from the current web of pages for a Web 3.0 ontology. In addition to harvesting information with respect to an ontology, the approach also annotates web pages and links facts in web pages to ontological concepts, resulting in a web of data superimposed over the web of pages. Experience with our prototype system shows that mappings between conceptual-model-based ontologies and forms are sufficient for creating the kind of ontologies needed for Web 3.0, and experiments with our prototype system show that automatic harvesting, automatic annotation, and automatic superimposition of a web of data over a web of pages work well.

  1. Space-Based Sensorweb Monitoring of Wildfires in Thailand

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Doubleday, Joshua; Mclaren, David; Davies, Ashley; Tran, Daniel; Tanpipat, Veerachai; Akaakara, Siri; Ratanasuwan, Anuchit; Mandl, Daniel

    2011-01-01

    We describe efforts to apply sensorweb technologies to the monitoring of forest fires in Thailand. In this approach, satellite data and ground reports are assimilated to assess the current state of the forest system in terms of forest fire risk, active fires, and the likely progression of fires and smoke plumes. This current and projected assessment can then be used to actively direct sensors and assets to best acquire further information. This process operates continually, with new data updating models of fire activity, leading to further sensing and updating of models. As fire activity is tracked, products such as active fire maps, burn scar severity maps, and alerts are automatically delivered to relevant parties. We describe the current state of the Thailand Fire Sensorweb, which utilizes the MODIS-based FIRMS system to track active fires and trigger Earth Observing-1 / Advanced Land Imager to acquire imagery and produce active fire maps, burn scar severity maps, and alerts. We describe ongoing work to integrate additional sensor sources and generate additional products.

  2. SMITHERS: An object-oriented modular mapping methodology for MCNP-based neutronic–thermal hydraulic multiphysics

    DOE PAGES

    Richard, Joshua; Galloway, Jack; Fensin, Michael; ...

    2015-04-04

    A novel object-oriented modular mapping methodology for externally coupled neutronics–thermal hydraulics multiphysics simulations was developed. The Simulator using MCNP with Integrated Thermal-Hydraulics for Exploratory Reactor Studies (SMITHERS) code performs on-the-fly mapping of material-wise power distribution tallies implemented by MCNP-based neutron transport/depletion solvers for use in estimating coolant temperature and density distributions with a separate thermal-hydraulic solver. The key development of SMITHERS is that it reconstructs the hierarchical geometry structure of the material-wise power generation tallies from the depletion solver automatically, with only a modicum of additional information required from the user. In addition, it performs the basis mapping from the combinatorial geometry of the depletion solver to the required geometry of the thermal-hydraulic solver in a generalizable manner, such that it can transparently accommodate varying levels of thermal-hydraulic solver geometric fidelity, from the nodal geometry of multi-channel analysis solvers to the pin-cell level of discretization for sub-channel analysis solvers.

  3. Contour-Driven Atlas-Based Segmentation

    PubMed Central

    Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina

    2016-01-01

    We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202

  4. Online, automatic, ionospheric maps: IRI-PLAS-MAP

    NASA Astrophysics Data System (ADS)

    Arikan, F.; Sezen, U.; Gulyaeva, T. L.; Cilibas, O.

    2015-04-01

    Global and regional behavior of the ionosphere is an important component of space weather. The peak height and critical frequency of the ionospheric layer of maximum ionization, namely hmF2 and foF2, and the total number of electrons on a ray path, the Total Electron Content (TEC), are the most investigated and monitored values of the ionosphere for capturing and observing ionospheric variability. Typically, ionospheric models such as the International Reference Ionosphere (IRI) can provide the electron density profile, the critical parameters of ionospheric layers and the ionospheric electron content for a given location, date and time. Yet the IRI model is limited to the foF2 STORM option in reflecting the dynamics of ionospheric/plasmaspheric/geomagnetic storms. Global Ionospheric Maps (GIM) of the global TEC distribution, estimated from ground-based GPS stations, are provided by IGS analysis centers and can capture the actual dynamics of the ionosphere and plasmasphere, but this service is not available for other ionospheric observables. In this study, a unique and original space weather service, IRI-PLAS-MAP, is introduced at http://www.ionolab.org

  5. Application of satellite data and LARS's data processing techniques to mapping vegetation of the Dismal Swamp. M.S. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Messmore, J. A.

    1976-01-01

    The feasibility of using digital satellite imagery and automatic data processing techniques as a means of mapping swamp forest vegetation was considered, using multispectral scanner data acquired by the LANDSAT-1 satellite. The site for this investigation was the Dismal Swamp, a 210,000 acre swamp forest located south of Suffolk, Va. on the Virginia-North Carolina border. Two basic classification strategies were employed. The initial classification utilized unsupervised techniques which produced a map of the swamp indicating the distribution of thirteen forest spectral classes. These classes were later combined into three informational categories: Atlantic white cedar (Chamaecyparis thyoides), Loblolly pine (Pinus taeda), and deciduous forest. The subsequent classification employed supervised techniques which mapped Atlantic white cedar, Loblolly pine, deciduous forest, water and agriculture within the study site. A classification accuracy of 82.5% was produced by unsupervised techniques compared with 89% accuracy using supervised techniques.

  6. High-resolution imaging of SNR IC443 and W44 with the Sardinia Radio Telescope

    NASA Astrophysics Data System (ADS)

    Egron, E.; Pellizzoni, A.; Iacolina, M. N.; Loru, S.; Marongiu, M.; Righini, S.; Cardillo, M.; Giuliani, A.; Mulas, S.; Murtas, G.; Simeone, D.

    2017-02-01

    We present single-dish imaging of the well-known Supernova Remnants (SNRs) IC443 and W44 at 1.5 GHz and 7 GHz with the recently commissioned 64-m diameter Sardinia Radio Telescope (SRT). Our images were obtained through on-the-fly mapping techniques, providing antenna beam oversampling, automatic baseline subtraction and radio-frequency interference removal. It results in high-quality maps of the SNRs at 7 GHz, which are usually lacking and not easily achievable through interferometry at this frequency due to the very large SNR structures. SRT continuum maps of our targets are consistent with VLA maps carried out at lower frequencies (at 324 MHz and 1.4 GHz), providing a view of the complex filamentary morphology. New estimates of the total flux density are given within 3% and 5% error at 1.5 GHz and 7 GHz respectively, in addition to flux measurements in different regions of the SNRs.

  7. Developing ShakeCast statistical fragility analysis framework for rapid post-earthquake assessment

    USGS Publications Warehouse

    Lin, K.-W.; Wald, D.J.

    2012-01-01

    When an earthquake occurs, the U. S. Geological Survey (USGS) ShakeMap estimates the extent of potentially damaging shaking and provides overall information regarding the affected areas. The USGS ShakeCast system is a freely-available, post-earthquake situational awareness application that automatically retrieves earthquake shaking data from ShakeMap, compares intensity measures against users’ facilities, sends notifications of potential damage to responsible parties, and generates facility damage assessment maps and other web-based products for emergency managers and responders. We describe notable improvements of the ShakeMap and the ShakeCast applications. We present a design for comprehensive fragility implementation, integrating spatially-varying ground-motion uncertainties into fragility curves for ShakeCast operations. For each facility, an overall inspection priority (or damage assessment) is assigned on the basis of combined component-based fragility curves using pre-defined logic. While regular ShakeCast users receive overall inspection priority designations for each facility, engineers can access the full fragility analyses for further evaluation.
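
    A minimal sketch of component-based fragility logic: lognormal fragility curves per component, combined in series so the inspection priority is the probability that at least one component is damaged (medians and dispersions are illustrative, not ShakeCast's):

    ```python
    from math import erf, log, sqrt

    def fragility(im: float, median: float, beta: float) -> float:
        """Lognormal fragility curve: P(damage | intensity measure im)."""
        return 0.5 * (1.0 + erf(log(im / median) / (beta * sqrt(2.0))))

    def inspection_priority(im: float, components) -> float:
        """P(at least one component damaged); components = [(median, beta), ...]."""
        p_none = 1.0
        for median, beta in components:
            p_none *= 1.0 - fragility(im, median, beta)
        return 1.0 - p_none

    # e.g. a facility with two components under 0.4 g shaking (values illustrative)
    print(inspection_priority(0.4, [(0.6, 0.5), (0.9, 0.4)]))
    ```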

  8. Comparison of manual and automatic techniques for substriatal segmentation in 11C-raclopride high-resolution PET studies.

    PubMed

    Johansson, Jarkko; Alakurtti, Kati; Joutsa, Juho; Tohka, Jussi; Ruotsalainen, Ulla; Rinne, Juha O

    2016-10-01

    The striatum is the primary target in regional 11C-raclopride PET studies, and despite its small volume, it contains several functional and anatomical subregions. The outcome of a quantitative dopamine receptor study using 11C-raclopride PET depends heavily on the quality of the region-of-interest (ROI) definition of these subregions. The aim of this study was to evaluate subregional analysis techniques because new approaches have emerged but have not yet been compared directly. In this paper, we compared manual ROI delineation with several automatic methods. The automatic methods used either direct clustering of the PET image or individualization of chosen brain atlases on the basis of MRI or PET image normalization. State-of-the-art normalization methods and atlases were applied, including those provided in the FreeSurfer, Statistical Parametric Mapping 8, and FSL software packages. Evaluation of the automatic methods was based on voxel-wise congruity with the manual delineations and on the test-retest variability and reliability of the outcome measures, using data from seven healthy male participants who were scanned twice with 11C-raclopride PET on the same day. The results show that both manual and automatic methods can be used to define striatal subregions. Although most of the methods performed well with respect to the test-retest variability and reliability of binding potential, the smallest average test-retest variability and SEM were obtained using a connectivity-based atlas and PET normalization (test-retest variability = 4.5%, SEM = 0.17). The current state-of-the-art automatic ROI methods can be considered good alternatives to subjective and laborious manual segmentation in 11C-raclopride PET studies.
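
    The outcome metrics are simple to state; for paired same-day measurements a and b of binding potential across subjects, one common formulation is:

    ```python
    import numpy as np

    def test_retest_variability(a: np.ndarray, b: np.ndarray) -> float:
        """Mean absolute test-retest difference relative to the mean, in percent."""
        return float(np.mean(np.abs(a - b) / ((a + b) / 2.0)) * 100.0)

    def sem(a: np.ndarray, b: np.ndarray) -> float:
        """Standard error of measurement from paired scans."""
        return float(np.std(a - b, ddof=1) / np.sqrt(2.0))
    ```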

  9. Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data

    NASA Astrophysics Data System (ADS)

    Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.

    2015-04-01

    In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic corrections module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for the generation of high level products. Various parts of the chain were also implemented for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for a full-frame sensor currently under development by SPACE-SI is planned. This paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy also on very high resolution optical satellite data.

  10. NeuroVault.org: A repository for sharing unthresholded statistical maps, parcellations, and atlases of the human brain.

    PubMed

    Gorgolewski, Krzysztof J; Varoquaux, Gael; Rivera, Gabriel; Schwartz, Yannick; Sochat, Vanessa V; Ghosh, Satrajit S; Maumet, Camille; Nichols, Thomas E; Poline, Jean-Baptiste; Yarkoni, Tal; Margulies, Daniel S; Poldrack, Russell A

    2016-01-01

    NeuroVault.org is dedicated to storing outputs of analyses in the form of statistical maps, parcellations and atlases, a unique strategy that contrasts with most neuroimaging repositories that store raw acquisition data or stereotaxic coordinates. Such maps are indispensable for performing meta-analyses, validating novel methodology, and deciding on precise outlines for regions of interest (ROIs). NeuroVault is open to maps derived from both healthy and clinical populations, as well as from various imaging modalities (sMRI, fMRI, EEG, MEG, PET, etc.). The repository uses modern web technologies such as interactive web-based visualization, cognitive decoding, and comparison with other maps to provide researchers with efficient, intuitive tools to improve the understanding of their results. Each dataset and map is assigned a permanent Universal Resource Locator (URL), and all of the data is accessible through a REST Application Programming Interface (API). Additionally, the repository supports the NIDM-Results standard and has the ability to parse outputs from popular FSL and SPM software packages to automatically extract relevant metadata. This ease of use, modern web-integration, and pioneering functionality holds promise to improve the workflow for making inferences about and sharing whole-brain statistical maps. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. A Fast Approximate Algorithm for Mapping Long Reads to Large Reference Databases.

    PubMed

    Jain, Chirag; Dilthey, Alexander; Koren, Sergey; Aluru, Srinivas; Phillippy, Adam M

    2018-04-30

    Emerging single-molecule sequencing technologies from Pacific Biosciences and Oxford Nanopore have revived interest in long-read mapping algorithms. Alignment-based seed-and-extend methods demonstrate good accuracy, but face limited scalability, while faster alignment-free methods typically trade decreased precision for efficiency. In this article, we combine a fast approximate read mapping algorithm based on minimizers with a novel MinHash identity estimation technique to achieve both scalability and precision. In contrast to prior methods, we develop a mathematical framework that defines the types of mapping targets we uncover, establish probabilistic estimates of p-value and sensitivity, and demonstrate tolerance for alignment error rates up to 20%. With this framework, our algorithm automatically adapts to different minimum length and identity requirements and provides both positional and identity estimates for each mapping reported. For mapping human PacBio reads to the hg38 reference, our method is 290 × faster than Burrows-Wheeler Aligner-MEM with a lower memory footprint and recall rate of 96%. We further demonstrate the scalability of our method by mapping noisy PacBio reads (each ≥5 kbp in length) to the complete NCBI RefSeq database containing 838 Gbp of sequence and >60,000 genomes.
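
    The two core ingredients, a minimizer sketch of each sequence and a MinHash-based identity estimate, can be illustrated compactly (hash function and parameters are illustrative; the published method is far more careful about windowing and statistics):

    ```python
    from math import log

    def minimizers(seq: str, k: int = 15, w: int = 10) -> set:
        """(w,k)-minimizer sketch: the smallest k-mer hash in every window
        of w consecutive k-mers, stored with its position."""
        hashes = [hash(seq[i:i + k]) for i in range(len(seq) - k + 1)]
        sketch = set()
        for i in range(len(hashes) - w + 1):
            j = min(range(i, i + w), key=hashes.__getitem__)
            sketch.add((hashes[j], j))
        return sketch

    def identity_estimate(jaccard: float, k: int = 15) -> float:
        """Mash-style identity estimate from a MinHash Jaccard estimate j:
        identity ≈ 1 + (1/k) * ln(2j / (1 + j))."""
        return 1.0 + log(2.0 * jaccard / (1.0 + jaccard)) / k
    ```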

  12. Summer Institute in Biomedical Engineering, 1973

    NASA Technical Reports Server (NTRS)

    Deloatch, E. M.; Coble, A. J.

    1974-01-01

    Bioengineering of medical equipment is detailed. Equipment described includes: an environmental control system for a surgical suite; surface potential mapping for an electrode system; the use of speech-modulated-white-noise to differentiate hearers and feelers among the profoundly deaf; the design of an automatic weight scale for an isolette; and an internal tibial torsion correction study. Graphs and charts are included with design specifications of this equipment.

  13. Real-time monitoring of clinical processes using complex event processing and transition systems.

    PubMed

    Meinecke, Sebastian

    2014-01-01

    Dependencies between tasks in clinical processes are often complex and error-prone. Our aim is to describe a new approach for the automatic derivation of clinical events identified via the behaviour of IT systems using Complex Event Processing. Furthermore, we map these events onto transition systems to monitor crucial clinical processes in real time, preventing and detecting erroneous situations.

  14. The application of automatic recognition techniques in the Apollo 9 SO-65 experiment

    NASA Technical Reports Server (NTRS)

    Macdonald, R. B.

    1970-01-01

    A synoptic feature analysis of Apollo 9 remote earth surface photographs is reported, using the methods of statistical pattern recognition to classify density points and clusterings in digitally converted optical data. A computer derived geological map of a geological test site indicates that geological features of the range are separable, but that specific rock types are not identifiable.

  15. ADP of multispectral scanner data for land use mapping

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M.

    1971-01-01

    The advantages and disadvantages of various remote sensing instrumentation and analysis techniques are reviewed. The use of multispectral scanner data and the automatic data processing techniques are considered. A computer-aided analysis system for remote sensor data is described with emphasis on the image display, statistics processor, wavelength band selection, classification processor, and results display. Advanced techniques in using spectral and temporal data are also considered.

  16. Automatic Modulation Classification of Common Communication and Pulse Compression Radar Waveforms using Cyclic Features

    DTIC Science & Technology

    2013-03-01

    intermediate frequency; LFM: linear frequency modulation; MAP: maximum a posteriori; MATLAB®: matrix laboratory; ML: maximum likelihood; OFDM: orthogonal frequency... spectrum, frequency hopping, and orthogonal frequency division multiplexing (OFDM) modulations. Feature analysis would be a good research thrust to... determine feature relevance and decide if removing any features improves performance. Also, extending the system for simulations using a MIMO receiver or

  17. Medical technology advances from space research.

    NASA Technical Reports Server (NTRS)

    Pool, S. L.

    1971-01-01

    NASA-sponsored medical R & D programs for space applications are reviewed with particular attention to the benefits of these programs to earthbound medical services and to the general public. Notable among the results of these NASA programs is an integrated medical laboratory equipped with numerous advanced systems such as digital biotelemetry and automatic visual field mapping systems, sponge electrode caps for electroencephalograms, and sophisticated respiratory analysis equipment.

  18. Medical technology advances from space research

    NASA Technical Reports Server (NTRS)

    Pool, S. L.

    1972-01-01

    Details of medical research and development programs, particularly an integrated medical laboratory, as derived from space technology are given. The program covers digital biotelemetry systems, automatic visual field mapping equipment, sponge electrode caps for clinical electroencephalograms, and advanced respiratory analysis equipment. The possibility of using the medical laboratory in ground-based remote areas and regional health care facilities, as well as on long-duration space missions, is discussed.

  19. Three-Dimensional Computer Graphics Brain-Mapping Project.

    DTIC Science & Technology

    1987-03-15

    NEUROQUANT. This package was directed towards quantitative microneuroanatomic data acquisition and analysis. Using this interface, image frames captured... populations of brains. This would have been a prohibitive task if done manually with a densitometer and film, due to user error and bias. NEUROQUANT functioned... of cells were of interest. NEUROQUANT is presently being implemented with a more fully automatic method of localizing the cell bodies directly

  20. Rheticus: a cloud-based Geo-Information Service for the Detection and Monitoring of Geohazards and Infrastructural Instabilities

    NASA Astrophysics Data System (ADS)

    Chiaradia, M. T.; Samarelli, S.; Massimi, V.; Nutricato, R.; Nitti, D. O.; Morea, A.; Tijani, K.

    2017-12-01

    Geospatial information is today essential for organizations and professionals working in several industries. More and more, huge amounts of information are collected from multiple data sources and made freely available to anyone as open data. Rheticus® is an innovative cloud-based data and services hub that delivers Earth Observation added-value products through automated complex processes with, where appropriate, minimal interaction from human operators. This target is achieved by means of programmable components working as different software layers in a modern enterprise system that relies on the SOA (Service-Oriented Architecture) model. Owing to its distributed architecture, in which every function is defined and encapsulated in a standalone component, Rheticus is highly scalable and distributable, allowing different configurations depending on user needs. This approach makes the system very flexible with respect to service implementation, ensuring the ability to rethink and redesign the whole process with little effort. In this work, we outline the overall cloud-based platform and focus on the "Rheticus Displacement" service, aimed at providing accurate information to monitor movements occurring across landslide features or structural instabilities that could affect buildings or infrastructure. Using Sentinel-1 (S1) open data images and Multi-Temporal SAR Interferometry (MTInSAR) techniques, the service is complementary to traditional survey methods, providing a long-term solution to slope instability monitoring. Rheticus automatically browses and accesses, on a weekly basis, the products of the rolling archive of the ESA S1 Scientific Data Hub. S1 data are then processed by SPINUA (Stable Point Interferometry even in Unurbanized Areas), a robust MTInSAR algorithm, which produces displacement maps immediately usable to measure movements of point and distributed scatterers with sub-centimetric precision. We outline the automatic generation process of the displacement maps and provide examples of the detection and monitoring of geohazards and infrastructure instabilities. ACK: Rheticus® is a registered trademark of Planetek Italia srl. Study carried out in the framework of the FAST4MAP project (ASI Contract n. 2015-020-R.0). Sentinel-1A products provided by ESA.

  1. A fully automatic processing chain to produce Burn Scar Mapping products, using the full Landsat archive over Greece

    NASA Astrophysics Data System (ADS)

    Kontoes, Charalampos; Papoutsis, Ioannis; Herekakis, Themistoklis; Michail, Dimitrios; Ieronymidi, Emmanuela

    2013-04-01

    Remote sensing tools for the accurate, robust and timely assessment of the damage inflicted by forest wildfires provide information that is of paramount importance to public environmental agencies and related stakeholders before, during and after the crisis. The Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing of the National Observatory of Athens (IAASARS/NOA) has developed a fully automatic single- and/or multi-date processing chain that takes archived Landsat 4, 5 or 7 raw images as input and produces precise diachronic burnt-area polygons and damage assessments over the Greek territory. The methodology consists of three fully automatic stages: 1) the pre-processing stage, where the metadata of the raw images are extracted, followed by the application of the LEDAPS software platform for calibration and mask production and of the Automated Precise Orthorectification Package, developed by NASA, for image geo-registration and orthorectification; 2) the core BSM (Burn Scar Mapping) processing stage, which incorporates a published classification algorithm based on a series of physical indexes, the application of two graph-based filters for noise removal, and the grouping of pixels classified as burnt into clusters before conversion from raster to vector; and 3) the post-processing stage, where the products are thematically refined and enriched using auxiliary GIS layers (underlying land cover/use, administrative boundaries, etc.) and human logic/evidence to suppress false alarms and omission errors. The established processing chain has been successfully applied to the entire archive of Landsat imagery over Greece spanning 1984 to 2012, collected and managed at IAASARS/NOA; in total, 415 full Landsat frames were processed in the study. These burn scar mapping products are generated for the first time at such a temporal and spatial extent and are ideal for further environmental time-series analyses, for the production of statistical indexes (frequency, geographical distribution and number of fires per prefecture), and for applications including change detection and climate change models, urban planning, and correlation with man-made activities.
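
    The abstract does not name the physical indexes used by the core BSM classifier; a standard index for this task, shown here purely as an illustration, is the Normalized Burn Ratio (NBR) computed from near-infrared and shortwave-infrared reflectances, with burnt pixels flagged by the pre-/post-fire NBR difference. The 0.27 threshold below is a common rule of thumb, not a value from the paper.

        import numpy as np

        def nbr(nir, swir):
            """Normalized Burn Ratio from NIR and SWIR reflectance arrays."""
            return (nir - swir) / (nir + swir + 1e-9)

        def burn_mask(nir_pre, swir_pre, nir_post, swir_post, threshold=0.27):
            """Flag pixels whose NBR dropped by more than the threshold
            between the pre-fire and post-fire acquisitions."""
            dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
            return dnbr > threshold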

  2. An automatic gore panel mapping system

    NASA Technical Reports Server (NTRS)

    Shiver, John D.; Phelps, Norman N.

    1990-01-01

    The Automatic Gore Mapping System is being developed to reduce the time and labor costs associated with manufacturing the External Tank. The present chem-milling processes and procedures are discussed. The simulation of the system has to be downloaded to verify that the simulation package will translate the simulation code into robot code. A simulation of this system also has to be programmed for a gantry robot instead of the articulating robot that is presently in the system. It was discovered using the simulation package that the articulating robot cannot reach all the points on some of the panels; therefore, when the system is ready for production, a gantry robot will be used. A hydrosensor system is also being developed to replace the point-to-point contact probe. The hydrosensor will allow the robot to perform a non-contact continuous scan of the panel. It will also provide a faster scan of the panel because it eliminates the in-and-out movement required by the present end effector. The system software is currently being modified so that the hydrosensor will work with the system. The hydrosensor consists of a Krautkramer-Branson transducer encased in a plexiglass nozzle; the water stream pumped through the nozzle is the couplant for the probe. Software is also being written so that the robot can draw contour lines on the panel displaying the out-of-tolerance regions; presently the contour lines can only be displayed on the computer screens. Research is also being performed on improving and automating the method of scribing the panels, which are presently scribed manually with a sharp knife. The use of a low-power laser or water jet is being studied as a scribing method; the contour-drawing pen would be replaced with a scribing tool and the robot would then move along the contour lines. With these developments the Automatic Gore Mapping System will provide a reduction in the time and labor costs associated with manufacturing the External Tank. The system also has the potential of inspecting other manufactured parts.

  3. Unsupervised Change Detection for Geological and Ecological Monitoring via Remote Sensing: Application on a Volcanic Area

    NASA Astrophysics Data System (ADS)

    Falco, N.; Pedersen, G. B. M.; Vilmunandardóttir, O. K.; Belart, J. M. M. C.; Sigurmundsson, F. S.; Benediktsson, J. A.

    2016-12-01

    The project "Environmental Mapping and Monitoring of Iceland by Remote Sensing (EMMIRS)" aims at providing fast and reliable mapping and monitoring techniques on a big spatial scale with a high temporal resolution of the Icelandic landscape. Such mapping and monitoring will be crucial to both mitigate and understand the scale of processes and their often complex interlinked feedback mechanisms.In the EMMIRS project, the Hekla volcano area is one of the main sites under study, where the volcanic eruptions, extreme weather and human activities had an extensive impact on the landscape degradation. The development of innovative remote sensing approaches to compute earth observation variables as automatically as possible is one of the main tasks of the EMMIRS project. Furthermore, a temporal remote sensing archive is created and composed by images acquired by different sensors (Landsat, RapidEye, ASTER and SPOT5). Moreover, historical aerial stereo photos allowed decadal reconstruction of the landscape by reconstruction of digital elevation models. Here, we propose a novel architecture for automatic unsupervised change detection analysis able to ingest multi-source data in order to detect landscape changes in the Hekla area. The change detection analysis is based on multi-scale analysis, which allows the identification of changes at different level of abstraction, from pixel-level to region-level. For this purpose, operators defined in mathematical morphology framework are implemented to model the contextual information, represented by the neighbour system of a pixel, allowing the identification of changes related to both geometrical and spectral domains. Automatic radiometric normalization strategy is also implemented as pre-processing step, aiming at minimizing the effect of different acquisition conditions. The proposed architecture is tested on multi-temporal data sets acquired over different time periods coinciding with the last three eruptions (1980-1981, 1991, 2000) occurred on Hekla volcano. The results reveal emplacement of new lava flows and the initial vegetation succession, providing insightful information on the evolving of vegetation in such environment. Shadow and snow patch changes are resolved in post-processing by exploiting the available spectral information.

  4. A knowledge-based framework for image enhancement in aviation security.

    PubMed

    Singh, Maneesha; Singh, Sameer; Partridge, Derek

    2004-12-01

    The main aim of this paper is to present a knowledge-based framework for automatically selecting, on a per-image basis, the best image enhancement algorithm from several available, in the context of X-ray images of airport luggage. The approach detailed involves a system that learns to map image features representing an image's viewability to one or more chosen enhancement algorithms. Viewability measures have been developed to provide an automatic check on the quality of the enhanced image, i.e., is it really enhanced? The choice is based on ground-truth information generated by human X-ray screening experts. For a new image, the system predicts the best-suited enhancement algorithm. Our research details the various characteristics of the knowledge-based system and shows extensive results on real images.
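
    A minimal sketch of such a feature-to-algorithm mapping follows; it is a hypothetical nearest-neighbour stand-in, not the classifier used in the paper. The expert-labelled training set pairs each image's viewability features with the enhancement algorithm judged best, and a new image inherits the label of its nearest neighbour in feature space.

        import numpy as np

        def train(feature_rows, best_algorithms):
            """Store expert ground truth: one feature vector and one
            'best algorithm' label per training image."""
            return np.asarray(feature_rows, dtype=float), list(best_algorithms)

        def select_enhancement(image_features, model):
            """Return the enhancement algorithm of the nearest training image."""
            feats, labels = model
            dists = np.linalg.norm(
                feats - np.asarray(image_features, dtype=float), axis=1)
            return labels[int(np.argmin(dists))]

        # Example with made-up viewability features (contrast, edge density):
        model = train([[0.2, 0.1], [0.8, 0.6]],
                      ["histogram_equalization", "unsharp_masking"])
        choice = select_enhancement([0.25, 0.15], model)  # -> "histogram_equalization"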

  5. Evaluation of the fidelity of feature descriptor-based specimen tracking for automatic NDE data integration

    NASA Astrophysics Data System (ADS)

    Radkowski, Rafael; Holland, Stephen; Grandin, Robert

    2018-04-01

    This research addresses inspection location tracking in nondestructive evaluation (NDE), using a computer vision technique to determine the position and orientation of typical NDE equipment in a test setup. The objective is to assess the tracking accuracy achievable with typical NDE equipment in order to facilitate automatic NDE data integration. Since the employed tracking technique relies on the surface curvature of the object of interest, the accuracy can only be determined experimentally. Working with flash thermography, we conducted an experiment in which we tracked a specimen and a thermography flash hood, measured the spatial relation between the two, and used that relation to map thermography data onto a 3D model of the specimen. The results indicate adequate accuracy but also reveal calibration challenges.

  6. Automatic Review of Abstract State Machines by Meta Property Verification

    NASA Technical Reports Server (NTRS)

    Arcaini, Paolo; Gargantini, Angelo; Riccobene, Elvinia

    2010-01-01

    A model review is a validation technique aimed at determining whether a model is of sufficient quality; it allows defects to be identified early in system development, reducing the cost of fixing them. In this paper we propose a technique to perform automatic review of Abstract State Machine (ASM) formal specifications. We first identify a family of typical vulnerabilities and defects that a developer can introduce during ASM modeling, and we express such faults as violations of meta-properties that guarantee certain quality attributes of the specification. These meta-properties are then mapped to temporal logic formulas and model checked for their violation. As a proof of concept, we also report the results of applying this ASM review process to several specifications.

  7. Automatic rebuilding and optimization of crystallographic structures in the Protein Data Bank

    PubMed Central

    Joosten, Robbie P.; Joosten, Krista; Cohen, Serge X.; Vriend, Gert; Perrakis, Anastassis

    2011-01-01

    Motivation: Macromolecular crystal structures in the Protein Data Bank (PDB) are a key source of structural insight into biological processes. These structures, some >30 years old, were constructed with methods of their era. With PDB_REDO, we aim to automatically optimize these structures to better fit their corresponding experimental data, passing the benefits of new methods in crystallography on to a wide base of non-crystallographer structure users. Results: We developed new algorithms to allow automatic rebuilding and remodeling of main chain peptide bonds and side chains in crystallographic electron density maps, and incorporated these and further enhancements in the PDB_REDO procedure. Applying the updated PDB_REDO to the oldest, but also to some of the newest models in the PDB, corrects existing modeling errors and brings these models to a higher quality, as judged by standard validation methods. Availability and Implementation: The PDB_REDO database and links to all software are available at http://www.cmbi.ru.nl/pdb_redo. Contact: r.joosten@nki.nl; a.perrakis@nki.nl Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:22034521

  8. Spectral saliency via automatic adaptive amplitude spectrum analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Dai, Jialun; Zhu, Yafei; Zheng, Haiyong; Qiao, Xiaoyan

    2016-03-01

    Suppressing nonsalient patterns by smoothing the amplitude spectrum at an appropriate scale has been shown to effectively detect visual saliency in the frequency domain. Different filter scales are required for different types of salient objects. We observe that the optimal scale for smoothing the amplitude spectrum bears a specific relation to the size of the salient region. Based on this observation, and on the bottom-up saliency detection characterized by spectrum scale-space analysis for natural images, we propose to detect visual saliency, especially for salient objects of different sizes and locations, via automatic adaptive amplitude spectrum analysis. We not only provide a new criterion for automatic optimal scale selection but also preserve the saliency maps corresponding to different salient objects, with meaningful saliency information, through adaptive weighted combination. Quantitative and qualitative performance is evaluated with three different kinds of metrics on the four most widely used datasets and one up-to-date large-scale dataset. The experimental results validate that our method outperforms existing state-of-the-art saliency models for predicting human eye fixations in terms of accuracy and robustness.
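
    A compact illustration of the underlying frequency-domain idea follows; this is a generic sketch, and the paper's automatic scale-selection criterion and adaptive weighted combination are not reproduced here (sigma is an assumed scale). Smoothing the amplitude spectrum suppresses the spectral spikes associated with repeated, nonsalient patterns, and reconstructing with the original phase highlights salient regions.

        import numpy as np
        from scipy import ndimage

        def spectral_saliency(img, sigma=3.0):
            """Saliency from an amplitude spectrum smoothed at scale sigma."""
            f = np.fft.fft2(img)
            amplitude, phase = np.abs(f), np.angle(f)
            # Smoothing the amplitude spectrum suppresses the spectral
            # spikes produced by repeated (nonsalient) image patterns.
            smoothed = ndimage.gaussian_filter(amplitude, sigma)
            sal = np.abs(np.fft.ifft2(smoothed * np.exp(1j * phase))) ** 2
            return ndimage.gaussian_filter(sal, sigma)  # post-smooth the map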

  9. Processing of Crawled Urban Imagery for Building Use Classification

    NASA Astrophysics Data System (ADS)

    Tutzauer, P.; Haala, N.

    2017-05-01

    Recent years have shown a shift from purely geometric 3D city models to data with semantics. This is driven by new applications (e.g. Virtual/Augmented Reality) and by the requirements of concepts like Smart Cities. However, essential urban semantic data such as building use categories is often not available. We present a first step in bridging this gap by proposing a pipeline that uses crawled urban imagery and links it with ground-truth cadastral data as input for automatic building use classification. We aim to extract this city-relevant semantic information automatically from Street View (SV) imagery. Convolutional Neural Networks (CNNs) have proved extremely successful for image interpretation but require a huge amount of training data. The main contribution of the paper is the automatic provision of such training datasets by linking semantic information, as already available from databases provided by national mapping agencies or city administrations, to the corresponding façade images extracted from SV. Finally, we present first investigations with a CNN and an alternative classifier as a proof of concept.

  10. The Sargassum Early Advisory System (SEAS)

    NASA Astrophysics Data System (ADS)

    Armstrong, D.; Gallegos, S. C.

    2016-02-01

    The Sargassum Early Advisory System (SEAS) web app was designed to automatically detect Sargassum at sea, forecast the movement of the seaweed, and alert users to potential landings. Inspired by the need to address the economic hardships caused by large landings of Sargassum, the web app automates and enhances the manual tasks conducted by the SEAS group of Texas A&M University at Galveston. The SEAS web app is a modular, mobile-friendly tool that automates the entire workflow from data acquisition to user management. The modules include: 1) an Imagery Retrieval Module to automatically download Landsat-8 Operational Land Imager (OLI) data from the United States Geological Survey (USGS); 2) a Processing Module for automatic detection of Sargassum in the OLI imagery and subsequent mapping of these patches onto the HYCOM grid, producing maps that show Sargassum clusters; 3) a Forecasting engine fed by HYbrid Coordinate Ocean Model (HYCOM) currents and winds from weather buoys; and 4) a mobile-phone-optimized geospatial user interface. Users can view the last known position of Sargassum clusters and trajectory and location projections for the next 24, 72 and 168 hours. Users can also subscribe to alerts generated for particular areas. Currently, the SEAS web app produces advisories for Texas beaches. The forecasted Sargassum landing locations are validated by reports from Texas beach managers. However, the SEAS web app was designed to expand easily to other areas, and future plans call for extending it to Mexico and the Caribbean islands. The SEAS web app development is led by NASA, with participation by ASRC Federal/Computer Science Corporation and the Naval Research Laboratory, all at Stennis Space Center, and Texas A&M University at Galveston.

  11. Apollo: An Automatic Procedure to Forecast Transport and Deposition of Tephra

    NASA Astrophysics Data System (ADS)

    Folch, A.; Costa, A.; Macedonio, G.

    2007-05-01

    Volcanic ash fallout represents a serious threat to communities around active volcanoes. Reliable short-term predictions constitute valuable support for mitigating the effects of fallout on the surrounding area during an episode of crisis. We present a platform-independent automatic procedure aimed at daily forecasts of volcanic ash dispersal. The procedure builds on a series of programs and interfaces that allow an automatic data/results flow. First, the procedure downloads mesoscale meteorological forecasts for the region and period of interest, filters and converts the data from their native format (typically GRIB files), and sets up the CALMET diagnostic meteorological model to obtain hourly wind fields and micro-meteorological variables on a finer mesh. Second, a 1-D version of the buoyant plume equations assesses the distribution of mass along the eruptive column depending on the obtained wind field and on the conditions at the vent (granulometry, mass flow rate, etc.). All these data are used as input for the ash dispersion model(s). Any model able to handle the physical complexity and coupling processes with adequate solution times can be plugged into the system by means of an interface. Currently, the procedure contains the models HAZMAP, TEPHRA and FALL3D, the latter in both serial and parallel versions. Parallelization of FALL3D is done at two levels, one for particle classes and one for the spatial domain. The last step post-processes the model outcomes to produce homogeneous maps written to portable-format files. The maps plot relevant quantities such as predicted ground load, expected deposit thickness, or visual- and flight-safety concentration thresholds. Several applications are shown as examples.

  12. Automatic recognition of holistic functional brain networks using iteratively optimized convolutional neural networks (IO-CNN) with weak label initialization.

    PubMed

    Zhao, Yu; Ge, Fangfei; Liu, Tianming

    2018-07-01

    fMRI data decomposition techniques have advanced significantly, from shallow models such as Independent Component Analysis (ICA) and Sparse Coding and Dictionary Learning (SCDL) to deep learning models such as Deep Belief Networks (DBN) and Deep Convolutional Autoencoders (DCAE). However, interpretation of the decomposed networks remains an open question, due to the lack of functional brain atlases, the absence of correspondence across decomposed or reconstructed networks from different subjects, and significant individual variability. Recent studies showed that deep learning, especially deep convolutional neural networks (CNN), has an extraordinary ability to accommodate spatial object patterns; e.g., our recent work using 3D CNNs for fMRI-derived network classification achieved high accuracy with a remarkable tolerance for mislabelled training brain networks. However, training data preparation is one of the biggest obstacles in these supervised deep learning models for functional brain network map recognition, since manual labelling requires tedious and time-consuming labour and can even introduce label mistakes. Especially for mapping functional networks in large-scale datasets, such as the hundreds of thousands of brain networks used in this paper, manual labelling becomes almost infeasible. In response, in this work we tackle both the network recognition and training data labelling tasks by proposing a new iteratively optimized deep learning CNN (IO-CNN) framework with automatic weak label initialization, which turns functional brain network recognition into a fully automatic large-scale classification procedure. Our extensive experiments based on fMRI data from 1099 brains in ABIDE-II show the great promise of the IO-CNN framework. Copyright © 2018 Elsevier B.V. All rights reserved.
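
    The iterative optimization can be sketched as a generic self-training loop; this is a schematic under assumed interfaces (train and predict stand in for the 3D-CNN fitting and inference routines, and the confidence threshold is illustrative), none of it taken verbatim from the paper.

        def iterative_optimization(network_maps, weak_labels, train, predict,
                                   rounds=5, confidence=0.9):
            """Start from automatically generated weak labels, then alternate:
            fit the CNN, re-label each brain network map with the model's
            confident predictions, and keep the old label otherwise."""
            labels = list(weak_labels)
            model = None
            for _ in range(rounds):
                model = train(network_maps, labels)
                predictions = predict(model, network_maps)  # (label, confidence) pairs
                labels = [pred if conf >= confidence else old
                          for (pred, conf), old in zip(predictions, labels)]
            return model, labels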

  13. Musical space synesthesia: automatic, explicit and conceptual connections between musical stimuli and space.

    PubMed

    Akiva-Kabiri, Lilach; Linkovski, Omer; Gertner, Limor; Henik, Avishai

    2014-08-01

    In musical-space synesthesia, musical pitches are perceived as having a spatially defined array. Previous studies showed that symbolic inducers (e.g., numbers, months) can modulate responses according to the inducer's relative position on the synesthetic spatial form. In the current study we tested two musical-space synesthetes and a group of matched controls on three different tasks: musical-space mapping, spatial cue detection, and a spatial Stroop-like task. In the free mapping task, both synesthetes exhibited a diagonal organization of musical pitch tones rising from bottom left to top right. This organization was found to be consistent over time. In the subsequent tasks, synesthetes were asked to ignore an auditorily or visually presented musical pitch (irrelevant information) and respond to a visual target (i.e., an asterisk) on the screen (relevant information). Compatibility between the musical pitch and the target's spatial location was manipulated to be compatible or incompatible with the synesthetes' spatial representations. In the spatial cue detection task, participants had to press the space key immediately upon detecting the target. In the Stroop-like task, they had to reach the target using a mouse cursor. In both tasks, synesthetes' performance was modulated by the compatibility between irrelevant and relevant spatial information; specifically, the target's spatial location conflicted with the spatial information triggered by the irrelevant musical stimulus. These results reveal that for musical-space synesthetes, musical information automatically orients attention according to their specific spatial musical forms. The present study demonstrates the genuineness of musical-space synesthesia by revealing its two hallmarks: automaticity and consistency. In addition, our results challenge previous findings regarding an implicit vertical representation of pitch tones in non-synesthete musicians. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Intelligent geocoding system to locate traffic crashes.

    PubMed

    Qin, Xiao; Parker, Steven; Liu, Yi; Graettinger, Andrew J; Forde, Susie

    2013-01-01

    State agencies continue to face many challenges associated with new federal crash safety and highway performance monitoring requirements that use data from multiple, disparate systems across different platforms and locations. On a national level, the federal government has a long-term vision for State Departments of Transportation (DOTs) to report state-route and off-state-route crash data in a single network. In general, crashes occurring on state-owned or state-maintained highways are a priority at the federal and state level; therefore, state-route crashes are being geocoded by state DOTs. On the other hand, crashes occurring on the off-state highway system do not always get geocoded, due to limited resources and techniques. Creating and maintaining a statewide crash geographic information systems (GIS) map with state-route and non-state-route crashes is a complicated and expensive task. This study introduces an automatic crash-mapping process, the Crash-Mapping Automation Tool (C-MAT), in which an algorithm translates location information from a police crash report into a geospatial position and creates a pinpoint map for all crashes. The algorithm has an approximately 83 percent mapping rate. An important application of this work is the ability to associate the mapped crash records with underlying business data, such as roadway inventory and traffic volumes. The integrated crash map is the foundation for effective and efficient crash analyses to prevent highway crashes. Published by Elsevier Ltd.

  15. Building the atomic model of a boreal lake virus of unknown fold in a 3.9 Å cryo-EM map.

    PubMed

    De Colibus, Luigi; Stuart, David I

    2018-04-01

    We report here the protocol adopted to build the atomic model of the newly discovered virus FLiP (Flavobacterium infecting, lipid-containing phage) into 3.9 Å cryo-electron microscopy (cryo-EM) maps. In particular, this report discusses the combination of density modification procedures, automatic model building and bioinformatics tools applied to guide the tracing of the major capsid protein (MCP) of this virus. The protocol outlined here may serve as a reference for future structural determination by cryo-EM of viruses lacking detectable structural homologues. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Proving refinement transformations using extended denotational semantics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winter, V.L.; Boyle, J.M.

    1996-04-01

    TAMPR is a fully automatic transformation system based on syntactic rewrites. Our approach in a correctness proof is to map the transformation into an axiomatized mathematical domain where formal (and automated) reasoning can be performed. This mapping is accomplished via an extended denotational semantic paradigm. In this approach, the abstract notion of a program state is distributed between an environment function and a store function. Such a distribution introduces properties that go beyond the abstract state that is being modeled. The reasoning framework needs to be aware of these properties in order to successfully complete a correctness proof. This paper discusses some of our experiences in proving the correctness of TAMPR transformations.
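
    The environment/store split the abstract refers to can be illustrated with a tiny sketch (hypothetical names and values, chosen only to show the distribution of state): the environment binds identifiers to locations, the store maps locations to values, and an assignment updates only the store.

        # Environment: identifier -> location; store: location -> value.
        env = {"x": 0, "y": 1}
        store = {0: 41, 1: 1}

        def assign(name, value, env, store):
            """Semantics of 'name := value': the environment is untouched;
            only the store is updated at the bound location."""
            new_store = dict(store)
            new_store[env[name]] = value
            return new_store

        # x := x + y
        store = assign("x", store[env["x"]] + store[env["y"]], env, store)
        assert store[0] == 42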

  17. [Toxocariasis in the Republic of Altai. Geoinformation mapping simulation].

    PubMed

    Pautova, E A; Kurepina, N Iu; Dovgalev, A S

    2012-01-01

    Toxocariasis is one of the most important zooanthroponotic natural-focal parasitic diseases in the Republic of Altai. The prevalence of Toxocara invasion among the inhabitants of the Republic has increased more than 7-fold. The authors' observational data on Toxocara infection in animals (cats, dogs), soil contamination with helminth eggs, and the prevalence of human toxocariasis in the Republic of Altai, together with the results of tests for antibodies against the pathogen in the inhabitants of the region, were automatically processed using geoinformation mapping simulation, which yielded a mapping model that ranks the region's area by morbidity rates. The use of up-to-date computers and geoinformation systems makes it possible to systematize information on this invasion and to identify the major foci of the disease, revealing the reasons for their association with specific types of the region's landscape.

  18. Flight demonstration of integrated airport surface automation concepts

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Young, Steven D.

    1995-01-01

    A flight demonstration was conducted to address airport surface movement area capacity issues by providing pilots with enhanced situational awareness information. The demonstration showed an integration of several technologies to government and industry representatives. These technologies consisted of an electronic moving map display in the cockpit, a Differential Global Positioning System (DGPS) receiver, a high speed VHF data link, an ASDE-3 radar, and the Airport Movement Area Safety System (AMASS). Aircraft identification was presented to an air traffic controller on AMASS. The onboard electronic map included the display of taxi routes, hold instructions, and clearances, which were sent to the aircraft via data link by the controller. The map also displayed the positions of other traffic and warning information, which were sent to the aircraft automatically from the ASDE-3/AMASS system. This paper describes the flight demonstration in detail, along with preliminary results.

  19. MS lesion segmentation using a multi-channel patch-based approach with spatial consistency

    NASA Astrophysics Data System (ADS)

    Mechrez, Roey; Goldberger, Jacob; Greenspan, Hayit

    2015-03-01

    This paper presents an automatic method for segmentation of Multiple Sclerosis (MS) lesions in Magnetic Resonance Images (MRI) of the brain. The approach is based on similarities between multi-channel patches (T1, T2 and FLAIR). An MS lesion patch database is built using training images for which the label maps are known. For each patch in the testing image, the k most similar patches are retrieved from the database. The matching labels for these k patches are then combined to produce an initial segmentation map for the test case. Finally, a novel iterative patch-based label refinement process based on the initial segmentation map is performed to ensure spatial consistency of the detected lesions. A leave-one-out evaluation is done for each testing image in the MS lesion segmentation challenge of MICCAI 2008, and the results are shown to compete with state-of-the-art methods on this challenge.
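
    A minimal sketch of the patch-retrieval step follows; it is illustrative only (the database layout, distance measure and k are assumptions, and the iterative spatial-consistency refinement is omitted). The k most similar multi-channel patches vote on the central voxel's lesion label.

        import numpy as np

        def knn_label_fusion(patch, db_patches, db_labels, k=5):
            """patch: flattened multi-channel (T1/T2/FLAIR) patch around a voxel;
            db_patches: (N, P) array of flattened training patches;
            db_labels: N central-voxel lesion labels (0 or 1).
            Returns a soft lesion probability for the voxel."""
            dists = np.linalg.norm(db_patches - patch.ravel(), axis=1)
            nearest = np.argsort(dists)[:k]
            return float(np.asarray(db_labels, dtype=float)[nearest].mean())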

  20. ProFound: Source Extraction and Application to Modern Survey Data

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.

    2018-04-01

    ProFound detects sources in noisy images, generates segmentation maps identifying the pixels belonging to each source, and measures statistics like flux, size, and ellipticity. These inputs are key requirements of ProFit (ascl:1612.004), our galaxy profiling package; used in unison, the two packages semi-automatically profile large samples of galaxies. The key novel feature introduced in ProFound is that all photometry is executed on dilated segmentation maps that fully contain the identifiable flux, rather than using more traditional circular or ellipse-based photometry. Also, to be less sensitive to pathological segmentation issues, the de-blending is performed across saddle points in flux. ProFound offers good initial parameter estimation for ProFit, as well as segmentation maps that follow the sometimes complex geometry of resolved sources whilst capturing nearly all of the flux. A number of bulge-disc decomposition projects are already making use of the ProFound and ProFit pipeline.
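
    Dilated-segment photometry can be sketched in a few lines; this is a schematic under assumptions (label 0 marks sky, the dilation count is arbitrary, and ProFound's actual flux-convergence stopping rule is not reproduced). Each segment is grown into neighbouring sky pixels, without invading other segments, before its flux is summed.

        import numpy as np
        from scipy import ndimage

        def dilated_photometry(image, segmap, iterations=3):
            """Grow each segment by binary dilation, restricted to sky pixels
            and its own footprint, then sum the flux inside the result."""
            fluxes = {}
            for sid in np.unique(segmap):
                if sid == 0:  # 0 = sky in this sketch
                    continue
                grown = ndimage.binary_dilation(segmap == sid,
                                                iterations=iterations)
                grown &= (segmap == 0) | (segmap == sid)  # keep neighbours' pixels out
                fluxes[sid] = image[grown].sum()
            return fluxes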
