Sample records for automated mapping

  1. Automated grain mapping using wide angle convergent beam electron diffraction in transmission electron microscope for nanomaterials.

    PubMed

    Kumar, Vineet

    2011-12-01

    The grain size statistics, commonly derived from the grain map of a material sample, are important microstructure characteristics that greatly influence its properties. The grain map for nanomaterials is usually obtained manually by visual inspection of transmission electron microscope (TEM) micrographs because automated methods do not perform satisfactorily. While the visual inspection method provides reliable results, it is a labor-intensive process and is often prone to human error. In this article, an automated grain mapping method is developed using TEM diffraction patterns. The presented method uses wide angle convergent beam diffraction in the TEM. The automated technique was applied to a platinum thin film sample to obtain the grain map and subsequently derive grain size statistics from it. The grain size statistics obtained with the automated method were found to be in good agreement with those from the visual inspection method.
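    A minimal sketch of the final step above: deriving grain size statistics once a grain map exists. It assumes the diffraction analysis has already produced an integer-labeled grain map (0 = background); the function name and pixel size are illustrative.

```python
# Sketch: grain size statistics from a labeled grain map.
# Assumes the diffraction-based mapping step has already assigned each
# pixel an integer grain label, with 0 reserved for background.
import numpy as np
from scipy import ndimage

def grain_size_stats(grain_map, pixel_size_nm=1.0):
    """Return equivalent-circle diameters (nm), one per grain."""
    labels = np.unique(grain_map)
    labels = labels[labels != 0]                      # drop background
    areas_px = ndimage.sum(np.ones_like(grain_map), grain_map, labels)
    areas_nm2 = areas_px * pixel_size_nm ** 2
    return 2.0 * np.sqrt(areas_nm2 / np.pi)           # equivalent diameter

# Toy example: a synthetic 4-grain map
demo = np.zeros((8, 8), dtype=int)
demo[:4, :4], demo[:4, 4:], demo[4:, :4], demo[4:, 4:] = 1, 2, 3, 4
d = grain_size_stats(demo, pixel_size_nm=2.0)
print(f"mean diameter: {d.mean():.1f} nm, std: {d.std():.1f} nm")
```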

  2. Automating lexical cross-mapping of ICNP to SNOMED CT.

    PubMed

    Kim, Tae Youn

    2016-01-01

    The purpose of this study was to examine the feasibility of automating lexical cross-mapping of a logic-based nursing terminology (ICNP) to SNOMED CT using the Unified Medical Language System (UMLS) maintained by the U.S. National Library of Medicine. A two-stage approach comprised pattern identification followed by the application and evaluation of an automated term matching procedure. The performance of the automated procedure was evaluated using a test set against a gold standard (i.e., a concept equivalency table) created independently by terminology experts. There were lexical similarities between ICNP diagnostic concepts and SNOMED CT. The automated term matching procedure was reliable, with a recall of 65%, precision of 79%, accuracy of 82%, F-measure of 0.71 and an area under the receiver operating characteristic (ROC) curve of 0.78 (95% CI 0.73-0.83). When the automated procedure was not able to retrieve lexically matched concepts, it was also unlikely for terminology experts to identify a matched SNOMED CT concept. Although further research is warranted to enhance the automated matching procedure, the combination of cross-maps from UMLS and the automated procedure is useful for generating candidate mappings and thus assists the ongoing maintenance of mappings, which is a significant burden for terminology developers.
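    The reported figures follow directly from a confusion matrix against the gold standard. A small sketch (the counts here are invented for illustration, not the study's tallies):

```python
# Sketch: recall, precision, accuracy and F-measure for automated term
# matching, computed from illustrative confusion-matrix counts.
def matching_metrics(tp, fp, fn, tn):
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_measure = 2 * precision * recall / (precision + recall)
    return recall, precision, accuracy, f_measure

r, p, a, f = matching_metrics(tp=130, fp=35, fn=70, tn=265)
print(f"recall={r:.2f} precision={p:.2f} accuracy={a:.2f} F={f:.2f}")
```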

  3. ActionMap: A web-based software that automates loci assignments to framework maps.

    PubMed

    Albini, Guillaume; Falque, Matthieu; Joets, Johann

    2003-07-01

    Genetic linkage computation may be a repetitive and time-consuming task, especially when numerous loci are assigned to a framework map. We thus developed ActionMap, a web-based software tool that automates genetic mapping on a fixed framework map without adding the new markers to the map. Using this tool, hundreds of loci may be automatically assigned to the framework in a single process. ActionMap was initially developed to map numerous ESTs with a small plant mapping population and is limited to inbred lines and backcrosses. ActionMap is highly configurable and consists of Perl and PHP scripts that automate command steps for the MapMaker program. A set of web forms was designed for data import and mapping settings. Results of automatic mapping can be displayed as tables or drawings of maps and may be exported. The user may create personal access-restricted projects to store raw data, settings and mapping results. All data may be edited, updated or deleted. ActionMap may be used either online or downloaded for free (http://moulon.inra.fr/~bioinfo/).

  4. ActionMap: a web-based software that automates loci assignments to framework maps

    PubMed Central

    Albini, Guillaume; Falque, Matthieu; Joets, Johann

    2003-01-01

    Genetic linkage computation may be a repetitive and time-consuming task, especially when numerous loci are assigned to a framework map. We thus developed ActionMap, a web-based software tool that automates genetic mapping on a fixed framework map without adding the new markers to the map. Using this tool, hundreds of loci may be automatically assigned to the framework in a single process. ActionMap was initially developed to map numerous ESTs with a small plant mapping population and is limited to inbred lines and backcrosses. ActionMap is highly configurable and consists of Perl and PHP scripts that automate command steps for the MapMaker program. A set of web forms was designed for data import and mapping settings. Results of automatic mapping can be displayed as tables or drawings of maps and may be exported. The user may create personal access-restricted projects to store raw data, settings and mapping results. All data may be edited, updated or deleted. ActionMap may be used either online or downloaded for free (http://moulon.inra.fr/~bioinfo/). PMID:12824426

  5. Semi-automatic mapping of geological structures using UAV-based photogrammetric data: An image analysis approach

    NASA Astrophysics Data System (ADS)

    Vasuki, Yathunanthan; Holden, Eun-Jung; Kovesi, Peter; Micklethwaite, Steven

    2014-08-01

    Recent advances in data acquisition technologies, such as Unmanned Aerial Vehicles (UAVs), have led to a growing interest in capturing high-resolution rock surface images. However, due to the large volumes of data that can be captured in a short flight, efficient analysis of this data brings new challenges, especially the time it takes to digitise maps and extract orientation data. We outline a semi-automated method that allows efficient mapping of geological faults using photogrammetric data of rock surfaces, generated from aerial photographs collected by a UAV. Our method harnesses advanced automated image analysis techniques and human data interaction to rapidly map structures and then calculate their dip and dip directions. Geological structures (faults, joints and fractures) are first detected from the primary photographic dataset and the equivalent three-dimensional (3D) structures are then identified within a 3D surface model generated by structure from motion (SfM). From this information the location, dip and dip direction of the geological structures are calculated. A structure map generated by our semi-automated method obtained a recall rate of 79.8% when compared against a fault map produced using expert manual digitising and interpretation methods. The semi-automated structure map was produced in 10 min, whereas the manual method took approximately 7 h. In addition, the dip and dip direction calculations using our automated method show mean ± standard error differences of 1.9° ± 2.2° and 4.4° ± 2.6°, respectively, from field measurements. This shows the potential of using our semi-automated method for accurate and efficient mapping of geological structures, particularly from remote, inaccessible or hazardous sites.
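    The dip/dip-direction step lends itself to a compact sketch: fit a plane to the 3D points belonging to one structure and read the orientation off the plane normal. This assumes a local east-north-up coordinate frame; the SVD plane fit is a standard choice, not necessarily the authors' exact formulation.

```python
# Sketch: dip and dip direction from 3D points on a mapped structure.
import numpy as np

def dip_and_dip_direction(points):
    """Fit a plane to Nx3 (east, north, up) points; return degrees."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]                              # smallest-variance direction
    if n[2] < 0:
        n = -n                              # force upward-pointing normal
    dip = np.degrees(np.arccos(n[2]))       # angle from horizontal
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0  # azimuth of down-dip
    return dip, dip_dir

# Toy plane dipping 30 degrees toward the east (azimuth 090)
x, y = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
z = -np.tan(np.radians(30)) * x
pts = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
print(dip_and_dip_direction(pts))           # ~(30.0, 90.0)
```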

  6. Automated T2-mapping of the Menisci From Magnetic Resonance Images in Patients with Acute Knee Injury.

    PubMed

    Paproki, Anthony; Engstrom, Craig; Strudwick, Mark; Wilson, Katharine J; Surowiec, Rachel K; Ho, Charles; Crozier, Stuart; Fripp, Jurgen

    2017-10-01

    This study aimed to evaluate the accuracy of an automated method for segmentation and T2 mapping of the medial meniscus (MM) and lateral meniscus (LM) in clinical magnetic resonance images from patients with acute knee injury. Eighty patients scheduled for surgery of an anterior cruciate ligament or meniscal injury underwent magnetic resonance imaging of the knee (multiplanar two-dimensional [2D] turbo spin echo [TSE] or three-dimensional [3D]-TSE examinations, T2 mapping). Each meniscus was automatically segmented from the 2D-TSE (composite volume) or 3D-TSE images, auto-partitioned into anterior, mid, and posterior regions, and co-registered onto the T2 maps. The Dice similarity index (spatial overlap) was calculated between automated and manual segmentations of 2D-TSE (15 patients), 3D-TSE (16 patients), and corresponding T2 maps (31 patients). Pearson and intraclass correlation coefficients (ICC) were calculated between automated and manual T2 values. T2 values were compared (Wilcoxon rank sum tests) between torn and non-torn menisci for the subset of patients with both manual and automated segmentations to compare statistical outcomes of both methods. The Dice similarity index values for the 2D-TSE, 3D-TSE, and T2 map volumes, respectively, were 76.4%, 84.3%, and 75.2% for the MM and 76.4%, 85.1%, and 76.1% for the LM. There were strong correlations between automated and manual T2 values (r = 0.95 and ICC = 0.94 for the MM; r = 0.97 and ICC = 0.97 for the LM). For both the manual and the automated methods, T2 values were significantly higher in torn than in non-torn MM for the full meniscus and its subregions (P < .05). Non-torn LM had higher T2 values than non-torn MM (P < .05). The present automated method offers a promising alternative to manual T2 mapping analyses of the menisci and a considerable advance for integration into clinical workflows.
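    The Dice similarity index used throughout this record is simple to compute from two binary masks; a sketch with toy masks (not meniscus data):

```python
# Sketch: Dice similarity index, 2|A ∩ B| / (|A| + |B|), as a percentage.
import numpy as np

def dice_index(auto_mask, manual_mask):
    a, b = auto_mask.astype(bool), manual_mask.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 100.0                  # both masks empty: perfect agreement
    return 200.0 * np.logical_and(a, b).sum() / denom

a = np.zeros((10, 10)); a[2:7, 2:7] = 1
b = np.zeros((10, 10)); b[3:8, 3:8] = 1
print(f"Dice = {dice_index(a, b):.1f}%")   # 64.0% for this toy overlap
```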

  7. Evaluation of linear discriminant analysis for automated Raman histological mapping of esophageal high-grade dysplasia

    NASA Astrophysics Data System (ADS)

    Hutchings, Joanne; Kendall, Catherine; Shepherd, Neil; Barr, Hugh; Stone, Nicholas

    2010-11-01

    Rapid Raman mapping has the potential to be used for automated histopathology diagnosis, providing an adjunct technique to histological diagnosis. The aim of this work is to evaluate the feasibility of automated and objective pathology classification of Raman maps using linear discriminant analysis. Raman maps of esophageal tissue sections are acquired. Principal component (PC)-fed linear discriminant analysis (LDA) is carried out using subsets of the Raman map data (6483 spectra). An overall (validated) training classification model performance of 97.7% (sensitivity 95.0 to 100% and specificity 98.6 to 100%) is obtained. The remainder of the map spectra (131,672 spectra) are projected onto the classification model, resulting in Raman images that demonstrate good correlation with contiguous hematoxylin and eosin (HE) sections. Initial results suggest that LDA has the potential to automate pathology diagnosis of esophageal Raman images, but since the classification of test spectra is forced into existing training groups, further work is required to optimize the training model. A small pixel size is advantageous for developing the training datasets from mapping data, despite lengthy mapping times, because of the additional morphological information gained, and could facilitate differentiation of further tissue groups, such as the basal cells/lamina propria, in the future. Larger pixel sizes (and faster mapping), however, may be more feasible for clinical application.
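    A PC-fed LDA pipeline of the kind evaluated here is a few lines with scikit-learn. The synthetic spectra below stand in for Raman map data; the class count, dimensions and number of PCs are illustrative, not the study's.

```python
# Sketch: principal-component-fed LDA classification with cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_wavenumbers = 200, 500
# Two synthetic "tissue" classes with slightly shifted mean spectra
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_wavenumbers)),
               rng.normal(0.3, 1.0, (n_per_class, n_wavenumbers))])
y = np.array([0] * n_per_class + [1] * n_per_class)

model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
print(f"validated accuracy: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```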

  8. Human brain atlas for automated region of interest selection in quantitative susceptibility mapping: application to determine iron content in deep gray matter structures.

    PubMed

    Lim, Issel Anne L; Faria, Andreia V; Li, Xu; Hsu, Johnny T C; Airan, Raag D; Mori, Susumu; van Zijl, Peter C M

    2013-11-15

    The purpose of this paper is to extend the single-subject Eve atlas from Johns Hopkins University, which currently contains diffusion tensor and T1-weighted anatomical maps, by including contrast based on quantitative susceptibility mapping. The new atlas combines a "deep gray matter parcellation map" (DGMPM) derived from a single-subject quantitative susceptibility map with the previously established "white matter parcellation map" (WMPM) from the same subject's T1-weighted and diffusion tensor imaging data into an MNI coordinate map named the "Everything Parcellation Map in Eve Space," also known as the "EvePM." It allows automated segmentation of gray matter and white matter structures. Quantitative susceptibility maps from five healthy male volunteers (30 to 33 years of age) were coregistered to the Eve Atlas with AIR and Large Deformation Diffeomorphic Metric Mapping (LDDMM), and the transformation matrices were applied to the EvePM to produce automated parcellation in subject space. Parcellation accuracy was measured with a kappa analysis for the left and right structures of six deep gray matter regions. For multi-orientation QSM images, the Kappa statistic was 0.85 between automated and manual segmentation, with the inter-rater reproducibility Kappa being 0.89 for the human raters, suggesting "almost perfect" agreement between all segmentation methods. Segmentation seemed slightly more difficult for human raters on single-orientation QSM images, with the Kappa statistic being 0.88 between automated and manual segmentation, and 0.85 and 0.86 between human raters. Overall, this atlas provides a time-efficient tool for automated coregistration and segmentation of quantitative susceptibility data to analyze many regions of interest. These data were used to establish a baseline for normal magnetic susceptibility measurements for over 60 brain structures of 30- to 33-year-old males. Correlating the average susceptibility with age-based iron concentrations in gray matter structures measured by Hallgren and Sourander (1958) allowed interpolation of the average iron concentration of several deep gray matter regions delineated in the EvePM.
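    The kappa agreement analysis reported above can be reproduced in miniature for any pair of label maps; a sketch with synthetic parcellations:

```python
# Sketch: Cohen's kappa between automated and manual label maps.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
manual = rng.integers(0, 6, size=10_000)       # six deep gray matter labels
auto = manual.copy()
flip = rng.random(manual.size) < 0.08          # disagree on ~8% of voxels
auto[flip] = rng.integers(0, 6, size=flip.sum())

print(f"kappa = {cohen_kappa_score(manual, auto):.2f}")  # ~0.9, 'almost perfect'
```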

  9. Two techniques for mapping and area estimation of small grains in California using Landsat digital data

    NASA Technical Reports Server (NTRS)

    Sheffner, E. J.; Hlavka, C. A.; Bauer, E. M.

    1984-01-01

    Two techniques have been developed for the mapping and area estimation of small grains in California from Landsat digital data. The two techniques are Band Ratio Thresholding, a semi-automated version of a manual procedure, and LCLS, a layered classification technique which can be fully automated and is based on established clustering and classification technology. Preliminary evaluation results indicate that the two techniques have potential for providing map products which can be incorporated into existing inventory procedures, as well as automated alternatives to traditional inventory techniques, including those which currently employ Landsat imagery.

  10. Comparison of CT perfusion summary maps to early diffusion-weighted images in suspected acute middle cerebral artery stroke.

    PubMed

    Benson, John; Payabvash, Seyedmehdi; Salazar, Pascal; Jagadeesan, Bharathi; Palmer, Christopher S; Truwit, Charles L; McKinney, Alexander M

    2015-04-01

    To assess the accuracy and reliability of one vendor's (Vital Images, Toshiba Medical, Minnetonka, MN) automated CT perfusion (CTP) summary maps in the identification and volume estimation of infarcted tissue in patients with acute middle cerebral artery (MCA) distribution infarcts. From 1085 CTP examinations over 5.5 years, 43 diffusion-weighted imaging (DWI)-positive patients were included who underwent both CTP and DWI <12 h after symptom onset, with another 43 age-matched patients as controls (DWI-negative). Automated delay-corrected postprocessing software (DC-SVD) generated both infarct "core only" and "core+penumbra" CTP summary maps. Three reviewers independently tabulated Alberta Stroke Program Early CT Scores (ASPECTS) of both CTP summary maps and coregistered DWI. Of the 86 included patients, 36 had DWI infarct volumes ≤70 ml, 7 had volumes >70 ml, and 43 were negative; the automated CTP "core only" map correctly classified each as >70 ml or ≤70 ml, while the "core+penumbra" map misclassified 4 as >70 ml. There were strong correlations between DWI volume and both summary map-based volumes: "core only" (r=0.93) and "core+penumbra" (r=0.77) (both p<0.0001). Agreement between ASPECTS scores of infarct core on DWI and the summary maps was 0.65-0.74 for the "core only" map and 0.61-0.65 for "core+penumbra" (both p<0.0001). Using DWI-based ASPECTS scores as the standard, the accuracy of the CTP-based maps was 79.1-86.0% for the "core only" map and 83.7-88.4% for "core+penumbra." Automated CTP summary maps appear to be relatively accurate in both the detection of acute MCA distribution infarcts and the discrimination of volumes using a 70 ml threshold.

  11. Functional-to-form mapping for assembly design automation

    NASA Astrophysics Data System (ADS)

    Xu, Z. G.; Liu, W. M.; Shen, W. D.; Yang, D. Y.; Liu, T. T.

    2017-11-01

    Assembly-level function-to-form mapping is the most effective procedure toward design automation. The research work mainly covers assembly-level function definitions, a product network model and a two-step mapping mechanism. The function-to-form mapping is divided into two steps: mapping of function to behavior (the first step) and mapping of behavior to structure (the second step). After the first-step mapping, the three-dimensional transmission chain (or 3D sketch) is studied, and feasible design computing tools are developed. The mapping procedure is relatively easy to implement interactively but quite difficult to complete automatically, so manual, semi-automatic, automatic and interactive modification of the mapping model are studied. A mechanical hand function-to-form (F-F) mapping process is illustrated to verify the design methodologies.

  12. EFFECTS OF IMPROVED PRECIPITATION ESTIMATES ON AUTOMATED RUNOFF MAPPING: EASTERN UNITED STATES

    EPA Science Inventory

    We evaluated maps of runoff created by means of two automated procedures. We implemented each procedure using precipitation estimates of both 5-km and 10-km resolution from PRISM (Parameter-elevation Regressions on Independent Slopes Model). Our goal was to determine if using the...

  13. Human brain atlas for automated region of interest selection in quantitative susceptibility mapping: application to determine iron content in deep gray matter structures

    PubMed Central

    Lim, Issel Anne L.; Faria, Andreia V.; Li, Xu; Hsu, Johnny T.C.; Airan, Raag D.; Mori, Susumu; van Zijl, Peter C. M.

    2013-01-01

    The purpose of this paper is to extend the single-subject Eve atlas from Johns Hopkins University, which currently contains diffusion tensor and T1-weighted anatomical maps, by including contrast based on quantitative susceptibility mapping. The new atlas combines a “deep gray matter parcellation map” (DGMPM) derived from a single-subject quantitative susceptibility map with the previously established “white matter parcellation map” (WMPM) from the same subject’s T1-weighted and diffusion tensor imaging data into an MNI coordinate map named the “Everything Parcellation Map in Eve Space,” also known as the “EvePM.” It allows automated segmentation of gray matter and white matter structures. Quantitative susceptibility maps from five healthy male volunteers (30 to 33 years of age) were coregistered to the Eve Atlas with AIR and Large Deformation Diffeomorphic Metric Mapping (LDDMM), and the transformation matrices were applied to the EvePM to produce automated parcellation in subject space. Parcellation accuracy was measured with a kappa analysis for the left and right structures of six deep gray matter regions. For multi-orientation QSM images, the Kappa statistic was 0.85 between automated and manual segmentation, with the inter-rater reproducibility Kappa being 0.89 for the human raters, suggesting “almost perfect” agreement between all segmentation methods. Segmentation seemed slightly more difficult for human raters on single-orientation QSM images, with the Kappa statistic being 0.88 between automated and manual segmentation, and 0.85 and 0.86 between human raters. Overall, this atlas provides a time-efficient tool for automated coregistration and segmentation of quantitative susceptibility data to analyze many regions of interest. These data were used to establish a baseline for normal magnetic susceptibility measurements for over 60 brain structures of 30- to 33-year-old males. Correlating the average susceptibility with age-based iron concentrations in gray matter structures measured by Hallgren and Sourander (1958) allowed interpolation of the average iron concentration of several deep gray matter regions delineated in the EvePM. PMID:23769915

  14. Automated mapping of impervious surfaces in urban and suburban areas: Linear spectral unmixing of high spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Yang, Jian; He, Yuhong

    2017-02-01

    Quantifying impervious surfaces in urban and suburban areas is a key step toward a sustainable urban planning and management strategy. With the availability of fine-scale remote sensing imagery, automated mapping of impervious surfaces has attracted growing attention. However, the vast majority of existing studies have selected pixel-based and object-based methods for impervious surface mapping, with few adopting sub-pixel analysis of high spatial resolution imagery. This research makes use of a vegetation-bright impervious-dark impervious linear spectral mixture model to characterize urban and suburban surface components. A WorldView-3 image acquired on May 9th, 2015 is analyzed for its potential in automated unmixing of meaningful surface materials for two urban subsets and one suburban subset in Toronto, ON, Canada. Given the wide distribution of shadows in urban areas, the linear spectral unmixing is implemented separately in non-shadowed and shadowed areas for the two urban subsets. The results indicate that the accuracy of impervious surface mapping in suburban areas reaches up to 86.99%, much higher than the accuracies in urban areas (80.03% and 79.67%). Despite its merits in mapping accuracy and automation, the application of our proposed vegetation-bright impervious-dark impervious model to map impervious surfaces is limited by the absence of a soil component. To further extend the operational transferability of our proposed method, especially to areas where extensive bare soil exists during urbanization or reclamation, it remains necessary to mask out bare soils by automated classification prior to the implementation of linear spectral unmixing.
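    The three-endmember mixture model amounts to solving a small constrained least-squares problem per pixel. A sketch with invented endmember spectra (real ones would be extracted from the WorldView-3 image itself):

```python
# Sketch: per-pixel linear unmixing with non-negative least squares for a
# vegetation / bright-impervious / dark-impervious endmember model.
import numpy as np
from scipy.optimize import nnls

# Columns: vegetation, bright impervious, dark impervious (8 bands, made up)
E = np.array([[0.05, 0.30, 0.08], [0.08, 0.35, 0.09],
              [0.06, 0.40, 0.10], [0.10, 0.45, 0.11],
              [0.35, 0.50, 0.12], [0.45, 0.52, 0.13],
              [0.50, 0.55, 0.14], [0.48, 0.58, 0.15]])

def unmix(pixel, endmembers):
    """Non-negative abundances, normalised to sum to one."""
    f, _ = nnls(endmembers, pixel)
    return f / f.sum() if f.sum() > 0 else f

pixel = 0.6 * E[:, 0] + 0.4 * E[:, 2]     # 60% vegetation, 40% dark impervious
frac = unmix(pixel, E)
print(frac.round(2), f"impervious fraction = {frac[1] + frac[2]:.2f}")
```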

  15. Knowledge maps: a tool for online assessment with automated feedback.

    PubMed

    Ho, Veronica W; Harris, Peter G; Kumar, Rakesh K; Velan, Gary M

    2018-12-01

    In higher education, most assessments or examinations comprise either multiple-choice items or open-ended questions such as modified essay questions (MEQs). Online concept and knowledge maps are potential tools for assessment, which might emphasize meaningful, integrated understanding of phenomena. We developed an online knowledge-mapping assessment tool, which provides automated feedback on student-submitted maps. We conducted a pilot study to investigate the potential utility of online knowledge mapping as a tool for automated assessment by comparing the scores generated by the software with manual grading of a MEQ on the same topic for a cohort of first-year medical students. In addition, an online questionnaire was used to gather students' perceptions of the tool. Map items were highly discriminating between students of differing knowledge of the topic overall. Regression analysis showed a significant correlation between map scores and MEQ scores, and responses to the questionnaire regarding use of knowledge maps for assessment were overwhelmingly positive. These results suggest that knowledge maps provide a similar indication of students' understanding of a topic as a MEQ, with the advantage of instant, consistent computer grading and time savings for educators. Online concept and knowledge maps could be a useful addition to the assessment repertoire in higher education.

  16. Application of automated multispectral analysis to Delaware's coastal vegetation mapping

    NASA Technical Reports Server (NTRS)

    Klemas, V.; Daiber, F.; Bartlett, D.; Crichton, O.; Fornes, A.

    1973-01-01

    A baseline mapping project was undertaken in Delaware's coastal wetlands as a prelude to an evaluation of the relative value of different parcels of marsh and the setting of priorities for use of these marshes. A description of Delaware's wetlands is given and a mapping approach is discussed together with details concerning an automated analysis. The precision and resolution of the analysis was limited primarily by the quality of the imagery used.

  17. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and the percent agreement between each pair of regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.

  18. Improving automated disturbance maps using snow-covered landsat time series stacks

    Treesearch

    Kirk M. Stueve; Ian W. Housman; Patrick L. Zimmerman; Mark D. Nelson; Jeremy Webb; Charles H. Perry; Robert A. Chastain; Dale D. Gormanson; Chengquan Huang; Sean P. Healey; Warren B. Cohen

    2012-01-01

    Snow-covered winter Landsat time series stacks are used to develop a nonforest mask to enhance automated disturbance maps produced by the Vegetation Change Tracker (VCT). This method exploits the enhanced spectral separability between forested and nonforested areas that occurs with sufficient snow cover. This method resulted in significant improvements in Vegetation...

  19. An automated approach for mapping persistent ice and snow cover over high latitude regions

    USGS Publications Warehouse

    Selkowitz, David J.; Forster, Richard R.

    2016-01-01

    We developed an automated approach for mapping persistent ice and snow cover (PISC; glaciers and perennial snowfields) from Landsat TM and ETM+ data across a variety of topography, glacier types, and climatic conditions at high latitudes (above ~65°N). Our approach exploits all available Landsat scenes acquired during the late summer (1 August–15 September) over a multi-year period and employs an automated cloud masking algorithm optimized for snow and ice covered mountainous environments. Pixels from individual Landsat scenes were classified as snow/ice covered or snow/ice free based on the Normalized Difference Snow Index (NDSI), and pixels consistently identified as snow/ice covered over a five-year period were classified as persistent ice and snow cover. The same NDSI and ratio of snow/ice-covered days to total days thresholds applied consistently across eight study regions resulted in persistent ice and snow cover maps that agreed closely in most areas with glacier area mapped for the Randolph Glacier Inventory (RGI), with a mean accuracy (agreement with the RGI) of 0.96, a mean precision (user’s accuracy of the snow/ice cover class) of 0.92, a mean recall (producer’s accuracy of the snow/ice cover class) of 0.86, and a mean F-score (a measure that considers both precision and recall) of 0.88. We also compared results from our approach to glacier area mapped from high spatial resolution imagery at four study regions and found similar results. Accuracy was lowest in regions with substantial areas of debris-covered glacier ice, suggesting that manual editing would still be required in these regions to achieve reasonable results. The similarity of our results to those from the RGI as well as glacier area mapped from high spatial resolution imagery suggests it should be possible to apply this approach across large regions to produce updated 30-m resolution maps of persistent ice and snow cover. In the short term, automated PISC maps can be used to rapidly identify areas where substantial changes in glacier area have occurred since the most recent conventional glacier inventories, highlighting areas where updated inventories are most urgently needed. From a longer term perspective, the automated production of PISC maps represents an important step toward fully automated glacier extent monitoring using Landsat or similar sensors.
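    The per-pixel rule is easy to sketch: threshold the NDSI in each clear-sky scene, then require snow/ice in a sufficient fraction of the clear observations. The threshold values below are assumptions for illustration, not necessarily the paper's calibrated values.

```python
# Sketch: persistent ice and snow cover from a stack of late-summer scenes.
import numpy as np

NDSI_THRESHOLD = 0.4          # assumed snow/ice threshold
PERSISTENCE_THRESHOLD = 0.8   # assumed fraction of clear observations

def persistent_ice_snow(green, swir, clear):
    """green/swir: (scenes, rows, cols) reflectance; clear: bool cloud mask."""
    ndsi = (green - swir) / (green + swir + 1e-9)
    snow = (ndsi > NDSI_THRESHOLD) & clear
    n_clear = clear.sum(axis=0)
    ratio = snow.sum(axis=0) / np.maximum(n_clear, 1)
    return (ratio >= PERSISTENCE_THRESHOLD) & (n_clear > 0)

# Toy stack: 10 scenes over a 2x2 area; pixel (0, 0) is always snow-covered
g = np.full((10, 2, 2), 0.1); g[:, 0, 0] = 0.6
s = np.full((10, 2, 2), 0.2); s[:, 0, 0] = 0.1
print(persistent_ice_snow(g, s, np.ones_like(g, dtype=bool)))
```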

  20. A universal method for automated gene mapping

    PubMed Central

    Zipperlen, Peder; Nairz, Knud; Rimann, Ivo; Basler, Konrad; Hafen, Ernst; Hengartner, Michael; Hajnal, Alex

    2005-01-01

    Small insertions or deletions (InDels) constitute a ubiquitous class of sequence polymorphisms found in eukaryotic genomes. Here, we present an automated high-throughput genotyping method that relies on the detection of fragment-length polymorphisms (FLPs) caused by InDels. The protocol utilizes standard sequencers and genotyping software. We have established genome-wide FLP maps for both Caenorhabditis elegans and Drosophila melanogaster that facilitate genetic mapping with a minimum of manual input and at comparatively low cost. PMID:15693948
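    Genotype calling from sized fragments reduces to comparing observed peak sizes against the two parental allele lengths; a sketch (the marker sizes and tolerance are invented):

```python
# Sketch: calling an InDel genotype from fragment-length peaks (bp).
def call_genotype(peak_sizes, allele_a, allele_b, tol=1.5):
    has_a = any(abs(s - allele_a) <= tol for s in peak_sizes)
    has_b = any(abs(s - allele_b) <= tol for s in peak_sizes)
    if has_a and has_b:
        return "heterozygous"
    return "homozygous A" if has_a else ("homozygous B" if has_b else "no call")

# Marker with a 7 bp insertion: allele A = 214 bp, allele B = 221 bp
print(call_genotype([213.8], 214, 221))          # homozygous A
print(call_genotype([214.2, 220.9], 214, 221))   # heterozygous
```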

  1. Satellite freeze forecast system: Executive summary

    NASA Technical Reports Server (NTRS)

    Martsolf, J. D. (Principal Investigator)

    1983-01-01

    A satellite-based temperature monitoring and prediction system, consisting of a computer-controlled acquisition, processing, and display system and the ten automated weather stations called by that computer, was developed and transferred to the National Weather Service. This satellite freeze forecast system (SFFS) acquires satellite data from either of two sources and surface data from 10 sites, displays the observed data in the form of color-coded thermal maps and in tables of automated weather station temperatures, computes predicted thermal maps when requested and displays such maps either automatically or manually, archives the data acquired, and makes comparisons with historical data. Except for the last function, SFFS handles these tasks in a highly automated fashion if the user so directs. The predicted thermal maps are the result of two models: a physical energy budget of the soil-atmosphere interface, and a statistical relationship between the sites at which the physical model predicts temperatures and each of the pixels of the satellite thermal map.

  2. An automated approach to mapping corn from Landsat imagery

    USGS Publications Warehouse

    Maxwell, S.K.; Nuckols, J.R.; Ward, M.H.; Hoffer, R.M.

    2004-01-01

    Most land cover maps generated from Landsat imagery involve classification of a wide variety of land cover types, whereas some studies may only need spatial information on a single cover type. For example, we required a map of corn in order to estimate exposure to agricultural chemicals for an environmental epidemiology study. Traditional classification techniques, which require the collection and processing of costly ground reference data, were not feasible for our application because of the large number of images to be analyzed. We present a new method that has the potential to automate the classification of corn from Landsat satellite imagery, resulting in a more timely product for applications covering large geographical regions. Our approach uses readily available agricultural areal estimates to enable automation of the classification process resulting in a map identifying land cover as ‘highly likely corn,’ ‘likely corn’ or ‘unlikely corn.’ To demonstrate the feasibility of this approach, we produced a map consisting of the three corn likelihood classes using a Landsat image in south central Nebraska. Overall classification accuracy of the map was 92.2% when compared to ground reference data.

  3. Cooperative Mapping for Automated Vehicles

    DOT National Transportation Integrated Search

    2017-10-01

    Localization is essential for automated vehicles, even for simple tasks such as lanekeeping. Some automated vehicle systems use their sensors to perceive their surroundings on-the-fly, such as the early variants of the Tesla Autopilot, while others s...

  4. Snow-covered Landsat time series stacks improve automated disturbance mapping accuracy in forested landscapes

    Treesearch

    Kirk M. Stueve; Ian W. Housman; Patrick L. Zimmerman; Mark D. Nelson; Jeremy B. Webb; Charles H. Perry; Robert A. Chastain; Dale D. Gormanson; Chengquan Huang; Sean P. Healey; Warren B. Cohen

    2011-01-01

    Accurate landscape-scale maps of forests and associated disturbances are critical to augment studies on biodiversity, ecosystem services, and the carbon cycle, especially in terms of understanding how the spatial and temporal complexities of damage sustained from disturbances influence forest structure and function. Vegetation change tracker (VCT) is a highly automated...

  5. A novel algorithm for fully automated mapping of geospatial ontologies

    NASA Astrophysics Data System (ADS)

    Chaabane, Sana; Jaziri, Wassim

    2018-01-01

    Geospatial information is collected from different sources, thus making spatial ontologies built for the same geographic domain heterogeneous; different and heterogeneous conceptualizations may therefore coexist. Ontology integration helps create a common repository of the geospatial ontology and allows the heterogeneities between existing ontologies to be removed. Ontology mapping is a process used in ontology integration and consists of finding correspondences between the source ontologies. This paper deals with the "mapping" process for geospatial ontologies, which consists in applying an automated algorithm to find the correspondences between concepts according to the definitions of the matching relationships. The proposed algorithm, called the "geographic ontologies mapping algorithm", defines three types of mapping: semantic, topological and spatial.
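    Of the three mapping types, the semantic one is the easiest to sketch: score label similarity between concepts of the two ontologies and keep pairs above a cutoff. difflib's ratio and the 0.8 cutoff below are stand-ins for whatever lexical measure the algorithm actually uses.

```python
# Sketch: semantic (label-based) candidate correspondences between two
# geospatial ontologies via string similarity.
from difflib import SequenceMatcher

def semantic_matches(source_labels, target_labels, cutoff=0.8):
    """Return (source, target, score) pairs above the similarity cutoff."""
    pairs = []
    for s in source_labels:
        for t in target_labels:
            score = SequenceMatcher(None, s.lower(), t.lower()).ratio()
            if score >= cutoff:
                pairs.append((s, t, round(score, 2)))
    return sorted(pairs, key=lambda p: -p[2])

onto_a = ["River", "Water body", "Road segment"]
onto_b = ["river", "waterbody", "road"]
print(semantic_matches(onto_a, onto_b))
```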

  6. Advanced paratransit system : an application of digital map, automated vehicle scheduling and vehicle location systems

    DOT National Transportation Integrated Search

    1997-05-01

    This report documents and evaluates an advanced Paratransit system demonstration project. The Santa Clara Valley Transportation Agency (SCVTA), via OUTREACH, implemented such a system, comprised of an automated trip scheduling system (ATSS) and autom...

  7. Reliability of drumlin morphometric data based on manual mapping - assessment of inter-mapper differences using a morphometrically diverse sample of relict drumlins

    NASA Astrophysics Data System (ADS)

    Jorge, Marco G.; Brennand, Tracy A.; Perkins, Andrew J.; Neudorf, Christina; Hillier, John K.; Cripps, Jonathan E.; Spagnolo, Matteo; Dinney, Meaghan; Storrar, Robert D.

    2016-04-01

    Mapper-dependent (subjective) differences in drumlin morphometry have received little attention, even though over one hundred thousand drumlins have been manually mapped and used to characterize drumlin morphometry and infer drumlin genesis, and several obstacles to objectivity in drumlin mapping can be identified. Due to uncertainty in drumlin genesis, drumlins remain putative morphogenetic landforms, yet still lack a complete single morphological definition. Additionally, post-formational degradation of relict subglacial landscapes challenges our ability: 1) to identify all drumlins in the landscape (some [potential] drumlins may be too degraded to be mapped and are thus excluded from the inventory), with implications for the analysis of field properties (e.g., spatial arrangement and autocorrelation); and 2) to accurately map the original footprint (i.e., shape and size). These issues (definitional ambiguity; degradation of original drumlin topography) are a problem for both manual and automated mapping. Automation is touted as the solution to the subjectivity of manual mapping, but the quality of any automated method directly depends on the quality of the operational definition (ruleset) it draws upon; if drumlin definitions are subjective (expert-dependent), so will be the automated algorithms relying on them. Additionally, recognizing highly degraded drumlins is, arguably, more difficult automatically than manually (visually). Because a single morphological definition is missing, mapping is expert-dependent. Therefore, quantifying the magnitude of inter-mapper differences is important for fully understanding the morphology of drumlins, constraining the robustness of drumlin morphometric inventories and assisting in the development of stricter operational definitions and mapping guidelines. We present the results of an experiment to quantify inter-mapper differences in mapped drumlin morphometry. All participants mapped 42 morphologically diverse drumlins in the Puget Lowland, WA at two spatial resolutions (1.8 m and 10.8 m cell size DEMs) in a GIS, using exactly the same base maps (analytical hillshade; semi-transparent elevation; contours) and informed by the same loose operational definition (e.g., drumlins delimited at their base by concave breaks in slope). Preliminary results (3 mappers) indicate that the differences between manual mappers are substantial. For example, for the footprints mapped from the 10.8 m terrain data: average length ranges from 4603 m to 5454 m, and the mean absolute difference in length from 693 m to 1101 m; average elongation ratio (ER) ranges from 5.0 to 6.1; average footprint area ranges from 0.39 km2 to 0.50 km2.

  8. Using knowledge rules for pharmacy mapping.

    PubMed

    Shakib, Shaun C; Che, Chengjian; Lau, Lee Min

    2006-01-01

    The 3M Health Information Systems (HIS) Healthcare Data Dictionary (HDD) is used to encode and structure patient medication data for the Electronic Health Record (EHR) of the Department of Defense's (DoD's) Armed Forces Health Longitudinal Technology Application (AHLTA). HDD Subject Matter Experts (SMEs) are responsible for initial and maintenance mapping of disparate, standalone medication master files from all 100 DoD host sites worldwide to a single concept-based vocabulary, to accomplish semantic interoperability. To achieve higher levels of automation, SMEs began defining a growing set of knowledge rules. These knowledge rules were implemented in a pharmacy mapping tool, which enhanced consistency through automation and increased mapping rate by 29%.

  9. Enhanced visual perception through tone mapping

    NASA Astrophysics Data System (ADS)

    Harrison, Andre; Mullins, Linda L.; Raglin, Adrienne; Etienne-Cummings, Ralph

    2016-05-01

    Tone mapping operators compress high dynamic range images to improve the picture quality on a digital display when the dynamic range of the display is lower than that of the image. However, tone mapping operators have largely been designed and evaluated based on the aesthetic quality of the resulting displayed image, or on how perceptually similar the compressed image appears relative to the original scene. They also often require per-image tuning of parameters depending on the content of the image. In military operations, however, the amount of information that can be perceived is more important than the aesthetic quality of the image, and any parameter adjustment needs to be as automated as possible regardless of the content of the image. We have conducted two studies to evaluate the perceivable detail of a set of tone mapping algorithms, and we apply our findings to develop and test an automated tone mapping algorithm that demonstrates a consistent improvement in the amount of perceived detail. An automated, and thereby predictable, tone mapping method enables a consistent presentation of perceivable features, can reduce the bandwidth required to transmit the imagery, and can improve the accessibility of the data by reducing the needed expertise of the analyst(s) viewing the imagery.
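    For concreteness, a fully automatic global operator in this spirit: the classic Reinhard curve with its exposure key set from the log-average luminance, so no per-image tuning is needed. This is a textbook baseline, not the authors' algorithm.

```python
# Sketch: parameter-free global tone mapping (Reinhard-style curve).
import numpy as np

def tone_map(hdr_lum, key=0.18):
    """Map HDR luminance (positive floats) to [0, 1) display luminance."""
    log_avg = np.exp(np.mean(np.log(hdr_lum + 1e-6)))  # scene 'key' estimate
    scaled = key * hdr_lum / log_avg                   # exposure normalisation
    return scaled / (1.0 + scaled)                     # compressive curve

rng = np.random.default_rng(2)
hdr = rng.lognormal(mean=0.0, sigma=2.0, size=(4, 4))  # wide dynamic range
ldr = tone_map(hdr)
print(f"in [{hdr.min():.3f}, {hdr.max():.1f}] -> "
      f"out [{ldr.min():.3f}, {ldr.max():.3f}]")
```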

  10. Extracting Lane Geometry and Topology Information from Vehicle Fleet Trajectories in Complex Urban Scenarios Using a Reversible Jump MCMC Method

    NASA Astrophysics Data System (ADS)

    Roeth, O.; Zaum, D.; Brenner, C.

    2017-05-01

    Highly automated driving (HAD) requires maps that are not only spatially precise but also up to date to an unprecedented degree. Traditionally, small, highly specialized fleets of measurement vehicles are used to generate such maps. Nevertheless, for achieving city-wide or even nation-wide coverage, automated map update mechanisms based on very large vehicle fleet data gain importance, since highly frequent measurements are only to be obtained using such an approach. Furthermore, the processing of imprecise mass data, in contrast to few dedicated highly accurate measurements, calls for a high degree of automation. We present a method for the generation of lane-accurate road network maps from vehicle trajectory data (GPS or better). Our approach therefore allows today's connected vehicle fleets to be exploited for the generation of HAD maps. The presented algorithm is built from elementary building blocks, which guarantees useful lane models, and uses a Reversible Jump Markov chain Monte Carlo method to explore the model parameters in order to reconstruct the model most likely to have emitted the input data. The approach is applied to a challenging urban real-world scenario with different trajectory accuracy levels and is evaluated against a LIDAR-based ground truth map.

  11. Advanced Map For Real-Time Process Control

    NASA Astrophysics Data System (ADS)

    Shiobara, Yasuhisa; Matsudaira, Takayuki; Sashida, Yoshio; Chikuma, Makoto

    1987-10-01

    MAP, a communications protocol for factory automation proposed by General Motors [1], has been accepted by users throughout the world and is rapidly becoming a user standard. In fact, it is now a LAN standard for factory automation. MAP is intended to interconnect different devices, such as computers and programmable devices, made by different manufacturers, enabling them to exchange information. It is based on the OSI intercomputer communications protocol standard under development by the ISO. With progress and standardization, MAP is being investigated for application to process control fields other than factory automation [2]. The transmission response time of the network system and centralized management of the data exchanged with various devices for distributed control are important in the case of real-time process control with programmable controllers, computers, and instruments connected to a LAN system. MAP/EPA and MINI MAP aim at reduced overhead in protocol processing and enhanced transmission response. If applied to real-time process control, a protocol based on point-to-point and request-response transactions limits throughput and transmission response. This paper describes an advanced MAP LAN system applied to real-time process control by adding a new data transmission control that performs multicast communication voluntarily and periodically, in the priority order of the data to be exchanged.

  12. A unified approach to VLSI layout automation and algorithm mapping on processor arrays

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Pattabiraman, S.; Srinivasan, Vinoo N.

    1993-01-01

    Development of software tools for designing supercomputing systems is highly complex and not cost-effective. To tackle this, a special-purpose PAcube silicon compiler, which integrates different design levels from cell to processor arrays, has been proposed. As a part of this, we present in this paper a novel methodology which unifies the problems of layout automation and algorithm mapping.

  13. Automated Processing of 2-D Gel Electrophoretograms of Genomic DNA for Hunting Pathogenic DNA Molecular Changes.

    PubMed

    Takahashi; Nakazawa; Watanabe; Konagaya

    1999-01-01

    We have developed automated processing algorithms for 2-dimensional (2-D) electrophoretograms of genomic DNA based on the RLGS (Restriction Landmark Genomic Scanning) method, which scans restriction enzyme recognition sites as landmarks and maps them onto a 2-D electrophoresis gel. Our processing algorithms realize automated spot recognition from RLGS electrophoretograms and automated comparison of a huge number of such images. In the final stage of the automated processing, a master spot pattern, onto which all the spots in the RLGS images are mapped at once, can be obtained. Spot pattern variations that seem to be specific to pathogenic DNA molecular changes can be easily detected by simply looking over the master spot pattern. When we applied our algorithms to the analysis of 33 RLGS images derived from human colon tissues, we successfully detected several colon tumor specific spot pattern changes.

  14. Automated clustering of probe molecules from solvent mapping of protein surfaces: new algorithms applied to hot-spot mapping and structure-based drug design

    NASA Astrophysics Data System (ADS)

    Lerner, Michael G.; Meagher, Kristin L.; Carlson, Heather A.

    2008-10-01

    Use of solvent mapping, based on multiple-copy minimization (MCM) techniques, is common in structure-based drug discovery. The minima of small-molecule probes define locations for complementary interactions within a binding pocket. Here, we present improved methods for MCM. In particular, a Jarvis-Patrick (JP) method is outlined for grouping the final locations of minimized probes into physical clusters. This algorithm has been tested through a study of protein-protein interfaces, showing the process to be robust, deterministic, and fast in the mapping of protein "hot spots." Improvements in the initial placement of probe molecules are also described. A final application to HIV-1 protease shows how our automated technique can be used to partition data too complicated to analyze by hand. These new automated methods may be easily and quickly extended to other protein systems, and our clustering methodology may be readily incorporated into other clustering packages.
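    Jarvis-Patrick clustering itself is compact enough to sketch: two points join a cluster when each is in the other's k-nearest-neighbour list and the lists share at least kmin members. The parameter values below are illustrative, not the paper's.

```python
# Sketch: Jarvis-Patrick clustering of minimized probe positions.
import numpy as np

def jarvis_patrick(points, k=6, kmin=3):
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    neighbours = [set(np.argsort(d[i])[:k]) for i in range(n)]

    parent = list(range(n))                 # union-find over probe indices
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            # JP rule: mutual neighbours whose lists share >= kmin members
            if (i in neighbours[j] and j in neighbours[i]
                    and len(neighbours[i] & neighbours[j]) >= kmin):
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

rng = np.random.default_rng(3)
probes = np.vstack([rng.normal(0, 0.3, (10, 3)),    # two spatial hot spots
                    rng.normal(3, 0.3, (10, 3))])
print(jarvis_patrick(probes))    # two cluster labels, one per hot spot
```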

  15. Mapping landscape corridors

    Treesearch

    Peter Vogt; Kurt H. Riitters; Marcin Iwanowski; Christine Estreguil; Jacek Kozak; Pierre Soille

    2007-01-01

    Corridors are important geographic features for biological conservation and biodiversity assessment. The identification and mapping of corridors is usually based on visual interpretations of movement patterns (functional corridors) or habitat maps (structural corridors). We present a method for automated corridor mapping with morphological image processing, and...

  16. Mapping the Recent US Hurricanes Triggered Flood Events in Near Real Time

    NASA Astrophysics Data System (ADS)

    Shen, X.; Lazin, R.; Anagnostou, E. N.; Wanik, D. W.; Brakenridge, G. R.

    2017-12-01

    Synthetic Aperture Radar (SAR) observations are the only reliable remote sensing data source for mapping flood inundation during severe weather events. Unfortunately, since state-of-the-art data processing algorithms cannot meet the automation and quality standards of a near-real-time (NRT) system, quality-controlled inundation mapping by SAR currently depends heavily on manual processing, which limits our capability to quickly issue flood inundation maps at global scale. Specifically, most SAR-based inundation mapping algorithms are not fully automated, while those that are automated exhibit severe over- and/or under-detection errors that limit their potential. These detection errors are primarily caused by the strong overlap among the SAR backscattering probability density functions (PDFs) of different land cover types. In this study, we tested a newly developed NRT SAR-based inundation mapping system, named Radar Produced Inundation Diary (RAPID), using Sentinel-1 dual-polarized SAR data over recent flood events caused by Hurricanes Harvey, Irma, and Maria (2017). The system consists of 1) self-optimized multi-threshold classification, 2) over-detection removal using land-cover information and change detection, 3) under-detection compensation, and 4) machine-learning based correction. Algorithm details are introduced in another poster, H53J-1603. Good agreement was obtained by comparing the result from RAPID with visual interpretation of SAR images and manual processing from the Dartmouth Flood Observatory (DFO) (see Figure 1). Specifically, the over- and under-detections that are typically noted in automated methods are reduced to negligible levels. This performance indicates that RAPID can address the automation and accuracy issues of current state-of-the-art algorithms and has the potential to be applied operationally to a number of satellite SAR missions, such as SWOT, ALOS, Sentinel, etc. RAPID data can support many applications, such as rapid assessment of damage losses and disaster alleviation/rescue at global scale.
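    The core difficulty named above (overlapping backscatter PDFs) is what threshold selection must cope with. As a much-simplified stand-in for RAPID's self-optimized multi-threshold step, an Otsu split of the backscatter histogram separates dark (water-like) from brighter (land) pixels:

```python
# Sketch: histogram-derived backscatter threshold for SAR water mapping.
# Otsu's method here is illustrative; RAPID's actual scheme is more elaborate.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(4)
# Synthetic sigma0 (dB): open water is specular and appears dark;
# land backscatter is higher and more spread out.
water = rng.normal(-20.0, 1.5, 5_000)
land = rng.normal(-8.0, 2.5, 15_000)
sigma0 = np.concatenate([water, land])

t = threshold_otsu(sigma0)
flooded = sigma0 < t                     # below-threshold pixels -> water
print(f"threshold = {t:.1f} dB, water fraction = {flooded.mean():.2f}")
```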

  17. Automated mapping of persistent ice and snow cover across the western U.S. with Landsat

    NASA Astrophysics Data System (ADS)

    Selkowitz, David J.; Forster, Richard R.

    2016-07-01

    We implemented an automated approach for mapping persistent ice and snow cover (PISC) across the conterminous western U.S. using all available Landsat TM and ETM+ scenes acquired during the late summer/early fall period between 2010 and 2014. Two separate validation approaches indicate this dataset provides a more accurate representation of glacial ice and perennial snow cover for the region than either the U.S. glacier database derived from US Geological Survey (USGS) Digital Raster Graphics (DRG) maps (based on aerial photography primarily from the 1960s-1980s) or the National Land Cover Database 2011 perennial ice and snow cover class. Our 2010-2014 Landsat-derived dataset indicates 28% less glacier and perennial snow cover than the USGS DRG dataset. There are larger differences between the datasets in some regions, such as the Rocky Mountains of Northwest Wyoming and Southwest Montana, where the Landsat dataset indicates 54% less PISC area. Analysis of Landsat scenes from 1987-1988 and 2008-2010 for three regions using a more conventional, semi-automated approach indicates substantial decreases in glaciers and perennial snow cover that correlate with differences between PISC mapped by the USGS DRG dataset and the automated Landsat-derived dataset. This suggests that most of the differences in PISC between the USGS DRG and the Landsat-derived dataset can be attributed to decreases in PISC, as opposed to differences between mapping techniques. While the dataset produced by the automated Landsat mapping approach is not designed to serve as a conventional glacier inventory that provides glacier outlines and attribute information, it allows for an updated estimate of PISC for the conterminous U.S. as well as for smaller regions. Additionally, the new dataset highlights areas where decreases in PISC have been most significant over the past 25-50 years.

  18. Automated mapping of persistent ice and snow cover across the western U.S. with Landsat

    USGS Publications Warehouse

    Selkowitz, David J.; Forster, Richard R.

    2016-01-01

    We implemented an automated approach for mapping persistent ice and snow cover (PISC) across the conterminous western U.S. using all available Landsat TM and ETM+ scenes acquired during the late summer/early fall period between 2010 and 2014. Two separate validation approaches indicate this dataset provides a more accurate representation of glacial ice and perennial snow cover for the region than either the U.S. glacier database derived from US Geological Survey (USGS) Digital Raster Graphics (DRG) maps (based on aerial photography primarily from the 1960s–1980s) or the National Land Cover Database 2011 perennial ice and snow cover class. Our 2010–2014 Landsat-derived dataset indicates 28% less glacier and perennial snow cover than the USGS DRG dataset. There are larger differences between the datasets in some regions, such as the Rocky Mountains of Northwest Wyoming and Southwest Montana, where the Landsat dataset indicates 54% less PISC area. Analysis of Landsat scenes from 1987–1988 and 2008–2010 for three regions using a more conventional, semi-automated approach indicates substantial decreases in glaciers and perennial snow cover that correlate with differences between PISC mapped by the USGS DRG dataset and the automated Landsat-derived dataset. This suggests that most of the differences in PISC between the USGS DRG and the Landsat-derived dataset can be attributed to decreases in PISC, as opposed to differences between mapping techniques. While the dataset produced by the automated Landsat mapping approach is not designed to serve as a conventional glacier inventory that provides glacier outlines and attribute information, it allows for an updated estimate of PISC for the conterminous U.S. as well as for smaller regions. Additionally, the new dataset highlights areas where decreases in PISC have been most significant over the past 25–50 years.

  19. Performance of Automated Software in the Assessment of Segmental Left Ventricular Function in Cardiac CT: Comparison with Cardiac Magnetic Resonance.

    PubMed

    Wang, Rui; Meinel, Felix G; Schoepf, U Joseph; Canstein, Christian; Spearman, James V; De Cecco, Carlo N

    2015-12-01

    To evaluate the accuracy, reliability and time-saving potential of novel cardiac CT (CCT)-based automated software for the assessment of segmental left ventricular function, compared to visual and manual quantitative assessment of CCT and cardiac magnetic resonance (CMR). Forty-seven patients with suspected or known coronary artery disease (CAD) were enrolled in the study. Wall thickening was calculated. Segmental LV wall motion was automatically calculated and shown as a colour-coded polar map. Processing time for each method was recorded. Mean wall thickness in the systolic and diastolic phases on the polar map, CCT, and CMR was 9.2 ± 0.1 mm and 14.9 ± 0.2 mm, 8.9 ± 0.1 mm and 14.5 ± 0.1 mm, and 8.3 ± 0.1 mm and 13.6 ± 0.1 mm, respectively. Mean wall thickening was 68.4 ± 1.5%, 64.8 ± 1.4% and 67.1 ± 1.4%, respectively. Agreement for the assessment of LV wall motion between CCT, CMR and polar maps was good. Bland-Altman plots and ICC indicated good agreement between CCT, CMR and automated polar maps for the diastolic and systolic segmental wall thickness and thickening. The processing time using polar maps was significantly decreased compared with CCT and CMR. Automated evaluation of segmental LV function with polar maps provides measurements similar to manual CCT and CMR evaluation, albeit with substantially reduced analysis time. • Cardiac computed tomography (CCT) can accurately assess segmental left ventricular wall function. • A novel automated software permits accurate and fast evaluation of wall function. • The software may improve the clinical implementation of segmental functional analysis.

  20. Kohonen Self-Organizing Maps in Validity Maintenance for Automated Scoring of Constructed Response.

    ERIC Educational Resources Information Center

    Williamson, David M.; Bejar, Isaac I.

    As the automated scoring of constructed responses reaches operational status, monitoring the scoring process becomes a primary concern, particularly if automated scoring is intended to operate completely unassisted by humans. Using actual candidate selections from the Architectural Registration Examination (n=326), this study uses Kohonen…

  1. Using Knowledge Rules for Pharmacy Mapping

    PubMed Central

    Shakib, Shaun C.; Che, Chengjian; Lau, Lee Min

    2006-01-01

    The 3M Health Information Systems (HIS) Healthcare Data Dictionary (HDD) is used to encode and structure patient medication data for the Electronic Health Record (EHR) of the Department of Defense’s (DoD’s) Armed Forces Health Longitudinal Technology Application (AHLTA). HDD Subject Matter Experts (SMEs) are responsible for initial and maintenance mapping of disparate, standalone medication master files from all 100 DoD host sites worldwide to a single concept-based vocabulary, to accomplish semantic interoperability. To achieve higher levels of automation, SMEs began defining a growing set of knowledge rules. These knowledge rules were implemented in a pharmacy mapping tool, which enhanced consistency through automation and increased mapping rate by 29%. PMID:17238709

  2. Automated structural classification of lipids by machine learning.

    PubMed

    Taylor, Ryan; Miller, Ryan H; Miller, Ryan D; Porter, Michael; Dalgleish, James; Prince, John T

    2015-03-01

    Modern lipidomics is largely dependent upon structural ontologies because of the great diversity exhibited in the lipidome, but no automated lipid classification exists to facilitate this partitioning. The size of the putative lipidome far exceeds the number currently classified, despite a decade of work. Automated classification would benefit ongoing classification efforts by decreasing the time needed and increasing the accuracy of classification while providing classifications for mass spectral identification algorithms. We introduce a tool that automates classification into the LIPID MAPS ontology of known lipids with >95% accuracy and novel lipids with 63% accuracy. The classification is based upon simple chemical characteristics and modern machine learning algorithms. The decision trees produced are intelligible and can be used to clarify implicit assumptions about the current LIPID MAPS classification scheme. These characteristics and decision trees are made available to facilitate alternative implementations. We also discovered many hundreds of lipids that are currently misclassified in the LIPID MAPS database, strongly underscoring the need for automated classification. Source code and chemical characteristic lists as SMARTS search strings are available under an open-source license at https://www.github.com/princelab/lipid_classifier. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
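
    The published classifier derives simple chemical characteristics from SMARTS search strings and feeds them to decision trees. The sketch below mimics that pipeline with RDKit; the SMARTS patterns and the hand-written decision rule are illustrative stand-ins, not the released characteristic lists or the learned trees.

      # Sketch of SMARTS-based chemical characteristics feeding a decision
      # step; patterns and class rules are illustrative, not the published ones.
      from rdkit import Chem

      FEATURES = {
          "ester":      Chem.MolFromSmarts("C(=O)O[C]"),
          "phosphate":  Chem.MolFromSmarts("P(=O)(O)O"),
          "long_chain": Chem.MolFromSmarts("CCCCCCCC"),
      }

      def characteristics(smiles):
          mol = Chem.MolFromSmiles(smiles)
          return {name: mol.HasSubstructMatch(p) for name, p in FEATURES.items()}

      def crude_class(feats):  # stand-in for a learned decision tree
          if feats["phosphate"] and feats["ester"]:
              return "glycerophospholipid-like"
          if feats["ester"] and feats["long_chain"]:
              return "glycerolipid-like"
          return "unclassified"

      print(crude_class(characteristics("CCCCCCCCCCCCCCCC(=O)OCC(O)CO")))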

  3. Inferring the most probable maps of underground utilities using Bayesian mapping model

    NASA Astrophysics Data System (ADS)

    Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony

    2018-03-01

    Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences of the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time using automated data processing techniques and statutory records. The statutory records, though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, integrating information from multiple sensors (raw data) with these qualitative maps and visualizing the result is challenging and requires robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps was developed as a Bayesian mapping model that integrates knowledge extracted from raw sensor data with the available statutory records. Statutory records were combined with sensor-derived hypotheses to form an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation maximization (EM) algorithm), performed robustly on both simulated and real sites in predicting linear/non-linear segments and constructing refined 2D/3D maps.
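
    At its core, the fusion step is a Bayesian update: statutory records supply a prior that an asset occupies a location, and each sensor hypothesis revises it. A minimal sketch, with hypothetical prior and likelihood values:

      # Bayesian fusion idea: statutory records give a prior that a pipe
      # occupies a location; each sensor pass updates it via Bayes' rule.
      # Priors and likelihoods below are hypothetical.
      def bayes_update(prior, p_detect_given_pipe, p_detect_given_none):
          num = p_detect_given_pipe * prior
          return num / (num + p_detect_given_none * (1.0 - prior))

      p = 0.6                                 # prior from statutory record
      for detected in [True, True, False]:    # three sensor passes
          if detected:
              p = bayes_update(p, 0.8, 0.1)   # sensor hit
          else:
              p = bayes_update(p, 0.2, 0.9)   # sensor miss
      print(f"posterior probability of buried pipe: {p:.2f}")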

  4. Classification of Mobile Laser Scanning Point Clouds from Height Features

    NASA Astrophysics Data System (ADS)

    Zheng, M.; Lemmens, M.; van Oosterom, P.

    2017-09-01

    The demand for 3D maps of cities and road networks is steadily growing, and mobile laser scanning (MLS) systems are often the preferred geo-data acquisition method for capturing such scenes. Because MLS systems are mounted on cars or vans, they can acquire billions of points of road scenes within a few hours of survey. Manual processing of point clouds is labour-intensive and thus time-consuming and expensive; hence, the need for rapid and automated methods for 3D mapping of dense point clouds is growing exponentially. Over the last five years, research on automated 3D mapping of MLS data has intensified tremendously. In this paper, we present our work on automated classification of MLS point clouds. In the present stage of the research we exploited three features - two height components and one reflectance value - and achieved an overall accuracy of 73%, which is encouraging for further refinement of our approach.

  5. Automated detection of submerged navigational obstructions in freshwater impoundments with hull mounted sidescan sonar

    NASA Astrophysics Data System (ADS)

    Morris, Phillip A.

    The prevalence of low-cost side-scanning sonar systems mounted on small recreational vessels has created improved opportunities to identify and map submerged navigational hazards in freshwater impoundments. However, these economical sensors also present unique challenges for automated techniques. This research explores related literature in automated sonar imagery processing and mapping technology, proposes and implements a framework derived from these sources, and evaluates the approach with video collected from a recreational-grade sonar system. Image analysis techniques, including optical character recognition and an unsupervised computer automated detection (CAD) algorithm, are employed to extract the transducer GPS coordinates and the slant-range distance of objects protruding from the lake bottom. The retrieved information is formatted for inclusion in a spatial mapping model. Specific attributes of the sonar sensors are modeled so that probability profiles can be projected onto a three-dimensional gridded map. These profiles are computed from multiple points of view as sonar traces crisscross or come near each other; as lake levels fluctuate over time, so do the elevations of those points of view. With each sonar record, the probability of a hazard existing at certain elevations at the respective grid points is updated with Bayesian mechanics. As reinforcing data are collected, the confidence of the map improves. Given a lake's current elevation and a vessel draft, the final generated map can identify areas of the lake that have a high probability of containing hazards that threaten navigation. The approach is implemented in C/C++ utilizing OpenCV, Tesseract OCR, and QGIS open-source software and evaluated in a designated test area at Lake Lavon, Collin County, Texas.
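
    The Bayesian grid update described above can be illustrated with a log-odds occupancy-grid scheme, a common formulation for exactly this kind of evidence accumulation; the sensor hit/false-alarm probabilities below are hypothetical, not calibrated to any sonar unit.

      # Sketch of the Bayesian grid update for hazard mapping: each sonar pass
      # nudges per-cell evidence, kept in log-odds form for numerical ease.
      import numpy as np

      log_odds = np.zeros((100, 100))         # 2D grid of hazard evidence

      def update_cell(i, j, detected, p_hit=0.7, p_false=0.2):
          """Fold one sonar observation of cell (i, j) into its log-odds."""
          if detected:
              log_odds[i, j] += np.log(p_hit / p_false)
          else:
              log_odds[i, j] += np.log((1 - p_hit) / (1 - p_false))

      def probability(i, j):
          return 1.0 / (1.0 + np.exp(-log_odds[i, j]))

      update_cell(10, 42, True)               # reinforcing passes raise confidence
      update_cell(10, 42, True)
      print(f"hazard probability at (10, 42): {probability(10, 42):.2f}")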

  6. Automated structure solution, density modification and model building.

    PubMed

    Terwilliger, Thomas C

    2002-11-01

    The approaches that form the basis of automated structure solution in SOLVE and RESOLVE are described. The use of a scoring scheme to convert decision making in macromolecular structure solution to an optimization problem has proven very useful and in many cases a single clear heavy-atom solution can be obtained and used for phasing. Statistical density modification is well suited to an automated approach to structure solution because the method is relatively insensitive to choices of numbers of cycles and solvent content. The detection of non-crystallographic symmetry (NCS) in heavy-atom sites and checking of potential NCS operations against the electron-density map has proven to be a reliable method for identification of NCS in most cases. Automated model building beginning with an FFT-based search for helices and sheets has been successful in automated model building for maps with resolutions as low as 3 Å. The entire process can be carried out in a fully automatic fashion in many cases.

  7. The relationship of acquisition systems to automated stereo correlation.

    USGS Publications Warehouse

    Colvocoresses, A.P.

    1983-01-01

    Today a concerted effort is being made to expedite the mapping process through automated correlation of stereo data. Stereo correlation involves the comparison of radiance (brightness) signals or patterns recorded by sensors. Conventionally, two-dimensional area correlation is utilized but this is a rather slow and cumbersome procedure. Digital correlation can be performed in only one dimension where suitable signal patterns exist, and the one-dimensional mode is much faster. Electro-optical (EO) systems, suitable for space use, also have much greater flexibility than film systems. Thus, an EO space system can be designed which will optimize one-dimensional stereo correlation and lead toward the automation of topographic mapping.-from Author
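
    A minimal illustration of one-dimensional correlation of radiance signals: slide one scan line against its stereo mate and keep the shift with the highest correlation. The signals below are synthetic; no specific sensor is assumed.

      # One-dimensional stereo correlation sketch: find the shift that best
      # aligns a radiance signal with its counterpart in the stereo mate.
      import numpy as np

      rng = np.random.default_rng(1)
      left = rng.random(200)
      true_shift = 7
      right = np.roll(left, true_shift) + 0.05 * rng.random(200)  # shifted + noise

      def best_shift(a, b, max_shift=20):
          """Normalised cross-correlation over candidate shifts."""
          scores = []
          for s in range(-max_shift, max_shift + 1):
              scores.append(np.corrcoef(a, np.roll(b, -s))[0, 1])
          return np.argmax(scores) - max_shift

      print(best_shift(left, right))          # recovers ~7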

  8. A Comparison of Satellite-Derived Snow Maps with a Focus on Ephemeral Snow in North Carolina

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.; Fuhrmann, Christopher M.; Perry, L. Baker; Riggs, George A.; Robinson, David A.; Foster, James L.

    2010-01-01

    In this paper, we focus on the attributes and limitations of four commonly used daily snow-cover products with respect to their ability to map ephemeral snow in central and eastern North Carolina. We show that the Moderate-Resolution Imaging Spectroradiometer (MODIS) fractional snow-cover maps can delineate the snow-covered area very well through the use of a fully automated algorithm, but suffer from the limitation that cloud cover precludes mapping some ephemeral snow. The semi-automated Interactive Multisensor Snow and Ice Mapping System (IMS) and Rutgers Global Snow Lab (GSL) snow maps are often able to capture ephemeral snow cover because ground-station data are employed to develop the snow maps; the Rutgers GSL maps are based on the IMS maps. Finally, the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) provides good detail of snow-water equivalent, especially in deeper snow, but may miss ephemeral snow cover because it is often very thin or wet; the AMSR-E maps also suffer from coarse spatial resolution. We conclude that the southeastern United States represents a good test region for validating the ability of satellite snow-cover maps to capture ephemeral snow cover.

  9. The Choice between MapMan and Gene Ontology for Automated Gene Function Prediction in Plant Science

    PubMed Central

    Klie, Sebastian; Nikoloski, Zoran

    2012-01-01

    Since the introduction of the Gene Ontology (GO), the analysis of high-throughput data has become tightly coupled with the use of ontologies to establish associations between knowledge and data in an automated fashion. Ontologies provide a systematic description of knowledge by a controlled vocabulary of defined structure in which ontological concepts are connected by pre-defined relationships. In plant science, MapMan and GO offer two alternatives for ontology-driven analyses. Unlike GO, initially developed to characterize microbial systems, MapMan was specifically designed to cover plant-specific pathways and processes. While the dependencies between concepts in MapMan are modeled as a tree, in GO these are captured in a directed acyclic graph. Therefore, the difference in ontologies may cause discrepancies in data reduction, visualization, and hypothesis generation. Here we provide the first systematic comparative analysis of GO and MapMan for the model plant species Arabidopsis thaliana (Arabidopsis) with respect to their structural properties and differences in distributions of information content. In addition, we investigate the effect of the two ontologies on the specificity and sensitivity of automated gene function prediction via the coupling of co-expression networks and the guilt-by-association principle. Automated gene function prediction is particularly needed for the model plant Arabidopsis, in which only half of the genes have been functionally annotated based on sequence similarity to known genes. The results highlight the need for structured representation of species-specific biological knowledge and warrant caution in the design principles employed in future ontologies. PMID:22754563

  10. Assessing the Agreement Between Eo-Based Semi-Automated Landslide Maps with Fuzzy Manual Landslide Delineation

    NASA Astrophysics Data System (ADS)

    Albrecht, F.; Hölbling, D.; Friedl, B.

    2017-09-01

    Landslide mapping benefits from the ever-increasing availability of Earth Observation (EO) data resulting from programmes like the Copernicus Sentinel missions and from improved infrastructure for data access. However, while the dominant method is still manual delineation, there is a growing need for improved automated landslide information extraction from EO data. Object-based image analysis (OBIA) provides the means for fast and efficient extraction of landslide information. To assess their quality, automated results are often compared to manually delineated landslide maps. Although there is awareness of the uncertainties inherent in manual delineations, there is a lack of understanding of how they affect the levels of agreement in a direct comparison of OBIA-derived and manually derived landslide maps. In order to provide an improved reference, we present a fuzzy approach for the manual delineation of landslides on optical satellite images, thereby making the inherent uncertainties of the delineation explicit. The fuzzy manual delineation and the OBIA classification are compared by accuracy metrics accepted in the remote sensing community. We have tested this approach on high resolution (HR) satellite images of three large landslides in Austria and Italy. We were able to show that the deviation of the OBIA result from the manual delineation can mainly be attributed to the uncertainty inherent in the manual delineation process, a relevant issue for the design of validation processes for OBIA-derived landslide maps.

  11. Object-based analysis of multispectral airborne laser scanner data for land cover classification and map updating

    NASA Astrophysics Data System (ADS)

    Matikainen, Leena; Karila, Kirsi; Hyyppä, Juha; Litkey, Paula; Puttonen, Eetu; Ahokas, Eero

    2017-06-01

    During the last 20 years, airborne laser scanning (ALS), often combined with passive multispectral information from aerial images, has shown its high feasibility for automated mapping processes. The main benefits have been achieved in the mapping of elevated objects such as buildings and trees. Recently, the first multispectral airborne laser scanners have been launched, and active multispectral information is for the first time available for 3D ALS point clouds from a single sensor. This article discusses the potential of this new technology in map updating, especially in automated object-based land cover classification and change detection in a suburban area. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from an object-based random forests analysis suggest that the multispectral ALS data are very useful for land cover classification, considering both elevated classes and ground-level classes. The overall accuracy of the land cover classification results with six classes was 96% compared with validation points. The classes under study included building, tree, asphalt, gravel, rocky area and low vegetation. Compared to classification of single-channel data, the main improvements were achieved for ground-level classes. According to feature importance analyses, multispectral intensity features based on several channels were more useful than those based on one channel. Automatic change detection for buildings and roads was also demonstrated by utilising the new multispectral ALS data in combination with old map vectors. In change detection of buildings, an old digital surface model (DSM) based on single-channel ALS data was also used. Overall, our analyses suggest that the new data have high potential for further increasing the automation level in mapping. Unlike passive aerial imaging commonly used in mapping, the multispectral ALS technology is independent of external illumination conditions, and there are no shadows on intensity images produced from the data. These are significant advantages in developing automated classification and change detection procedures.
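
    As a generic sketch of the object-based random forests step (not the authors' feature set), per-object features such as mean height and per-channel intensities can be fed to scikit-learn's RandomForestClassifier; all feature names and data below are hypothetical.

      # Sketch of object-based random forest classification: each segmented
      # object is described by features and assigned a land cover class.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(2)
      # columns: mean height, channel-1, channel-2, channel-3 intensity (synthetic)
      X = rng.random((500, 4))
      y = rng.integers(0, 6, size=500)      # 6 classes: building, tree, asphalt, ...

      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
      print(clf.feature_importances_)       # e.g., which channels matter most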

  12. LC-MS/MS Peptide Mapping with Automated Data Processing for Routine Profiling of N-Glycans in Immunoglobulins

    NASA Astrophysics Data System (ADS)

    Shah, Bhavana; Jiang, Xinzhao Grace; Chen, Louise; Zhang, Zhongqi

    2014-06-01

    Protein N-Glycan analysis is traditionally performed by high pH anion exchange chromatography (HPAEC), reversed phase liquid chromatography (RPLC), or hydrophilic interaction liquid chromatography (HILIC) on fluorescence-labeled glycans enzymatically released from the glycoprotein. These methods require time-consuming sample preparations and do not provide site-specific glycosylation information. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) peptide mapping is frequently used for protein structural characterization and, as a bonus, can potentially provide glycan profile on each individual glycosylation site. In this work, a recently developed glycopeptide fragmentation model was used for automated identification, based on their MS/MS, of N-glycopeptides from proteolytic digestion of monoclonal antibodies (mAbs). Experimental conditions were optimized to achieve accurate profiling of glycoforms. Glycan profiles obtained from LC-MS/MS peptide mapping were compared with those obtained from HPAEC, RPLC, and HILIC analyses of released glycans for several mAb molecules. Accuracy, reproducibility, and linearity of the LC-MS/MS peptide mapping method for glycan profiling were evaluated. The LC-MS/MS peptide mapping method with fully automated data analysis requires less sample preparation, provides site-specific information, and may serve as an alternative method for routine profiling of N-glycans on immunoglobulins as well as other glycoproteins with simple N-glycans.

  13. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm.

    PubMed

    Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-10-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.
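
    The dose-accumulation step itself is simple once a dose map and a label volume exist, which is why modest boundary errors wash out in the mean; a minimal sketch with synthetic arrays:

      # Mean organ dose from a Monte Carlo dose map and a segmentation label
      # volume: average the dose voxels inside each labelled region.
      # Arrays here are synthetic stand-ins.
      import numpy as np

      rng = np.random.default_rng(3)
      dose = rng.random((64, 64, 64))                 # Gy per voxel (synthetic)
      labels = rng.integers(0, 10, size=dose.shape)   # 0 = background, 1..9 = organs

      mean_dose = {organ: dose[labels == organ].mean() for organ in range(1, 10)}
      print(mean_dose[1])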

  14. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm

    PubMed Central

    Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-01-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070

  15. aMAP is a validated pipeline for registration and segmentation of high-resolution mouse brain data

    PubMed Central

    Niedworok, Christian J.; Brown, Alexander P. Y.; Jorge Cardoso, M.; Osten, Pavel; Ourselin, Sebastien; Modat, Marc; Margrie, Troy W.

    2016-01-01

    The validation of automated image registration and segmentation is crucial for accurate and reliable mapping of brain connectivity and function in three-dimensional (3D) data sets. While validation standards are necessarily high and routinely met in the clinical arena, they have to date been lacking for high-resolution microscopy data sets obtained from the rodent brain. Here we present a tool for optimized automated mouse atlas propagation (aMAP) based on clinical registration software (NiftyReg) for anatomical segmentation of high-resolution 3D fluorescence images of the adult mouse brain. We empirically evaluate aMAP as a method for registration and subsequent segmentation by validating it against the performance of expert human raters. This study therefore establishes a benchmark standard for mapping the molecular function and cellular connectivity of the rodent brain. PMID:27384127

  16. The Buccaneer software for automated model building. 1. Tracing protein chains.

    PubMed

    Cowtan, Kevin

    2006-09-01

    A new technique for the automated tracing of protein chains in experimental electron-density maps is described. The technique relies on the repeated application of an oriented electron-density likelihood target function to identify likely C(alpha) positions. This function is applied both in the location of a few promising 'seed' positions in the map and to grow those initial C(alpha) positions into extended chain fragments. Techniques for assembling the chain fragments into an initial chain trace are discussed.

  17. Mapping the Stacks: Sustainability and User Experience of Animated Maps in Library Discovery Interfaces

    ERIC Educational Resources Information Center

    McMillin, Bill; Gibson, Sally; MacDonald, Jean

    2016-01-01

    Animated maps of the library stacks were integrated into the catalog interface at Pratt Institute and into the EBSCO Discovery Service interface at Illinois State University. The mapping feature was developed for optimal automation of the update process to enable a range of library personnel to update maps and call-number ranges. The development…

  18. Role of post-mapping computed tomography in virtual-assisted lung mapping.

    PubMed

    Sato, Masaaki; Nagayama, Kazuhiro; Kuwano, Hideki; Nitadori, Jun-Ichi; Anraku, Masaki; Nakajima, Jun

    2017-02-01

    Background: Virtual-assisted lung mapping is a novel bronchoscopic preoperative lung marking technique in which virtual bronchoscopy is used to predict the locations of multiple dye markings. Post-mapping computed tomography is performed to confirm the locations of the actual markings. This study aimed to examine the accuracy of marking locations predicted by virtual bronchoscopy and elucidate the role of post-mapping computed tomography. Methods: Automated and manual virtual bronchoscopy was used to predict marking locations. After bronchoscopic dye marking under local anesthesia, computed tomography was performed to confirm the actual marking locations before surgery. Discrepancies between marking locations predicted by the different methods and the actual markings were examined on computed tomography images. Forty-three markings in 11 patients were analyzed. Results: The average difference between the predicted and actual marking locations was 30 mm. There was no significant difference between the latest version of the automated virtual bronchoscopy system (30.7 ± 17.2 mm) and manual virtual bronchoscopy (29.8 ± 19.1 mm). The difference was significantly greater in the upper vs. lower lobes (37.1 ± 20.1 vs. 23.0 ± 6.8 mm for automated virtual bronchoscopy; p < 0.01). Despite this discrepancy, all targeted lesions were successfully resected using 3-dimensional image guidance based on post-mapping computed tomography reflecting the actual marking locations. Conclusions: Markings predicted by virtual bronchoscopy were dislocated from the actual markings by an average of 3 cm. However, surgery was accurately performed using post-mapping computed tomography guidance, demonstrating the indispensable role of post-mapping computed tomography in virtual-assisted lung mapping.

  19. Reconstruction of biological pathways and metabolic networks from in silico labeled metabolites.

    PubMed

    Hadadi, Noushin; Hafner, Jasmin; Soh, Keng Cher; Hatzimanikatis, Vassily

    2017-01-01

    Reaction atom mappings track the positional changes of all of the atoms between the substrates and the products as they undergo a biochemical transformation. However, information on atom transitions in the context of metabolic pathways is not widely available in the literature. Understanding metabolic pathways at the atomic level is of great importance, as it can deconvolute the overlapping catabolic/anabolic pathways that result in the observed metabolic phenotype. The automated identification of atom transitions within a metabolic network is a very challenging task, since the complexity of metabolic networks increases dramatically when we move from metabolite-level to atom-level studies. Despite extensive study through various approaches, the field of atom mapping of metabolic networks lacks an automated approach that (i) accounts for information on the reaction mechanism during atom mapping and (ii) is extendable from individual atom-mapped reactions to atom-mapped reaction networks. Here we introduce a computational framework, iAM.NICE (in silico Atom Mapped Network Integrated Computational Explorer), for the systematic atom-level reconstruction of metabolic networks from in silico labelled substrates. iAM.NICE is, to our knowledge, the first automated atom-mapping algorithm based on the underlying enzymatic biotransformation mechanisms, and its application goes beyond individual reactions to the reconstruction of atom-mapped metabolic networks. We illustrate the applicability of our method through the reconstruction of atom-mapped reactions of the KEGG database, and we provide an example of an atom-level representation of the core metabolic network of E. coli. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. (Semi-)Automated landform mapping of the alpine valley Gradental (Austria) based on LiDAR data

    NASA Astrophysics Data System (ADS)

    Strasser, T.; Eisank, C.

    2012-04-01

    Alpine valleys are typically characterised as complex, hierarchically structured systems with rapid landform changes. Detection of landform changes can be supported by automated geomorphological mapping. In particular, analysis over short time scales requires a method for standardised, unbiased geomorphological map reproduction, which automated mapping techniques can deliver. In general, digital geomorphological mapping is a challenging task, since knowledge about landforms, with respect to their natural boundaries as well as their hierarchical and scaling relationships, has to be integrated in an objective way. A combination of very high spatial resolution (VHSR) data such as LiDAR and new methods like object-based image analysis (OBIA) allows for a more standardised production of geomorphological maps. In OBIA, the processing units are spatially configured objects created by multi-scale segmentation. Therefore, not only spectral information can be used for assigning the objects to geomorphological classes, but spatial and topological properties can also be exploited. In this study we focus on the detection of landforms, especially bedrock and sediment deposits (alluvium, debris cones, talus, moraines, rock glaciers), as well as glaciers. The study site Gradental [N 46°58'29.1"/ E 12°48'53.8"] is located in the Schobergruppe (Austria, Carinthia) and is characterised by heterogeneous geological conditions and high process activity. The area is difficult to access and dominated by steep slopes, thus hindering fast and detailed geomorphological field mapping. Landforms are identified using aerial and terrestrial LiDAR data (1 m spatial resolution). These DEMs are analysed with an object-based hierarchical approach structured in three main steps. The first step is to define the occurring landforms by basic land surface parameters (LSPs), topology and hierarchy relations; based on those definitions a semantic model is created. Secondly, a multi-scale segmentation is performed on a three-band LSP layer that integrates slope, aspect and plan curvature, which express the driving forces of geomorphological processes. In the third step, the generated multi-level object structures are classified in order to produce the geomorphological map. The classification rules are derived from the semantic model. Due to landform type-specific scale dependencies of the LSPs, the LSP values used in the classification are calculated in a multi-scale manner by progressively enlarging the size of the moving window. In addition, object form properties (density, compactness, rectangular fit) are utilised as additional information for landform characterisation. Validation is performed by intersecting a visually interpreted reference map with the classification output map and calculating accuracy matrices; it shows an overall accuracy of 78.25% and a Kappa of 0.65. The natural borders of landforms can be easily detected through the use of slope, aspect and plan curvature. This study illustrates the potential of OBIA for a more standardised and automated mapping of surface units (landforms, land cover); the presented methodology thus offers a prospective automated geomorphological mapping approach for alpine regions.

  1. Evaluation of automated global mapping of Reference Soil Groups of WRB2015

    NASA Astrophysics Data System (ADS)

    Mantel, Stephan; Caspari, Thomas; Kempen, Bas; Schad, Peter; Eberhardt, Einar; Ruiperez Gonzalez, Maria

    2017-04-01

    SoilGrids is an automated system that provides global predictions for standard numeric soil properties at seven standard depths down to 200 cm, currently at spatial resolutions of 1 km and 250 m. In addition, the system provides predictions of depth to bedrock and of the distribution of soil classes based on WRB and USDA Soil Taxonomy (ST). In SoilGrids250m (1), soil classes (WRB, version 2006) consist of the RSG and the first prefix qualifier, whereas in SoilGrids1km (2), the soil class was assessed at RSG level. Automated mapping of World Reference Base (WRB) Reference Soil Groups (RSGs) at a global level has great advantages: maps can be updated in a short time span with relatively little effort when new data become available. To translate soil names of older versions of FAO/WRB and national classification systems of the source data into names according to WRB 2006, correlation tables are used in SoilGrids. Soil properties and classes are predicted independently of each other, which means that the combinations of soil properties for the same cells, or soil property-soil class combinations, do not necessarily yield logical combinations when the map layers are studied jointly. The model prediction procedure is robust and is probably a minor source of error in the prediction of RSGs; it seems that the quality of the original soil classification in the data and the use of correlation tables are the largest sources of error in mapping the RSG distribution patterns. Predicted patterns of dominant RSGs were evaluated in selected areas and sources of error were identified. Suggestions are made for the improvement of WRB2015 RSG distribution predictions in SoilGrids. Keywords: Automated global mapping; World Reference Base for Soil Resources; Data evaluation; Data quality assurance. References: (1) Hengl T, de Jesus JM, Heuvelink GBM, Ruiperez Gonzalez M, Kilibarda M, et al. (2016) SoilGrids250m: global gridded soil information based on Machine Learning. Earth System Science Data (ESSD), in review. (2) Hengl T, de Jesus JM, MacMillan RA, Batjes NH, Heuvelink GBM, et al. (2014) SoilGrids1km — Global Soil Information Based on Automated Mapping. PLoS ONE 9(8): e105992. doi:10.1371/journal.pone.0105992

  2. Description and validation of an automated methodology for mapping mineralogy, vegetation, and hydrothermal alteration type from ASTER satellite imagery with examples from the San Juan Mountains, Colorado

    USGS Publications Warehouse

    Rockwell, Barnaby W.

    2012-01-01

    The efficacy of airborne spectroscopic, or "hyperspectral," remote sensing for geoenvironmental watershed evaluations and deposit-scale mapping of exposed mineral deposits has been demonstrated. However, the acquisition, processing, and analysis of such airborne data at regional and national scales can be time and cost prohibitive. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensor carried by the NASA Earth Observing System Terra satellite was designed for mineral mapping and the acquired data can be efficiently used to generate uniform mineral maps over very large areas. Multispectral remote sensing data acquired by the ASTER sensor were analyzed to identify and map minerals, mineral groups, hydrothermal alteration types, and vegetation groups in the western San Juan Mountains, Colorado, including the Silverton and Lake City calderas. This mapping was performed in support of multidisciplinary studies involving the predictive modeling of surface water geochemistry at watershed and regional scales. Detailed maps of minerals, vegetation groups, and water were produced from an ASTER scene using spectroscopic, expert system-based analysis techniques which have been previously described. New methodologies are presented for the modeling of hydrothermal alteration type based on the Boolean combination of the detailed mineral maps, and for the entirely automated mapping of alteration types, mineral groups, and green vegetation. Results of these methodologies are compared with the more detailed maps and with previously published mineral mapping results derived from analysis of high-resolution spectroscopic data acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. Such comparisons are also presented for other mineralized and (or) altered areas including the Goldfield and Cuprite mining districts, Nevada and the central Marysvale volcanic field, Wah Wah Mountains, and San Francisco Mountains, Utah. The automated mineral group mapping products described in this study are ideal for application to mineral resource and mineral-environmental assessments at regional and national scales.

  3. Operational shoreline mapping with high spatial resolution radar and geographic processing

    USGS Publications Warehouse

    Rangoonwala, Amina; Jones, Cathleen E; Chi, Zhaohui; Ramsey, Elijah W.

    2017-01-01

    A comprehensive mapping technology was developed utilizing standard image processing and available GIS procedures to automate shoreline identification and mapping from 2 m synthetic aperture radar (SAR) HH amplitude data. Development was based on four NASA Uninhabited Aerial Vehicle SAR (UAVSAR) data collections acquired between summer 2009 and 2012, plus a fall 2012 collection, over wetlands along the Mississippi River Delta that are dominantly fronted by vegetated shorelines and beset by severe storms, toxic releases, and relative sea-level rise. In comparison to shorelines interpreted from 0.3 m and 1 m orthophotography, the automated GIS 10 m alongshore sampling found SAR shoreline mapping accuracy to be ±2 m, well within the lower range of reported shoreline mapping accuracies. This high comparability was obtained even though water levels differed between the SAR and photography image pairs, and it held for all shorelines regardless of complexity. The SAR mapping technology is highly repeatable and extendable to other SAR instruments with similar operational functionality.

  4. Alluvial substrate mapping by automated texture segmentation of recreational-grade side scan sonar imagery.

    PubMed

    Hamill, Daniel; Buscombe, Daniel; Wheaton, Joseph M

    2018-01-01

    Side scan sonar in low-cost 'fishfinder' systems has become popular in aquatic ecology and sedimentology for imaging submerged riverbed sediment at coverages and resolutions sufficient to relate bed texture to grain size. Traditional methods of mapping bed texture (i.e., physical samples) are relatively costly and offer low spatial coverage compared to sonar, which can continuously image several kilometers of channel in a few hours. Towards the goal of automating the classification of bed habitat features, we investigate relationships between substrates and statistical descriptors of bed textures in side scan sonar echograms of alluvial deposits. We develop a method for automated segmentation of bed textures into between two and five grain-size classes. Second-order texture statistics are used in conjunction with a Gaussian Mixture Model to classify the heterogeneous bed into small homogeneous patches of sand, gravel, and boulders with average accuracies of 80%, 49%, and 61%, respectively. Reach-averaged proportions of these sediment types were within 3% of those from similar maps derived from multibeam sonar.
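
    A plausible reading of the pipeline, sketched with off-the-shelf tools: grey-level co-occurrence (second-order) statistics per window, clustered with a Gaussian Mixture Model. The window size, GLCM settings, and three-class choice are illustrative assumptions; recent scikit-image versions spell the functions graycomatrix/graycoprops.

      # Texture segmentation sketch: GLCM statistics on small echogram
      # windows, clustered with a Gaussian Mixture Model. Parameters and the
      # stand-in image are illustrative, not the paper's configuration.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(4)
      echogram = (rng.random((128, 128)) * 255).astype(np.uint8)  # stand-in

      feats, win = [], 16
      for r in range(0, 128, win):
          for c in range(0, 128, win):
              patch = echogram[r:r + win, c:c + win]
              glcm = graycomatrix(patch, distances=[1], angles=[0],
                                  levels=256, symmetric=True, normed=True)
              feats.append([graycoprops(glcm, p)[0, 0]
                            for p in ("contrast", "homogeneity", "energy")])

      labels = GaussianMixture(n_components=3, random_state=0).fit_predict(np.array(feats))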

  5. Automated mapping of soybean and corn using phenology

    NASA Astrophysics Data System (ADS)

    Zhong, Liheng; Hu, Lina; Yu, Le; Gong, Peng; Biging, Gregory S.

    2016-09-01

    For two of the most important agricultural commodities, soybean and corn, remote sensing plays a substantial role in delivering timely information on crop area for economic, environmental and policy studies. Traditional long-term mapping of soybean and corn is challenging as a result of the high cost of repeated training data collection, inconsistency in image processing and interpretation, and the difficulty of handling the inter-annual variability of weather and crop progress. In this study, we developed an automated approach to map soybean and corn in the state of Paraná, Brazil for the crop years 2010-2015. The core of the approach is a decision tree classifier with rules built manually through expert interaction for repeated use. The automated approach is advantageous for its capacity for multi-year mapping without the need to re-train or re-calibrate the classifier. The time series MODerate-resolution Imaging Spectroradiometer (MODIS) reflectance product (MCD43A4) was employed to derive vegetation phenology and identify soybean and corn based on the crop calendar. To deal with the phenological similarity between soybean and corn, the surface reflectance of the shortwave infrared band scaled to a phenological stage was used to fully separate the two crops. Results suggested that the mapped areas of soybean and corn agreed with official statistics at the municipal level. The resultant map for the crop year 2012 was evaluated using an independent reference data set; the overall accuracy and Kappa coefficient were 87.2% and 0.804, respectively. As a result of the mixed-pixel effect at the 500 m resolution, classification results were biased depending on topography: in flat, broad and highly cropped areas, uncultivated lands were likely to be identified as soybean or corn, causing over-estimation of cropland area, whereas scattered crop fields in mountainous regions with dense natural vegetation tended to be overlooked. For future mapping efforts, the automated mapping algorithm has great potential for application to other image series at various scales, especially high-resolution imagery.
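
    The flavour of such a classifier, hand-built rules over phenology metrics, can be sketched as follows; the thresholds, peak window, and SWIR cutoff are invented for illustration and are not the paper's calibrated rules.

      # Sketch of a rule-based phenology classifier: locate the peak of a
      # vegetation-index time series, then apply crop-calendar and SWIR rules.
      # All thresholds and values are hypothetical.
      import numpy as np

      def classify_pixel(ndvi, swir_at_peak, peak_window=(10, 20)):
          """ndvi: composite time series for one crop year (index = composite #)."""
          peak = int(np.argmax(ndvi))
          if not (peak_window[0] <= peak <= peak_window[1]) or ndvi[peak] < 0.6:
              return "other"                # peak too weak or out of season
          # Soybean/corn separate on SWIR reflectance scaled to the peak stage.
          return "soybean" if swir_at_peak > 0.25 else "corn"

      series = np.concatenate([np.linspace(0.2, 0.85, 15), np.linspace(0.85, 0.3, 15)])
      print(classify_pixel(series, swir_at_peak=0.31))   # -> "soybean"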

  6. An automated mapping satellite system ( Mapsat).

    USGS Publications Warehouse

    Colvocoresses, A.P.

    1982-01-01

    The favorable environment of space permits a satellite to orbit the Earth with very high stability as long as no local perturbing forces are involved. Solid-state linear-array sensors have no moving parts and create no perturbing force on the satellite. Digital data from highly stabilized stereo linear arrays are amenable to simplified processing to produce both planimetric imagery and elevation data. A satellite imaging system, called Mapsat, including this concept has been proposed to produce data from which automated mapping in near real time can be accomplished. Image maps as large as 1:50 000 scale with contours as close as a 20-m interval may be produced from Mapsat data. -from Author

  7. Verification of the WFAS Lightning Efficiency Map

    Treesearch

    Paul Sopko; Don Latham; Isaac Grenfell

    2007-01-01

    A Lightning Ignition Efficiency map was added to the suite of daily maps offered by the Wildland Fire Assessment System (WFAS) in 1999. This map computes a lightning probability of ignition (POI) based on the estimated fuel type, fuel depth, and 100-hour fuel moisture interpolated from the Remote Automated Weather Station (RAWS) network. An attempt to verify the...

  8. Automated matching of multiple terrestrial laser scans for stem mapping without the use of artificial references

    NASA Astrophysics Data System (ADS)

    Liu, Jingbin; Liang, Xinlian; Hyyppä, Juha; Yu, Xiaowei; Lehtomäki, Matti; Pyörälä, Jiri; Zhu, Lingli; Wang, Yunsheng; Chen, Ruizhi

    2017-04-01

    Terrestrial laser scanning has been widely used to analyze the 3D structure of a forest in detail and to generate data at the level of a reference plot for forest inventories without destructive measurements. Multi-scan terrestrial laser scanning is more commonly applied to collect plot-level data so that all of the stems can be detected and analyzed. However, it is necessary to match the point clouds of multiple scans to yield a unified point cloud through automated processing; mismatches between datasets will lead to errors during the processing of multi-scan data. Classic registration methods based on flat surfaces cannot be directly applied in forest environments; therefore, artificial reference objects have conventionally been used to assist with scan matching. The use of artificial references requires additional labor and expertise, as well as greatly increasing the cost. In this study, we present an automated processing method for plot-level stem mapping that matches multiple scans without artificial references. In contrast to previous studies, the registration method developed in this study exploits the natural geometric characteristics of the set of tree stems in a plot and combines the point clouds of multiple scans into a unified coordinate system. Integrating multiple scans improves the overall performance of stem mapping in terms of the correctness of tree detection, as well as the bias and root-mean-square errors of forest attributes such as diameter at breast height and tree height. In addition, the automated processing method makes stem mapping more reliable and consistent among plots, reduces the costs associated with plot-based stem mapping, and enhances efficiency.
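
    Once stem centres have been matched between two scans, the rigid transform between them has a closed-form least-squares solution (the Kabsch/SVD construction sketched below in 2D). The harder part, automatically finding the stem correspondences from natural geometric patterns, is the paper's contribution and is assumed already done here.

      # Closed-form 2D rigid registration (Kabsch/SVD) from matched stem
      # centres; correspondences are assumed given. Data are synthetic.
      import numpy as np

      def rigid_transform_2d(src, dst):
          """Least-squares R, t with dst ~= src @ R.T + t; src/dst are (N, 2)."""
          cs, cd = src.mean(axis=0), dst.mean(axis=0)
          H = (src - cs).T @ (dst - cd)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:          # guard against reflection
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = cd - R @ cs
          return R, t

      stems_a = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0], [6.0, 4.0]])
      theta = np.deg2rad(30)
      R_true = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
      stems_b = stems_a @ R_true.T + np.array([10.0, -3.0])
      R, t = rigid_transform_2d(stems_a, stems_b)   # recovers R_true, (10, -3)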

  9. Comparing automated classification and digitization approaches to detect change in eelgrass bed extent during restoration of a large river delta

    USGS Publications Warehouse

    Davenport, Anna Elizabeth; Davis, Jerry D.; Woo, Isa; Grossman, Eric; Barham, Jesse B.; Ellings, Christopher S.; Takekawa, John Y.

    2017-01-01

    Native eelgrass (Zostera marina) is an important contributor to ecosystem services: it supplies cover for juvenile fish, supports a variety of invertebrate prey resources for fish and waterbirds, provides substrate for herring roe consumed by numerous fish and birds, helps stabilize sediment, and sequesters organic carbon. Seagrasses are in decline globally, and monitoring changes in their growth and extent is increasingly valuable for determining the impacts of large-scale estuarine restoration and informing blue carbon mapping initiatives. Thus, we examined the efficacy of two remote sensing mapping methods applied to high-resolution (0.5 m pixel size) color near-infrared imagery, with ground validation, to assess change following major tidal marsh restoration. Automated classification of false color aerial imagery and digitized polygons both documented a slight decline in eelgrass area directly after restoration followed by an increase two years later. Classification of sparse and low- to medium-density eelgrass was confounded in areas with algal cover; however, large dense patches of eelgrass were well delineated. Automated classification of aerial imagery with unsupervised and supervised methods provided reasonable accuracies of 73%, and hand-digitizing polygons from the same imagery yielded similar results. Visual clues for hand digitizing from the high-resolution imagery provided as reliable a map of dense eelgrass extent as automated image classification. We found that automated classification had no advantages over manual digitization, particularly given the limitations of detecting eelgrass with only three bands of imagery including the near-infrared.

  10. Off-the-Wall Project Brings Aerial Mapping down to Earth

    ERIC Educational Resources Information Center

    Davidhazy, Andrew

    2008-01-01

    The technology of aerial photography, photogrammetry, has widespread applications in mapping and aerial surveying. A multi-billion-dollar industry, aerial surveying and mapping is "big business" in both civilian and military sectors. While the industry has grown increasingly automated, employment opportunities still exist for people with a basic…

  11. SnoMAP: Pioneering the Path for Clinical Coding to Improve Patient Care.

    PubMed

    Lawley, Michael; Truran, Donna; Hansen, David; Good, Norm; Staib, Andrew; Sullivan, Clair

    2017-01-01

    The increasing demand for healthcare and the static resources available necessitate data driven improvements in healthcare at large scale. The SnoMAP tool was rapidly developed to provide an automated solution that transforms and maps clinician-entered data to provide data which is fit for both administrative and clinical purposes. Accuracy of data mapping was maintained.

  12. Development of automated high throughput single molecular microfluidic detection platform for signal transduction analysis

    NASA Astrophysics Data System (ADS)

    Huang, Po-Jung; Baghbani Kordmahale, Sina; Chou, Chao-Kai; Yamaguchi, Hirohito; Hung, Mien-Chie; Kameoka, Jun

    2016-03-01

    Signal transduction events, including multiple protein post-translational modifications (PTM), protein-protein interactions (PPI), and protein-nucleic acid interactions (PNI), play critical roles in cell proliferation and differentiation that are directly related to cancer biology. Traditional methods, like mass spectrometry, immunoprecipitation, fluorescence resonance energy transfer, and fluorescence correlation spectroscopy, require a large amount of sample and a long processing time. The "microchannel for multiple-parameter analysis of proteins in single-complex" (mMAPS) approach we proposed can reduce the processing time and sample volume because the system is composed of microfluidic channels, fluorescence microscopy, and computerized data analysis. In this paper, we present an automated mMAPS, including an integrated microfluidic device, automated stage, and electrical relay for high-throughput clinical screening. Based on this result, we estimate that this automated detection system will be able to screen approximately 150 patient samples in a 24-hour period, providing a practical application for analyzing tissue samples in a clinical setting.

  13. On Feature Extraction from Large Scale Linear LiDAR Data

    NASA Astrophysics Data System (ADS)

    Acharjee, Partha Pratim

    Airborne light detection and ranging (LiDAR) can generate co-registered elevation and intensity maps over large terrain. The co-registered 3D map and intensity information can be used efficiently for different feature extraction applications. In this dissertation, we developed two feature extraction algorithms and demonstrated their use in practical applications. One of the developed algorithms can map still and flowing waterbody features; the other can extract building features and estimate solar potential on rooftops and facades. Remote sensing capabilities, the distinguishing characteristics of laser returns from water surfaces, and specific data collection procedures give LiDAR data an edge in this application domain. Furthermore, water surface mapping solutions must work on extremely large datasets, from a thousand square miles to hundreds of thousands of square miles. National and state-wide map generation/upgrading and hydro-flattening of LiDAR data for many other applications are two leading needs of water surface mapping; these call for as much automation as possible. Researchers have developed many semi-automated algorithms using multiple semi-automated tools and human intervention. This work describes a consolidated algorithm and toolbox developed for large-scale, automated water surface mapping. Geometric features, such as the flatness of the water surface and the large elevation change at the water-land interface, and optical properties, such as dropouts caused by specular reflection and bimodal intensity distributions, were among the linear LiDAR features exploited for water surface mapping. Large-scale data handling capabilities are incorporated through automated and intelligent windowing, by resolving boundary issues, and by integrating all results into a single output. The whole algorithm is implemented as an ArcGIS toolbox using Python libraries. Testing and validation are performed on large datasets to determine the effectiveness of the toolbox, and results are presented. Significant power demand is located in urban areas, where, theoretically, a large amount of building surface area is also available for solar panel installation. Therefore, property owners and power generation companies can benefit from a citywide solar potential map, which can provide the estimated annual solar energy available at a given location. An efficient solar potential measurement is a prerequisite for an effective solar energy system in an urban area, and calculating solar potential for rooftops and building facades could open up a wide variety of options for solar panel installations. However, complex urban scenes make it hard to estimate solar potential, partly because of shadows cast by the buildings. LiDAR-based 3D city models could well be the right technology for solar potential mapping. Most current LiDAR-based local solar potential assessment algorithms address only rooftop potential calculation, even though building facades can contribute a significant amount of viable surface area for solar panel installation. Here, we introduce a new algorithm to calculate the solar potential of both rooftops and building facades. The solar potential received by the rooftops and facades over the year is also investigated in the test area.

  14. Preliminary investigation of submerged aquatic vegetation mapping using hyperspectral remote sensing.

    PubMed

    William, David J; Rybicki, Nancy B; Lombana, Alfonso V; O'Brien, Tim M; Gomez, Richard B

    2003-01-01

    The use of airborne hyperspectral remote sensing imagery for automated mapping of submerged aquatic vegetation (SAV) in the tidal Potomac River was investigated for near-real-time resource assessment and monitoring. Airborne hyperspectral imagery and field spectrometer measurements were obtained in October 2000. A spectral library database containing selected ground-based and airborne sensor spectra was developed for use in image processing. The spectral library is used to automate the processing of hyperspectral imagery for potential real-time material identification and mapping. Field-based spectra were compared to the airborne imagery using the database to identify and map two species of SAV (Myriophyllum spicatum and Vallisneria americana). The overall accuracy of the vegetation maps derived from hyperspectral imagery was determined by comparison to a product that combined aerial photography and field-based sampling at the end of the SAV growing season. The algorithms and databases developed in this study will be useful with current and forthcoming space-based hyperspectral remote sensing systems.
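
    Library-based identification of this sort often uses a spectral-angle style match between pixel and reference spectra; whether the study used this exact metric is not stated, so the following is a generic sketch with synthetic spectra.

      # Spectral angle mapper (SAM) sketch: compare a pixel spectrum against
      # library spectra and pick the smallest angle. Spectra are synthetic.
      import numpy as np

      def spectral_angle(pixel, reference):
          """Angle (radians) between two spectra; smaller = better match."""
          cos = np.dot(pixel, reference) / (
              np.linalg.norm(pixel) * np.linalg.norm(reference))
          return np.arccos(np.clip(cos, -1.0, 1.0))

      library = {
          "M. spicatum":  np.array([0.04, 0.06, 0.12, 0.35]),  # synthetic
          "V. americana": np.array([0.03, 0.08, 0.10, 0.28]),
      }

      pixel = np.array([0.035, 0.075, 0.105, 0.30])
      best = min(library, key=lambda k: spectral_angle(pixel, library[k]))
      print(best)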

  15. Painting a picture across the landscape with ModelMap

    Treesearch

    Brian Cooke; Elizabeth Freeman; Gretchen Moisen; Tracey Frescino

    2017-01-01

    Scientists and statisticians working for the Rocky Mountain Research Station have created a software package that simplifies and automates many of the processes needed for converting models into maps. This software package, called ModelMap, has helped a variety of specialists and land managers to quickly convert data into easily understood graphical images. The...

  16. A high-resolution radiation hybrid map of the bovine genome

    USDA-ARS?s Scientific Manuscript database

    We are building high-resolution radiation hybrid maps of all 29 bovine autosomes and chromosome X, using a 58,000-marker genotyping assay, and a 12,000-rad whole-genome radiation hybrid (RH) panel. To accommodate the large number of markers, and to automate the map building procedure, a software pip...

  17. Using Automation to Improve the Flight Software Testing Process

    NASA Technical Reports Server (NTRS)

    ODonnell, James R., Jr.; Andrews, Stephen F.; Morgenstern, Wendy M.; Bartholomew, Maureen O.; McComas, David C.; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, attitude control, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on previous missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the perceived benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.

  18. An advanced method for classifying atmospheric circulation types based on prototypes connectivity graph

    NASA Astrophysics Data System (ADS)

    Zagouras, Athanassios; Argiriou, Athanassios A.; Flocas, Helena A.; Economou, George; Fotopoulos, Spiros

    2012-11-01

    Classification of weather maps at various isobaric levels as a methodological tool is used in several problems related to meteorology, climatology, atmospheric pollution and to other fields for many years. Initially the classification was performed manually. The criteria used by the person performing the classification are features of isobars or isopleths of geopotential height, depending on the type of maps to be classified. Although manual classifications integrate the perceptual experience and other unquantifiable qualities of the meteorology specialists involved, these are typically subjective and time consuming. Furthermore, during the last years different approaches of automated methods for atmospheric circulation classification have been proposed, which present automated and so-called objective classifications. In this paper a new method of atmospheric circulation classification of isobaric maps is presented. The method is based on graph theory. It starts with an intelligent prototype selection using an over-partitioning mode of fuzzy c-means (FCM) algorithm, proceeds to a graph formulation for the entire dataset and produces the clusters based on the contemporary dominant sets clustering method. Graph theory is a novel mathematical approach, allowing a more efficient representation of spatially correlated data, compared to the classical Euclidian space representation approaches, used in conventional classification methods. The method has been applied to the classification of 850 hPa atmospheric circulation over the Eastern Mediterranean. The evaluation of the automated methods is performed by statistical indexes; results indicate that the classification is adequately comparable with other state-of-the-art automated map classification methods, for a variable number of clusters.

  19. Using Automation to Improve the Flight Software Testing Process

    NASA Technical Reports Server (NTRS)

    ODonnell, James R., Jr.; Morgenstern, Wendy M.; Bartholomew, Maureen O.

    2001-01-01

    One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, knowledge of attitude control, and attitude control hardware, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on other missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.

  20. 49 CFR 1104.2 - Document specifications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...

  1. 49 CFR 1104.2 - Document specifications.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...

  2. 49 CFR 1104.2 - Document specifications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...

  3. 49 CFR 1104.2 - Document specifications.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...

  4. 49 CFR 1104.2 - Document specifications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...

  5. Automated pattern analysis: A new silent partner in insect acoustic detection studies

    USDA-ARS?s Scientific Manuscript database

    This seminar reviews methods that have been developed for automated analysis of field-collected sounds used to estimate pest populations and guide insect pest management decisions. Several examples are presented of successful usage of acoustic technology to map insect distributions in field environ...

  6. Some Automated Cartography Developments at the Defense Mapping Agency.

    DTIC Science & Technology

    1981-01-01

    on a pantographic router creating a laminate step model which was moulded in plaster for carving into a terrain model. This section will trace DMA's...offering economical automation. Precision flatbed Concord plotters were brought into DMA with sufficiently programmable control computers to perform these

  7. Automated map sharpening by maximization of detail and connectivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.

    An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and to evaluate map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
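
    As a minimal sketch of the metric described above, the Python fragment below computes a surface area and a connected-region count at an iso-contour enclosing a fixed volume fraction, assuming the map is supplied as a 3D numpy array. The exact way the paper combines the two measures into the adjusted surface area is not given in this summary, so the final ratio is illustrative only.

        # Sketch of the 'adjusted surface area' idea; the authors'
        # implementation is part of their own software, not this code.
        import numpy as np
        from scipy import ndimage
        from skimage import measure

        def adjusted_surface_area(density, volume_fraction=0.2):
            # Iso-contour level enclosing the chosen fraction of voxels.
            level = np.quantile(density, 1.0 - volume_fraction)
            # Detail: surface area of the iso-contour surface.
            verts, faces, _, _ = measure.marching_cubes(density, level=level)
            area = measure.mesh_surface_area(verts, faces)
            # Connectivity: number of connected regions above the level.
            _, n_regions = ndimage.label(density > level)
            # One plausible combination of the two measures; the paper's
            # exact formula may differ.
            return area / max(n_regions, 1)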

  8. Automated map sharpening by maximization of detail and connectivity

    DOE PAGES

    Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.; ...

    2018-05-18

    An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and to evaluate map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.

  9. MetaMapping the nursing procedure manual.

    PubMed

    Peace, Jane; Brennan, Patricia Flatley

    2006-01-01

    Nursing procedure manuals are an important resource for practice, but ensuring that the correct procedure can be located when needed is an ongoing challenge. This poster presents an approach used to automatically index nursing procedures with standardized nursing terminology. Although indexing yielded a low number of mappings, examination of successfully mapped terms, incorrect mappings, and unmapped terms reveals important information about the reasons automated indexing fails.

  10. IEMIS (Integrated Emergency Management Information System) Floodplain Mapping Based on a Lidar Derived Data Set.

    DTIC Science & Technology

    1988-02-05

    Army Engineer Waterways Experiment Station, Vicksburg, MS. This report illustrates the application of the automated mapping capabilities of the Integrated Emergency Management Information System (IEMIS) to Flood Insurance Studies (FISs), using a floodplain elevation data set derived from an airborne laser ranging (lidar) system.

  11. Alluvial substrate mapping by automated texture segmentation of recreational-grade side scan sonar imagery

    PubMed Central

    Buscombe, Daniel; Wheaton, Joseph M.

    2018-01-01

    Side scan sonar in low-cost ‘fishfinder’ systems has become popular in aquatic ecology and sedimentology for imaging submerged riverbed sediment at coverages and resolutions sufficient to relate bed texture to grain-size. Traditional methods to map bed texture (i.e. physical samples) are relatively high-cost and low spatial coverage compared to sonar, which can continuously image several kilometers of channel in a few hours. Towards a goal of automating the classification of bed habitat features, we investigate relationships between substrates and statistical descriptors of bed textures in side scan sonar echograms of alluvial deposits. We develop a method for automated segmentation of bed textures into two to five grain-size classes. Second-order texture statistics are used in conjunction with a Gaussian Mixture Model to classify the heterogeneous bed into small homogeneous patches of sand, gravel, and boulders with an average accuracy of 80%, 49%, and 61%, respectively. Reach-averaged proportions of these sediment types were within 3% compared to similar maps derived from multibeam sonar. PMID:29538449
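
    A hedged sketch of the segmentation idea follows: grey-level co-occurrence (GLCM) statistics computed on echogram patches, clustered with a Gaussian Mixture Model. The window size, step, and feature set are assumptions; the authors' actual features and workflow may differ.

        # Illustrative patch-texture segmentation, assuming an 8-bit
        # echogram; not the authors' implementation.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from skimage.util import view_as_windows
        from sklearn.mixture import GaussianMixture

        def texture_features(echogram, win=32, step=16):
            """Per-patch GLCM contrast/homogeneity/energy (uint8 input)."""
            patches = view_as_windows(echogram, (win, win), step=step)
            feats = []
            for row in patches:
                for p in row:
                    glcm = graycomatrix(p, distances=[1], angles=[0],
                                        levels=256, symmetric=True,
                                        normed=True)
                    feats.append([graycoprops(glcm, f)[0, 0] for f in
                                  ('contrast', 'homogeneity', 'energy')])
            return np.array(feats), patches.shape[:2]

        def segment(echogram, n_classes=3):
            feats, grid = texture_features(echogram)
            labels = GaussianMixture(n_components=n_classes,
                                     random_state=0).fit_predict(feats)
            return labels.reshape(grid)   # one texture class per patch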

  12. Communications among elements of a space construction ensemble

    NASA Technical Reports Server (NTRS)

    Davis, Randal L.; Grasso, Christopher A.

    1989-01-01

    Space construction projects will require careful coordination between managers, designers, manufacturers, operators, astronauts, and robots with large volumes of information of varying resolution, timeliness, and accuracy flowing between the distributed participants over computer communications networks. Within the CSC Operations Branch, we are researching the requirements and options for such communications. Based on our work to date, we feel that communications standards being developed by the International Standards Organization, the CCITT, and other groups can be applied to space construction. We are currently studying in depth how such standards can be used to communicate with robots and automated construction equipment used in a space project. Specifically, we are looking at how the Manufacturing Automation Protocol (MAP) and the Manufacturing Message Specification (MMS), which tie together computers and machines in automated factories, might be applied to space construction projects. Together with our CSC industrial partner Computer Technology Associates, we are developing a MAP/MMS companion standard for space construction and we will produce software to allow the MAP/MMS protocol to be used in our CSC operations testbed.

  13. Development of an automated processing system for potential fishing zone forecast

    NASA Astrophysics Data System (ADS)

    Ardianto, R.; Setiawan, A.; Hidayat, J. J.; Zaky, A. R.

    2017-01-01

    The Institute for Marine Research and Observation (IMRO) - Ministry of Marine Affairs and Fisheries Republic of Indonesia (MMAF) has developed a potential fishing zone (PFZ) forecast using satellite data, called Peta Prakiraan Daerah Penangkapan Ikan (PPDPI). Since 2005, IMRO has disseminated daily PPDPI maps for fishery ports and 3-day average maps for national areas. The accuracy of the PFZ determination and the map processing time depend heavily on the experience of the operators creating the maps. This paper presents our research in developing an automated processing system for PPDPI in order to increase accuracy and shorten processing time. PFZs are identified by combining MODIS sea surface temperature (SST) and chlorophyll-a (CHL) data in order to detect the presence of upwelling, thermal fronts and enhanced biological productivity; the intersection of these phenomena generally represents the PFZ. The whole process involves data download, map geoprocessing and layout, all carried out automatically by Python and ArcPy. The results showed that the automated processing system could be used to reduce dependence on operator judgment in determining the PFZ and speed up processing time.
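
    A minimal sketch of the PFZ logic is shown below: flag pixels where a strong SST gradient (a thermal front) coincides with elevated chlorophyll-a. The thresholds are illustrative assumptions, not IMRO's operational values, and the real system also handles download, geoprocessing, and layout via ArcPy.

        # Toy PFZ detector on co-registered SST/CHL grids; thresholds
        # are assumptions for illustration only.
        import numpy as np

        def potential_fishing_zones(sst, chl, front_grad=0.5, chl_min=0.3):
            """sst (deg C), chl (mg/m^3): 2D arrays; NaN = cloud/no data."""
            # Thermal fronts: strong horizontal SST gradients per pixel.
            gy, gx = np.gradient(sst)
            front = np.hypot(gx, gy) > front_grad
            # Biological enhancement: chlorophyll-a above a threshold.
            productive = chl > chl_min
            pfz = front & productive
            # Mask out cloudy pixels.
            return np.where(np.isnan(sst) | np.isnan(chl), False, pfz)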

  14. Automated detection of qualitative spatio-temporal features in electrocardiac activation maps.

    PubMed

    Ironi, Liliana; Tentoni, Stefania

    2007-02-01

    This paper describes work aiming at the realization of a tool for the automated interpretation of electrocardiac maps. Such maps can capture a number of electrical conduction pathologies, such as arrhythmia, that can be missed by the analysis of traditional electrocardiograms. However, their introduction into clinical practice is still far off, as their interpretation requires skills that belong to very few experts. An automated interpretation tool would therefore bridge the gap between the established research outcome and clinical practice, with a consequent great impact on health care. Qualitative spatial reasoning can play a crucial role in the identification of spatio-temporal patterns and salient features that characterize the heart's electrical activity. We adopted the spatial aggregation (SA) conceptual framework and an interplay of numerical and qualitative information to extract features from epicardial maps, and to make them available for reasoning tasks. Our focus is on epicardial activation isochrone maps, as they are a synthetic representation of spatio-temporal aspects of the propagation of the electrical excitation. We provide a computational SA-based methodology to extract, from 3D epicardial data gathered over time, (1) the excitation wavefront structure, and (2) the salient features that characterize wavefront propagation and visually correspond to specific geometric objects. The proposed methodology provides a robust and efficient way to identify salient pieces of information in activation time maps. The hierarchical structure of the abstracted geometric objects, crucial in capturing the prominent information, facilitates the definition of general rules necessary to infer the correlation between pathophysiological patterns and wavefront structure and propagation.

  15. Advances in Scientific Investigation and Automation.

    ERIC Educational Resources Information Center

    Abt, Jeffrey; And Others

    1987-01-01

    Six articles address: (1) the impact of science on the physical examination and treatment of books; (2) equipment for physical examination of books; (3) research using the cyclotron for historical analysis; (4) scientific analysis of paper and ink in early maps; (5) recent advances in automation; and (6) cataloging standards. (MES)

  16. An Automated Approach to Extracting River Bank Locations from Aerial Imagery Using Image Texture

    DTIC Science & Technology

    2013-01-01

    Atchafalaya River, LA. Map data: Google, United States Department of Agriculture Farm Service Agency, Europa Technologies. ...traverse morphologically smooth landscapes including rivers in sand or ice. Within these limitations, we hold that this technique represents a valuable

  17. Automated MRI segmentation for individualized modeling of current flow in the human head.

    PubMed

    Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C

    2013-12-01

    High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. The segmentation tool can segment out not just the brain but also provides accurate results for CSF, skull and other soft tissues, with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
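
    The deviation figures above summarize agreement between automated and manual segmentations. The study's exact metric is not given in this summary; a common choice for such comparisons is the Dice coefficient, sketched here.

        # Dice overlap between two segmentation masks (1.0 = identical);
        # an assumed stand-in for the study's deviation metric.
        import numpy as np

        def dice(mask_a, mask_b):
            a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
            inter = np.logical_and(a, b).sum()
            denom = a.sum() + b.sum()
            return 2.0 * inter / denom if denom else 1.0

        # Example usage (hypothetical masks):
        # score = dice(auto_csf_mask, manual_csf_mask)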

  18. Pilot vehicle interface on the advanced fighter technology integration F-16

    NASA Technical Reports Server (NTRS)

    Dana, W. H.; Smith, W. B.; Howard, J. D.

    1986-01-01

    This paper focuses on the workload aspects of the pilot vehicle interface in regard to the new technologies tested during AMAS Phase II. Subjects discussed in this paper include: a wide field-of-view head-up display; automated maneuvering attack system/sensor tracker system; master modes that configure flight controls and mission avionics; a modified helmet mounted sight; improved multifunction display capability; a voice interactive command system; ride qualities during automated weapon delivery; a color moving map; an advanced digital map display; and a g-induced loss-of-consciousness and spatial disorientation autorecovery system.

  19. Mapping Partners Master Drug Dictionary to RxNorm using an NLP-based approach.

    PubMed

    Zhou, Li; Plasek, Joseph M; Mahoney, Lisa M; Chang, Frank Y; DiMaggio, Dana; Rocha, Roberto A

    2012-08-01

    To develop an automated method based on natural language processing (NLP) to facilitate the creation and maintenance of a mapping between RxNorm and a local medication terminology for interoperability and meaningful use purposes. We mapped 5961 terms from Partners Master Drug Dictionary (MDD) and 99 of the top prescribed medications to RxNorm. The mapping was conducted at both term and concept levels using an NLP tool, called MTERMS, followed by a manual review conducted by domain experts who created a gold standard mapping. The gold standard was used to assess the overall mapping between MDD and RxNorm and evaluate the performance of MTERMS. Overall, 74.7% of MDD terms and 82.8% of the top 99 terms had an exact semantic match to RxNorm. Compared to the gold standard, MTERMS achieved a precision of 99.8% and a recall of 73.9% when mapping all MDD terms, and a precision of 100% and a recall of 72.6% when mapping the top prescribed medications. The challenges and gaps in mapping MDD to RxNorm are mainly due to unique user or application requirements for representing drug concepts and the different modeling approaches inherent in the two terminologies. An automated approach based on NLP followed by human expert review is an efficient and feasible way for conducting dynamic mapping. Copyright © 2011 Elsevier Inc. All rights reserved.
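
    As a sketch of the exact-match baseline and the evaluation bookkeeping described above (MTERMS itself is a full NLP system, so this is only illustrative and all names are assumptions), consider:

        # Term-level lexical mapping plus precision/recall scoring against
        # a gold-standard concept equivalency table.
        def normalize(term):
            return ' '.join(term.lower().replace('-', ' ').split())

        def map_terms(source_terms, rxnorm_index):
            """rxnorm_index: dict of normalized RxNorm name -> concept id."""
            return {t: rxnorm_index.get(normalize(t)) for t in source_terms}

        def precision_recall(predicted, gold):
            """predicted/gold: dicts term -> concept id (None = unmapped)."""
            proposed = {t: c for t, c in predicted.items() if c is not None}
            tp = sum(1 for t, c in proposed.items() if gold.get(t) == c)
            gold_pos = sum(1 for c in gold.values() if c is not None)
            precision = tp / len(proposed) if proposed else 0.0
            recall = tp / gold_pos if gold_pos else 0.0
            return precision, recall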

  20. Improving Critical Thinking Using Web Based Argument Mapping Exercises with Automated Feedback

    ERIC Educational Resources Information Center

    Butchart, Sam; Forster, Daniella; Gold, Ian; Bigelow, John; Korb, Kevin; Oppy, Graham; Serrenti, Alexandra

    2009-01-01

    In this paper we describe a simple software system that allows students to practise their critical thinking skills by constructing argument maps of natural language arguments. As the students construct their maps of an argument, the system provides automatic, real time feedback on their progress. We outline the background and theoretical framework…

  1. Automated System for Early Breast Cancer Detection in Mammograms

    NASA Technical Reports Server (NTRS)

    Bankman, Isaac N.; Kim, Dong W.; Christens-Barry, William A.; Weinberg, Irving N.; Gatewood, Olga B.; Brody, William R.

    1993-01-01

    The increasing demand on mammographic screening for early breast cancer detection, and the subtlety of early breast cancer signs on mammograms, suggest an automated image processing system that can serve as a diagnostic aid in radiology clinics. We present a fully automated algorithm for detecting clusters of microcalcifications that are the most common signs of early, potentially curable breast cancer. By using the contour map of the mammogram, the algorithm circumvents some of the difficulties encountered with standard image processing methods. The clinical implementation of an automated instrument based on this algorithm is also discussed.
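
    The algorithm itself works on the mammogram's contour map; as a simpler stand-in for the clustering step, the sketch below finds bright candidate spots and groups those lying close together, since a cluster is conventionally several microcalcifications within a small neighbourhood. All parameters are illustrative.

        # Candidate microcalcification clustering: bright local maxima
        # grouped by proximity. A simplified substitute for the paper's
        # contour-map method.
        from skimage.feature import peak_local_max
        from sklearn.cluster import DBSCAN

        def calcification_clusters(image, min_distance=3, threshold=0.8,
                                   radius_px=50, min_points=3):
            # Candidate calcifications: bright local maxima.
            coords = peak_local_max(image, min_distance=min_distance,
                                    threshold_abs=threshold)
            if len(coords) == 0:
                return []
            # Group candidates lying within radius_px of one another.
            labels = DBSCAN(eps=radius_px,
                            min_samples=min_points).fit_predict(coords)
            return [coords[labels == k] for k in set(labels) if k != -1]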

  2. SU-F-J-93: Automated Segmentation of High-Resolution 3D WholeBrain Spectroscopic MRI for Glioblastoma Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreibmann, E; Shu, H; Cordova, J

    Purpose: We report on an automated segmentation algorithm for defining radiation therapy target volumes using spectroscopic MR images (sMRI) acquired at a nominal voxel resolution of 100 microliters. Methods: Whole-brain sMRI combining 3D echo-planar spectroscopic imaging, generalized auto-calibrating partially-parallel acquisitions, and elliptical k-space encoding was conducted on a 3T MRI scanner with a 32-channel head coil array. Metabolite maps generated include choline (Cho), creatine (Cr), and N-acetylaspartate (NAA), as well as Cho/NAA, Cho/Cr, and NAA/Cr ratio maps. Automated segmentation was achieved by concomitantly considering sMRI metabolite maps with standard contrast-enhancing (CE) imaging in a pipeline that first uses the water signal for skull stripping. Subsequently, an initial blob of tumor region is identified by searching for regions of FLAIR abnormalities that also display reduced NAA activity, using a mean ratio correlation and morphological filters. These regions are used as the starting point for a geodesic level-set refinement that adapts the initial blob to the fine details specific to each metabolite. Results: Accuracy of the segmentation model was tested on a cohort of 12 patients with sMRI datasets acquired pre-, mid- and post-treatment, providing a broad range of enhancement patterns. Compared to classical imaging, where heterogeneity in tumor appearance and shape posed a greater challenge to the algorithm, regions of abnormal activity were easily detected in the sMRI metabolite maps when combining the detail available in the standard imaging with the local enhancement produced by the metabolites. Results can be imported into treatment planning, leading in general to an increase in the target volumes (GTV60) when using sMRI+CE MRI compared to standard CE MRI alone. Conclusion: Integration of automated segmentation of sMRI metabolite maps into planning is feasible and will likely streamline acceptance of this new acquisition modality in clinical practice.

  3. Satellite Remote Sensing of Cropland Characteristics in 30m Resolution: The First North American Continental-Scale Classification on High Performance Computing Platforms

    NASA Astrophysics Data System (ADS)

    Massey, Richard

    Cropland characteristics and accurate maps of their spatial distribution are required to develop strategies for global food security through continental-scale assessments and agricultural land use policies. North America is the major producer and exporter of coarse grains, wheat, and other crops. While cropland characteristics such as crop types are available at country scale in North America, continental-scale cropland products are lacking at sufficiently fine resolutions such as 30 m. Additionally, automated, open, and rapid methods that can map cropland characteristics over large areas without the need for ground samples are required on efficient high-performance computing platforms for timely and long-term cropland monitoring. In this study, I developed novel, automated, and open methods to map cropland extent, crop intensity, and crop types in the North American continent using large remote sensing datasets on high-performance computing platforms. First, a novel method was developed to fuse pixel-based classification of continental-scale Landsat data, using the Random Forest algorithm available on the Google Earth Engine cloud computing platform, with an object-based classification approach, recursive hierarchical segmentation (RHSeg), to map cropland extent at continental scale. Using this fusion method, a continental-scale cropland extent map for North America at 30 m spatial resolution for the nominal year 2010 was produced. In this map, the total cropland area for North America was estimated at 275.2 million hectares (Mha). The map was assessed for accuracy using randomly distributed samples derived from the United States Department of Agriculture (USDA) cropland data layer (CDL), the Agriculture and Agri-Food Canada (AAFC) annual crop inventory (ACI), Servicio de Informacion Agroalimentaria y Pesquera (SIAP) agricultural boundaries for Mexico, and photo-interpretation of high-resolution imagery. The overall accuracy of the map is 93.4%, with a producer's accuracy for the crop class of 85.4% and a user's accuracy of 74.5% across the continent. Sub-country statistics, including state-wise and county-wise cropland statistics derived from this map, compared well in regression models, resulting in R2 > 0.84. Second, an automated phenological pattern matching (PPM) method to efficiently map cropping intensity was developed. This study presents a continental-scale cropping intensity map for the North American continent at 250 m spatial resolution for 2010. In this map, the total areas for single crop, double crop, continuous crop, and fallow were estimated to be 123.5 Mha, 11.1 Mha, 64.0 Mha, and 83.4 Mha, respectively. This map was assessed using limited country-level reference datasets derived from the United States Department of Agriculture cropland data layer and the Agriculture and Agri-Food Canada annual crop inventory, with overall accuracies of 79.8% and 80.2%, respectively. Third, two novel automated decision-tree classification approaches to map crop types across the conterminous United States (U.S.) using MODIS 250 m resolution data were developed: (1) a generalized classification and (2) a year-specific classification. Both approaches use similarities and dissimilarities in crop-type phenology derived from NDVI time-series data. Annual crop type maps were produced for 8 major crop types in the United States using the generalized classification approach for 2001-2014 and the year-specific approach for 2008, 2010, 2011 and 2012.
The year-specific classification had overall accuracies greater than 78%, while the generalized classifier had accuracies greater than 75% for the conterminous U.S. for 2008, 2010, 2011, and 2012. The generalized classifier enables automated and routine crop type mapping without repeated and expensive ground sample collection year after year with overall accuracies > 70% across all independent years. Taken together, these cropland products of extent, cropping intensity, and crop types, are significantly beneficial in agricultural and water use planning and monitoring to formulate policies towards global and North American food security issues.
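
    A hedged sketch of the generalized classification idea follows: a decision tree trained on phenological features derived from NDVI time series. The specific features (peak greenness, peak timing, seasonal mean, amplitude) are assumptions for illustration, not the dissertation's actual feature set.

        # Toy NDVI-phenology crop-type classifier; features are assumed.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def phenology_features(ndvi_series):
            """ndvi_series: (n_pixels, n_dates) NDVI for one season."""
            return np.column_stack([
                ndvi_series.max(axis=1),       # peak greenness
                ndvi_series.argmax(axis=1),    # timing of the peak
                ndvi_series.mean(axis=1),      # seasonal average
                np.ptp(ndvi_series, axis=1),   # amplitude
            ])

        # Train on pixels with known crop types, then classify a new year:
        # clf = DecisionTreeClassifier(max_depth=8).fit(
        #     phenology_features(train_ndvi), train_labels)
        # predicted = clf.predict(phenology_features(new_year_ndvi))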

  4. An algorithm for automated layout of process description maps drawn in SBGN.

    PubMed

    Genc, Begum; Dogrusoz, Ugur

    2016-01-01

    Evolving technology has increased the focus on genomics. The combination of today's advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard, named the Systems Biology Graphical Notation (SBGN), was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, currently none specialize on process description (PD) maps as defined by SBGN. We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes and extensive use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm results in significant improvements over use of a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. Availability and implementation: An implementation of our algorithm in Java is available within the ChiLay library (https://github.com/iVis-at-Bilkent/chilay). Contact: ugur@cs.bilkent.edu.tr or dogrusoz@cbio.mskcc.org. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
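
    For orientation, the sketch below implements a bare-bones force-directed ('spring embedder') layout of the kind CoSE builds on, with spring forces along edges and pairwise repulsion; the SBGN-specific forces and movement rules described above are not reproduced here.

        # Minimal spring-embedder layout; illustrative only, not CoSE.
        import numpy as np

        def spring_layout(n_nodes, edges, iters=200, k=1.0, step=0.05,
                          seed=0):
            rng = np.random.default_rng(seed)
            pos = rng.uniform(-1, 1, size=(n_nodes, 2))
            for _ in range(iters):
                force = np.zeros_like(pos)
                # Repulsion between every pair of nodes.
                for i in range(n_nodes):
                    d = pos[i] - pos                       # (n, 2)
                    dist = np.linalg.norm(d, axis=1) + 1e-9
                    force[i] += ((k ** 2 / dist ** 2)[:, None] * d /
                                 dist[:, None]).sum(axis=0)
                # Spring attraction along each edge.
                for a, b in edges:
                    d = pos[b] - pos[a]
                    dist = np.linalg.norm(d) + 1e-9
                    f = (dist ** 2 / k) * d / dist
                    force[a] += f
                    force[b] -= f
                pos += step * force
            return pos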

  5. An algorithm for automated layout of process description maps drawn in SBGN

    PubMed Central

    Genc, Begum; Dogrusoz, Ugur

    2016-01-01

    Motivation: Evolving technology has increased the focus on genomics. The combination of today’s advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard, named the Systems Biology Graphical Notation (SBGN), was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, currently none specialize on process description (PD) maps as defined by SBGN. Results: We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes and extensively making use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm results in significant improvements over use of a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. Availability and implementation: An implementation of our algorithm in Java is available within ChiLay library (https://github.com/iVis-at-Bilkent/chilay). Contact: ugur@cs.bilkent.edu.tr or dogrusoz@cbio.mskcc.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26363029

  6. The Colorado experience (evaluation and selection of hardware for automated, geo-based information systems)

    NASA Technical Reports Server (NTRS)

    Sonnenl, D.

    1981-01-01

    A turnkey system which gives technical assistance to legislative redistricting and state census data affiliate activities is described. The procedures followed for the acquisition of the Colorado automated census mapping system are presented. Price and performance criteria of the system were examined and the system architecture is outlined.

  7. Automated side-chain model building and sequence assignment by template matching.

    PubMed

    Terwilliger, Thomas C

    2003-01-01

    An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
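
    A toy version of the sequence-alignment step may clarify the idea: given a per-position matrix of amino-acid probabilities inferred from the density, score every alignment offset of a segment against the protein sequence and normalize. This mirrors the Bayesian scoring only in outline; all names are mine, not those of the RESOLVE implementation.

        # Posterior over alignment offsets of a segment, given per-slot
        # amino-acid probabilities estimated from density (flat prior).
        import numpy as np

        AA = 'ACDEFGHIKLMNPQRSTVWY'

        def alignment_posteriors(prob_matrix, sequence):
            """prob_matrix: (n_res, 20) P(amino acid | density)."""
            n = len(prob_matrix)
            idx = [AA.index(a) for a in sequence]
            log_scores = []
            for off in range(len(sequence) - n + 1):
                cols = idx[off:off + n]
                log_scores.append(np.sum(np.log(
                    prob_matrix[np.arange(n), cols] + 1e-12)))
            log_scores = np.array(log_scores)
            w = np.exp(log_scores - log_scores.max())
            return w / w.sum()   # a confident match: one offset dominates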

  8. PRELIMINARY INVESTIGATION OF SUBMERGED AQUATIC VEGETATION MAPPING USING HYPERSPECTRAL REMOTE SENSING

    EPA Science Inventory

    The use of airborne hyperspectral remote sensing imagery for automated mapping of submersed aquatic vegetation in the tidal Potomac River was investigated for near to real-time resource assessment and monitoring. Airborne hyperspectral imagery, together with in-situ spectral refl...

  9. RCrane: semi-automated RNA model building.

    PubMed

    Keating, Kevin S; Pyle, Anna Marie

    2012-08-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems.

  10. Visualizing statistical significance of disease clusters using cartograms.

    PubMed

    Kronenfeld, Barry J; Wong, David W S

    2017-05-15

    Health officials and epidemiological researchers often use maps of disease rates to identify potential disease clusters. Because these maps exaggerate the prominence of low-density districts and hide potential clusters in urban (high-density) areas, many researchers have used density-equalizing maps (cartograms) as a basis for epidemiological mapping. However, we do not have existing guidelines for visual assessment of statistical uncertainty. To address this shortcoming, we develop techniques for visual determination of statistical significance of clusters spanning one or more districts on a cartogram. We developed the techniques within a geovisual analytics framework that does not rely on automated significance testing, and can therefore facilitate visual analysis to detect clusters that automated techniques might miss. On a cartogram of the at-risk population, the statistical significance of a disease cluster can be determined from the rate, area and shape of the cluster under standard hypothesis testing scenarios. We develop formulae to determine, for a given rate, the area required for statistical significance of a priori and a posteriori designated regions under certain test assumptions. Uniquely, our approach enables dynamic inference of aggregate regions formed by combining individual districts. The method is implemented in interactive tools that provide choropleth mapping, automated legend construction and dynamic search tools to facilitate cluster detection and assessment of the validity of tested assumptions. A case study of leukemia incidence analysis in California demonstrates the ability to visually distinguish between statistically significant and insignificant regions. The proposed geovisual analytics approach enables intuitive visual assessment of statistical significance of arbitrarily defined regions on a cartogram. Our research prompts a broader discussion of the role of geovisual exploratory analyses in disease mapping and the appropriate framework for visually assessing the statistical significance of spatial clusters.
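
    The 'area required for significance' idea can be sketched numerically: on a population cartogram, area is proportional to persons at risk, so a cluster at a given elevated rate becomes significant once it spans enough area. The one-sided Poisson test below is an assumption standing in for the paper's formulae, and all names are illustrative.

        # Smallest at-risk population (convertible to cartogram area via
        # persons per unit area) at which an elevated rate is significant.
        from scipy import stats

        def min_population_for_significance(rate_ratio, base_rate,
                                            alpha=0.05, max_pop=10**7):
            pop = 100
            while pop <= max_pop:
                observed = int(round(rate_ratio * base_rate * pop))
                expected = base_rate * pop
                # P(X >= observed) under the null Poisson(expected).
                p = stats.poisson.sf(observed - 1, expected)
                if p < alpha:
                    return pop
                pop = int(pop * 1.1)
            return None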

  11. Object-oriented classification of drumlins from digital elevation models

    NASA Astrophysics Data System (ADS)

    Saha, Kakoli

    Drumlins are common elements of glaciated landscapes which are easily identified by their distinct morphometric characteristics including shape, length/width ratio, elongation ratio, and uniform direction. To date, most researchers have mapped drumlins by tracing contours on maps, or through on-screen digitization directly on top of hillshaded digital elevation models (DEMs). This paper seeks to utilize the unique morphometric characteristics of drumlins and investigates automated extraction of the landforms as objects from DEMs by Definiens Developer software (V.7), using the 30 m United States Geological Survey National Elevation Dataset DEM as input. The Chautauqua drumlin field in Pennsylvania and upstate New York, USA was chosen as a study area. As the study area is large (covering approximately 2500 sq. km), small test areas were selected for initial testing of the method. Individual polygons representing the drumlins were extracted from the elevation data set by automated recognition, using Definiens' Multiresolution Segmentation tool, followed by rule-based classification. Subsequently, parameters such as length, width, length-width ratio, perimeter and area were measured automatically. To test the accuracy of the method, a second base map was produced by manual on-screen digitization of drumlins from topographic maps, and the same morphometric parameters were extracted from the mapped landforms using Definiens Developer. Statistical comparison showed a high agreement between the two methods, confirming that object-oriented classification can be used for mapping these landforms. The proposed method represents an attempt to solve the problem by providing a generalized rule-set for mass extraction of drumlins. To check its scalability, the automated extraction process was next applied to a larger area. Results showed that the proposed method is as successful for the bigger area as it was for the smaller test areas.
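
    Once drumlin polygons are available as a labelled raster mask, the morphometric measurements follow directly from region properties. Definiens Developer is proprietary, so the sketch below uses scikit-image as a stand-in; the cell size and the dictionary layout are assumptions.

        # Per-landform morphometrics from a boolean drumlin mask.
        import numpy as np
        from skimage import measure

        def drumlin_metrics(drumlin_mask, cell_size=30.0):
            """drumlin_mask: 2D bool array; cell_size: metres per pixel."""
            metrics = []
            for r in measure.regionprops(measure.label(drumlin_mask)):
                length = r.major_axis_length * cell_size
                width = r.minor_axis_length * cell_size
                metrics.append({
                    'length_m': length,
                    'width_m': width,
                    'elongation': length / width if width else np.nan,
                    'area_m2': r.area * cell_size ** 2,
                    'perimeter_m': r.perimeter * cell_size,
                })
            return metrics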

  12. Automated Seat Cushion for Pressure Ulcer Prevention Using Real-Time Mapping, Offloading, and Redistribution of Interface Pressure

    DTIC Science & Technology

    2016-10-01

    Mechanical behavior at varying loads and internal pressures was characterized both by experimental testing and by finite element simulation. Automation and control testing has been completed on a 5x5 array of bubble actuators to verify pressure... A finite element (FE) model of the bubble actuator was developed in the commercial software ANSYS in order to determine the deformation of the

  13. Automated delineation and characterization of drumlins using a localized contour tree approach

    NASA Astrophysics Data System (ADS)

    Wang, Shujie; Wu, Qiusheng; Ward, Dylan

    2017-10-01

    Drumlins are ubiquitous landforms in previously glaciated regions, formed through a series of complex subglacial processes operating underneath the paleo-ice sheets. Accurate delineation and characterization of drumlins are essential for understanding the formation mechanism of drumlins as well as the flow behaviors and basal conditions of paleo-ice sheets. Automated mapping of drumlins is particularly important for examining the distribution patterns of drumlins across large spatial scales. This paper presents an automated vector-based approach to mapping drumlins from high-resolution light detection and ranging (LiDAR) data. The rationale is to extract a set of concentric contours by building localized contour trees and establishing topological relationships. This automated method can overcome the shortcomings of previous manual and automated methods for mapping drumlins, for instance, the azimuthal biases during the generation of shaded relief images. A case study was carried out over a portion of the New York Drumlin Field. Overall, 1181 drumlins were identified from the LiDAR-derived DEM across the study region, a number that had been underestimated in previous literature. The delineation results were visually and statistically compared to the manual digitization results. The morphology of drumlins was characterized by quantifying the length, width, elongation ratio, height, area, and volume. Statistical and spatial analyses were conducted to examine the distribution pattern and spatial variability of drumlin size and form. The drumlins and their morphologic characteristics exhibit significant spatial clustering rather than random distribution. The form of drumlins varies from ovoid to spindle shapes in the downstream direction of paleo-ice flows, along with decreases in width, area, and volume. This observation is in line with previous studies and may be explained by variations in sediment thickness and/or increases in ice flow velocity towards the ice front.
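
    A highly simplified sketch of the contour-tree idea follows: generate closed contours at successive elevations and keep stacks of mutually nested contours, which mark candidate drumlins. The published method builds an explicit localized tree with topological rules; only the nesting test is shown here, and all parameters are assumptions.

        # Concentric-contour detection on a DEM; illustrative only.
        import numpy as np
        from matplotlib.path import Path
        from skimage import measure

        def is_closed(c, tol=1e-6):
            return np.allclose(c[0], c[-1], atol=tol)

        def nested_contour_stacks(dem, base, step, n_levels, min_depth=3):
            levels = [base + i * step for i in range(n_levels)]
            closed = [[c for c in measure.find_contours(dem, lv)
                       if is_closed(c)] for lv in levels]
            stacks = []
            for seed in closed[0]:
                stack, outline = [seed], Path(seed)
                for higher in closed[1:]:
                    inner = [c for c in higher
                             if outline.contains_point(c[0])]
                    if not inner:
                        break
                    stack.append(inner[0])
                    outline = Path(inner[0])
                if len(stack) >= min_depth:   # enough concentric contours
                    stacks.append(stack)
            return stacks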

  14. GIS methodology for geothermal play fairway analysis: Example from the Snake River Plain volcanic province

    USGS Publications Warehouse

    DeAngelo, Jacob; Shervais, John W.; Glen, Jonathan; Nielson, Dennis L.; Garg, Sabodh; Dobson, Patrick; Gasperikova, Erika; Sonnenthal, Eric; Visser, Charles; Liberty, Lee M.; Siler, Drew; Evans, James P.; Santellanes, Sean

    2016-01-01

    Play fairway analysis in geothermal exploration derives from a systematic methodology originally developed within the petroleum industry and is based on a geologic and hydrologic framework of identified geothermal systems. We are tailoring this methodology to study the geothermal resource potential of the Snake River Plain and surrounding region. This project has contributed to the success of this approach by cataloging the critical elements controlling exploitable hydrothermal systems, establishing risk matrices that evaluate these elements in terms of both probability of success and level of knowledge, and building automated tools to process results. ArcGIS was used to compile a range of different data types, which we refer to as ‘elements’ (e.g., faults, vents, heatflow…), with distinct characteristics and confidence values. Raw data for each element were transformed into data layers with a common format. Because different data types have different uncertainties, each evidence layer had an accompanying confidence layer, which reflects spatial variations in these uncertainties. Risk maps represent the product of evidence and confidence layers, and are the basic building blocks used to construct Common Risk Segment (CRS) maps for heat, permeability, and seal. CRS maps quantify the variable risk associated with each of these critical components. In a final step, the three CRS maps were combined into a Composite Common Risk Segment (CCRS) map for analysis that reveals favorable areas for geothermal exploration. Python scripts were developed to automate data processing and to enhance the flexibility of the data analysis. Python scripting provided the structure that makes a custom workflow possible. Nearly every tool available in the ArcGIS ArcToolbox can be executed using commands in the Python programming language. This enabled the construction of a group of tools that could automate most of the processing for the project. Currently, our tools are repeatable, scalable, modifiable, and transferrable, allowing us to automate the task of data analysis and the production of CRS and CCRS maps. Our ultimate goal is to produce a toolkit that can be imported into ArcGIS and applied to any geothermal play type, with fully tunable parameters that will allow for the production of multiple versions of the CRS and CCRS maps in order to better test for sensitivity and to validate results.
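
    The risk-map algebra described above reduces to simple raster arithmetic. The sketch below is a minimal reading of it, with evidence and confidence layers scaled to [0, 1]; averaging as the aggregation rule is an assumption, since the project's actual weighting scheme is not given in this summary.

        # Evidence x confidence -> risk; risk layers -> CRS; CRS -> CCRS.
        import numpy as np

        def risk_layer(evidence, confidence):
            return evidence * confidence          # both scaled to [0, 1]

        def common_risk_segment(layers):
            """Combine risk layers for one component (e.g. heat)."""
            return np.mean(layers, axis=0)

        def composite_crs(heat, permeability, seal):
            """CCRS: favourability considers all three components."""
            return np.mean([heat, permeability, seal], axis=0)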

  15. AUTOMATED PRODUCTION OF SEAGRASS MAPS FROM SIDESCAN SONAR IMAGERY: ACCURACY, VARIABILITY AND PATCH RESOLUTION

    EPA Science Inventory

    Maps of seagrass beds are useful for monitoring estuarine condition, managing habitats, and modeling estuarine processes. We recently developed inexpensive methods for collecting and classifying sidescan sonar (SSS) imagery for seagrass presence in turbid waters as shallow as 1-...

  16. Mapping of Brain Activity by Automated Volume Analysis of Immediate Early Genes.

    PubMed

    Renier, Nicolas; Adams, Eliza L; Kirst, Christoph; Wu, Zhuhao; Azevedo, Ricardo; Kohl, Johannes; Autry, Anita E; Kadiri, Lolahon; Umadevi Venkataraju, Kannan; Zhou, Yu; Wang, Victoria X; Tang, Cheuk Y; Olsen, Olav; Dulac, Catherine; Osten, Pavel; Tessier-Lavigne, Marc

    2016-06-16

    Understanding how neural information is processed in physiological and pathological states would benefit from precise detection, localization, and quantification of the activity of all neurons across the entire brain, which has not, to date, been achieved in the mammalian brain. We introduce a pipeline for high-speed acquisition of brain activity at cellular resolution through profiling immediate early gene expression using immunostaining and light-sheet fluorescence imaging, followed by automated mapping and analysis of activity by an open-source software program we term ClearMap. We validate the pipeline first by analysis of brain regions activated in response to haloperidol. Next, we report new cortical regions downstream of whisker-evoked sensory processing during active exploration. Last, we combine activity mapping with axon tracing to uncover new brain regions differentially activated during parenting behavior. This pipeline is widely applicable to different experimental paradigms, including animal species for which transgenic activity reporters are not readily available. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Mapping of brain activity by automated volume analysis of immediate early genes

    PubMed Central

    Renier, Nicolas; Adams, Eliza L.; Kirst, Christoph; Wu, Zhuhao; Azevedo, Ricardo; Kohl, Johannes; Autry, Anita E.; Kadiri, Lolahon; Venkataraju, Kannan Umadevi; Zhou, Yu; Wang, Victoria X.; Tang, Cheuk Y.; Olsen, Olav; Dulac, Catherine; Osten, Pavel; Tessier-Lavigne, Marc

    2016-01-01

    Summary Understanding how neural information is processed in physiological and pathological states would benefit from precise detection, localization and quantification of the activity of all neurons across the entire brain, which has not to date been achieved in the mammalian brain. We introduce a pipeline for high speed acquisition of brain activity at cellular resolution through profiling immediate early gene expression using immunostaining and light-sheet fluorescence imaging, followed by automated mapping and analysis of activity by an open-source software program we term ClearMap. We validate the pipeline first by analysis of brain regions activated in response to Haloperidol. Next, we report new cortical regions downstream of whisker-evoked sensory processing during active exploration. Lastly, we combine activity mapping with axon tracing to uncover new brain regions differentially activated during parenting behavior. This pipeline is widely applicable to different experimental paradigms, including animal species for which transgenic activity reporters are not readily available. PMID:27238021

  18. Mind the gap! Automated concept map feedback supports students in writing cohesive explanations.

    PubMed

    Lachner, Andreas; Burkhart, Christian; Nückles, Matthias

    2017-03-01

    Many students are challenged with the demand of writing cohesive explanations. To support students in writing cohesive explanations, we developed a computer-based feedback tool that visualizes cohesion deficits of students' explanations in a concept map. We conducted three studies to investigate the effectiveness of such feedback as well as the underlying cognitive processes. In Study 1, we found that the concept map helped students identify potential cohesion gaps in their drafts and plan remedial revisions. In Study 2, students with concept map feedback conducted revisions that resulted in more locally and globally cohesive, and also more comprehensible, explanations than the explanations of students who revised without concept map feedback. In Study 3, we replicated the findings of Study 2 by and large. More importantly, students who had received concept map feedback on a training explanation 1 week later wrote a transfer explanation without feedback that was more cohesive than the explanation of students who had received no feedback on their training explanation. The automated concept map feedback appears to particularly support the evaluation phase of the revision process. Furthermore, the feedback enabled novice writers to acquire sustainable skills in writing cohesive explanations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Automated Feature Identification and Classification Using Automated Feature Weighted Self Organizing Map (FWSOM)

    NASA Astrophysics Data System (ADS)

    Starkey, Andrew; Usman Ahmad, Aliyu; Hamdoun, Hassan

    2017-10-01

    This paper investigates the application of a novel classification method called the Feature Weighted Self Organizing Map (FWSOM), which analyses the topology information of a converged standard Self Organizing Map (SOM) to automatically guide the selection of important inputs during training, for improved classification of data with redundant inputs. The method is examined against two traditional approaches, namely neural networks and Support Vector Machines (SVM), for the classification of EEG data as presented in previous work. In particular, the novel method identifies the features that are important for classification automatically, and these important features can then be used to improve the diagnostic ability of any of the above methods. The results show that the automated identification procedure successfully found the important features in the dataset, and that this improves the classification results for all methods apart from linear discriminatory methods, which cannot separate the underlying nonlinear relationship in the data. In addition to achieving higher classification accuracy, the FWSOM has given insights into which features are important in the classification of each class (left- and right-hand movements), and these are corroborated by already published work in this area.

  20. High resolution hybrid optical and acoustic sea floor maps (Invited)

    NASA Astrophysics Data System (ADS)

    Roman, C.; Inglis, G.

    2013-12-01

    This abstract presents a method for creating hybrid optical and acoustic sea floor reconstructions at centimeter-scale grid resolutions with robotic vehicles. Multibeam sonar and stereo vision are two common sensing modalities with complementary strengths that are well suited for data fusion. We have recently developed an automated two-stage pipeline to create such maps. The steps can be broken down into navigation refinement and map construction. During navigation refinement, a graph-based optimization algorithm is used to align 3D point clouds created with both the multibeam sonar and stereo cameras. The process combats the typical growth in navigation error that has a detrimental effect on map fidelity and typically introduces artifacts at small grid sizes. During this process we are able to automatically register local point clouds created by each sensor to themselves and to each other where they overlap in a survey pattern. The process also estimates the sensor offsets, such as heading, pitch and roll, that describe how each sensor is mounted to the vehicle. The end results of the navigation step are a refined vehicle trajectory that ensures the point clouds from each sensor are consistently aligned, and the individual sensor offsets. In the mapping step, grid cells in the map are selectively populated by choosing data points from each sensor in an automated manner. The selection process is designed to pick points that preserve the best characteristics of each sensor and honor specific map quality criteria to reduce outliers and ghosting. In general, the algorithm selects dense 3D stereo points in areas of high texture and point density. In areas where the stereo vision is poor, such as in a scene with low contrast or texture, multibeam sonar points are inserted in the map. This process is automated and results in a hybrid map populated with data from both sensors. Additional cross-modality checks are made to reject outliers in a robust manner. The final hybrid map retains the strengths of both sensors and shows improvement over the single-modality maps and a naively assembled multi-modal map where all the data points are included and averaged. Results will be presented from marine geological and archaeological applications using a 1350 kHz BlueView multibeam sonar and 1.3 megapixel digital still cameras.
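
    The per-cell selection rule lends itself to a compact sketch: populate each grid cell from dense stereo where the optical data are good, falling back to multibeam sonar elsewhere. Using stereo point density as the texture-quality proxy, and the threshold value, are assumptions.

        # Toy per-cell fusion of stereo and sonar depth grids.
        import numpy as np

        def fuse_grids(stereo_z, stereo_density, sonar_z, min_density=20):
            """stereo_z/sonar_z: per-cell mean depths (NaN where no data);
            stereo_density: stereo points per cell (quality proxy)."""
            use_stereo = (~np.isnan(stereo_z)) & (stereo_density >=
                                                  min_density)
            fused = np.where(use_stereo, stereo_z, sonar_z)
            source = np.where(use_stereo, 1, 2)   # 1 = stereo, 2 = sonar
            return fused, source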

  1. Automating the Fireshed Assessment Process with ArcGIS

    Treesearch

    Alan Ager; Klaus Barber

    2006-01-01

    A library of macros was developed to automate the Fireshed process within ArcGIS. The macros link a number of vegetation simulation and wildfire behavior models (FVS, SVS, FARSITE, and FlamMap) with ESRI geodatabases, desktop software (Access, Excel), and ArcGIS. The macros provide for (1) an interactive linkage between digital imagery, vegetation data, FVS-FFE, and...

  2. Updating flood maps efficiently using existing hydraulic models, very-high-accuracy elevation data, and a geographic information system; a pilot study on the Nisqually River, Washington

    USGS Publications Warehouse

    Jones, Joseph L.; Haluska, Tana L.; Kresch, David L.

    2001-01-01

    A method of updating flood inundation maps at a fraction of the expense of using traditional methods was piloted in Washington State as part of the U.S. Geological Survey Urban Geologic and Hydrologic Hazards Initiative. Large savings in expense may be achieved by building upon previous Flood Insurance Studies and automating the process of flood delineation with a Geographic Information System (GIS); increases in accuracy and detail result from the use of very-high-accuracy elevation data and automated delineation; and the resulting digital data sets contain valuable ancillary information such as flood depth, as well as greatly facilitating map storage and utility. The method consists of creating stage-discharge relations from the archived output of the existing hydraulic model, using these relations to create updated flood stages for recalculated flood discharges, and using a GIS to automate the map generation process. Many of the effective flood maps were created in the late 1970s and early 1980s, and suffer from a number of well recognized deficiencies such as out-of-date or inaccurate estimates of discharges for selected recurrence intervals, changes in basin characteristics, and relatively low quality elevation data used for flood delineation. FEMA estimates that 45 percent of effective maps are over 10 years old (FEMA, 1997). Consequently, Congress has mandated the updating and periodic review of existing maps, which have cost the Nation almost 3 billion (1997) dollars. The need to update maps and the cost of doing so were the primary motivations for piloting a more cost-effective and efficient updating method. New technologies such as Geographic Information Systems and LIDAR (Light Detection and Ranging) elevation mapping are key to improving the efficiency of flood map updating, but they also improve the accuracy, detail, and usefulness of the resulting digital flood maps. GISs produce digital maps without manual estimation of inundated areas between cross sections, and can generate working maps across a broad range of scales, for any selected area, and overlaid with easily updated cultural features. Local governments are aggressively collecting very-high-accuracy elevation data for numerous reasons; this not only lowers the cost and increases accuracy of flood maps, but also inherently boosts the level of community involvement in the mapping process. These elevation data are also ideal for hydraulic modeling, should an existing model be judged inadequate.
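
    The computational core of the method is small enough to sketch: interpolate an updated stage from the archived stage-discharge pairs of the existing hydraulic model, then let the GIS delineation reduce to masking DEM cells below the water surface, which also yields the flood-depth grid mentioned as ancillary information. All numbers below are hypothetical.

    ```python
    import numpy as np

    # Archived hydraulic-model output at one cross section (hypothetical values):
    discharge_cfs = np.array([5000, 10000, 20000, 40000])   # modeled discharges
    stage_ft      = np.array([410.2, 412.8, 416.1, 420.5])  # corresponding stages

    def updated_stage(q_new):
        """Stage for a recalculated flood discharge, via the archived rating."""
        return np.interp(q_new, discharge_cfs, stage_ft)

    # GIS step on a very-high-accuracy DEM (a toy grid stands in here):
    dem = 408 + 14 * np.random.default_rng(0).random((100, 100))   # elevations, ft
    water_surface = updated_stage(30000)
    inundated = dem <= water_surface                       # flood-extent raster
    depth = np.where(inundated, water_surface - dem, 0.0)  # ancillary depth grid
    print(f"stage {water_surface:.1f} ft floods {inundated.mean():.0%} of cells")
    ```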

  3. Evaluation of using digital gravity field models for zoning map creation

    NASA Astrophysics Data System (ADS)

    Loginov, Dmitry

    2018-05-01

    Digital cartographic models of geophysical fields are currently taking on special significance in geophysical mapping. One important application is the creation of zoning maps, which take the morphology of a geophysical field into account through an automated choice of contour intervals. The purpose of this work is a comparative evaluation of various digital models for creating an integrated gravity field zoning map. Two models were chosen for comparison: the digital model of the gravity field of Russia, created from an analog map at a scale of 1:2,500,000, and WGM2012, the open global model of the Earth's gravity field. Four integrated gravity field zoning maps were obtained experimentally, using raw and processed data from each gravity field model. The study demonstrates that open data can be used to create integrated zoning maps, provided the noise component of the model is eliminated by processing in specialized software systems. Under that condition, for the automated choice of contour intervals the open digital models are not inferior to regional gravity field models created for individual countries. This indicates that integrated zoning maps can be created regardless of the level of detail of the underlying digital cartographic model of geophysical fields.
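
    The operation at the center of this comparison, an automated choice of contour intervals driven by the morphology of the field itself, can be sketched in a few lines. Quantile-based levels are one simple illustrative rule, not necessarily the one used in the study.

    ```python
    import numpy as np

    def zoning_levels(field, n_zones=8):
        """Contour intervals from the field's own distribution (equal-area
        quantiles), one simple way to let morphology drive the zoning."""
        qs = np.linspace(0, 1, n_zones + 1)[1:-1]
        return np.quantile(field[np.isfinite(field)], qs)

    grav = np.random.default_rng(0).normal(0, 25, (500, 500))  # stand-in anomaly grid, mGal
    levels = zoning_levels(grav)
    zones = np.digitize(grav, levels)    # integer zoning map, values 0..n_zones-1
    ```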

  4. Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head

    PubMed Central

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-01-01

    Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977

  5. Automated MRI segmentation for individualized modeling of current flow in the human head

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-12-01

    Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.

  6. A COMPARISON OF MAPPED ESTIMATES OF LONG-TERM RUNOFF IN THE NORTHEAST UNITED STATES

    EPA Science Inventory

    We evaluated the relative accuracy of four methods of producing maps of long-term runoff for part of the northeast United States: MAN, a manual procedure that incorporates expert opinion in contour placement; RPRIS, an automated procedure based on water balance considerations, Pn...

  7. Palinspastic reconstruction of structure maps: an automated finite element approach with heterogeneous strain

    NASA Astrophysics Data System (ADS)

    Dunbar, John A.; Cook, Richard W.

    2003-07-01

    Existing methods for the palinspastic reconstruction of structure maps do not adequately account for heterogeneous rock strain and hence cannot accurately treat features such as fault terminations and non-cylindrical folds. We propose a new finite element formulation of the map reconstruction problem that treats such features explicitly. In this approach, a model of the map surface, with internal openings that honor the topology of the fault-gap network, is constructed of triangular finite elements. Both model building and reconstruction algorithms are guided by rules relating fault-gap topology to the kinematics of fault motion and are fully automated. We represent the total strain as the sum of a prescribed component of locally homogeneous simple shear and a minimum amount of heterogeneous residual strain. The region within which a particular orientation of simple shear is treated as homogenous can be as small as an individual element or as large as the entire map. For residual strain calculations, we treat the map surface as a hyperelastic membrane. A globally optimum reconstruction is found that unfolds the map while faithfully honoring assigned strain mechanisms, closes fault gaps without overlap or gap and imparts the least possible residual strain in the restored surface. The amount and distribution of the residual strain serves as a diagnostic tool for identifying mapping errors. The method can be used to reconstruct maps offset by any number of faults that terminate, branch and offset each other in arbitrarily complex ways.

  8. Reading Guided by Automated Graphical Representations: How Model-Based Text Visualizations Facilitate Learning in Reading Comprehension Tasks

    ERIC Educational Resources Information Center

    Pirnay-Dummer, Pablo; Ifenthaler, Dirk

    2011-01-01

    Our study integrates automated natural language-oriented assessment and analysis methodologies into feasible reading comprehension tasks. With the newly developed T-MITOCAR toolset, prose text can be automatically converted into an association net which has similarities to a concept map. The "text to graph" feature of the software is based on…

  9. Defining Platelet Function During Polytrauma

    DTIC Science & Technology

    2013-02-01

    …calibrated automated thrombography (CAT), 3. platelet-induced clot contraction using viscoelastic measures such as TEG with Platelet Mapping™, and 4. flow… formation (such as Hemodyne's platelet contractile force measurement and thromboelastography). The degree to which certain injury patterns as well as…

  10. Automated clinical trial eligibility prescreening: increasing the efficiency of patient identification for clinical trials in the emergency department

    PubMed Central

    Ni, Yizhao; Kennebeck, Stephanie; Dexheimer, Judith W; McAneney, Constance M; Tang, Huaxiu; Lingren, Todd; Li, Qi; Zhai, Haijun; Solti, Imre

    2015-01-01

    Objectives (1) To develop an automated eligibility screening (ES) approach for clinical trials in an urban tertiary care pediatric emergency department (ED); (2) to assess the effectiveness of natural language processing (NLP), information extraction (IE), and machine learning (ML) techniques on real-world clinical data and trials. Data and methods We collected eligibility criteria for 13 randomly selected, disease-specific clinical trials actively enrolling patients between January 1, 2010 and August 31, 2012. In parallel, we retrospectively selected data fields including demographics, laboratory data, and clinical notes from the electronic health record (EHR) to represent profiles of all 202,795 patients visiting the ED during the same period. Leveraging NLP, IE, and ML technologies, the automated ES algorithms identified patients whose profiles matched the trial criteria to reduce the pool of candidates for staff screening. The performance was validated on both a physician-generated gold standard of trial–patient matches and a reference standard of historical trial–patient enrollment decisions, where workload, mean average precision (MAP), and recall were assessed. Results Compared with the case without automation, the workload with automated ES was reduced by 92% on the gold standard set, with a MAP of 62.9%. The automated ES achieved a 450% increase in trial screening efficiency. The findings on the gold standard set were confirmed by large-scale evaluation on the reference set of trial–patient matches. Discussion and conclusion By exploiting the text of trial criteria and the content of EHRs, we demonstrated that NLP-, IE-, and ML-based automated ES could successfully identify patients for clinical trials. PMID:25030032
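
    The published pipeline combines NLP, information extraction and supervised ML, but the prescreening idea of ranking patient profiles against the text of trial criteria can be miniaturized as TF-IDF cosine similarity. All criteria and notes below are invented, and a real system would add negation handling, structured fields and trained classifiers.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    trial_criteria = "age under 18, asthma exacerbation, no oral steroids in 48 hours"
    patient_notes = [                       # stand-ins for EHR profiles
        "10 yo with acute asthma exacerbation, albuterol given in ED",
        "adult with chest pain, history of CAD",
        "7 yo wheezing, asthma history, prednisone course completed last month",
    ]

    vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    X = vec.fit_transform([trial_criteria] + patient_notes)
    scores = cosine_similarity(X[0], X[1:]).ravel()

    # Prescreening: forward only the highest-scoring profiles to study staff,
    # shrinking the manual screening workload.
    for rank in scores.argsort()[::-1]:
        print(f"patient {rank}: similarity {scores[rank]:.2f}")
    ```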

  11. Actively Transmitting New DCPs - Hydrometeorological Automated Data System

    Science.gov Websites

    [Garbled station-listing table: DCP platform IDs, Alberta hydrometric stations (e.g., Athabasca R. below Cascade Rapid, L. Bow R. below Twin Valley Res.), coordinates, and sensor codes; original table layout not recoverable.]

  12. Experimental and Automated Analysis Techniques for High-resolution Electrical Mapping of Small Intestine Slow Wave Activity

    PubMed Central

    Angeli, Timothy R; O'Grady, Gregory; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; Du, Peng; Pullan, Andrew J; Bissett, Ian P

    2013-01-01

    Background/Aims Small intestine motility is governed by an electrical slow wave activity, and abnormal slow wave events have been associated with intestinal dysmotility. High-resolution (HR) techniques are necessary to analyze slow wave propagation, but progress has been limited by few available electrode options and laborious manual analysis. This study presents novel methods for in vivo HR mapping of small intestine slow wave activity. Methods Recordings were obtained from along the porcine small intestine using flexible printed circuit board arrays (256 electrodes; 4 mm spacing). Filtering options were compared, and analysis was automated through adaptations of the falling-edge variable-threshold (FEVT) algorithm and graphical visualization tools. Results A Savitzky-Golay filter was chosen with polynomial-order 9 and window size 1.7 seconds, which maintained 94% of slow wave amplitude, 57% of gradient and achieved a noise correction ratio of 0.083. Optimized FEVT parameters achieved 87% sensitivity and 90% positive-predictive value. Automated activation mapping and animation successfully revealed slow wave propagation patterns, and frequency, velocity, and amplitude were calculated and compared at 5 locations along the intestine (16.4 ± 0.3 cpm, 13.4 ± 1.7 mm/sec, and 43 ± 6 µV, respectively, in the proximal jejunum). Conclusions The methods developed and validated here will greatly assist small intestine HR mapping, and will enable experimental and translational work to evaluate small intestine motility in health and disease. PMID:23667749
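
    The filtering settings are given explicitly, so that step can be reproduced directly with SciPy; the falling-edge variable-threshold logic below is a simplified stand-in for the adapted FEVT algorithm, and the sampling rate and synthetic signal are assumptions.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    fs = 30                                  # Hz, assumed sampling rate
    t = np.arange(0, 60, 1 / fs)
    # Synthetic slow wave near the reported 16.4 cpm, plus noise:
    sig = (np.sin(2 * np.pi * (16.4 / 60) * t)
           + 0.4 * np.random.default_rng(0).normal(size=t.size))

    # Savitzky-Golay smoothing with the reported settings: polynomial order 9,
    # window 1.7 s (the window must be an odd sample count above the order).
    win = int(1.7 * fs) | 1
    smooth = savgol_filter(sig, window_length=win, polyorder=9)

    # Falling-edge detection in the spirit of FEVT: flag samples whose slope
    # drops below a threshold that adapts to the local derivative energy.
    deriv = np.gradient(smooth, 1 / fs)
    local = np.convolve(np.abs(deriv), np.ones(win) / win, mode="same")
    edges = np.flatnonzero(deriv < -2.0 * local)
    print(f"{edges.size} candidate falling-edge activation samples")
    ```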

  13. Automated transient detection in the STEREO Heliospheric Imagers.

    NASA Astrophysics Data System (ADS)

    Barnard, Luke; Scott, Chris; Owens, Mat; Lockwood, Mike; Tucker-Hood, Kim; Davies, Jackie

    2014-05-01

    Since the launch of the twin STEREO satellites, the heliospheric imagers (HI) have been used, with good results, in tracking transients of solar origin, such as Coronal Mass Ejections (CMEs), far out into the heliosphere. A frequently used approach is to build a "J-map", in which multiple elongation profiles along a constant position angle are stacked in time, building an image in which radially propagating transients form curved tracks in the J-map. From this the time-elongation profile of a solar transient can be manually identified. This is a time consuming and laborious process, and the results are subjective, depending on the skill and expertise of the investigator. Therefore, it is desirable to develop an automated algorithm for the detection and tracking of the transient features observed in HI data. This is to some extent previously covered ground, as similar problems have been encountered in the analysis of coronagraph data and have led to the development of products such as CACTus. We present the results of our investigation into the automated detection of solar transients observed in J-maps formed from HI data. We use edge and line detection methods to identify transients in the J-maps, and then use kinematic models of the solar transient propagation (such as the fixed-phi and harmonic mean geometric models) to estimate the solar transients' properties, such as transient speed and propagation direction, from the time-elongation profile. The effectiveness of this process is assessed by comparison of our results with a set of manually identified CMEs, extracted and analysed by the Solar Storm Watch Project. Solar Storm Watch is a citizen science project in which solar transients are identified in J-maps formed from HI data and tracked multiple times by different users. This allows the calculation of a consensus time-elongation profile for each event, and therefore does not suffer from the potential subjectivity of an individual researcher tracking an event. Furthermore, we present preliminary results regarding the estimation of the ambient solar wind speed from the automated analysis of the HI J-maps, by the tracking of numerous small scale features entrained into the ambient solar wind, which can only be tracked out to small elongations.
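
    To make the J-map representation concrete, the sketch below stacks synthetic elongation profiles in time and enhances a moving front with Gaussian smoothing plus a Sobel gradient. The study's actual edge/line detectors and the subsequent kinematic model fits (fixed-phi, harmonic mean) are not reproduced here.

    ```python
    import numpy as np
    from scipy import ndimage

    # Synthetic J-map: rows = elongation, columns = time, with one bright
    # outward-moving track standing in for a CME front.
    times = np.arange(200)                   # frame index
    elong = np.arange(4, 60, 0.25)           # elongation, degrees
    jmap = np.random.default_rng(0).normal(0, 0.1, (elong.size, times.size))
    front = 4 + 0.28 * times                 # toy time-elongation profile
    for j, e in enumerate(front):
        i = np.searchsorted(elong, e)
        if i < elong.size:
            jmap[i, j] += 1.0

    # Edge enhancement: smooth, then differentiate along the time axis so that
    # radially propagating fronts stand out as ridges.
    smooth = ndimage.gaussian_filter(jmap, sigma=2)
    edges = ndimage.sobel(smooth, axis=1)
    track = edges > 3 * edges.std()          # crude detection mask
    print(f"{track.sum()} pixels flagged on the transient track")
    ```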

  14. Toward fully automated processing of dynamic susceptibility contrast perfusion MRI for acute ischemic cerebral stroke.

    PubMed

    Kim, Jinsuh; Leira, Enrique C; Callison, Richard C; Ludwig, Bryan; Moritani, Toshio; Magnotta, Vincent A; Madsen, Mark T

    2010-05-01

    We developed fully automated software for dynamic susceptibility contrast (DSC) MR perfusion-weighted imaging (PWI) to efficiently and reliably derive critical hemodynamic information for acute stroke treatment decisions. Brain MR PWI was performed in 80 consecutive patients with acute nonlacunar ischemic stroke within 24 h after symptom onset, from January 2008 to August 2009. These studies were automatically processed to generate hemodynamic parameters that included cerebral blood flow, cerebral blood volume, and mean transit time (MTT). To develop reliable software for PWI analysis, we used computationally robust algorithms including the piecewise continuous regression method to determine bolus arrival time (BAT), log-linear curve fitting, an arrival-time-independent deconvolution method and sophisticated motion correction methods. An optimal arterial input function (AIF) search algorithm using a new artery-likelihood metric was also developed. Anatomical locations of the automatically determined AIF were reviewed and validated. The automatically computed BAT values were statistically compared with BAT estimated by a single observer. In addition, gamma-variate curve-fitting errors of the AIF and inter-subject variability of AIFs were analyzed. Lastly, two observers independently assessed the quality and extent of hypoperfusion mismatched with the restricted diffusion area on motion-corrected MTT maps, and compared them with time-to-peak (TTP) maps computed using the standard approach. The AIF was identified within an arterial branch and enhanced areas of perfusion deficit were visualized in all evaluated cases. Total processing time was 10.9+/-2.5s (mean+/-s.d.) without motion correction and 267+/-80s (mean+/-s.d.) with motion correction on a standard personal computer. The MTT map produced with our software adequately estimated brain areas with perfusion deficit and was significantly less affected by random noise of the PWI when compared with the TTP map. Results of image quality assessment by two observers revealed that the MTT maps exhibited superior quality over the TTP maps (88% good rating of MTT as compared to 68% of TTP). Our software allowed fully automated deconvolution analysis of DSC PWI using proven efficient algorithms that can be applied to acute stroke treatment decisions. Our streamlined method also offers promise for further development of automated quantitative analysis of the ischemic penumbra. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
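
    One self-contained piece of such a pipeline is gamma-variate fitting of the arterial input function. The sketch below fits a peak-normalized gamma-variate model with SciPy on synthetic data; the sampling interval, initial values and the simplistic bolus-arrival guess are illustrative only.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, A, t0, alpha, beta):
        """Gamma-variate bolus model, parameterized so A is the peak value."""
        dt = np.clip(t - t0, 0, None)
        tp = alpha * beta                    # time to peak after arrival at t0
        return A * (dt / tp) ** alpha * np.exp(alpha - dt / beta)

    t = np.arange(0, 60, 1.5)                # s, one sample per TR (assumed)
    truth = gamma_variate(t, 10.0, 10.0, 2.5, 3.0)
    conc = truth + np.random.default_rng(0).normal(0, 0.3, t.size)

    # Crude bolus-arrival guess: first sample above 10% of the peak.
    t0_guess = t[np.argmax(conc > 0.1 * conc.max())]
    popt, _ = curve_fit(gamma_variate, t, conc,
                        p0=[conc.max(), t0_guess, 2.0, 3.0], maxfev=5000)
    print("fitted (A, t0, alpha, beta):", np.round(popt, 2))
    ```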

  15. Precise Positioning of Uavs - Dealing with Challenging Rtk-Gps Measurement Conditions during Automated Uav Flights

    NASA Astrophysics Data System (ADS)

    Zimmermann, F.; Eling, C.; Klingbeil, L.; Kuhlmann, H.

    2017-08-01

    For some years now, UAVs (unmanned aerial vehicles) have been commonly used for mobile mapping applications in fields such as surveying, mining and archeology. To improve the efficiency of these applications, automation of both the flight and the processing of the collected data is currently being pursued. One precondition for automated mapping with UAVs is that the georeferencing is performed directly with cm-accuracy or better. Usually, cm-accurate direct positioning of UAVs is based on an onboard multi-sensor system consisting of an RTK-capable (real-time kinematic) GPS (global positioning system) receiver and additional sensors (e.g. inertial sensors). In this case, the absolute positioning accuracy essentially depends on the local GPS measurement conditions. Especially during mobile mapping applications in urban areas, these conditions can be very challenging, due to satellite shadowing, non-line-of-sight receptions, signal diffraction or multipath effects. In this paper, two straightforward and easy-to-implement strategies are described and analyzed that improve the direct positioning accuracy for UAV-based mapping and surveying applications under challenging GPS measurement conditions. Based on a 3D model of the surrounding buildings and vegetation in the area of interest, a GPS geometry map is determined that can be integrated into the flight planning process to avoid GPS-challenging environments as far as possible. Where such environments cannot be avoided, the positioning solution is improved by using obstruction-adaptive elevation masks to mitigate systematic errors in the RTK-GPS positioning. Simulations and results of field tests demonstrate the benefit of both strategies.
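
    The obstruction-adaptive elevation mask can be illustrated schematically: instead of a single fixed cutoff, each satellite is screened against the mask elevation for its azimuth, as derived from a 3D model of buildings and vegetation. The sky-obstruction profile and satellite list below are made up.

    ```python
    import numpy as np

    # Obstruction elevations (degrees) per 10-degree azimuth bin, as might be
    # derived from a 3D city model at the rover position (invented profile:
    # one tall building blocking azimuths near 120 degrees).
    az_bins = np.arange(0, 360, 10)
    ang_dist = np.abs(((az_bins - 120 + 180) % 360) - 180)
    mask_elev = np.where(ang_dist < 40, 35.0, 10.0)

    def visible(sat_az, sat_el, fixed_cutoff=10.0):
        """Keep a satellite only if it clears both the fixed cutoff and the
        obstruction-adaptive mask at its azimuth."""
        local = np.interp(sat_az % 360, az_bins, mask_elev, period=360)
        return sat_el >= max(fixed_cutoff, local)

    sats = [(45, 30), (118, 22), (118, 40), (300, 12)]   # (azimuth, elevation)
    print("satellites kept for RTK:", [s for s in sats if visible(*s)])
    ```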

  16. Automated Recognition of Vegetation and Water Bodies on the Territory of Megacities in Satellite Images of Visible and IR Bands

    NASA Astrophysics Data System (ADS)

    Mozgovoy, Dmitry K.; Hnatushenko, Volodymyr V.; Vasyliev, Volodymyr V.

    2018-04-01

    Vegetation and water bodies are fundamental elements of urban ecosystems, and mapping them is critical for urban and landscape planning and management. A methodology is proposed for the automated recognition of vegetation and water bodies on the territory of megacities in satellite images of sub-meter spatial resolution in the visible and IR bands. By processing multispectral images from the SuperView-1A satellite, vector layers of recognized vegetation and water objects were obtained. Analysis of the image processing results showed sufficiently high accuracy in delineating the boundaries of recognized objects and good separation of the classes. The developed methodology significantly increases the efficiency and reliability of updating maps of large cities while reducing financial costs. Due to its high degree of automation, the proposed methodology can be implemented as a geo-information web service operating in the interests of a wide range of public services and commercial institutions.
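
    The abstract does not name the spectral rules used; a common baseline for visible plus near-IR imagery is NDVI for vegetation and NDWI for open water, sketched below with stand-in bands and typical (but dataset-dependent) thresholds.

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized difference vegetation index."""
        return (nir - red) / (nir + red + 1e-9)

    def ndwi(green, nir):
        """McFeeters normalized difference water index."""
        return (green - nir) / (green + nir + 1e-9)

    rng = np.random.default_rng(0)
    green, red, nir = (rng.random((512, 512)) for _ in range(3))  # stand-in bands

    veg_mask = ndvi(nir, red) > 0.3      # typical vegetation threshold
    water_mask = ndwi(green, nir) > 0.2  # typical open-water threshold
    # These boolean rasters would then be vectorized into map layers in a GIS.
    ```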

  17. Conceptual design of the CZMIL data processing system (DPS): algorithms and software for fusing lidar, hyperspectral data, and digital images

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Tuell, Grady

    2010-04-01

    The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.

  18. Rapid, automated mosaicking of the human corneal subbasal nerve plexus.

    PubMed

    Vaishnav, Yash J; Rucker, Stuart A; Saharia, Keshav; McNamara, Nancy A

    2017-11-27

    Corneal confocal microscopy (CCM) is an in vivo technique used to study corneal nerve morphology. The largest proportion of nerves innervating the cornea lie within the subbasal nerve plexus, where their morphology is altered by refractive surgery, diabetes and dry eye. The main limitations to clinical use of CCM as a diagnostic tool are the small field of view of CCM images and the lengthy time needed to quantify nerves in collected images. Here, we present a novel, rapid, fully automated technique to mosaic individual CCM images into wide-field maps of corneal nerves. We implemented an OpenCV image stitcher that accounts for corneal deformation and uses feature detection to stitch CCM images into a montage. The method takes 3-5 min to process and stitch 40-100 frames on an Amazon EC2 Micro instance. The speed, automation and ease of use conferred by this technique is the first step toward point of care evaluation of wide-field subbasal plexus (SBP) maps in a clinical setting.
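
    Since the abstract names OpenCV's stitcher directly, a minimal usage sketch is possible; the SCANS mode choice, file paths and output name are assumptions rather than the authors' configuration, and the published tool adds corneal-deformation handling on top.

    ```python
    import glob
    import cv2

    # Load a sequence of CCM frames (paths are placeholders).
    frames = [cv2.imread(p) for p in sorted(glob.glob("ccm_frames/*.png"))]

    # SCANS mode assumes an affine motion model, a common choice for flat,
    # microscope-style imagery; PANORAMA assumes a rotating camera instead.
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, mosaic = stitcher.stitch(frames)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("subbasal_plexus_mosaic.png", mosaic)
    else:
        print("stitching failed with status", status)  # e.g. need more images
    ```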

  19. Building an automated problem list based on natural language processing: lessons learned in the early phase of development.

    PubMed

    Solti, Imre; Aaronson, Barry; Fletcher, Grant; Solti, Magdolna; Gennari, John H; Cooper, Melissa; Payne, Tom

    2008-11-06

    Detailed problem lists that comply with JCAHO requirements are important components of electronic health records. Besides improving continuity of care, electronic problem lists could serve as foundation infrastructure for clinical trial recruitment, research, biosurveillance and billing informatics modules. However, physicians rarely maintain problem lists. Our team is building a system using MetaMap and UMLS to automatically populate the problem list. We report our early results evaluating the application. Three physicians generated gold standard problem lists for 100 cardiology ambulatory progress notes. Our application had 88% sensitivity and 66% precision using a non-modified UMLS dataset. The system's misses were concentrated among ambiguous problem list entries (Chi-square = 27.12, p < 0.0001). In addition to the explicit entries, the notes included 10% implicit entry candidates. MetaMap and UMLS are readily applicable to automate the problem list. Ambiguity in medical documents has consequences for performance evaluation of automated systems.

  20. Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text.

    PubMed

    Park, Albert; Hartzler, Andrea L; Huh, Jina; McDonald, David W; Pratt, Wanda

    2015-08-31

    The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical natural language processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. In addition to being constructed for different types of text, other challenges of using existing NLP include constantly changing technologies, source vocabularies, and characteristics of text. These continuously evolving challenges warrant the need for applying low-cost systematic assessment. However, the primarily accepted evaluation method in NLP, manual annotation, requires tremendous effort and time. The primary objective of this study is to explore an alternative approach: using low-cost, automated methods to detect failures (eg, incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools can make in processing online community text. We then demonstrate the feasibility of our automated approach in detecting these common failures using one of the most popular biomedical NLP tools, MetaMap. Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we first manually reviewed MetaMap's commonly occurring failures, grouped the inaccurate mappings into failure types, and then identified causes of the failures through iterative rounds of manual review using open coding, and (2) to automatically detect these failure types, we then explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures. From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed term failures, and (3) word ambiguity failures. Within these three failure types, we discovered 12 causes of inaccurate mappings of concepts. We used automated methods to detect almost half of MetaMap's 383,572 mappings as problematic. Word sense ambiguity failure was the most widely occurring, comprising 82.22% of failures. Boundary failure was the second most frequent, amounting to 15.90% of failures, while missed term failures were the least common, making up 1.88% of failures. The automated failure detection achieved precision, recall, accuracy, and F1 score of 83.00%, 92.57%, 88.17%, and 87.52%, respectively. We illustrate the challenges of processing patient-generated online health community text and characterize failures of NLP tools on this patient-generated health text, demonstrating the feasibility of our low-cost approach to automatically detect those failures. Our approach shows the potential for scalable and effective solutions to automatically assess the constantly evolving NLP tools and source vocabularies to process patient-generated text.

  1. Glaciated valleys in Europe and western Asia

    PubMed Central

    Prasicek, Günther; Otto, Jan-Christoph; Montgomery, David R.; Schrott, Lothar

    2015-01-01

    In recent years, remote sensing, morphometric analysis, and other computational concepts and tools have invigorated the field of geomorphological mapping. Automated interpretation of digital terrain data based on impartial rules holds substantial promise for large dataset processing and objective landscape classification. However, the geomorphological realm presents tremendous complexity and challenges in the translation of qualitative descriptions into geomorphometric semantics. Here, the simple, conventional distinction of V-shaped fluvial and U-shaped glacial valleys was analyzed quantitatively using multi-scale curvature and a novel morphometric variable termed Difference of Minimum Curvature (DMC). We used this automated terrain analysis approach to produce a raster map at a scale of 1:6,000,000 showing the distribution of glaciated valleys across Europe and western Asia. The data set has a cell size of 3 arc seconds and consists of more than 40 billion grid cells. Glaciated U-shaped valleys commonly associated with erosion by warm-based glaciers are abundant in the alpine regions of central Europe and western Asia but also occur at the margins of mountain ice sheets in Scandinavia. The high-level correspondence with field mapping and the fully transferable semantics validate this approach for automated analysis of yet unexplored terrain around the globe and qualify it for potential applications on other planetary bodies like Mars. PMID:27019665
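
    As a hedged illustration of multi-scale curvature mapping, the sketch below approximates minimum curvature as the smaller Hessian eigenvalue of a Gaussian-smoothed DEM (a small-slope simplification) and differences two scales as a stand-in for the paper's DMC variable; the scales, threshold and input file are invented.

    ```python
    import numpy as np
    from scipy import ndimage

    def min_curvature(dem, sigma):
        """Smaller Hessian eigenvalue of a smoothed DEM: a small-slope
        approximation of minimum curvature at scale sigma (in cells)."""
        z = ndimage.gaussian_filter(dem, sigma)
        zy, zx = np.gradient(z)
        zxy, zxx = np.gradient(zx)
        zyy, _ = np.gradient(zy)
        tr, det = zxx + zyy, zxx * zyy - zxy ** 2
        disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0))
        return tr / 2 - disc                 # smaller of the two eigenvalues

    dem = np.load("dem_tile.npy")            # placeholder terrain tile
    dmc = min_curvature(dem, sigma=20) - min_curvature(dem, sigma=2)
    u_shaped = dmc > np.percentile(dmc, 95)  # crude U-shaped-valley candidates
    ```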

  2. Model-based classification of CPT data and automated lithostratigraphic mapping for high-resolution characterization of a heterogeneous sedimentary aquifer

    PubMed Central

    Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain

    2017-01-01

    Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type (SBT) classification of CPT data, however, have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset, covering a groundwater basin of ~60 km2 with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency compared to more time-consuming manual approaches, while yielding at least equally accurate results. PMID:28467468

  3. Model-based classification of CPT data and automated lithostratigraphic mapping for high-resolution characterization of a heterogeneous sedimentary aquifer.

    PubMed

    Rogiers, Bart; Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain

    2017-01-01

    Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type (SBT) classification of CPT data, however, have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset, covering a groundwater basin of ~60 km2 with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency compared to more time-consuming manual approaches, while yielding at least equally accurate results.
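
    "Model-based clustering" here means fitting finite mixture models to the CPT measurements; a minimal stand-in uses scikit-learn Gaussian mixtures with BIC model selection. The CPT-like features and class structure below are invented, and the study's actual feature set and model family may differ.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Stand-in CPT features per depth interval: log cone resistance and log
    # friction ratio, the usual axes of SBT classification charts.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.multivariate_normal(m, np.diag(s), 300)
                   for m, s in [((1.3, -0.5), (0.02, 0.05)),   # sand-like
                                ((0.4,  0.3), (0.05, 0.04)),   # clay-like
                                ((0.9,  0.0), (0.03, 0.03))]]) # silt mixtures

    # Fit mixtures with 1..8 components and keep the model BIC prefers.
    models = [GaussianMixture(k, covariance_type="full", random_state=0).fit(X)
              for k in range(1, 9)]
    best = min(models, key=lambda m: m.bic(X))
    classes = best.predict(X)
    print(f"BIC selects {best.n_components} soil classes")
    ```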

  4. Prospective study of atrial fibrillation termination during ablation guided by automated detection of fractionated electrograms.

    PubMed

    Porter, Michael; Spear, William; Akar, Joseph G; Helms, Ray; Brysiewicz, Neil; Santucci, Peter; Wilber, David J

    2008-06-01

    Complex fractionated atrial electrograms (CFAE) may identify critical sites for perpetuation of atrial fibrillation (AF) and provide useful targets for ablation. Current assessment of CFAE is subjective; automated detection algorithms may improve reproducibility, but their utility in guiding ablation has not been tested. In 67 patients presenting for initial AF ablation (42 paroxysmal, 25 persistent), LA and CS mapping were performed during induced or spontaneous AF. CFAE were identified by an online automated computer algorithm and displayed on electroanatomical maps. A mean of 28 +/- 18 sites/patient were identified (20 +/- 13% of mapped sites), and were more frequent during persistent AF. CFAE occurred most commonly within the CS, on the atrial septum, and around the pulmonary veins. Ablation initially targeting CFAE terminated AF in 88% of paroxysmal AF, but only 20% of persistent AF (P < 0.001). Subsequently, additional ablation was performed in all patients (PV isolation for paroxysmal AF, PV isolation + mitral and roof lines for persistent AF). Minimum follow-up was 1 year. One-year freedom from recurrent atrial arrhythmias without antiarrhythmic drug therapy after a single procedure was 90% for paroxysmal AF, and 68% for persistent AF. Ablation guided by automated detection of CFAE proved feasible, and was associated with a high AF termination rate in paroxysmal, but not persistent AF. As an adjunct to conventional techniques, it was associated with excellent long-term single procedure outcomes in both groups. Criteria for identifying optimal CFAE sites for ablation, and selection of patients most likely to benefit, require additional study.

  5. Automated mapping of clinical terms into SNOMED-CT. An application to codify procedures in pathology.

    PubMed

    Allones, J L; Martinez, D; Taboada, M

    2014-10-01

    Clinical terminologies are considered a key technology for capturing clinical data in a precise and standardized manner, which is critical to accurately exchange information among different applications, medical records and decision support systems. An important step to promote the real use of clinical terminologies, such as SNOMED-CT, is to facilitate the process of finding mappings between local terms of medical records and concepts of terminologies. In this paper, we propose a mapping tool to discover text-to-concept mappings in SNOMED-CT. Name-based techniques were combined with a query expansion system to generate alternative search terms, and with a strategy to analyze and take advantage of the semantic relationships of the SNOMED-CT concepts. The developed tool was evaluated and compared to the search services provided by two SNOMED-CT browsers. Our tool automatically mapped clinical terms from a Spanish glossary of procedures in pathology with 88.0% precision and 51.4% recall, providing a substantial improvement of recall (28% and 60%) over other publicly accessible mapping services. The improvements reached by the mapping tool are encouraging. Our results demonstrate the feasibility of accurately mapping clinical glossaries to SNOMED-CT concepts, by means of a combination of structural, query expansion and name-based techniques. We have shown that SNOMED-CT is a great source of knowledge to infer synonyms for the medical domain. Results show that an automated query expansion system partially overcomes the challenge of vocabulary mismatch.
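
    A toy illustration of the named ingredients, name-based matching plus query expansion, using only the Python standard library. The term index and synonym table are invented stand-ins; the real system mines synonyms from SNOMED-CT itself and also exploits its semantic relationships.

    ```python
    import difflib

    snomed_terms = {                         # tiny stand-in term index
        "biopsy of skin": "86273004",
        "excision of skin lesion": "177302005",
        "fine needle aspiration": "87973005",
    }
    synonyms = {"cutaneous": "skin", "fna": "fine needle aspiration"}

    def expand(term):
        """Query expansion: alternative search strings via known synonyms."""
        variants = {term}
        for word, rep in synonyms.items():
            if word in term:
                variants.add(term.replace(word, rep))
        return variants

    def map_term(local_term, cutoff=0.6):
        """Name-based mapping: best fuzzy match over all expanded variants."""
        best, best_score = None, 0.0
        for v in expand(local_term.lower()):
            for hit in difflib.get_close_matches(v, snomed_terms, n=1, cutoff=cutoff):
                score = difflib.SequenceMatcher(None, v, hit).ratio()
                if score > best_score:
                    best, best_score = hit, score
        return best, snomed_terms.get(best)

    print(map_term("biopsy of cutaneous"))   # -> ('biopsy of skin', '86273004')
    ```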

  6. The GAAIN Entity Mapper: An Active-Learning System for Medical Data Mapping.

    PubMed

    Ashish, Naveen; Dewan, Peehoo; Toga, Arthur W

    2015-01-01

    This work is focused on mapping biomedical datasets to a common representation, as an integral part of data harmonization for integrated biomedical data access and sharing. We present GEM, an intelligent software assistant for automated data mapping across different datasets or from a dataset to a common data model. The GEM system automates data mapping by providing precise suggestions for data element mappings. It leverages the detailed metadata about elements in associated dataset documentation such as data dictionaries that are typically available with biomedical datasets. It employs unsupervised text mining techniques to determine similarity between data elements and also employs machine-learning classifiers to identify element matches. It further provides an active-learning capability where the process of training the GEM system is optimized. Our experimental evaluations show that the GEM system provides highly accurate data mappings (over 90% accuracy) for real datasets of thousands of data elements each, in the Alzheimer's disease research domain. Further, the effort in training the system for new datasets is also optimized. We are currently employing the GEM system to map Alzheimer's disease datasets from around the globe into a common representation, as part of a global Alzheimer's disease integrated data sharing and analysis network called GAAIN. GEM achieves significantly higher data mapping accuracy for biomedical datasets compared to other state-of-the-art tools for database schema matching that have similar functionality. With the use of active-learning capabilities, the user effort in training the system is minimal.

  7. The GAAIN Entity Mapper: An Active-Learning System for Medical Data Mapping

    PubMed Central

    Ashish, Naveen; Dewan, Peehoo; Toga, Arthur W.

    2016-01-01

    This work is focused on mapping biomedical datasets to a common representation, as an integral part of data harmonization for integrated biomedical data access and sharing. We present GEM, an intelligent software assistant for automated data mapping across different datasets or from a dataset to a common data model. The GEM system automates data mapping by providing precise suggestions for data element mappings. It leverages the detailed metadata about elements in associated dataset documentation such as data dictionaries that are typically available with biomedical datasets. It employs unsupervised text mining techniques to determine similarity between data elements and also employs machine-learning classifiers to identify element matches. It further provides an active-learning capability where the process of training the GEM system is optimized. Our experimental evaluations show that the GEM system provides highly accurate data mappings (over 90% accuracy) for real datasets of thousands of data elements each, in the Alzheimer's disease research domain. Further, the effort in training the system for new datasets is also optimized. We are currently employing the GEM system to map Alzheimer's disease datasets from around the globe into a common representation, as part of a global Alzheimer's disease integrated data sharing and analysis network called GAAIN. GEM achieves significantly higher data mapping accuracy for biomedical datasets compared to other state-of-the-art tools for database schema matching that have similar functionality. With the use of active-learning capabilities, the user effort in training the system is minimal. PMID:26793094

  8. MOST-visualization: software for producing automated textbook-style maps of genome-scale metabolic networks.

    PubMed

    Kelley, James J; Maor, Shay; Kim, Min Kyung; Lane, Anatoliy; Lun, Desmond S

    2017-08-15

    Visualization of metabolites, reactions and pathways in genome-scale metabolic networks (GEMs) can assist in understanding cellular metabolism. Three attributes are desirable in software used for visualizing GEMs: (i) automation, since GEMs can be quite large; (ii) production of understandable maps that provide ease in identification of pathways, reactions and metabolites; and (iii) visualization of the entire network to show how pathways are interconnected. No software currently exists for visualizing GEMs that satisfies all three characteristics, but MOST-Visualization, an extension of the software package MOST (Metabolic Optimization and Simulation Tool), satisfies (i), and by using a pre-drawn overview map of metabolism based on the Roche map satisfies (ii) and comes close to satisfying (iii). MOST is distributed for free under the GNU General Public License. The software and full documentation are available at http://most.ccib.rutgers.edu/. dslun@rutgers.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  9. Automated pulmonary lobar ventilation measurements using volume-matched thoracic CT and MRI

    NASA Astrophysics Data System (ADS)

    Guo, F.; Svenningsen, S.; Bluemke, E.; Rajchl, M.; Yuan, J.; Fenster, A.; Parraga, G.

    2015-03-01

    Objectives: To develop and evaluate an automated registration and segmentation pipeline for regional lobar pulmonary structure-function measurements, using volume-matched thoracic CT and MRI in order to guide therapy. Methods: Ten subjects underwent pulmonary function tests and volume-matched 1H and 3He MRI and thoracic CT during a single 2-hr visit. CT was registered to 1H MRI using an affine method that incorporated block-matching and this was followed by a deformable step using free-form deformation. The resultant deformation field was used to deform the associated CT lobe mask that was generated using commercial software. 3He-1H image registration used the same two-step registration method and 3He ventilation was segmented using hierarchical k-means clustering. Whole lung and lobar 3He ventilation and ventilation defect percent (VDP) were generated by mapping ventilation defects to CT-defined whole lung and lobe volumes. Target CT-3He registration accuracy was evaluated using region- , surface distance- and volume-based metrics. Automated whole lung and lobar VDP was compared with semi-automated and manual results using paired t-tests. Results: The proposed pipeline yielded regional spatial agreement of 88.0+/-0.9% and surface distance error of 3.9+/-0.5 mm. Automated and manual whole lung and lobar ventilation and VDP were not significantly different and they were significantly correlated (r = 0.77, p < 0.0001). Conclusion: The proposed automated pipeline can be used to generate regional pulmonary structural-functional maps with high accuracy and robustness, providing an important tool for image-guided pulmonary interventions.
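
    A sketch of the ventilation-defect step under stated assumptions: plain (non-hierarchical) k-means on 3He voxel intensities inside a CT-defined mask, with the darkest cluster taken as defect. The file names and cluster count are placeholders.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    he3 = np.load("he3_ventilation.npy")     # placeholder 3He MRI volume
    lung_mask = np.load("ct_lung_mask.npy")  # CT-defined lung (or lobe) mask

    # Cluster voxel intensities inside the mask into 4 levels; the darkest
    # cluster approximates unventilated lung (hierarchical refinements omitted).
    vals = he3[lung_mask > 0].reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(vals)
    order = np.argsort([vals[labels == k].mean() for k in range(4)])
    defect = labels == order[0]              # lowest-intensity cluster

    vdp = 100.0 * defect.sum() / vals.size   # ventilation defect percent
    print(f"VDP = {vdp:.1f}%")
    ```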

  10. Identifying problems and generating recommendations for enhancing complex systems: applying the abstraction hierarchy framework as an analytical tool.

    PubMed

    Xu, Wei

    2007-12-01

    This study adopts J. Rasmussen's (1985) abstraction hierarchy (AH) framework as an analytical tool to identify problems and pinpoint opportunities to enhance complex systems. The process of identifying problems and generating recommendations for complex systems using conventional methods is usually conducted based on incompletely defined work requirements. As the complexity of systems rises, the sheer mass of data generated from these methods becomes unwieldy to manage in a coherent, systematic form for analysis. There is little known work on adopting a broader perspective to fill these gaps. AH was used to analyze an aircraft-automation system in order to further identify breakdowns in pilot-automation interactions. Four steps follow: developing an AH model for the system, mapping the data generated by various methods onto the AH, identifying problems based on the mapped data, and presenting recommendations. The breakdowns lay primarily with automation operations that were more goal directed. Identified root causes include incomplete knowledge content and ineffective knowledge structure in pilots' mental models, lack of effective higher-order functional domain information displayed in the interface, and lack of sufficient automation procedures for pilots to effectively cope with unfamiliar situations. The AH is a valuable analytical tool to systematically identify problems and suggest opportunities for enhancing complex systems. It helps further examine the automation awareness problems and identify improvement areas from a work domain perspective. Applications include the identification of problems and generation of recommendations for complex systems as well as specific recommendations regarding pilot training, flight deck interfaces, and automation procedures.

  11. Automatically Generated Vegetation Density Maps with LiDAR Survey for Orienteering Purpose

    NASA Astrophysics Data System (ADS)

    Petrovič, Dušan

    2018-05-01

    The focus of our research was to automatically generate the most adequate vegetation density maps for orienteering purposes. The Karttapullautin application, which requires LiDAR data as input, was used for automated generation of vegetation density maps. A part of the orienteering map of the Kazlje-Tomaj area was used to compare the graphical display of vegetation density. With different parameter settings in the Karttapullautin application we changed how the vegetation density of the automatically generated map was presented, and tried to match it as closely as possible to the orienteering map of Kazlje-Tomaj. By comparing several generated vegetation density maps, the most suitable parameter settings for automatically generating maps of other areas were also proposed.

  12. Automated Quantification of Gradient Defined Features

    DTIC Science & Technology

    2008-09-01

    defined features in submarine environments. The technique utilizes MATLAB scripts to convert bathymetry data into a gradient dataset, produce gradient...maps, and most importantly, automate the process of defining and characterizing gradient defined features such as flows, faults, landslide scarps, folds...convergent plate margin hosts a series of large serpentinite mud volcanoes (Fig. 1). One of the largest of these active mud volcanoes is Big Blue

  13. Shape indexes for semi-automated detection of windbreaks in thematic tree cover maps from the central United States

    Treesearch

    Greg C. Liknes; Dacia M. Meneguzzo; Todd A. Kellerman

    2017-01-01

    Windbreaks are an important ecological resource across the large expanse of agricultural land in the central United States and are often planted in straight-line or L-shaped configurations to serve specific functions. As high-resolution (i.e., <5 m) land cover datasets become more available for these areas, semi- or fully automated methods for distinguishing...

  14. Improved predictive mapping of indoor radon concentrations using ensemble regression trees based on automatic clustering of geological units.

    PubMed

    Kropat, Georg; Bochud, Francois; Jaboyedoff, Michel; Laedermann, Jean-Pascal; Murith, Christophe; Palacios Gruson, Martha; Baechler, Sébastien

    2015-09-01

    According to estimates, around 230 people die as a result of radon exposure in Switzerland. This public health concern makes reliable indoor radon prediction and mapping methods necessary in order to improve risk communication to the public. The aim of this study was to develop an automated method to classify lithological units according to their radon characteristics and to develop mapping and predictive tools in order to improve local radon prediction. About 240 000 indoor radon concentration (IRC) measurements in about 150 000 buildings were available for our analysis. The automated classification of lithological units was based on k-medoids clustering via pair-wise Kolmogorov distances between IRC distributions of lithological units. For IRC mapping and prediction we used random forests and Bayesian additive regression trees (BART). The automated classification groups lithological units well in terms of their IRC characteristics. Especially the IRC differences in metamorphic rocks like gneiss are well revealed by this method. The maps produced by random forests soundly represent the regional differences of IRCs in Switzerland and improve the spatial detail compared to existing approaches. We could explain 33% of the variations in IRC data with random forests. Additionally, variable importance evaluated by random forests shows that building characteristics are less important predictors of IRCs than spatial/geological influences. BART could explain 29% of IRC variability and produced maps that indicate the prediction uncertainty. Ensemble regression trees are a powerful tool to model and understand the multidimensional influences on IRCs. Automatic clustering of lithological units complements this method by facilitating the interpretation of radon properties of rock types. This study provides an important element for radon risk communication. Future approaches should consider taking into account further variables like soil gas radon measurements as well as more detailed geological information. Copyright © 2015 Elsevier Ltd. All rights reserved.
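
    The clustering step is specified precisely enough to sketch: pairwise two-sample Kolmogorov-Smirnov statistics between units' IRC samples feed a k-medoids partition. The bare-bones k-medoids and the lognormal toy units below are illustrative only.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    def pairwise_ks(samples):
        """Kolmogorov distance (two-sample KS statistic) between each pair of
        lithological units' IRC samples."""
        n = len(samples)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                D[i, j] = D[j, i] = ks_2samp(samples[i], samples[j]).statistic
        return D

    def k_medoids(D, k, iters=50, seed=0):
        """Bare-bones k-medoids on a precomputed distance matrix."""
        rng = np.random.default_rng(seed)
        medoids = rng.choice(len(D), k, replace=False)
        for _ in range(iters):
            labels = np.argmin(D[:, medoids], axis=1)
            new = np.array([np.flatnonzero(labels == c)[
                np.argmin(D[np.ix_(labels == c, labels == c)].sum(0))]
                for c in range(k)])
            if np.array_equal(new, medoids):
                break
            medoids = new
        return labels

    rng = np.random.default_rng(1)
    units = [rng.lognormal(mu, 0.5, 400) for mu in (4.0, 4.1, 5.0, 5.2, 4.05)]
    print(k_medoids(pairwise_ks(units), k=2))  # similar units group together
    ```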

  15. An integrated approach for automated cover-type mapping of large inaccessible areas in Alaska

    USGS Publications Warehouse

    Fleming, Michael D.

    1988-01-01

    The lack of any detailed cover type maps in the state necessitated that a rapid and accurate approach be employed to develop maps for 329 million acres of Alaska within a seven-year period. This goal has been addressed by using an integrated approach to computer-aided analysis that combines efficient use of field data with the only consistent statewide spatial data sets available: Landsat multispectral scanner data, digital elevation data derived from 1:250 000-scale maps, and 1:60 000-scale color-infrared aerial photographs.

  16. Global Rapid Flood Mapping System with Spaceborne SAR Data

    NASA Astrophysics Data System (ADS)

    Yun, S. H.; Owen, S. E.; Hua, H.; Agram, P. S.; Fattahi, H.; Liang, C.; Manipon, G.; Fielding, E. J.; Rosen, P. A.; Webb, F.; Simons, M.

    2017-12-01

    As part of the Advanced Rapid Imaging and Analysis (ARIA) project for Natural Hazards, at NASA's Jet Propulsion Laboratory and California Institute of Technology, we have developed an automated system that produces derived products for flood extent map generation using spaceborne SAR data. The system takes the user's input of area-of-interest polygons and a time window for the SAR data search (pre- and post-event). Then the system automatically searches and downloads SAR data, processes them to produce coregistered SAR image pairs, and generates log amplitude ratio images from each pair. Currently the system is automated to support SAR data from the European Space Agency's Sentinel-1A/B satellites. We have used the system to produce flood extent maps from Sentinel-1 SAR data for the May 2017 Sri Lanka floods, which killed more than 200 people and displaced about 600,000 people. Our flood extent maps were delivered to the Red Cross to support response efforts. Earlier we also responded to the historic August 2016 Louisiana floods in the United States, which claimed 13 people's lives and caused over $10 billion property damage. For this event, we made synchronized observations from space, air, and ground in close collaboration with USGS and NOAA. The USGS field crews acquired ground observation data, and NOAA acquired high-resolution airborne optical imagery within the time window of +/-2 hours of the SAR data acquisition by JAXA's ALOS-2 satellite. The USGS coordinates of flood water boundaries were used to calibrate our flood extent map derived from the ALOS-2 SAR data, and the map was delivered to FEMA for estimating the number of households affected. Based on the lessons learned from this response effort, we customized the ARIA system automation for rapid flood mapping and developed a mobile-friendly web app that can easily be used in the field for data collection. Rapid automatic generation of SAR-based global flood maps calibrated with independent observations from ground, air, and space will provide reliable snapshots of the extent of many flooding events. SAR missions with easy data access, such as Sentinel-1 and NASA's upcoming NISAR mission, combined with the ARIA system, will enable forming a library of flood extent maps, which can soon support the flood modeling community by providing observation-based constraints.
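
    The derived product named above, the log amplitude ratio image, reduces to a few lines once the pre- and post-event scenes are coregistered; the file names and the -3 dB threshold are placeholders to be calibrated against independent observations, as described for the Louisiana response.

    ```python
    import numpy as np

    pre = np.load("sar_amp_pre.npy")    # coregistered pre-event amplitude
    post = np.load("sar_amp_post.npy")  # post-event amplitude, same grid

    # Calm open water acts as a specular reflector, so new flooding typically
    # appears as a strong drop in backscatter between the two acquisitions.
    eps = 1e-6
    log_ratio = 10 * np.log10((post + eps) / (pre + eps))

    flood = log_ratio < -3.0   # dB threshold; calibrate with ground/air data
    print(f"flagged {flood.mean():.1%} of pixels as possible flooding")
    ```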

  17. Tamarisk Mapping and Monitoring Using High Resolution Satellite Imagery

    Treesearch

    Jason W. San Souci; John T. Doyle

    2006-01-01

    QuickBird high resolution multispectral satellite imagery (60 cm GSD, 4 spectral bands) and calibrated products from DigitalGlobe’s AgroWatch program were used as inputs to Visual Learning System’s Feature Analyst automated feature extraction software to map localized occurrences of pervasive and aggressive Tamarisk (Tamarix ramosissima), an invasive...

  18. Automated method for measuring the extent of selective logging damage with airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Melendy, L.; Hagen, S. C.; Sullivan, F. B.; Pearson, T. R. H.; Walker, S. M.; Ellis, P.; Kustiyo; Sambodo, Ari Katmoko; Roswintiarti, O.; Hanson, M. A.; Klassen, A. W.; Palace, M. W.; Braswell, B. H.; Delgado, G. M.

    2018-05-01

    Selective logging has an impact on the global carbon cycle, as well as on the forest micro-climate and on longer-term changes in erosion, soil and nutrient cycling, and fire susceptibility. Our ability to quantify these impacts depends on methods and tools that accurately identify the extent and features of logging activity. LiDAR-based measurements of these features offer significant promise. Here, we present a set of algorithms for automated detection and mapping of critical features associated with logging - roads/decks, skid trails, and gaps - using commercial airborne LiDAR data as input. The automated algorithm was applied to commercial LiDAR data collected over two logging concessions in Kalimantan, Indonesia in 2014, and the algorithm results were compared to measurements of the logging features collected in the field soon after logging was complete. The algorithm-mapped road/deck and skid trail features match closely with features measured in the field, with agreement levels ranging from 69% to 99% when adjusting for GPS location error. The algorithm performed most poorly with gaps, which, by their nature, are irregular because they result from the unpredictable impact of tree fall, in contrast to the linear, regular features created directly by mechanical means. Overall, the automated algorithm performs well and offers significant promise as a generalizable tool for efficiently and accurately capturing the effects of selective logging, including the potential to distinguish reduced-impact logging from conventional logging.
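
    One plausible way to separate linear features (roads/decks, skid trails) from canopy gaps, sketched under stated assumptions: threshold a canopy height model (CHM) for cleared pixels and split the connected regions by shape. The 5 m canopy threshold, the eccentricity cutoff and the toy CHM are invented for illustration; the published algorithm is more elaborate.

        import numpy as np
        from scipy import ndimage
        from skimage.measure import regionprops

        def classify_cleared_regions(chm, canopy_thresh=5.0, ecc_split=0.97):
            """Return (linear_mask, gap_mask) from a CHM in meters."""
            cleared = chm < canopy_thresh                    # low or absent canopy
            labels, _ = ndimage.label(cleared)
            linear = np.zeros_like(cleared)
            gaps = np.zeros_like(cleared)
            for r in regionprops(labels):
                # elongated regions look road/trail-like, compact ones gap-like
                target = linear if r.eccentricity > ecc_split else gaps
                target[labels == r.label] = True
            return linear, gaps

        # toy CHM: tall forest with a straight "skid trail" and a round "gap"
        chm = np.full((200, 200), 25.0)
        chm[95:105, :] = 1.0                                 # linear clearing
        yy, xx = np.ogrid[:200, :200]
        chm[(yy - 50) ** 2 + (xx - 150) ** 2 < 15 ** 2] = 1.0  # round gap
        linear, gaps = classify_cleared_regions(chm)
        print(linear.sum(), "linear px;", gaps.sum(), "gap px")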

  19. Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.

    PubMed

    Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A

    2011-04-01

    Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase, followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual-phase data acquisition technique. Copyright © 2011 Elsevier Inc. All rights reserved.
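
    The pooled-covariance Mahalanobis step can be sketched compactly. Here each voxel carries a small feature vector (say, correlation with an arterial reference curve and an arrival delay; the feature choice and the toy data are assumptions), and a voxel is labeled artery or vein by whichever ROI yields the smaller Mahalanobis distance:

        import numpy as np

        def pooled_covariance(a, b):
            """Pooled sample covariance of two feature matrices (rows = samples)."""
            ca, cb = np.cov(a, rowvar=False), np.cov(b, rowvar=False)
            na, nb = len(a), len(b)
            return ((na - 1) * ca + (nb - 1) * cb) / (na + nb - 2)

        def mahalanobis_labels(x, artery_roi, vein_roi):
            """Label each row of x as artery (0) or vein (1) by the smaller MD."""
            s_inv = np.linalg.inv(pooled_covariance(artery_roi, vein_roi))
            def md(pts, mu):
                d = pts - mu
                return np.einsum("ij,jk,ik->i", d, s_inv, d)   # squared MD per row
            return (md(x, artery_roi.mean(0)) > md(x, vein_roi.mean(0))).astype(int)

        # toy (correlation, delay) features for artery and vein ROI samples
        rng = np.random.default_rng(2)
        art = rng.normal([1.0, 0.0], 0.1, (50, 2))
        vein = rng.normal([0.6, 0.4], 0.1, (50, 2))
        print(mahalanobis_labels(np.vstack([art[:3], vein[:3]]), art, vein))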

  20. Automated vocabulary discovery for geo-parsing online epidemic intelligence.

    PubMed

    Keller, Mikaela; Freifeld, Clark C; Brownstein, John S

    2009-11-24

    Automated surveillance of the Internet provides a timely and sensitive method for alerting on global emerging infectious disease threats. HealthMap is part of a new generation of online systems designed to monitor and visualize, on a real-time basis, disease outbreak alerts as reported by online news media and public health sources. HealthMap is of specific interest for national and international public health organizations and international travelers. A particular task that makes such surveillance useful is the automated discovery of the geographic references contained in the retrieved outbreak alerts; this task is sometimes referred to as "geo-parsing". A typical approach to geo-parsing would demand an expensive training corpus of alerts manually tagged by a human. Given that human readers perform this kind of task by using both their lexical and contextual knowledge, we developed an approach that relies on a relatively small expert-built gazetteer, thus limiting the need for human input, but focuses on learning the context in which geographic references appear. We show, in a set of experiments, that this approach exhibits a substantial capacity to discover geographic locations outside of its initial lexicon. The results of this analysis provide a framework for future automated global surveillance efforts that reduce manual input and improve timeliness of reporting.
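
    For illustration, a toy sketch (not HealthMap's actual model) of the core idea: learn the words that surround known gazetteer entries, then flag capitalized tokens whose surrounding context resembles that learned context. The gazetteer entries, sentences, window size and score threshold are all made-up assumptions.

        from collections import Counter

        GAZETTEER = {"Uganda", "Jakarta", "Manila"}

        def context_counts(tokens, names, window=2):
            """Count the words appearing within `window` tokens of known names."""
            ctx = Counter()
            for i, tok in enumerate(tokens):
                if tok in names:
                    ctx.update(t.lower() for t in tokens[max(0, i - window):i])
                    ctx.update(t.lower() for t in tokens[i + 1:i + 1 + window])
            return ctx

        def propose_locations(tokens, ctx, window=2, min_score=2):
            """Capitalized tokens outside the gazetteer with geographic-looking context."""
            found = set()
            for i, tok in enumerate(tokens):
                if tok.istitle() and tok not in GAZETTEER:
                    around = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
                    if sum(ctx[t.lower()] for t in around) >= min_score:
                        found.add(tok)
            return found

        train = "An outbreak was reported in Uganda and later in Jakarta .".split()
        test = "A new outbreak was reported in Luanda this week .".split()
        ctx = context_counts(train, GAZETTEER)
        print(propose_locations(test, ctx))   # {'Luanda'}: context matches the lexicon's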

  1. Open-Source Automated Mapping Four-Point Probe.

    PubMed

    Chandra, Handy; Allen, Spencer W; Oberloier, Shane W; Bihari, Nupur; Gwamuri, Jephias; Pearce, Joshua M

    2017-01-26

    Scientists have begun using self-replicating rapid prototyper (RepRap) 3-D printers to manufacture open source digital designs of scientific equipment. This approach is refined here to develop a novel instrument capable of performing automated large-area four-point probe measurements. The designs for conversion of a RepRap 3-D printer to a 2-D open source four-point probe (OS4PP) measurement device are detailed for the mechanical and electrical systems. Free and open source software and firmware are developed to operate the tool. The OS4PP was validated against a wide range of discrete resistors and indium tin oxide (ITO) samples of different thicknesses both pre- and post-annealing. The OS4PP was then compared to two commercial proprietary systems. Results for resistors from 10 Ω to 1 MΩ show errors of less than 1% for the OS4PP. The 3-D mapping of sheet resistance of ITO samples successfully demonstrated the automated capability to measure non-uniformities in large-area samples. The results indicate that all measured values are within the same order of magnitude when compared to two proprietary measurement systems. In conclusion, the OS4PP system, which costs less than 70% of manual proprietary systems, is comparable electrically while offering automated 100-micron positional accuracy for measuring sheet resistance over larger areas.
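
    The arithmetic behind converting a four-point probe reading into sheet resistance can be sketched as follows. The ideal thin-film formula Rs = (π/ln 2)(V/I) ≈ 4.532 V/I assumes a collinear, equally spaced probe on a film much thinner and wider than the probe spacing; the grid readings below are invented for illustration.

        import math

        def sheet_resistance(v_volts, i_amps):
            """Ideal thin-film sheet resistance (ohm/sq) for a collinear 4-point probe."""
            return (math.pi / math.log(2)) * (v_volts / i_amps)

        # a tiny "map": (x, y, V, I) readings over a sample grid
        readings = [(0, 0, 0.100, 1e-3), (0, 1, 0.104, 1e-3), (1, 0, 0.121, 1e-3)]
        for x, y, v, i in readings:
            print(f"({x},{y}) Rs = {sheet_resistance(v, i):.1f} ohm/sq")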

  2. Automated Segmentation of Kidneys from MR Images in Patients with Autosomal Dominant Polycystic Kidney Disease

    PubMed Central

    Kim, Youngwoo; Ge, Yinghui; Tao, Cheng; Zhu, Jianbing; Chapman, Arlene B.; Torres, Vicente E.; Yu, Alan S.L.; Mrug, Michal; Bennett, William M.; Flessner, Michael F.; Landsittel, Doug P.

    2016-01-01

    Background and objectives: Our study developed a fully automated method for segmentation and volumetric measurements of kidneys from magnetic resonance images in patients with autosomal dominant polycystic kidney disease and assessed the performance of the automated method against the reference manual segmentation method. Design, setting, participants, & measurements: Study patients were selected from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease. At the enrollment of the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease Study in 2000, patients with autosomal dominant polycystic kidney disease were between 15 and 46 years of age with relatively preserved GFRs. Our fully automated segmentation method was based on a spatial prior probability map of the location of kidneys in abdominal magnetic resonance images and regional mapping with total variation regularization and propagated shape constraints that were formulated into a level set framework. T2-weighted magnetic resonance image sets of 120 kidneys were selected from 60 patients with autosomal dominant polycystic kidney disease and divided into training and test datasets. The performance of the automated method in reference to the manual method was assessed by means of two metrics: the Dice similarity coefficient and the intraclass correlation coefficient of segmented kidney volume. The training and test sets were swapped for cross-validation and reanalyzed. Results: Successful segmentation of kidneys was performed with the automated method in all test patients. The segmented kidney volumes ranged from 177.2 to 2634 ml (mean, 885.4±569.7 ml). The mean Dice similarity coefficient ±SD between the automated and manual methods was 0.88±0.08. The mean correlation coefficient between the two segmentation methods for the segmented volume measurements was 0.97 (P<0.001 for each cross-validation set). The results from the cross-validation sets were highly comparable. Conclusions: We have developed a fully automated method for segmentation of kidneys from abdominal magnetic resonance images in patients with autosomal dominant polycystic kidney disease with varying kidney volumes. The performance of the automated method was in good agreement with that of the manual method. PMID:26797708

  3. Automated Segmentation of Kidneys from MR Images in Patients with Autosomal Dominant Polycystic Kidney Disease.

    PubMed

    Kim, Youngwoo; Ge, Yinghui; Tao, Cheng; Zhu, Jianbing; Chapman, Arlene B; Torres, Vicente E; Yu, Alan S L; Mrug, Michal; Bennett, William M; Flessner, Michael F; Landsittel, Doug P; Bae, Kyongtae T

    2016-04-07

    Our study developed a fully automated method for segmentation and volumetric measurements of kidneys from magnetic resonance images in patients with autosomal dominant polycystic kidney disease and assessed the performance of the automated method against the reference manual segmentation method. Study patients were selected from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease. At the enrollment of the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease Study in 2000, patients with autosomal dominant polycystic kidney disease were between 15 and 46 years of age with relatively preserved GFRs. Our fully automated segmentation method was based on a spatial prior probability map of the location of kidneys in abdominal magnetic resonance images and regional mapping with total variation regularization and propagated shape constraints that were formulated into a level set framework. T2-weighted magnetic resonance image sets of 120 kidneys were selected from 60 patients with autosomal dominant polycystic kidney disease and divided into training and test datasets. The performance of the automated method in reference to the manual method was assessed by means of two metrics: the Dice similarity coefficient and the intraclass correlation coefficient of segmented kidney volume. The training and test sets were swapped for cross-validation and reanalyzed. Successful segmentation of kidneys was performed with the automated method in all test patients. The segmented kidney volumes ranged from 177.2 to 2634 ml (mean, 885.4±569.7 ml). The mean Dice similarity coefficient ±SD between the automated and manual methods was 0.88±0.08. The mean correlation coefficient between the two segmentation methods for the segmented volume measurements was 0.97 (P<0.001 for each cross-validation set). The results from the cross-validation sets were highly comparable. We have developed a fully automated method for segmentation of kidneys from abdominal magnetic resonance images in patients with autosomal dominant polycystic kidney disease with varying kidney volumes. The performance of the automated method was in good agreement with that of the manual method. Copyright © 2016 by the American Society of Nephrology.
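
    For reference, the Dice similarity coefficient reported above is simple to compute from two binary masks; the toy masks below are illustrative:

        import numpy as np

        def dice(a, b):
            """Dice similarity coefficient of two boolean masks."""
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        auto = np.zeros((64, 64), bool); auto[10:40, 10:40] = True
        manual = np.zeros((64, 64), bool); manual[12:42, 12:42] = True
        print(f"Dice = {dice(auto, manual):.3f}")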

  4. Acoustic mapping of shallow water gas releases using shipborne multibeam systems

    NASA Astrophysics Data System (ADS)

    Urban, Peter; Köser, Kevin; Weiß, Tim; Greinert, Jens

    2015-04-01

    Water column imaging (WCI) shipborne multibeam systems are effective tools for investigating marine free gas (bubble) release. Like single- and split-beam systems, they are very sensitive to gas bubbles in the water column, and they have the advantage of a wide swath opening angle of 120° or more, allowing better mapping and possible 3D investigation of targets in the water column. On the downside, WCI data are degraded by specific noise from side-lobe effects and are usually not calibrated for target backscattering strength analysis. Most approaches so far have concentrated on manual investigation of bubbles in the water column data. Such investigations allow the detection of bubble streams (flares) and make it possible to get an impression of the strength of detected flares and of the gas release. Because of the subjective character of these investigations, it is difficult to assess how well an area has been covered by a flare mapping survey, and subjective impressions of flare strength can easily be fooled by the many acoustic effects multibeam systems create. Here we present a semi-automated approach that uses the behavior of bubble streams in varying water currents to detect and map their exact source positions. The focus of the method is the application of objective rules for flare detection, which makes it possible to extract information about the quality of the seepage mapping survey, perform automated noise reduction and create acoustic maps with quality discriminators indicating how well an area has been mapped.

  5. Intraoperative Subcortical Electrical Mapping of the Optic Tract in Awake Surgery Using a Virtual Reality Headset.

    PubMed

    Mazerand, Edouard; Le Renard, Marc; Hue, Sophie; Lemée, Jean-Michel; Klinger, Evelyne; Menei, Philippe

    2017-01-01

    Brain mapping during awake craniotomy is a well-known technique for preserving neurological functions, especially language. It is still challenging to map the optic radiations because of the difficulty of testing the visual field intraoperatively. To assess the visual field during awake craniotomy, we developed the Functions' Explorer, based on a virtual reality headset (FEX-VRH). The impaired visual fields of 10 patients were tested with automated perimetry (the gold-standard examination) and the FEX-VRH. A proof-of-concept test was done during surgery performed on a patient who was blind in his right eye and presented with a left parietotemporal glioblastoma. The FEX-VRH was used intraoperatively, simultaneously with direct subcortical electrostimulation, allowing identification and preservation of the optic radiations. The FEX-VRH detected 9 of the 10 visual field defects found by automated perimetry. The patient who underwent an awake craniotomy with intraoperative mapping of the optic tract using the FEX-VRH had no permanent postoperative visual field defect. Intraoperative visual field assessment with the FEX-VRH during direct subcortical electrostimulation is a promising approach to mapping the optic radiations and preventing a permanent visual field defect during awake surgery for epilepsy or tumor. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Rapid mapping of compound eye visual sampling parameters with FACETS, a highly automated wide-field goniometer.

    PubMed

    Douglass, John K; Wehling, Martin F

    2016-12-01

    A highly automated goniometer instrument (called FACETS) has been developed to facilitate rapid mapping of compound eye parameters for investigating regional visual field specializations. The instrument demonstrates the feasibility of analyzing the complete field of view of an insect eye in a fraction of the time required if using non-motorized, non-computerized methods. Faster eye mapping makes it practical for the first time to employ sample sizes appropriate for testing hypotheses about the visual significance of interspecific differences in regional specializations. Example maps of facet sizes are presented from four dipteran insects representing the Asilidae, Calliphoridae, and Stratiomyidae. These maps provide the first quantitative documentation of the frontal enlarged-facet zones (EFZs) that typify asilid eyes, which, together with the EFZs in male Calliphoridae, are likely to be correlated with high-spatial-resolution acute zones. The presence of EFZs contrasts sharply with the almost homogeneous distribution of facet sizes in the stratiomyid. Moreover, the shapes of EFZs differ among species, suggesting functional specializations that may reflect differences in visual ecology. Surveys of this nature can help identify species that should be targeted for additional studies, which will elucidate fundamental principles and constraints that govern visual field specializations and their evolution.

  7. Detection of the nipple in automated 3D breast ultrasound using coronal slab-average-projection and cumulative probability map

    NASA Astrophysics Data System (ADS)

    Kim, Hannah; Hong, Helen

    2014-03-01

    We propose an automatic method for nipple detection on 3D automated breast ultrasound (3D ABUS) images using coronal slab-average projection and a cumulative probability map. First, to identify coronal images that show a marked distinction between the nipple-areola region and the skin, the skewness of each coronal image is measured and the negatively skewed images are selected. Then, a coronal slab-average-projection image is reformatted from the selected images. Second, to localize the nipple-areola region, an elliptical ROI covering the nipple-areola region is detected using a Hough ellipse transform in the coronal slab-average-projection image. Finally, to separate the nipple from the areola region, 3D Otsu thresholding is applied to the elliptical ROI and a cumulative probability map in the elliptical ROI is generated by assigning high probability to low-intensity regions. Falsely detected small components are eliminated using morphological opening, and the center point of the detected nipple region is calculated. Experimental results show that our method provides a 94.4% nipple detection rate.
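
    The first step, selecting negatively skewed coronal slices, can be sketched as follows; the toy volume, the slice axis and the -0.5 skewness cutoff are assumptions for illustration:

        import numpy as np
        from scipy.stats import skew

        def select_negatively_skewed(volume, threshold=-0.5):
            """Return indices of coronal slices (axis 0) with strongly negative skewness."""
            scores = [skew(s.ravel()) for s in volume]
            return [i for i, s in enumerate(scores) if s < threshold]

        rng = np.random.default_rng(3)
        vol = rng.normal(0.6, 0.05, (40, 128, 128))       # toy ultrasound volume
        vol[18:22, 40:60, 40:60] = 0.1                     # dark region in a few slices
        print(select_negatively_skewed(vol))               # e.g. [18, 19, 20, 21]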

  8. Solvation Structure and Thermodynamic Mapping (SSTMap): An Open-Source, Flexible Package for the Analysis of Water in Molecular Dynamics Trajectories.

    PubMed

    Haider, Kamran; Cruz, Anthony; Ramsey, Steven; Gilson, Michael K; Kurtzman, Tom

    2018-01-09

    We have developed SSTMap, a software package for mapping structural and thermodynamic water properties in molecular dynamics trajectories. The package introduces automated analysis and mapping of local measures of frustration and enhancement of water structure. The thermodynamic calculations are based on Inhomogeneous Fluid Solvation Theory (IST), which is implemented using both site-based and grid-based approaches. The package also extends the applicability of solvation analysis calculations to multiple molecular dynamics (MD) simulation programs by using existing cross-platform tools for parsing MD parameter and trajectory files. SSTMap is implemented in Python and contains both command-line tools and a Python module to facilitate flexibility in setting up calculations and for automated generation of large data sets involving analysis of multiple solutes. Output is generated in formats compatible with popular Python data science packages. This tool will be used by the molecular modeling community for computational analysis of water in problems of biophysical interest such as ligand binding and protein function.

  9. Towards data integration automation for the French rare disease registry.

    PubMed

    Maaroufi, Meriem; Choquet, Rémy; Landais, Paul; Jaulent, Marie-Christine

    2015-01-01

    Building a medical registry upon an existing infrastructure and rooted practices is not an easy task. This is the case for the BNDMR project, the French rare disease registry, which aims to collect administrative and medical data on rare disease patients seen in different hospitals. To avoid duplicating data entry for health professionals, the project plans to deploy connectors to the existing systems to automatically retrieve data. Given the data heterogeneity and the large number of source systems, automation of connector creation is required. In this context, we propose a methodology that optimizes the use of existing alignment approaches in the data integration process. The generated mappings are formalized in exploitable mapping expressions. Following this methodology, the process was tested on specific data types of a source system: Booleans and predefined lists. As a result, the effectiveness of the alignment approach used was enhanced and more valid mappings were detected. Nonetheless, further improvements could be made to address semantic issues and to process other data types.

  10. AEDs at your fingertips: automated external defibrillators on college campuses and a novel approach for increasing accessibility.

    PubMed

    Berger, Ryan J; O'Shea, Jesse G

    2014-01-01

    The use of automated external defibrillators (AEDs) increases survival in cardiac arrest events. In light of the success of previous efforts and of free, readily available mobile mapping software, this discussion emphasizes the importance of AED use in preventing sudden cardiac arrest-related deaths on college campuses and beyond, while suggesting a novel approach to improving access and awareness. A user-friendly mobile application (a low-cost iOS map) was developed at Florida State University to decrease AED retrieval distance and time. The development of mobile AED maps is feasible for a variety of universities and other entities, with the potential to save lives. Just having AEDs installed is not enough: they need to be easily locatable. Society increasingly relies on phones to provide information, and there are opportunities to use mobile technology to locate and share information about relevant emergency devices; these should be incorporated into the chain of survival.

  11. MAIN software for density averaging, model building, structure refinement and validation

    PubMed Central

    Turk, Dušan

    2013-01-01

    MAIN is software that has been designed to interactively perform the complex tasks of macromolecular crystal structure determination and validation. Using MAIN, it is possible to perform density modification, manual and semi-automated or automated model building and rebuilding, real- and reciprocal-space structure optimization and refinement, map calculations and various types of molecular structure validation. The prompt availability of various analytical tools and the immediate visualization of molecular and map objects allow a user to efficiently progress towards the completed refined structure. The extraordinary depth perception of molecular objects in three dimensions that is provided by MAIN is achieved by the clarity and contrast of colours and the smooth rotation of the displayed objects. MAIN allows simultaneous work on several molecular models and various crystal forms. The strength of MAIN lies in its manipulation of averaged density maps and molecular models when noncrystallographic symmetry (NCS) is present. Using MAIN, it is possible to optimize NCS parameters and envelopes and to refine the structure in single or multiple crystal forms. PMID:23897458

  12. Automated structure refinement of macromolecular assemblies from cryo-EM maps using Rosetta.

    PubMed

    Wang, Ray Yu-Ruei; Song, Yifan; Barad, Benjamin A; Cheng, Yifan; Fraser, James S; DiMaio, Frank

    2016-09-26

    Cryo-EM has revealed the structures of many challenging yet exciting macromolecular assemblies at near-atomic resolution (3-4.5 Å), providing biological phenomena with molecular descriptions. However, at these resolutions, accurately positioning individual atoms remains challenging and error-prone. Manually refining thousands of amino acids - typical in a macromolecular assembly - is tedious and time-consuming. We present an automated method that can improve the atomic details in models that are manually built in near-atomic-resolution cryo-EM maps. Applying the method to three systems recently solved by cryo-EM, we are able to improve model geometry while maintaining the fit-to-density. Backbone placement errors are automatically detected and corrected, and the refinement shows a large radius of convergence. The results demonstrate that the method is amenable to structures with symmetry, of very large size, and containing RNA as well as covalently bound ligands. The method should streamline the cryo-EM structure determination process, providing accurate and unbiased atomic structure interpretation of such maps.

  13. Automated peroperative assessment of stents apposition from OCT pullbacks.

    PubMed

    Dubuisson, Florian; Péry, Emilie; Ouchchane, Lemlih; Combaret, Nicolas; Kauffmann, Claude; Souteyrand, Géraud; Motreff, Pascal; Sarry, Laurent

    2015-04-01

    This study's aim was to assess stent apposition by automatically analyzing endovascular optical coherence tomography (OCT) sequences. The lumen is detected using threshold, morphological and gradient operators to run a Dijkstra algorithm. Wrong detections tagged by the user and caused by bifurcations, the presence of struts, thrombotic lesions or dissections can be corrected using a morphing algorithm. Struts are also segmented by computing symmetrical and morphological operators. The Euclidean distance between detected struts and the artery wall initializes the stent's complete distance map, and missing data are interpolated with thin-plate spline functions. Rejection of detected outliers, regularization of parameters by generalized cross-validation and use of the one-sided cyclic property of the map further optimize accuracy. Several indices computed from the map provide quantitative measures of malapposition. The algorithm was run on four in vivo OCT sequences including different cases of incomplete stent apposition. Comparison with manual expert measurements validates the segmentation's accuracy and shows an almost perfect concordance of the automated results. Copyright © 2014 Elsevier Ltd. All rights reserved.
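
    A brief sketch of the thin-plate spline completion step, using SciPy's RBFInterpolator with a thin_plate_spline kernel: sparse strut-to-wall distances in an unrolled (angle, frame) parameterization are interpolated to a full apposition map. The coordinates, toy distances and smoothing value are assumptions; the paper's outlier rejection, generalized cross-validation and cyclic handling are omitted here.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(4)
        pts = rng.uniform([0, 0], [360, 50], size=(120, 2))    # (angle deg, frame idx)
        vals = 0.1 + 0.05 * np.sin(np.deg2rad(pts[:, 0]))      # toy distances (mm)

        tps = RBFInterpolator(pts, vals, kernel="thin_plate_spline", smoothing=1e-6)
        ang, frm = np.meshgrid(np.arange(0, 360, 5), np.arange(50), indexing="ij")
        grid = np.column_stack([ang.ravel(), frm.ravel()])
        dist_map = tps(grid).reshape(ang.shape)                # 72 x 50 apposition map
        print(dist_map.shape, dist_map.min().round(3), dist_map.max().round(3))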

  14. Automated cell-type classification in intact tissues by single-cell molecular profiling

    PubMed Central

    2018-01-01

    A major challenge in biology is identifying distinct cell classes and mapping their interactions in vivo. Tissue-dissociative technologies enable deep single cell molecular profiling but do not provide spatial information. We developed a proximity ligation in situ hybridization technology (PLISH) with exceptional signal strength, specificity, and sensitivity in tissue. Multiplexed data sets can be acquired using barcoded probes and rapid label-image-erase cycles, with automated calculation of single cell profiles, enabling clustering and anatomical re-mapping of cells. We apply PLISH to expression profile ~2900 cells in intact mouse lung, which identifies and localizes known cell types, including rare ones. Unsupervised classification of the cells indicates differential expression of ‘housekeeping’ genes between cell types, and re-mapping of two sub-classes of Club cells highlights their segregated spatial domains in terminal airways. By enabling single cell profiling of various RNA species in situ, PLISH can impact many areas of basic and medical research. PMID:29319504

  15. Towards data integration automation for the French rare disease registry

    PubMed Central

    Maaroufi, Meriem; Choquet, Rémy; Landais, Paul; Jaulent, Marie-Christine

    2015-01-01

    Building a medical registry upon an existing infrastructure and rooted practices is not an easy task. This is the case for the BNDMR project, the French rare disease registry, which aims to collect administrative and medical data on rare disease patients seen in different hospitals. To avoid duplicating data entry for health professionals, the project plans to deploy connectors to the existing systems to automatically retrieve data. Given the data heterogeneity and the large number of source systems, automation of connector creation is required. In this context, we propose a methodology that optimizes the use of existing alignment approaches in the data integration process. The generated mappings are formalized in exploitable mapping expressions. Following this methodology, the process was tested on specific data types of a source system: Booleans and predefined lists. As a result, the effectiveness of the alignment approach used was enhanced and more valid mappings were detected. Nonetheless, further improvements could be made to address semantic issues and to process other data types. PMID:26958224

  16. Automated segmentation of chronic stroke lesions using LINDA: Lesion Identification with Neighborhood Data Analysis

    PubMed Central

    Pustina, Dorian; Coslett, H. Branch; Turkeltaub, Peter E.; Tustison, Nicholas; Schwartz, Myrna F.; Avants, Brian

    2015-01-01

    The gold standard for identifying stroke lesions is manual tracing, a method that is known to be observer dependent and time consuming, and thus impractical for big-data studies. We propose LINDA (Lesion Identification with Neighborhood Data Analysis), an automated segmentation algorithm capable of learning the relationship between existing manual segmentations and a single T1-weighted MRI. A dataset of 60 left-hemispheric chronic stroke patients is used to build the method and test it with k-fold and leave-one-out procedures. With respect to manual tracings, predicted lesion maps showed a mean Dice overlap of 0.696±0.16, a Hausdorff distance of 17.9±9.8 mm, and an average displacement of 2.54±1.38 mm. The manual and predicted lesion volumes correlated at r=0.961. An additional dataset of 45 patients was utilized to test LINDA with independent data, achieving high accuracy rates and confirming its cross-institutional applicability. To investigate the cost of moving from manual tracings to automated segmentation, we performed comparative lesion-to-symptom mapping (LSM) on five behavioral scores. Predicted and manual lesions produced similar neuro-cognitive maps, albeit with some discrepancies, which we discuss. Of note, region-wise LSM was more robust to the prediction error than voxel-wise LSM. Our results show that, while several limitations exist, our current results compete with or exceed the state of the art, producing consistent predictions, very low failure rates, and transferable knowledge between labs. This work also establishes a new viewpoint on evaluating automated methods not only by segmentation accuracy but also by brain-behavior relationships. LINDA is made available online with trained models from over 100 patients. PMID:26756101
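
    The Hausdorff distance quoted above can be computed from two binary masks with SciPy, as in this small sketch (toy masks, distances in voxel units):

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def hausdorff(mask_a, mask_b):
            """Symmetric Hausdorff distance (in voxels) between two binary masks."""
            a = np.argwhere(mask_a)    # coordinates of lesion voxels
            b = np.argwhere(mask_b)
            return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

        pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
        manual = np.zeros((64, 64), bool); manual[22:44, 21:41] = True
        print(f"Hausdorff = {hausdorff(pred, manual):.2f} voxels")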

  17. Ultramap v3 - a Revolution in Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.

    2012-07-01

    In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system which enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images to a homogeneous block. UltraMap V3 continues this innovation and offers a revolution in terms of ortho processing. A fully automated dense matching module produces high-precision digital surface models (DSMs), which are calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. Given knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap V3 is the first fully integrated and interactive solution for making the best use of UltraCam images in order to deliver DSM and ortho imagery.

  18. Wetland delineation with IKONOS high-resolution satellite imagery, Fort Custer Training Center, Battle Creek, Michigan, 2005

    USGS Publications Warehouse

    Fuller, L.M.; Morgan, T.R.; Aichele, Stephen S.

    2006-01-01

    The Michigan Army National Guard's Fort Custer Training Center (FCTC) in Battle Creek, Mich., is responsible for protecting wetland resources on the training grounds while providing training opportunities, and for planning future development at the facility. The National Wetlands Inventory (NWI) data have been the primary wetland-boundary resource, but a check on the scale and accuracy of the wetland boundary information for the Fort Custer Training Center was needed. In cooperation with the FCTC, the U.S. Geological Survey (USGS) used an early-spring IKONOS pan-sharpened satellite image to delineate the wetlands and create a more accurate wetland map for the FCTC. The USGS tested automated approaches (supervised and unsupervised classifications) to identify the wetland areas from the IKONOS satellite image, but the automated approaches alone did not yield accurate results. To ensure accurate wetland boundaries, the final wetland map was manually digitized on the basis of the automated supervised and unsupervised classifications, in combination with NWI data, field verification, and visual interpretation of the IKONOS satellite image. The final wetland areas digitized from the IKONOS satellite imagery were similar to those in the NWI; however, the wetland boundaries differed in some areas, a few wetlands mapped in the NWI were determined from the IKONOS image and field verification not to be wetlands, and additional previously unmapped wetlands not recognized by the NWI were identified from the IKONOS image.

  19. Automated reference-free detection of motion artifacts in magnetic resonance images.

    PubMed

    Küstner, Thomas; Liebgott, Annika; Mauch, Lukas; Martirosian, Petros; Bamberg, Fabian; Nikolaou, Konstantin; Yang, Bin; Schick, Fritz; Gatidis, Sergios

    2018-04-01

    Our objectives were to provide an automated method for spatially resolved detection and quantification of motion artifacts in MR images of the head and abdomen, as well as a quality control of the trained architecture. T1-weighted MR images of the head and the upper abdomen were acquired in 16 healthy volunteers at rest and under motion. Images were divided into overlapping patches of different sizes, achieving spatial separation. Using these patches as input data, a convolutional neural network (CNN) was trained to derive probability maps for the presence of motion artifacts. A deep visualization offers a human-interpretable quality control of the trained CNN. Results were visually assessed on probability maps and as classification accuracy on a per-patch, per-slice and per-volunteer basis. On visual assessment, a clear difference in probability maps was observed between data sets acquired with and without motion. The overall accuracy of motion detection on a per-patch/per-volunteer basis reached 97%/100% in the head and 75%/100% in the abdomen, respectively. Automated detection of motion artifacts in MRI is feasible with good accuracy in the head and abdomen. The proposed method provides quantification and localization of artifacts as well as a visualization of the learned content. It may be extended to other anatomic areas and used for quality assurance of MR images.
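
    The overlapping-patch scheme can be sketched generically: split the image into overlapping patches, score each one (a trained CNN in the paper; a variance-based stand-in here, which, like the patch size and stride, is an illustrative assumption), and average the scores back into a spatial probability map:

        import numpy as np

        def patch_probability_map(img, score_fn, patch=32, stride=16):
            """Average per-patch scores back into a per-pixel probability map."""
            prob = np.zeros_like(img, dtype=float)
            hits = np.zeros_like(img, dtype=float)
            for y in range(0, img.shape[0] - patch + 1, stride):
                for x in range(0, img.shape[1] - patch + 1, stride):
                    p = score_fn(img[y:y + patch, x:x + patch])
                    prob[y:y + patch, x:x + patch] += p
                    hits[y:y + patch, x:x + patch] += 1.0
            return prob / np.maximum(hits, 1.0)

        # stand-in for the trained CNN: high "motion" score for high local variance
        score = lambda p: float(p.std() > 0.25)
        rng = np.random.default_rng(6)
        img = rng.normal(0.5, 0.05, (128, 128))
        img[:, 64:] += rng.normal(0.0, 0.5, (128, 64))    # "motion-corrupted" half
        pmap = patch_probability_map(img, score)
        print(pmap[:, :64].mean().round(2), pmap[:, 64:].mean().round(2))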

  20. Automated detection of extended sources in radio maps: progress from the SCORPIO survey

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Ingallinera, A.; Leto, P.; Cavallaro, F.; Bufano, F.; Schillirò, F.; Trigilio, C.; Umana, G.; Buemi, C. S.; Norris, R. P.

    2016-08-01

    Automated source extraction and parametrization represents a crucial challenge for the next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper, we present a new algorithm, called CAESAR (Compact And Extended Source Automated Recognition), to detect and parametrize extended sources in radio interferometric maps. It is based on a pre-filtering stage, allowing image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parametrization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR in a modular software library, also including different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australian Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the Evolutionary Map of the Universe (EMU) survey at the Australian Square Kilometre Array Pathfinder (ASKAP). The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane and compared with existing algorithms. When compared to a human-driven analysis, the designed algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.

  1. A new method for automated high-dimensional lesion segmentation evaluated in vascular injury and applied to the human occipital lobe.

    PubMed

    Mah, Yee-Haur; Jager, Rolf; Kennard, Christopher; Husain, Masud; Nachev, Parashkev

    2014-07-01

    Making robust inferences about the functional neuroanatomy of the brain is critically dependent on experimental techniques that examine the consequences of focal loss of brain function. Unfortunately, the use of the most comprehensive such technique, lesion-function mapping, is complicated by the need for time-consuming and subjective manual delineation of the lesions, greatly limiting the practicability of the approach. Here we exploit a recently described general measure of statistical anomaly, zeta, to devise a fully automated, high-dimensional algorithm for identifying the parameters of lesions within a brain image given a reference set of normal brain images. We proceed to evaluate such an algorithm in the context of diffusion-weighted imaging of the commonest type of lesion used in neuroanatomical research: ischaemic damage. Summary performance metrics exceed those previously published for diffusion-weighted imaging and approach the current gold standard, manual segmentation, sufficiently closely for fully automated lesion-mapping studies to become a possibility. We apply the new method to 435 unselected images of patients with ischaemic stroke to derive a probabilistic map of the pattern of damage in lesions involving the occipital lobe, demonstrating the variation in anatomical resolvability of occipital areas so as to guide future lesion-function studies of the region. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Assessment of a User Guide for One Semi-Automated Forces (OneSAF) Version 2.0

    DTIC Science & Technology

    2009-09-01

    OneSAF uses a two-dimensional feature named a Plan View Display (PVD) as the primary graphical interface. The PVD replicates a map with a series...primary interface, the PVD is how the user watches the scenario unfold and requires the most interaction with the user. As seen in Table 3, all...participant indicated never using these seven map-related functions. Graphic control measures. Graphic control measures are applied to the PVD map to

  3. A Framework for Automated Digital Forensic Reporting

    DTIC Science & Technology

    2009-03-01

    provide a simple way to extract local accounts from a full system image. Unix, Linux and the BSD variants store user accounts in the /etc/passwd file...with hashes of the user passwords in the /etc/shadow file for Linux or /etc/master.passwd for BSD. /etc/passwd also contains mappings from usernames to... passwd file may not map directly to real-world names, it can be a crucial link in this eventual mapping. Following are two examples where it could prove

  4. Open-Source Programming for Automated Generation of Graphene Raman Spectral Maps

    NASA Astrophysics Data System (ADS)

    Vendola, P.; Blades, M.; Pierre, W.; Jedlicka, S.; Rotkin, S. V.

    Raman microscopy is a useful tool for studying the structural characteristics of graphene deposited onto substrates. However, extracting useful information from the Raman spectra requires data processing and 2D map generation. An existing home-built confocal Raman microscope was optimized for graphene samples and programmed to automatically generate Raman spectral maps across a specified area. In particular, an open source data collection scheme was generated to allow the efficient collection and analysis of the Raman spectral data for future use. NSF ECCS-1509786.

  5. Object-based classification of semi-arid wetlands

    NASA Astrophysics Data System (ADS)

    Halabisky, Meghan; Moskal, L. Monika; Hall, Sonia A.

    2011-01-01

    Wetlands are valuable ecosystems that benefit society. However, throughout history wetlands have been converted to other land uses. For this reason, timely wetland maps are necessary for developing strategies to protect wetland habitat. The goal of this research was to develop a time-efficient, automated, low-cost method to map wetlands in a semi-arid landscape that could be scaled up for use at a county or state level, and could lay the groundwork for expanding to forested areas. Therefore, it was critical that the research project contain two components: accurate automated feature extraction and the use of low-cost imagery. For that reason, we tested the effectiveness of geographic object-based image analysis (GEOBIA) to delineate and classify wetlands using freely available true color aerial photographs provided through the National Agriculture Inventory Program. The GEOBIA method produced an overall accuracy of 89% (khat = 0.81), despite the absence of infrared spectral data. GEOBIA provides the automation that can save significant resources when scaled up while still providing sufficient spatial resolution and accuracy to be useful to state and local resource managers and policymakers.

  6. Evaluation of automated urban surface water extraction from Sentinel-2A imagery using different water indices

    NASA Astrophysics Data System (ADS)

    Yang, Xiucheng; Chen, Li

    2017-04-01

    Urban surface water is characterized by complex surface conditions and the small size of water bodies, and the mapping of urban surface water is currently a challenging task. Moderate-resolution remote sensing satellites provide effective ways of monitoring surface water. This study conducts an exploratory evaluation of the performance of the newly available Sentinel-2A multispectral instrument (MSI) imagery for detecting urban surface water. An automatic framework that integrates pixel-level threshold adjustment and object-oriented segmentation is proposed. Based on the automated workflow, different combinations of visible, near-infrared, and short-wave infrared bands in the Sentinel-2 image via different water indices are first compared. Results show that the object-level modified normalized difference water index (MNDWI with band 11) and the automated water extraction index are feasible for urban surface water mapping from Sentinel-2 MSI imagery. Moreover, comparative results are obtained utilizing the optimal MNDWI from Sentinel-2 and Landsat 8 images, respectively. Consequently, Sentinel-2 MSI achieves a kappa coefficient of 0.92, compared with 0.83 for the Landsat 8 Operational Land Imager.
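
    For concreteness, the MNDWI with band 11 mentioned above follows the standard form MNDWI = (green - SWIR) / (green + SWIR), computed here on synthetic reflectances with an illustrative zero threshold (the study adjusts thresholds per scene):

        import numpy as np

        def mndwi(green, swir, eps=1e-9):
            """Modified normalized difference water index from green and SWIR bands."""
            return (green - swir) / (green + swir + eps)

        rng = np.random.default_rng(5)
        green = rng.uniform(0.02, 0.30, (100, 100))        # toy Sentinel-2 B3 reflectance
        swir = green * rng.uniform(0.8, 2.0, (100, 100))   # toy B11 reflectance
        swir[40:60, 40:60] = 0.01                          # water strongly absorbs SWIR
        water = mndwi(green, swir) > 0.0
        print("water fraction:", water.mean())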

  7. Micro Autonomous Systems Research: Systems Engineering Processes for Micro Autonomous Systems

    DTIC Science & Technology

    2016-11-01

    product family design and reconfigurable system design with recent developments in the fields of automated manufacturing and micro-autonomous...mapped to design parameters. These mappings are the mechanism by which physical product designs are formulated. Finally, manufacture of the product ... design tools and manufacturing and testing the resulting design . The final products were inspected and flight tested so that their

  8. High-throughput physical mapping of chromosomes using automated in situ hybridization.

    PubMed

    George, Phillip; Sharakhova, Maria V; Sharakhov, Igor V

    2012-06-28

    Projects to obtain whole-genome sequences for 10,000 vertebrate species and for 5,000 insect and related arthropod species are expected to take place over the next 5 years. For example, the sequencing of the genomes for 15 malaria mosquito species is currently being done using an Illumina platform. This Anopheles species cluster includes both vectors and non-vectors of malaria. When the genome assemblies become available, researchers will have the unique opportunity to perform comparative analysis for inferring evolutionary changes relevant to vector ability. However, it has proven difficult to use next-generation sequencing reads to generate high-quality de novo genome assemblies. Moreover, the existing genome assemblies for Anopheles gambiae, although obtained using the Sanger method, are gapped or fragmented. Success of comparative genomic analyses will be limited if researchers deal with numerous sequencing contigs, rather than with chromosome-based genome assemblies. Fragmented, unmapped sequences create problems for genomic analyses because: (i) unidentified gaps cause incorrect or incomplete annotation of genomic sequences; (ii) unmapped sequences lead to confusion between paralogous genes and genes from different haplotypes; and (iii) the lack of chromosome assignment and orientation of the sequencing contigs does not allow for reconstructing rearrangement phylogeny and studying chromosome evolution. Developing high-resolution physical maps for species with newly sequenced genomes is a timely and cost-effective investment that will facilitate genome annotation, evolutionary analysis, and re-sequencing of individual genomes from natural populations. Here, we present innovative approaches to chromosome preparation, fluorescent in situ hybridization (FISH), and imaging that facilitate rapid development of physical maps. Using An. gambiae as an example, we demonstrate that the development of physical chromosome maps can potentially improve genome assemblies and, thus, the quality of genomic analyses. First, we use a high-pressure method to prepare polytene chromosome spreads. This method, originally developed for Drosophila, allows the user to visualize more details on chromosomes than the regular squashing technique. Second, a fully automated, front-end system for FISH is used for high-throughput physical genome mapping. The automated slide staining system runs multiple assays simultaneously and dramatically reduces hands-on time. Third, an automatic fluorescent imaging system, which includes a motorized slide stage, automatically scans and photographs labeled chromosomes after FISH. This system is especially useful for identifying and visualizing multiple chromosomal plates on the same slide. In addition, the scanning process captures a more uniform FISH result. Overall, the automated high-throughput physical mapping protocol is more efficient than a standard manual protocol.

  9. Small unmanned aerial vehicles (micro-UAVs, drones) in plant ecology.

    PubMed

    Cruzan, Mitchell B; Weinstein, Ben G; Grasty, Monica R; Kohrn, Brendan F; Hendrickson, Elizabeth C; Arredondo, Tina M; Thompson, Pamela G

    2016-09-01

    Low-elevation surveys with small aerial drones (micro-unmanned aerial vehicles [UAVs]) may be used for a wide variety of applications in plant ecology, including mapping vegetation over small- to medium-sized regions. We provide an overview of methods and procedures for conducting surveys and illustrate some of these applications. Aerial images were obtained by flying a small drone along transects over the area of interest. Images were used to create a composite image (orthomosaic) and a digital surface model (DSM). Vegetation classification was conducted manually and using an automated routine. Coverage of an individual species was estimated from aerial images. We created a vegetation map for the entire region from the orthomosaic and DSM, and mapped the density of one species. Comparison of our manual and automated habitat classification confirmed that our mapping methods were accurate. A species with high contrast to the background matrix allowed adequate estimate of its coverage. The example surveys demonstrate that small aerial drones are capable of gathering large amounts of information on the distribution of vegetation and individual species with minimal impact to sensitive habitats. Low-elevation aerial surveys have potential for a wide range of applications in plant ecology.

  10. Remote imagery for unmanned ground vehicles: the future of path planning for ground robotics

    NASA Astrophysics Data System (ADS)

    Frederick, Philip A.; Theisen, Bernard L.; Ward, Derek

    2006-10-01

    Remote Imagery for Unmanned Ground Vehicles (RIUGV) uses a combination of high-resolution multi-spectral satellite imagery and advanced commercial off-the-shelf (COTS) object-oriented image processing software to provide automated terrain feature extraction and classification. This information, along with elevation data, infrared imagery, a vehicle mobility model and various meta-data (local weather reports, Zobler Soil map, etc.), is fed into automated path planning software to provide a stand-alone ability to generate rapidly updateable dynamic mobility maps for Manned or Unmanned Ground Vehicles (MGVs or UGVs). These polygon-based mobility maps can reside on an individual platform or a tactical network. When new information is available, change files are generated and ingested into existing mobility maps based on user-selected criteria. Bandwidth concerns are mitigated by the use of shape files for the representation of the data (e.g. each object in the scene is represented by a shape file and thus can be transmitted individually). User input (desired level of stealth, required time of arrival, etc.) determines the priority with which objects are tagged for updates. This paper will also discuss the planned July 2006 field experiment.

  11. A new method for automated discontinuity trace mapping on rock mass 3D surface model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Chen, Jianqin; Zhu, Hehua

    2016-04-01

    This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.

  12. Statewide Cellular Coverage Map

    DOT National Transportation Integrated Search

    2002-02-01

    The role of wireless communications in transportation is becoming increasingly important. Wireless communications are critical for many applications of Intelligent Transportation Systems (ITS) such as Automatic Vehicle Location (AVL) and Automated Co...

  13. User’s manual for the Automated Data Assurance and Management application developed for quality control of Everglades Depth Estimation Network water-level data

    USGS Publications Warehouse

    Petkewich, Matthew D.; Daamen, Ruby C.; Roehl, Edwin A.; Conrads, Paul

    2016-09-29

    The generation of Everglades Depth Estimation Network (EDEN) daily water-level and water-depth maps is dependent on high quality real-time data from over 240 water-level stations. To increase the accuracy of the daily water-surface maps, the Automated Data Assurance and Management (ADAM) tool was created by the U.S. Geological Survey as part of Greater Everglades Priority Ecosystems Science. The ADAM tool is used to provide accurate quality-assurance review of the real-time data from the EDEN network and allows estimation or replacement of missing or erroneous data. This user’s manual describes how to install and operate the ADAM software. File structure and operation of the ADAM software is explained using examples.

  14. 3D model assisted fully automated scanning laser Doppler vibrometer measurements

    NASA Astrophysics Data System (ADS)

    Sels, Seppe; Ribbens, Bart; Bogaerts, Boris; Peeters, Jeroen; Vanlanduit, Steve

    2017-12-01

    In this paper, a new fully automated scanning laser Doppler vibrometer (LDV) measurement technique is presented. In contrast to existing scanning LDV techniques, which use a 2D camera for the manual selection of sample points, we use a 3D Time-of-Flight camera in combination with a CAD file of the test object to automatically obtain measurements at pre-defined locations. The proposed procedure allows users to test prototypes in a shorter time because physical measurement locations are determined without user interaction. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. The proposed method is illustrated with vibration measurements of an unmanned aerial vehicle.

  15. Modelling and representation issues in automated feature extraction from aerial and satellite images

    NASA Astrophysics Data System (ADS)

    Sowmya, Arcot; Trinder, John

    New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at a lower cost and with greater currency. Demands for mapping and GIS data are increasing as well for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills for automating these tasks of information extraction. The paper will review some of the approaches used in knowledge representation and modelling for machine vision, and give examples of their applications in research for image understanding of aerial and satellite imagery.

  16. Status and plans of the Department of the Interior EROS program

    USGS Publications Warehouse

    ,

    1975-01-01

    The Earth Resources Observation Systems (EROS) Program of the Department of the Interior has been actively participating in the LANDSAT (formerly ERTS) program and other investigations with remotely sensed data. A large number of applications have been demonstrated that can assist in the discovery of nonrenewable resources, monitoring areal extent of renewable resources, monitoring environmental change, and in providing repetitive data for planimetric revision of small-scale maps and maps showing land cover classes. A new and potentially revolutionary approach, that of "automated cartography," has been initiated through the versatile nature of the data available from LANDSAT. "Automated cartography" as used here refers to the ability to automatically extract land cover classes and relate these classes to geographic position.

  17. Computer vision-based diameter maps to study fluoroscopic recordings of small intestinal motility from conscious experimental animals.

    PubMed

    Ramírez, I; Pantrigo, J J; Montemayor, A S; López-Pérez, A E; Martín-Fontelles, M I; Brookes, S J H; Abalo, R

    2017-08-01

    When available, fluoroscopic recordings are a relatively cheap, non-invasive and technically straightforward way to study gastrointestinal motility. Spatiotemporal maps have been used to characterize motility of intestinal preparations in vitro, or in anesthetized animals in vivo. Here, a new automated computer-based method was used to construct spatiotemporal motility maps from fluoroscopic recordings obtained in conscious rats. Conscious, non-fasted, adult, male Wistar rats (n=8) received intragastric administration of barium contrast and, 1-2 hours later, when several loops of the small intestine were well-defined, a 2-minute fluoroscopic recording was obtained. Spatiotemporal diameter maps (Dmaps) were automatically calculated from the recordings. Three recordings were also manually analyzed for comparison. Frequency analysis was performed in order to calculate relevant motility parameters. In each conscious rat, a stable recording (17-20 seconds) was analyzed. The Dmaps obtained manually and automatically from the same recording were comparable, but the automated process was faster and provided higher resolution. Two frequencies of motor activity dominated: lower-frequency contractions (15.2±0.9 cpm) had an amplitude approximately five times greater than higher-frequency events (32.8±0.7 cpm). The automated method developed here needed little investigator input, provided high-resolution results with short computing times, and automatically compensated for breathing and other small movements, allowing recordings to be made without anesthesia. Although slow and/or infrequent events could not be detected in the short recording periods analyzed to date (17-20 seconds), this novel system enhances the analysis of in vivo motility in conscious animals. © 2017 John Wiley & Sons Ltd.
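
    For the frequency analysis step, the dominant contraction frequencies of a diameter trace taken from a Dmap can be estimated with a discrete Fourier transform. The sketch below is a generic illustration; the frame rate and the simple peak picking are assumptions, not the paper's implementation:

    ```python
    import numpy as np

    def dominant_frequencies_cpm(diameters, fps=15.0, n_peaks=2):
        """Dominant motility frequencies (cycles per minute) of a diameter
        trace sampled at one map location, one value per video frame."""
        x = np.asarray(diameters, dtype=float)
        x = x - x.mean()                            # remove the DC component
        spectrum = np.abs(np.fft.rfft(x))
        freqs_hz = np.fft.rfftfreq(len(x), d=1.0 / fps)
        order = np.argsort(spectrum[1:])[::-1] + 1  # rank bins, skip bin 0
        return sorted(60.0 * freqs_hz[order[:n_peaks]])
    ```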

  18. Current trends in geomorphological mapping

    NASA Astrophysics Data System (ADS)

    Seijmonsbergen, A. C.

    2012-04-01

    Geomorphological mapping is a field currently in motion, driven by technological advances and the availability of new high-resolution data. As a consequence, classic (paper) geomorphological maps, which were the standard for more than 50 years, are rapidly being replaced by digital geomorphological information layers. This is witnessed by the following developments: 1. the conversion of classic paper maps into digital information layers, mainly performed in a digital mapping environment such as a Geographical Information System; 2. updating of the location precision and the content of the converted maps, by adding more geomorphological detail taken from high-resolution elevation data and/or high-resolution image data; 3. (semi-)automated extraction and classification of geomorphological features from digital elevation models, broadly separated into unsupervised and supervised classification techniques; and 4. new digital visualization/cartographic techniques and reading interfaces. New digital geomorphological information layers can be based on manual digitization of polygons using DEMs and/or aerial photographs, or prepared through (semi-)automated extraction and delineation of geomorphological features. DEMs are often used as a basis to derive Land Surface Parameter information, which is used as input for (un)supervised classification techniques. Especially when using high-resolution data, object-based classification is used as an alternative to traditional pixel-based classification, to cluster grid cells into homogeneous objects that can be classified as geomorphological features. Classic map content can also be used as training material for the supervised classification of geomorphological features. In the classification process, rule-based protocols, including expert-knowledge input, are used to map specific geomorphological features or entire landscapes. Current (semi-)automated classification techniques are increasingly able to extract morphometric, hydrological and, in the near future, also morphogenetic information. As a result, these new opportunities have changed the workflow of geomorphological mapmaking, and its focus has shifted from field-based to more computer-based techniques: for example, traditional pre-field air-photo-based maps are now replaced by maps prepared in a digital mapping environment, and designated field visits using mobile GIS/digital mapping devices now focus on gathering location information and attribute inventories and are highly time-efficient. The resulting 'modern geomorphological maps' are digital collections of geomorphological information layers consisting of georeferenced vector, raster and tabular data which are stored in a digital environment such as a GIS geodatabase, and are easily visualized, e.g. as bird's-eye views, as animated 3D displays, on virtual globes, or stored as GeoPDF maps in which georeferenced attribute information can be easily exchanged over the internet. Digital geomorphological information layers are increasingly accessed via web-based services distributed through remote servers. Information can be consulted, or even built using remote geoprocessing servers, by the end user. Therefore, it will no longer be only the geomorphologist, but also the professional end user, who dictates the applied use of digital geomorphological information layers.
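
    As a small illustration of the Land Surface Parameter step described above, slope and aspect can be derived from a gridded DEM in a few lines; the aspect convention and function name below are choices of this example:

    ```python
    import numpy as np

    def slope_aspect(dem, cell_size=1.0):
        """Slope (degrees) and aspect (degrees clockwise from north) from a
        gridded DEM, two common inputs to (un)supervised classifiers."""
        dz_dy, dz_dx = np.gradient(dem, cell_size)  # axis 0 = rows (y)
        slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
        aspect = (np.degrees(np.arctan2(-dz_dx, dz_dy)) + 360.0) % 360.0
        return slope, aspect
    ```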

  19. Semi-automated quantification and neuroanatomical mapping of heterogeneous cell populations.

    PubMed

    Mendez, Oscar A; Potter, Colin J; Valdez, Michael; Bello, Thomas; Trouard, Theodore P; Koshy, Anita A

    2018-07-15

    Our group studies the interactions between cells of the brain and the neurotropic parasite Toxoplasma gondii. Using an in vivo system that allows us to permanently mark and identify brain cells injected with Toxoplasma protein, we have identified that Toxoplasma-injected neurons (TINs) are heterogeneously distributed throughout the brain. Unfortunately, standard methods to quantify and map heterogeneous cell populations onto a reference brain atlas are time consuming and prone to user bias. We developed a novel MATLAB-based semi-automated quantification and mapping program to allow the rapid and consistent mapping of heterogeneously distributed cells onto the Allen Institute Mouse Brain Atlas. The system uses two-threshold background subtraction to identify and quantify cells of interest. We demonstrate that we reliably quantify and neuroanatomically localize TINs with low intra- or inter-observer variability. In a follow-up experiment, we show that specific regions of the mouse brain are enriched with TINs. The procedure we use takes advantage of simple immunohistochemistry labeling techniques, use of a standard microscope with a motorized stage, and low-cost computing that can be readily obtained at a research institute. To our knowledge there is no other program that uses such readily available techniques and equipment for mapping heterogeneous populations of cells across the whole mouse brain. The quantification method described here allows reliable visualization, quantification, and mapping of heterogeneous cell populations in immunolabeled sections across whole mouse brains. Copyright © 2018 Elsevier B.V. All rights reserved.
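
    The two-threshold background subtraction is not specified beyond its name in the abstract; one common reading is hysteresis thresholding, in which seeds above a high threshold retain connected regions above a low threshold. A sketch under that assumption:

    ```python
    import numpy as np
    from scipy import ndimage

    def detect_cells_two_threshold(image, low, high):
        """Hysteresis-style two-threshold detection: regions above the low
        threshold are kept only if they contain a pixel above the high
        threshold. Returns a labelled image and the cell count."""
        strong = image > high
        weak = image > low
        labels, _ = ndimage.label(weak)
        seeded = np.unique(labels[strong])          # labels touching a seed
        mask = np.isin(labels, seeded[seeded > 0])
        cells, count = ndimage.label(mask)
        return cells, count
    ```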

  20. Resting-State Functional Magnetic Resonance Imaging for Language Preoperative Planning

    PubMed Central

    Branco, Paulo; Seixas, Daniela; Deprez, Sabine; Kovacs, Silvia; Peeters, Ronald; Castro, São L.; Sunaert, Stefan

    2016-01-01

    Functional magnetic resonance imaging (fMRI) is a well-known non-invasive technique for the study of brain function. One of its most common clinical applications is preoperative language mapping, essential for the preservation of function in neurosurgical patients. Typically, fMRI is used to track task-related activity, but poor task performance and movement artifacts can be critical limitations in clinical settings. Recent advances in resting-state protocols open new possibilities for pre-surgical mapping of language, potentially overcoming these limitations. To test the feasibility of using resting-state fMRI instead of conventional active task-based protocols, we compared results from fifteen patients with brain lesions while performing a verb-to-noun generation task and while at rest. Task activity was measured using a general linear model analysis and independent component analysis (ICA). Resting-state networks were extracted using ICA and further classified in two ways: manually by an expert and by using an automated template matching procedure. The results revealed that the automated classification procedure correctly identified language networks as compared to the expert manual classification. We found good overlap between task-related activity and resting-state language maps, particularly within the language regions of interest. Furthermore, resting-state language maps were as sensitive as task-related maps, and had higher specificity. Our findings suggest that resting-state protocols may be suitable to map language networks in a quick and clinically efficient way. PMID:26869899

  1. Current trends in satellite based emergency mapping - the need for harmonisation

    NASA Astrophysics Data System (ADS)

    Voigt, Stefan

    2013-04-01

    During the past years, the availability and use of satellite image data to support disaster management and humanitarian relief organisations have increased substantially. Automation and data-processing techniques are improving greatly, and the capacity to access and process satellite imagery is getting better globally. More and more global activities, via the internet and through global organisations like the United Nations or the International Charter Space and Major Disasters, engage in the topic, while at the same time more and more national or local centres engage in rapid mapping operations and activities. In order to make even more effective use of this very positive increase in capacity (for the operational provision of analysis results, for fast validation of satellite-derived damage assessments, for better cooperation in the joint inter-agency generation of rapid mapping products, and for general scientific use), rapid mapping results need to be better harmonized, if not standardized. In this presentation, experiences from several years of rapid mapping gained by the DLR Center for Satellite Based Crisis Information (ZKI) within the context of national activities, the International Charter Space and Major Disasters, GMES/Copernicus etc. are reported. Furthermore, an overview is given of how automation, quality assurance and optimization can be achieved through standard operating procedures within a rapid mapping workflow. Building on this long-term rapid mapping experience, and on the DLR initiative to set in place an "International Working Group on Satellite Based Emergency Mapping", current trends in rapid mapping are discussed, and thoughts are presented on how the sharing of rapid mapping information can be optimized by harmonizing analysis results and data structures. Such a harmonization of analysis procedures, nomenclatures and representations of data, as well as metadata, is the basis for better cooperation within the global rapid mapping community across local/national, regional/supranational and global scales.

  2. Mapping landslide source and transport areas in VHR images with Object-Based Analysis and Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Heleno, Sandra; Matias, Magda; Pina, Pedro

    2015-04-01

    Visual interpretation of satellite imagery remains extremely demanding in terms of resources and time, especially when dealing with numerous multi-scale landslides affecting wide areas, such as is the case of rainfall-induced shallow landslides. Applying automated methods can contribute to more efficient landslide mapping and updating of existing inventories, and in recent years the number and variety of approaches has been increasing rapidly. Very High Resolution (VHR) images, acquired by space-borne sensors with sub-metric precision, such as Ikonos, Quickbird, Geoeye and Worldview, are increasingly being considered as the best option for landslide mapping, but these new levels of spatial detail also present new challenges to state-of-the-art image analysis tools, calling for automated methods specifically suited to mapping landslide events on VHR optical images. In this work we develop and test a methodology for semi-automatic landslide recognition and mapping of landslide source and transport areas. The method combines object-based image analysis and a Support Vector Machine supervised learning algorithm, and was tested using a GeoEye-1 multispectral image, sensed 3 days after a damaging landslide event in Madeira Island, together with a pre-event LiDAR DEM. Our approach proved successful in the recognition of landslides over a 15 km2 study area, with 81 out of 85 landslides detected in its validation regions. The classifier also showed reasonable performance in the internal mapping of landslide source and transport areas (true positive rate above 60% and false positive rate below 36% in both validation regions), in particular in the sunnier east-facing slopes. In the less illuminated areas the classifier is still able to accurately map the source areas, but performs poorly in the mapping of landslide transport areas.
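
    A minimal sketch of the supervised-learning step, assuming scikit-learn and object-level features such as mean band values and slope (the study's actual feature set and hyper-parameters are not given in the abstract):

    ```python
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: one row per image object (e.g. mean spectral bands, NDVI, slope);
    # y: labels such as 0 = background, 1 = source area, 2 = transport area.
    def train_landslide_classifier(X, y):
        """Fit an RBF-kernel SVM on object-level features (hyper-parameters
        here are illustrative defaults, not the study's tuned values)."""
        clf = make_pipeline(StandardScaler(),
                            SVC(kernel="rbf", C=10.0, gamma="scale"))
        clf.fit(X, y)
        return clf
    ```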

  3. Evaluating pixel and object based image classification techniques for mapping plant invasions from UAV derived aerial imagery: Harrisia pomanensis as a case study

    NASA Astrophysics Data System (ADS)

    Mafanya, Madodomzi; Tsele, Philemon; Botai, Joel; Manyama, Phetole; Swart, Barend; Monate, Thabang

    2017-07-01

    Invasive alien plants (IAPs) not only pose a serious threat to biodiversity and water resources but also have impacts on human and animal wellbeing. To support decision making in IAP monitoring, semi-automated image classifiers capable of extracting valuable information from remotely sensed data are vital. This study evaluated the mapping accuracies of supervised and unsupervised image classifiers for mapping Harrisia pomanensis (a cactus plant commonly known as the Midnight Lady) using two interlinked evaluation strategies, i.e. point- and area-based accuracy assessment. Results of the point-based accuracy assessment show that, with reference to 219 ground control points, the supervised image classifiers (i.e. Maxver and Bhattacharya) mapped H. pomanensis better than the unsupervised image classifiers (i.e. K-mediuns, Euclidian Length and Isoseg). In this regard, user and producer accuracies were 82.4% and 84% respectively for the Maxver classifier. The user and producer accuracies for the Bhattacharya classifier were 90% and 95.7%, respectively. Though Maxver produced a higher overall accuracy and Kappa estimate than the Bhattacharya classifier, the Maxver Kappa estimate of 0.8305 is not significantly (statistically) greater than the Bhattacharya Kappa estimate of 0.8088 at a 95% confidence interval. The area-based accuracy assessment results show that the Bhattacharya classifier estimated the spatial extent of H. pomanensis with an average mapping accuracy of 86.1%, whereas the Maxver classifier only gave an average mapping accuracy of 65.2%. Based on these results, the Bhattacharya classifier is therefore recommended for mapping H. pomanensis. These findings will aid algorithm selection in the development of a semi-automated image classification system for mapping IAPs.
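
    The user's and producer's accuracies, overall accuracy and Kappa estimates quoted above all derive from a confusion matrix; a generic computation is:

    ```python
    import numpy as np

    def accuracy_report(confusion):
        """Overall accuracy, Cohen's kappa, and user's/producer's accuracy
        from a square confusion matrix (rows = map classes, cols = reference)."""
        cm = np.asarray(confusion, dtype=float)
        total = cm.sum()
        observed = np.trace(cm) / total                    # overall accuracy
        expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
        kappa = (observed - expected) / (1.0 - expected)
        users = np.diag(cm) / cm.sum(axis=1)               # commission view
        producers = np.diag(cm) / cm.sum(axis=0)           # omission view
        return observed, kappa, users, producers
    ```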

  4. Object-Based Classification of Ikonos Imagery for Mapping Large-Scale Vegetation Communities in Urban Areas.

    PubMed

    Mathieu, Renaud; Aryal, Jagannath; Chong, Albert K

    2007-11-20

    Effective assessment of biodiversity in cities requires detailed vegetation maps. To date, most remote sensing of urban vegetation has focused on thematically coarse land cover products. Detailed habitat maps are created by manual interpretation of aerial photographs, but this is time consuming and costly at large scale. To address this issue, we tested the effectiveness of object-based classifications that use automated image segmentation to extract meaningful ground features from imagery. We applied these techniques to very high resolution multispectral Ikonos images to produce vegetation community maps in Dunedin City, New Zealand. An Ikonos image was orthorectified and a multi-scale segmentation algorithm used to produce a hierarchical network of image objects. The upper level included four coarse strata: industrial/commercial (commercial buildings), residential (houses and backyard private gardens), vegetation (vegetation patches larger than 0.8/1 ha), and water. We focused on the vegetation stratum, which was segmented at a more detailed level to extract and classify fifteen classes of vegetation communities. The first classification yielded a moderate overall classification accuracy (64%, κ = 0.52), which led us to consider a simplified classification with ten vegetation classes. The overall classification accuracy from the simplified classification was 77% with a κ value close to the excellent range (κ = 0.74). These results compared favourably with similar studies in other environments. We conclude that this approach does not provide maps as detailed as those produced by manually interpreting aerial photographs, but it can still extract ecologically significant classes. It is an efficient way to generate accurate and detailed maps in significantly shorter time. The final map accuracy could be improved by integrating segmentation, automated and manual classification in the mapping process, especially when considering important vegetation classes with limited spectral contrast.

  5. Fast, Automated, Photo realistic, 3D Modeling of Building Interiors

    DTIC Science & Technology

    2016-09-12

    In this project, we developed two algorithmic pipelines for GPS-denied indoor mobile 3D mapping using an ambulatory backpack system. By mounting scanning equipment on a backpack system, a human operator can traverse the interior of a building to produce a high-quality 3D reconstruction.

  6. Functional Specifications to an Automated Retinal Scanner for Use in Plotting the Vascular Map

    DTIC Science & Technology

    1988-12-01

    This report specifies an automated retinal scanner intended as an aid in the early detection and continuing treatment of diabetes. By imaging the choroid, it may be possible to detect diabetes earlier and stem the tide of retinopathy in those patients so afflicted. Subject terms: retinal imaging, automation, infrared, diabetic retinopathy.

  7. Automated Wildfire Detection Through Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Miller, Jerry; Borne, Kirk; Thomas, Brian; Huang, Zhenping; Chi, Yuechen

    2005-01-01

    Wildfires have a profound impact upon the biosphere and our society in general. They cause loss of life, destruction of personal property and natural resources, and alter the chemistry of the atmosphere. In response to the concern over the consequences of wildland fire and to support the fire management community, the National Oceanic and Atmospheric Administration (NOAA), National Environmental Satellite, Data and Information Service (NESDIS), located in Camp Springs, Maryland, gradually developed an operational system to routinely monitor wildland fire by satellite observations. The Hazard Mapping System, as it is known today, allows a team of trained fire analysts to examine and integrate, on a daily basis, remote sensing data from Geostationary Operational Environmental Satellite (GOES), Advanced Very High Resolution Radiometer (AVHRR) and Moderate Resolution Imaging Spectroradiometer (MODIS) satellite sensors and generate a 24-hour fire product for the conterminous United States. Although assisted by automated fire detection algorithms, NOAA has not been able to eliminate the human element from its fire detection procedures. As a consequence, the manually intensive effort has prevented NOAA from transitioning to a global fire product as urged particularly by climate modelers. NASA at Goddard Space Flight Center in Greenbelt, Maryland is helping NOAA more fully automate the Hazard Mapping System by training neural networks to mimic the decision-making process of the fire analyst team as well as the automated algorithms.

  8. CLIPS: A tool for corn disease diagnostic system and an aid to neural network for automated knowledge acquisition

    NASA Technical Reports Server (NTRS)

    Wu, Cathy; Taylor, Pam; Whitson, George; Smith, Cathy

    1990-01-01

    This paper describes the building of a corn disease diagnostic expert system using CLIPS, and the development of a neural expert system using the fact representation method of CLIPS for automated knowledge acquisition. The CLIPS corn expert system diagnoses 21 diseases from 52 symptoms and signs with certainty factors. CLIPS has several unique features. It allows the facts in rules to be broken down into object-attribute-value (OAV) triples, allows rule grouping, and fires rules based on pattern matching. These features, combined with the chained inference engine, result in a natural user query system and speedy execution. In order to develop a method for automated knowledge acquisition, an Artificial Neural Expert System (ANES) was developed by a direct mapping from the CLIPS system. The ANES corn expert system uses the same OAV triples as the CLIPS system for its facts. The LHS and RHS facts of the CLIPS rules are mapped into the input and output layers of the ANES, respectively, and the inference engine of the rules is embedded in the hidden layer. The fact representation by OAV triples gives a natural grouping of the rules. These features allow the ANES system to automate rule generation, and make it efficient to execute and easy to expand for a large and complex domain.

  9. Open-Source Automated Mapping Four-Point Probe

    PubMed Central

    Chandra, Handy; Allen, Spencer W.; Oberloier, Shane W.; Bihari, Nupur; Gwamuri, Jephias; Pearce, Joshua M.

    2017-01-01

    Scientists have begun using self-replicating rapid prototyper (RepRap) 3-D printers to manufacture open source digital designs of scientific equipment. This approach is refined here to develop a novel instrument capable of performing automated large-area four-point probe measurements. The designs for conversion of a RepRap 3-D printer to a 2-D open source four-point probe (OS4PP) measurement device are detailed for the mechanical and electrical systems. Free and open source software and firmware are developed to operate the tool. The OS4PP was validated against a wide range of discrete resistors and indium tin oxide (ITO) samples of different thicknesses both pre- and post-annealing. The OS4PP was then compared to two commercial proprietary systems. Results for discrete resistors from 10 Ω to 1 MΩ show errors of less than 1% for the OS4PP. The 3-D mapping of sheet resistance of ITO samples successfully demonstrated the automated capability to measure non-uniformities in large-area samples. The results indicate that all measured values are within the same order of magnitude when compared to two proprietary measurement systems. In conclusion, the OS4PP system, which costs less than 70% of manual proprietary systems, is comparable electrically while offering automated 100 micron positional accuracy for measuring sheet resistance over larger areas. PMID:28772471
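
    For a thin film probed with a collinear four-point head far from the sample edges, sheet resistance follows from the measured voltage and sourced current through the standard geometric factor of pi/ln 2. The mapping loop below is a hypothetical sketch of an automated scan; the measure(x, y) driver call is invented for illustration and is not the OS4PP firmware API:

    ```python
    import math

    GEOMETRIC_FACTOR = math.pi / math.log(2)  # ~4.532 for a thin, large sheet

    def sheet_resistance(voltage, current):
        """Four-point-probe sheet resistance (ohms per square) of a thin film,
        assuming the standard collinear-probe correction factor."""
        return GEOMETRIC_FACTOR * voltage / current

    def map_sample(positions, measure):
        """Build a sheet-resistance map over a grid of stage positions.
        measure(x, y) -> (V, I) is a hypothetical driver call that moves
        the probe head and reads back voltage and current."""
        return {(x, y): sheet_resistance(*measure(x, y)) for x, y in positions}
    ```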

  10. FreeSurfer-initiated fully-automated subcortical brain segmentation in MRI using Large Deformation Diffeomorphic Metric Mapping.

    PubMed

    Khan, Ali R; Wang, Lei; Beg, Mirza Faisal

    2008-07-01

    Fully-automated brain segmentation methods have not been widely adopted for clinical use because of issues related to reliability, accuracy, and limitations of delineation protocol. By combining the probabilistic-based FreeSurfer (FS) method with the Large Deformation Diffeomorphic Metric Mapping (LDDMM)-based label-propagation method, we are able to increase reliability and accuracy, and allow for flexibility in template choice. Our method uses the automated FreeSurfer subcortical labeling to provide a coarse-to-fine introduction of information in the LDDMM template-based segmentation, resulting in a fully-automated subcortical brain segmentation method (FS+LDDMM). One major advantage of the FS+LDDMM-based approach is that the automatically generated segmentations are inherently smooth, thus subsequent steps in shape analysis can directly follow without manual post-processing or loss of detail. We have evaluated our new FS+LDDMM method on several databases containing a total of 50 subjects with different pathologies, scan sequences and manual delineation protocols for labeling the basal ganglia, thalamus, and hippocampus. In healthy controls we report Dice overlap measures of 0.81, 0.83, 0.74, 0.86 and 0.75 for the right caudate nucleus, putamen, pallidum, thalamus and hippocampus respectively. We also find statistically significant improvement of accuracy in FS+LDDMM over FreeSurfer for the caudate nucleus and putamen of Huntington's disease and Tourette's syndrome subjects, and the right hippocampus of Schizophrenia subjects.
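
    The Dice overlap measures quoted above compare the automated segmentation against a manual gold standard; for binary masks the computation is standard:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice overlap between two binary segmentation masks."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
    ```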

  11. Investigation of availability and accessibility of community automated external defibrillators in a territory in Hong Kong.

    PubMed

    Ho, C L; Lui, C T; Tsui, K L; Kam, C W

    2014-10-01

    To evaluate the availability and accessibility of community automated external defibrillators in a territory in Hong Kong. Cross-sectional study. Two public hospitals in New Territories West Cluster in Hong Kong. Information about the locations of community automated external defibrillators was obtained from automated external defibrillator suppliers and through community search. Data on locations of out-of-hospital cardiac arrests from August 2010 to September 2013 were obtained from the local cardiac arrest registry of the emergency departments of two hospitals. Sites of both automated external defibrillators and out-of-hospital cardiac arrests were geographically coded and mapped. The number of out-of-hospital cardiac arrests within 100 m of automated external defibrillators per year and the proportion of out-of-hospital cardiac arrests with accessible automated external defibrillators (100 m) were calculated. The number of community automated external defibrillators per 10,000 population and public access defibrillation rate were also calculated and compared with those in other countries. There were a total of 207 community automated external defibrillators in the territory. The number of automated external defibrillators per 10,000 population was 1.942. All facilities with automated external defibrillators in this territory had more than 0.2 out-of-hospital cardiac arrests per automated external defibrillator per year within 100 m. Among all out-of-hospital cardiac arrests, 25.2% could have an automated external defibrillator reachable within 100 m. The public access defibrillation rate was 0.168%. The number and accessibility of community automated external defibrillators in this territory are comparable to those in other developed countries. The placement site of community automated external defibrillators is cost-effective. However, the public access defibrillation rate is low.
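
    The 100 m accessibility statistic can be reproduced from geocoded point data with a great-circle distance test. A generic sketch, assuming WGS84 latitude/longitude pairs for both arrest and defibrillator locations:

    ```python
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two WGS84 points."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = p2 - p1, math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def coverage_rate(arrests, aeds, radius_m=100.0):
        """Fraction of out-of-hospital cardiac arrests with an AED
        within radius_m; both inputs are lists of (lat, lon) tuples."""
        covered = sum(
            any(haversine_m(la, lo, aa, ao) <= radius_m for aa, ao in aeds)
            for la, lo in arrests
        )
        return covered / len(arrests)
    ```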

  12. Hurricane Jeanne

    Atmospheric Science Data Center

    2013-04-19

    ... view. The cloud height map was produced by automated computer recognition of the distinctive spatial features between images ...

  13. NASA Global Flood Mapping System

    NASA Technical Reports Server (NTRS)

    Policelli, Fritz; Slayback, Dan; Brakenridge, Bob; Nigro, Joe; Hubbard, Alfred

    2017-01-01

    Key factors in product utility: near-real-time, automated production; flood spatial extent; cloudiness; pixel resolution (250 m); flood temporal extent (flash floods: short duration on the ground?); land cover (water under vegetation cover vs. open water).

  14. Automated MAD and MIR structure solution

    PubMed Central

    Terwilliger, Thomas C.; Berendzen, Joel

    1999-01-01

    Obtaining an electron-density map from X-ray diffraction data can be difficult and time-consuming even after the data have been collected, largely because MIR and MAD structure determinations currently require many subjective evaluations of the qualities of trial heavy-atom partial structures before a correct heavy-atom solution is obtained. A set of criteria for evaluating the quality of heavy-atom partial solutions in macromolecular crystallography has been developed. These have allowed the conversion of the crystal structure-solution process into an optimization problem and have allowed its automation. The SOLVE software has been used to solve MAD data sets with as many as 52 selenium sites in the asymmetric unit. The automated structure-solution process developed is a major step towards the fully automated structure-determination, model-building and refinement procedure which is needed for genomic-scale structure determinations. PMID:10089316

  15. Oxygen-controlled automated neural differentiation of mouse embryonic stem cells.

    PubMed

    Mondragon-Teran, Paul; Tostoes, Rui; Mason, Chris; Lye, Gary J; Veraitch, Farlan S

    2013-03-01

    Automation and oxygen tension control are two tools that provide significant improvements to the reproducibility and efficiency of stem cell production processes. The aim of this study was to establish a novel automation platform capable of controlling oxygen tension during both the cell-culture and liquid-handling steps of neural differentiation processes. We built a bespoke automation platform, which enclosed a liquid-handling platform in a sterile, oxygen-controlled environment. An airtight connection was used to transfer cell culture plates to and from an automated oxygen-controlled incubator. Our results demonstrate that our system yielded comparable cell numbers, viabilities, metabolism profiles and differentiation efficiencies when compared with traditional manual processes. Interestingly, eliminating exposure to ambient conditions during the liquid-handling stage resulted in significant improvements in the yield of MAP2-positive neural cells, indicating that this level of control can improve differentiation processes. This article describes, for the first time, an automation platform capable of maintaining oxygen tension control during both the cell-culture and liquid-handling stages of a 2D embryonic stem cell differentiation process.

  16. A post-processing system for automated rectification and registration of spaceborne SAR imagery

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Kwok, Ronald; Pang, Shirley S.

    1987-01-01

    An automated post-processing system has been developed that interfaces with the raw image output of the operational digital SAR correlator. This system is designed for optimal efficiency by using advanced signal processing hardware and an algorithm that requires no operator interaction, such as the determination of ground control points. The standard output is a geocoded image product (i.e. resampled to a specified map projection). The system is capable of producing multiframe mosaics for large-scale mapping by combining images in both the along-track direction and adjacent cross-track swaths from ascending and descending passes over the same target area. The output products have absolute location uncertainty of less than 50 m and relative distortion (scale factor and skew) of less than 0.1 per cent relative to local variations from the assumed geoid.

  17. Dynamic mapping of EDDL device descriptions to OPC UA

    NASA Astrophysics Data System (ADS)

    Atta Nsiah, Kofi; Schappacher, Manuel; Sikora, Axel

    2017-07-01

    OPC UA (Open Platform Communications Unified Architecture) is already a well-known concept used widely in the automation industry. In the area of factory automation, OPC UA models the underlying field devices, such as sensors and actuators, in an OPC UA server, allowing connected OPC UA clients to access device-specific information via a standardized information model. One of the requirements for the OPC UA server to represent field device data using its information model is to have advance knowledge about the properties of the field devices in the form of device descriptions. The international standard IEC 61804 specifies EDDL (Electronic Device Description Language) as a generic language for describing the properties of field devices. In this paper, the authors describe a possibility to dynamically map and integrate field device descriptions based on EDDL into OPC UA.
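
    As an illustration of the mapping target, the sketch below uses the open-source python-opcua package to expose a dictionary of parameters, standing in for the output of an EDDL parser, as variables in an OPC UA server's information model. The parameter names, endpoint and namespace are hypothetical, and this is not the authors' implementation:

    ```python
    from opcua import Server

    # Hypothetical output of an EDDL parsing step.
    parsed_params = {"Temperature": 21.5, "Pressure": 1.01}

    server = Server()
    server.set_endpoint("opc.tcp://0.0.0.0:4840/fielddevice/")
    idx = server.register_namespace("urn:example:eddl")

    # Create one object node per device and one variable per parameter.
    device = server.nodes.objects.add_object(idx, "FieldDevice")
    for name, value in parsed_params.items():
        var = device.add_variable(idx, name, value)
        var.set_writable()  # allow clients to update the value

    server.start()  # serve the information model until stopped
    ```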

  18. Automated Mapping of Flood Events in the Mississippi River Basin Utilizing NASA Earth Observations

    NASA Technical Reports Server (NTRS)

    Bartkovich, Mercedes; Baldwin-Zook, Helen Blue; Cruz, Dashiell; McVey, Nicholas; Ploetz, Chris; Callaway, Olivia

    2017-01-01

    The Mississippi River Basin is the fourth largest drainage basin in the world, and is susceptible to multi-level flood events caused by heavy precipitation, snow melt, and changes in water table levels. Conducting flood analysis during periods of disaster is a challenging endeavor for NASA's Short-term Prediction Research and Transition Center (SPoRT), Federal Emergency Management Agency (FEMA), and the U.S. Geological Survey's Hazards Data Distribution Systems (USGS HDDS) due to heavily-involved research and lack of manpower. During this project, an automated script was generated that performs high-level flood analysis to relieve the workload for end-users. The script incorporated Landsat 8 Operational Land Imager (OLI) tiles and utilized computer-learning techniques to generate accurate water extent maps. The script referenced the Moderate Resolution Imaging Spectroradiometer (MODIS) land-water mask to isolate areas of flood induced waters. These areas were overlaid onto the National Land Cover Database's (NLCD) land cover data, the Oak Ridge National Laboratory's LandScan data, and Homeland Infrastructure Foundation-Level Data (HIFLD) to determine the classification of areas impacted and the population density affected by flooding. The automated algorithm was initially tested on the September 2016 flood event that occurred in Upper Mississippi River Basin, and was then further tested on multiple flood events within the Mississippi River Basin. This script allows end users to create their own flood probability and impact maps for disaster mitigation and recovery efforts.
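
    The isolation of flood-induced water and the population overlay described above reduce, at their core, to a few raster operations on co-registered grids. A generic numpy sketch; array names and the assumption of identical grid extents are this example's, not the project script's:

    ```python
    import numpy as np

    def flood_extent(water_now, reference_water):
        """Pixels classified as water in the current scene that are not
        water in the reference land-water mask (flood-induced water)."""
        return np.logical_and(water_now, np.logical_not(reference_water))

    def impacted_population(flood_mask, population):
        """Sum a gridded population layer over flooded pixels; both grids
        must be co-registered to the same resolution and extent."""
        return float(population[flood_mask].sum())
    ```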

  19. Modeling Increased Complexity and the Reliance on Automation: FLightdeck Automation Problems (FLAP) Model

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.

  20. Automated thermal mapping techniques using chromatic image analysis

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.

  1. PepLine: a software pipeline for high-throughput direct mapping of tandem mass spectrometry data on genomic sequences.

    PubMed

    Ferro, Myriam; Tardif, Marianne; Reguer, Erwan; Cahuzac, Romain; Bruley, Christophe; Vermat, Thierry; Nugues, Estelle; Vigouroux, Marielle; Vandenbrouck, Yves; Garin, Jérôme; Viari, Alain

    2008-05-01

    PepLine is fully automated software that maps MS/MS fragmentation spectra of tryptic peptides to genomic DNA sequences. The approach is based on Peptide Sequence Tags (PSTs) obtained from partial interpretation of QTOF MS/MS spectra (first module). PSTs are then mapped onto the six-frame translations of genomic sequences (second module), giving hits. Hits are then clustered to detect potential coding regions (third module). Our work aimed at optimizing the algorithms of each component to allow the whole pipeline to proceed in a fully automated manner using raw nucleic acid sequences (i.e., genomes that have not been "reduced" to a database of ORFs or putative exon sequences). The whole pipeline was tested on controlled MS/MS spectra sets from standard proteins and from Arabidopsis thaliana chloroplast envelope samples. Our results demonstrate that PepLine competes with protein database searching software and is fast enough to potentially tackle large data sets and/or large genomes. We also illustrate the potential of this approach for the detection of the intron/exon structure of genes.
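
    The second module's mapping of PSTs onto six-frame translations can be illustrated with Biopython; PepLine itself is an independent implementation, so the sketch below is only an analogy of that step:

    ```python
    from Bio.Seq import Seq

    def six_frame_translations(dna):
        """Translate a raw nucleotide sequence in all six reading frames."""
        seq = Seq(dna)
        frames = {}
        for strand, s in (("+", seq), ("-", seq.reverse_complement())):
            for offset in range(3):
                # Trim so each frame is a whole number of codons.
                end = len(s) - (len(s) - offset) % 3
                frames[f"{strand}{offset + 1}"] = str(s[offset:end].translate())
        return frames

    def pst_hits(pst, frames):
        """Frames containing a peptide sequence tag, with match position."""
        return {f: p.find(pst) for f, p in frames.items() if pst in p}
    ```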

  2. Automated Quantitative Nuclear Cardiology Methods

    PubMed Central

    Motwani, Manish; Berman, Daniel S.; Germano, Guido; Slomka, Piotr J.

    2016-01-01

    Quantitative analysis of SPECT and PET has become a major part of nuclear cardiology practice. Current software tools can automatically segment the left ventricle, quantify function, establish myocardial perfusion maps and estimate global and local measures of stress/rest perfusion – all with minimal user input. State-of-the-art automated techniques have been shown to offer high diagnostic accuracy for detecting coronary artery disease, as well as predict prognostic outcomes. This chapter briefly reviews these techniques, highlights several challenges and discusses the latest developments. PMID:26590779

  3. Small unmanned aerial vehicles (micro-UAVs, drones) in plant ecology

    PubMed Central

    Cruzan, Mitchell B.; Weinstein, Ben G.; Grasty, Monica R.; Kohrn, Brendan F.; Hendrickson, Elizabeth C.; Arredondo, Tina M.; Thompson, Pamela G.

    2016-01-01

    Premise of the study: Low-elevation surveys with small aerial drones (micro–unmanned aerial vehicles [UAVs]) may be used for a wide variety of applications in plant ecology, including mapping vegetation over small- to medium-sized regions. We provide an overview of methods and procedures for conducting surveys and illustrate some of these applications. Methods: Aerial images were obtained by flying a small drone along transects over the area of interest. Images were used to create a composite image (orthomosaic) and a digital surface model (DSM). Vegetation classification was conducted manually and using an automated routine. Coverage of an individual species was estimated from aerial images. Results: We created a vegetation map for the entire region from the orthomosaic and DSM, and mapped the density of one species. Comparison of our manual and automated habitat classification confirmed that our mapping methods were accurate. A species with high contrast to the background matrix allowed adequate estimate of its coverage. Discussion: The example surveys demonstrate that small aerial drones are capable of gathering large amounts of information on the distribution of vegetation and individual species with minimal impact to sensitive habitats. Low-elevation aerial surveys have potential for a wide range of applications in plant ecology. PMID:27672518

  4. Development of a New Branded UK Food Composition Database for an Online Dietary Assessment Tool

    PubMed Central

    Carter, Michelle C.; Hancock, Neil; Albar, Salwa A.; Brown, Helen; Greenwood, Darren C.; Hardie, Laura J.; Frost, Gary S.; Wark, Petra A.; Cade, Janet E.

    2016-01-01

    The current UK food composition tables are limited, containing ~3300 mostly generic food and drink items. To reflect the wide range of food products available to British consumers and to potentially improve the accuracy of dietary assessment, a large UK-specific electronic food composition database (FCDB) has been developed. A mapping exercise was conducted that matched micronutrient data from generic food codes to “Back of Pack” data from branded food products using a semi-automated process. After cleaning and processing, version 1.0 of the new FCDB contains 40,274 generic and branded items, with associated data for 120 macronutrients and micronutrients, and 5669 items with portion images. Over 50% of food and drink items were individually mapped to within 10% agreement with the generic food item for energy. Several quality-checking procedures were applied after mapping, including identifying foods above and below the expected range for a particular nutrient within that food group and cross-checking the mapping of items such as concentrated and raw/dried products. The new electronic FCDB has substantially increased the size of the current, publicly available, UK food tables. The FCDB has been incorporated into myfood24, a new fully automated online dietary assessment tool, and a smartphone application for weight loss. PMID:27527214

  5. Development of a New Branded UK Food Composition Database for an Online Dietary Assessment Tool.

    PubMed

    Carter, Michelle C; Hancock, Neil; Albar, Salwa A; Brown, Helen; Greenwood, Darren C; Hardie, Laura J; Frost, Gary S; Wark, Petra A; Cade, Janet E

    2016-08-05

    The current UK food composition tables are limited, containing ~3300 mostly generic food and drink items. To reflect the wide range of food products available to British consumers and to potentially improve the accuracy of dietary assessment, a large UK-specific electronic food composition database (FCDB) has been developed. A mapping exercise was conducted that matched micronutrient data from generic food codes to "Back of Pack" data from branded food products using a semi-automated process. After cleaning and processing, version 1.0 of the new FCDB contains 40,274 generic and branded items, with associated data for 120 macronutrients and micronutrients, and 5669 items with portion images. Over 50% of food and drink items were individually mapped to within 10% agreement with the generic food item for energy. Several quality-checking procedures were applied after mapping, including identifying foods above and below the expected range for a particular nutrient within that food group and cross-checking the mapping of items such as concentrated and raw/dried products. The new electronic FCDB has substantially increased the size of the current, publicly available, UK food tables. The FCDB has been incorporated into myfood24, a new fully automated online dietary assessment tool, and a smartphone application for weight loss.

  6. Automated segmentation of oral mucosa from wide-field OCT images (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Goldan, Ryan N.; Lee, Anthony M. D.; Cahill, Lucas; Liu, Kelly; MacAulay, Calum; Poh, Catherine F.; Lane, Pierre

    2016-03-01

    Optical Coherence Tomography (OCT) can discriminate morphological tissue features important for oral cancer detection such as the presence or absence of basement membrane and epithelial thickness. We previously reported an OCT system employing a rotary-pullback catheter capable of in vivo, rapid, wide-field (up to 90 × 2.5 mm2) imaging in the oral cavity. Due to the size and complexity of these OCT data sets, rapid automated image processing software that immediately displays important tissue features is required to facilitate prompt bedside clinical decisions. We present an automated segmentation algorithm capable of detecting the epithelial surface and basement membrane in 3D OCT images of the oral cavity. The algorithm was trained using volumetric OCT data acquired in vivo from a variety of tissue types and histology-confirmed pathologies spanning normal through cancer (8 sites, 21 patients). The algorithm was validated using a second dataset of similar size and tissue diversity. We demonstrate application of the algorithm to an entire OCT volume to map epithelial thickness, and detection of the basement membrane, over the tissue surface. These maps may be clinically useful for delineating pre-surgical tumor margins, or for biopsy site guidance.
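
    Once the epithelial surface and basement membrane have been segmented, an epithelial thickness map is simply their per-A-scan depth difference scaled by the axial pixel size. A sketch, with the axial resolution as an assumed scan parameter:

    ```python
    import numpy as np

    def epithelial_thickness_map(surface_z, basement_z, axial_res_um=7.5):
        """Thickness over the tissue surface from per-A-scan depths (pixel
        indices) of the epithelial surface and basement membrane. NaN marks
        A-scans where the basement membrane lies above the surface (i.e.
        a segmentation failure)."""
        thickness = np.asarray(basement_z, float) - np.asarray(surface_z, float)
        thickness[thickness < 0] = np.nan
        return thickness * axial_res_um
    ```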

  7. ACME, a GIS tool for Automated Cirque Metric Extraction

    NASA Astrophysics Data System (ADS)

    Spagnolo, Matteo; Pellitero, Ramon; Barr, Iestyn D.; Ely, Jeremy C.; Pellicer, Xavier M.; Rea, Brice R.

    2017-02-01

    Regional scale studies of glacial cirque metrics provide key insights on the (palaeo) environment related to the formation of these erosional landforms. The growing availability of high resolution terrain models means that more glacial cirques can be identified and mapped in the future. However, the extraction of their metrics still largely relies on time-consuming manual techniques or the combination of, more or less obsolete, GIS tools. In this paper, a newly coded toolbox is provided for the automated, and comparatively quick, extraction of 16 key glacial cirque metrics, including length, width, circularity, planar and 3D area, elevation, slope, aspect, plan closure and hypsometry. The set of tools, named ACME (Automated Cirque Metric Extraction), is coded in Python, runs in one of the most commonly used GIS packages (ArcGIS) and has a user-friendly interface. A polygon layer of mapped cirques is required for all metrics, while a Digital Terrain Model and a point layer of cirque threshold midpoints are needed to run some of the tools. Results from ACME are comparable to those from other techniques and can be obtained rapidly, allowing large cirque datasets to be analysed and potentially important regional trends highlighted.
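
    Metrics such as circularity follow directly from the mapped cirque polygons. The sketch below uses shapely and one common definition of circularity (perimeter relative to that of an equal-area circle), which may differ from the definition ACME implements:

    ```python
    import math
    from shapely.geometry import Polygon

    def cirque_metrics(outline_coords):
        """Planar area, perimeter and circularity of a cirque outline.
        Circularity here is perimeter divided by the perimeter of a circle
        of equal area (1.0 = perfectly circular)."""
        poly = Polygon(outline_coords)
        area, perimeter = poly.area, poly.length
        circle_perimeter = 2.0 * math.pi * math.sqrt(area / math.pi)
        return {"area": area, "perimeter": perimeter,
                "circularity": perimeter / circle_perimeter}
    ```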

  8. FluidCam 1&2 - UAV-based Fluid Lensing Instruments for High-Resolution 3D Subaqueous Imaging and Automated Remote Biosphere Assessment of Reef Ecosystems

    NASA Astrophysics Data System (ADS)

    Chirayath, V.; Instrella, R.

    2016-02-01

    We present NASA ESTO FluidCam 1 & 2, Visible and NIR Fluid-Lensing-enabled imaging payloads for Unmanned Aerial Vehicles (UAVs). Developed as part of a focused 2014 earth science technology grant, FluidCam 1&2 are Fluid-Lensing-based computational optical imagers designed for automated 3D mapping and remote sensing of underwater coastal targets from airborne platforms. Fluid Lensing has been used to map underwater reefs in 3D in American Samoa and Hamelin Pool, Australia from UAV platforms at sub-cm scale, which has proven a valuable tool in modern marine research for marine biosphere assessment and conservation. We share FluidCam 1&2 instrument validation and testing results as well as preliminary processed data from field campaigns. Petabyte-scale aerial survey efforts using Fluid Lensing to image at-risk reefs demonstrate broad applicability to large-scale automated species identification, morphology studies and reef ecosystem characterization for shallow marine environments and terrestrial biospheres, of crucial importance to improving bathymetry data for physical oceanographic models and understanding climate change's impact on coastal zones, global oxygen production, carbon sequestration.

  9. FluidCam 1&2 - UAV-Based Fluid Lensing Instruments for High-Resolution 3D Subaqueous Imaging and Automated Remote Biosphere Assessment of Reef Ecosystems

    NASA Astrophysics Data System (ADS)

    Chirayath, V.

    2015-12-01

    We present NASA ESTO FluidCam 1 & 2, Visible and NIR Fluid-Lensing-enabled imaging payloads for Unmanned Aerial Vehicles (UAVs). Developed as part of a focused 2014 earth science technology grant, FluidCam 1&2 are Fluid-Lensing-based computational optical imagers designed for automated 3D mapping and remote sensing of underwater coastal targets from airborne platforms. Fluid Lensing has been used to map underwater reefs in 3D in American Samoa and Hamelin Pool, Australia from UAV platforms at sub-cm scale, which has proven a valuable tool in modern marine research for marine biosphere assessment and conservation. We share FluidCam 1&2 instrument validation and testing results as well as preliminary processed data from field campaigns. Petabyte-scale aerial survey efforts using Fluid Lensing to image at-risk reefs demonstrate broad applicability to large-scale automated species identification, morphology studies and reef ecosystem characterization for shallow marine environments and terrestrial biospheres, of crucial importance to improving bathymetry data for physical oceanographic models and understanding climate change's impact on coastal zones, global oxygen production, carbon sequestration.

  10. Whole-brain activity mapping onto a zebrafish brain atlas.

    PubMed

    Randlett, Owen; Wee, Caroline L; Naumann, Eva A; Nnaemeka, Onyeka; Schoppik, David; Fitzgerald, James E; Portugues, Ruben; Lacoste, Alix M B; Riegler, Clemens; Engert, Florian; Schier, Alexander F

    2015-11-01

    In order to localize the neural circuits involved in generating behaviors, it is necessary to assign activity onto anatomical maps of the nervous system. Using brain registration across hundreds of larval zebrafish, we have built an expandable open-source atlas containing molecular labels and definitions of anatomical regions, the Z-Brain. Using this platform and immunohistochemical detection of phosphorylated extracellular signal–regulated kinase (ERK) as a readout of neural activity, we have developed a system to create and contextualize whole-brain maps of stimulus- and behavior-dependent neural activity. This mitogen-activated protein kinase (MAP)-mapping assay is technically simple, and data analysis is completely automated. Because MAP-mapping is performed on freely swimming fish, it is applicable to studies of nearly any stimulus or behavior. Here we demonstrate our high-throughput approach using pharmacological, visual and noxious stimuli, as well as hunting and feeding. The resultant maps outline hundreds of areas associated with behaviors.

  11. California: San Joaquin Valley

    Atmospheric Science Data Center

    2014-05-15

    ... This illustration features Multi-angle Imaging SpectroRadiometer ... quadrant is a map of haze amount determined from automated processing of the MISR imagery. Low amounts of haze are shown in blue, and a ...

  12. A Combined Approach to Cartographic Displacement for Buildings Based on Skeleton and Improved Elastic Beam Algorithm

    PubMed Central

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727

  13. TU-D-209-03: Alignment of the Patient Graphic Model Using Fluoroscopic Images for Skin Dose Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oines, A; Oines, A; Kilian-Meneghin, J

    2016-06-15

    Purpose: The Dose Tracking System (DTS) was developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. A color map on a 3D graphic of the patient represents the cumulative dose distribution on the skin. Automated image correlation algorithms are described which use the fluoroscopic procedure images to align and scale the patient graphic for more accurate dose mapping. Methods: Currently, the DTS employs manual patient graphic selection and alignment. To improve the accuracy of dose mapping and automate the software, various methods are explored to extract information about the beam location and patient morphology from the procedure images. To match patient anatomy with a reference projection image, preprocessing is first used, including edge enhancement, edge detection, and contour detection. Template matching algorithms from OpenCV are then employed to find the location of the beam. Once a match is found, the reference graphic is scaled and rotated to fit the patient, using image registration correlation functions in Matlab. The algorithm runs correlation functions for all points and maps all correlation confidences to a surface map. The highest point of correlation is used for alignment and scaling. The transformation data are saved for later model scaling. Results: Anatomic recognition is used to find matching features between model and image, and image registration correlation provides for alignment and scaling at any rotation angle with less than one-second runtime, and at noise levels in excess of 150% of those found in normal procedures. Conclusion: The algorithm provides the necessary scaling and alignment tools to improve the accuracy of dose distribution mapping on the patient graphic with the DTS. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
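
    The matching step described (edge enhancement, then OpenCV template matching, then selection of the highest-correlation location) can be sketched as follows; the Canny thresholds are illustrative and the inputs are assumed to be 8-bit grayscale images:

    ```python
    import cv2

    def locate_beam(frame, template):
        """Best match of a reference patch in a fluoroscopic frame.
        Returns the top-left corner of the match and a confidence score."""
        edges = cv2.Canny(frame, 50, 150)         # edge-enhance the frame
        tmpl_edges = cv2.Canny(template, 50, 150)
        scores = cv2.matchTemplate(edges, tmpl_edges, cv2.TM_CCOEFF_NORMED)
        _, confidence, _, top_left = cv2.minMaxLoc(scores)
        return top_left, confidence
    ```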

  14. Managing mapping data using commercial data base management software.

    USGS Publications Warehouse

    Elassal, A.A.

    1985-01-01

    Electronic computers are involved in almost every aspect of the map making process. This involvement has become so thorough that it is practically impossible to find a recently developed process or device in the mapping field which does not employ digital processing in some form or another. This trend, which has been evolving over two decades, is accelerated by the significant improvements in capability, reliability, and cost-effectiveness of electronic devices. Computerized mapping processes and devices share a common need for machine-readable data. Integrating groups of these components into automated mapping systems requires careful planning for data flow amongst them. Exploring the utility of commercial data base management software to assist in this task is the subject of this paper. -Author

  15. High throughput light absorber discovery, Part 1: An algorithm for automated tauc analysis

    DOE PAGES

    Suram, Santosh K.; Newhouse, Paul F.; Gregoire, John M.

    2016-09-23

    High-throughput experimentation provides efficient mapping of composition-property relationships, and its implementation for the discovery of optical materials enables advancements in solar energy and other technologies. In a high-throughput pipeline, automated data processing algorithms are often required to match experimental throughput, and we present an automated Tauc analysis algorithm for estimating band gap energies from optical spectroscopy data. The algorithm mimics the judgment of an expert scientist, which is demonstrated through its application to a variety of high-throughput spectroscopy data, including the identification of indirect or direct band gaps in Fe2O3, Cu2V2O7, and BiVO4. Here, the applicability of the algorithm to estimate a range of band gap energies for various materials is demonstrated by a comparison of direct-allowed band gaps estimated by expert scientists and by the automated algorithm for 60 optical spectra.
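
    To make the Tauc step concrete, the sketch below estimates a direct-allowed band gap by fitting the linear edge of a (alpha*h*nu)^2 plot and extrapolating to zero. The fixed fitting window is a simplifying assumption; the published algorithm selects the linear segment adaptively, mimicking expert judgment.

```python
# Minimal sketch of a Tauc-plot band gap estimate for a direct-allowed transition.
import numpy as np

def tauc_direct_gap(energy_ev, alpha, window=20):
    y = (alpha * energy_ev) ** 2            # direct-allowed Tauc variable (alpha*E)^2
    slope = np.gradient(y, energy_ev)
    k = int(np.argmax(slope))               # steepest point of the absorption edge
    hi = min(len(y), k + window)
    m, b = np.polyfit(energy_ev[k:hi], y[k:hi], 1)  # line through the linear segment
    return -b / m                           # x-intercept = band gap estimate (eV)

# Synthetic spectrum with a 2.1 eV direct gap, for illustration only.
E = np.linspace(1.5, 3.5, 400)
alpha = np.sqrt(np.clip(E - 2.1, 0, None)) / E
print(f"estimated gap: {tauc_direct_gap(E, alpha):.2f} eV")
```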

  16. Towards the automated reduction and calibration of SCUBA data from the James Clerk Maxwell Telescope

    NASA Astrophysics Data System (ADS)

    Jenness, T.; Stevens, J. A.; Archibald, E. N.; Economou, F.; Jessop, N. E.; Robson, E. I.

    2002-10-01

    The Submillimetre Common User Bolometer Array (SCUBA) instrument has been operating on the James Clerk Maxwell Telescope (JCMT) since 1997. The data archive is now sufficiently large that it can be used to investigate instrumental properties and the variability of astronomical sources. This paper describes the automated calibration and reduction scheme used to process the archive data, with particular emphasis on `jiggle-map' observations of compact sources. We demonstrate the validity of our automated approach at both 850 and 450 μm, and apply it to several of the JCMT secondary flux calibrators. We determine light curves for the variable sources IRC +10216 and OH 231.8. This automation is made possible by using the ORAC-DR data reduction pipeline, a flexible and extensible data reduction pipeline that is used on the United Kingdom Infrared Telescope (UKIRT) and the JCMT.

  17. Automating Flood Hazard Mapping Methods for Near Real-time Storm Surge Inundation and Vulnerability Assessment

    NASA Astrophysics Data System (ADS)

    Weigel, A. M.; Griffin, R.; Gallagher, D.

    2015-12-01

    Storm surge has enough destructive power to damage buildings and infrastructure, erode beaches, and threaten human life across large geographic areas, hence posing the greatest threat of all the hurricane hazards. The United States Gulf of Mexico has proven vulnerable to hurricanes, as it has been hit by some of the most destructive hurricanes on record. With projected rises in sea level and increases in hurricane activity, there is a need to better understand the associated risks for disaster mitigation, preparedness, and response. GIS has become a critical tool in enhancing disaster planning, risk assessment, and emergency response by communicating spatial information through a multi-layer approach. However, there is a need for a near real-time method of identifying areas with a high risk of being impacted by storm surge. Research was conducted alongside Baron, a private industry weather enterprise, to facilitate automated modeling and visualization of storm surge inundation and vulnerability on a near real-time basis. This research successfully automated current flood hazard mapping techniques using a GIS framework written in a Python programming environment, and displayed the resulting data through an Application Programming Interface (API). Data used for this methodology included high resolution topography, NOAA Probabilistic Surge model outputs parsed from Rich Site Summary (RSS) feeds, and the NOAA census-tract-level Social Vulnerability Index (SoVI). The development process required extensive data processing and management to provide high resolution visualizations of potential flooding and population vulnerability in a timely manner. The accuracy of the developed methodology was assessed using Hurricane Isaac as a case study, which, through a USGS and NOAA partnership, contained ample data for statistical analysis. This research successfully created a fully automated, near real-time method for mapping high resolution storm surge inundation and vulnerability for the Gulf of Mexico, and improved the accuracy and resolution of the Probabilistic Storm Surge model.
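
    The core inundation test in such a pipeline can be sketched briefly: a probabilistic surge height surface is compared against ground elevation to flag potentially flooded cells. The array names and GeoTIFF file names below are assumptions; the actual system also parses NOAA RSS feeds and joins census-tract SoVI scores, which are not shown.

```python
# Minimal sketch of a raster inundation test (illustrative file names).
import numpy as np
import rasterio  # common geospatial raster I/O library

with rasterio.open("topography.tif") as dem_src, \
     rasterio.open("psurge_height.tif") as surge_src:
    dem = dem_src.read(1).astype(float)          # ground elevation (m)
    surge = surge_src.read(1).astype(float)      # surge water level (m)

inundated = surge > dem                          # boolean flood mask
depth = np.where(inundated, surge - dem, 0.0)    # flood depth where wet

print(f"{inundated.mean():.1%} of cells flagged as potentially inundated")
```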

  18. An automated dose tracking system for adaptive radiation therapy.

    PubMed

    Liu, Chang; Kim, Jinkoo; Kumarasiri, Akila; Mayyas, Essa; Brown, Stephen L; Wen, Ning; Siddiqui, Farzan; Chetty, Indrin J

    2018-02-01

    The implementation of adaptive radiation therapy (ART) into routine clinical practice is technically challenging and requires significant resources to perform and validate each process step. The objective of this report is to identify the key components of ART, to illustrate how a specific automated procedure improves efficiency, and to facilitate the routine clinical application of ART. Patient image data were exported from a clinical database and converted to an intermediate format for point-wise dose tracking and accumulation. The process was automated using in-house developed software containing three modularized components: an ART engine, user interactive tools, and integration tools. The ART engine conducts computing tasks using the following modules: data importing, image pre-processing, dose mapping, dose accumulation, and reporting. In addition, custom graphical user interfaces (GUIs) were developed to allow user interaction with select processes such as deformable image registration (DIR). A commercial scripting application programming interface was used to incorporate automated dose calculation for application in routine treatment planning. Each module was considered an independent program, written in C++ or C#, running in a distributed Windows environment, scheduled and monitored by integration tools. The automated tracking system was retrospectively evaluated for 20 patients with prostate cancer and 96 patients with head and neck cancer, under institutional review board (IRB) approval. In addition, the system was evaluated prospectively using 4 patients with head and neck cancer. Altogether, 780 prostate dose fractions and 2586 head and neck cancer dose fractions were processed, including DIR and dose mapping. On average, the daily cumulative dose was computed in 3 h, and the manual work was limited to 13 min per case, with approximately 10% of cases requiring an additional 10 min for image registration refinement. An efficient and convenient dose tracking system for ART in the clinical setting is presented. The software and automated processes were rigorously evaluated and validated using patient image datasets. Automation of the various procedures has improved efficiency significantly, allowing for the routine clinical application of ART for improving radiation therapy effectiveness. Copyright © 2017 Elsevier B.V. All rights reserved.
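
    The dose mapping and accumulation modules can be illustrated with a minimal sketch: a daily dose grid is pulled back onto the planning grid through a deformation vector field (DVF) and summed. The DVF here is a toy identity-plus-shift field; in the reported system it would come from deformable image registration (DIR), and the array names are assumptions.

```python
# Minimal sketch of point-wise dose accumulation through a DVF.
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate(dose_ref, dose_daily, dvf):
    """dvf[k] holds the k-th coordinate of the mapped position for each voxel."""
    mapped = map_coordinates(dose_daily, dvf, order=1, mode="nearest")
    return dose_ref + mapped

shape = (32, 32, 32)
ref = np.zeros(shape)
daily = np.random.rand(*shape) * 2.0        # toy single-fraction dose (Gy)
grid = np.indices(shape).astype(float)
dvf = grid + 0.5                            # toy deformation: half-voxel shift
ref = accumulate(ref, daily, dvf)
print(f"cumulative max dose after one fraction: {ref.max():.2f} Gy")
```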

  19. Automated T2 relaxometry of the hippocampus for temporal lobe epilepsy.

    PubMed

    Winston, Gavin P; Vos, Sjoerd B; Burdett, Jane L; Cardoso, M Jorge; Ourselin, Sebastien; Duncan, John S

    2017-09-01

    Hippocampal sclerosis (HS), the most common cause of refractory temporal lobe epilepsy, is associated with hippocampal volume loss and increased T2 signal. These can be identified on quantitative imaging with hippocampal volumetry and T2 relaxometry. Although hippocampal segmentation for volumetry has been automated, T2 relaxometry currently involves subjective and time-consuming manual delineation of regions of interest. In this work, we develop and validate an automated technique for hippocampal T2 relaxometry. Fifty patients with unilateral or bilateral HS and 50 healthy controls underwent T1-weighted and dual-echo fast recovery fast spin echo scans. Hippocampi were automatically segmented using a multi-atlas-based segmentation algorithm (STEPS) and a template database. Voxelwise T2 maps were determined using a monoexponential fit. The hippocampal segmentations were registered to the T2 maps and eroded to reduce partial volume effect. Voxels with T2 > 170 msec were excluded to minimize cerebrospinal fluid (CSF) contamination. Manual determination of T2 values was performed twice in each subject. Twenty controls underwent repeat scans to assess interscan reproducibility. Hippocampal T2 values were reliably determined using the automated method. There was a significant ipsilateral increase in T2 values in HS (p < 0.001), and a smaller but significant contralateral increase. The combination of hippocampal volumes and T2 values separated the groups well. There was a strong correlation between automated and manual methods for hippocampal T2 measurement (0.917 left, 0.896 right, both p < 0.001). Interscan reproducibility was superior for automated compared to manual measurements. Automated hippocampal segmentation can be reliably extended to the determination of hippocampal T2 values, and a combination of hippocampal volumes and T2 values can separate subjects with HS from healthy controls. There is good agreement with manual measurements, and the technique is more reproducible on repeat scans than manual measurement. This protocol can be readily introduced into a clinical workflow for the assessment of patients with focal epilepsy. © 2017 The Authors. Epilepsia published by Wiley Periodicals, Inc. on behalf of International League Against Epilepsy.
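
    With exactly two echoes, the voxelwise monoexponential fit reduces to a closed form, which the sketch below implements. The 170 msec CSF cutoff follows the abstract; the echo times and image arrays are illustrative assumptions.

```python
# Minimal sketch of dual-echo T2 mapping: T2 = (TE2 - TE1) / ln(S1 / S2).
import numpy as np

def dual_echo_t2(s1, s2, te1, te2, csf_cutoff=170.0):
    """T2 (msec) per voxel from signals s1, s2 at echo times te1 < te2 (msec)."""
    with np.errstate(divide="ignore", invalid="ignore"):
        t2 = (te2 - te1) / np.log(s1 / s2)
    t2 = np.where(np.isfinite(t2) & (t2 > 0), t2, np.nan)
    t2[t2 > csf_cutoff] = np.nan        # exclude CSF-contaminated voxels
    return t2

s1 = np.random.rand(64, 64) * 1000 + 500
s2 = s1 * np.exp(-(80.0 - 30.0) / 100.0)   # synthetic tissue with T2 = 100 msec
print(np.nanmean(dual_echo_t2(s1, s2, te1=30.0, te2=80.0)))  # ~= 100
```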

  20. MapFactory - Towards a mapping design pattern for big geospatial data

    NASA Astrophysics Data System (ADS)

    Rautenbach, Victoria; Coetzee, Serena

    2018-05-01

    With big geospatial data emerging, cartographers and geographic information scientists have to find new ways of dealing with the volume, variety, velocity, and veracity (4Vs) of the data. This requires the development of tools that allow processing, filtering, analysing, and visualising of big data through multidisciplinary collaboration. In this paper, we present the MapFactory design pattern that will be used for the creation of different maps according to the (input) design specification for big geospatial data. The design specification is based on elements from ISO19115-1:2014 Geographic information - Metadata - Part 1: Fundamentals that would guide the design and development of the map or set of maps to be produced. The results of the exploratory research suggest that the MapFactory design pattern will help with software reuse and communication. The MapFactory design pattern will aid software developers to build the tools that are required to automate map making with big geospatial data. The resulting maps would assist cartographers and others to make sense of big geospatial data.

  1. Automated segmentation of midbrain structures with high iron content.

    PubMed

    Garzón, Benjamín; Sitnikov, Rouslan; Bäckman, Lars; Kalpouzos, Grégoria

    2018-04-15

    The substantia nigra (SN), the subthalamic nucleus (STN), and the red nucleus (RN) are midbrain structures of ample interest in many neuroimaging studies, which may benefit from the availability of automated segmentation methods. The high iron content of these structures affords them high contrast in quantitative susceptibility mapping (QSM) images. We present a novel segmentation method that leverages the information of these images to produce automated segmentations of the SN, STN, and RN. The algorithm builds a map of spatial priors for the structures by non-linearly registering a set of manually-traced training labels to the midbrain. The priors are used to inform a Gaussian mixture model of the image intensities, with smoothness constraints imposed to ensure anatomical plausibility. The method was validated on manual segmentations from a sample of 40 healthy younger and older subjects. Average Dice scores were 0.81 (0.05) for the SN, 0.66 (0.14) for the STN and 0.88 (0.04) for the RN in the left hemisphere, and similar values were obtained for the right hemisphere. In all structures, volumes of manual and automatically obtained segmentations were significantly correlated. The algorithm showed lower accuracy on R2* and T2-weighted fluid-attenuated inversion recovery (FLAIR) images, which are also sensitive to iron content. To illustrate an application of the method, we show that the automated segmentations were comparable to the manual ones regarding detection of age-related differences in putative iron content. Copyright © 2017 Elsevier Inc. All rights reserved.
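
    The atlas-informed mixture idea can be sketched compactly: voxelwise spatial priors reweight the class responsibilities in the E-step of a Gaussian mixture fit. This toy version omits the paper's nonlinear registration and smoothness (anatomical plausibility) constraints, and the arrays, class count, and data are purely illustrative.

```python
# Minimal sketch of a spatial-prior-weighted Gaussian mixture (EM).
import numpy as np

def atlas_gmm(intensity, priors, n_iter=25):
    """intensity: (V,) QSM values; priors: (V, K) spatial prior per class."""
    V, K = priors.shape
    mu = np.linspace(intensity.min(), intensity.max(), K)
    var = np.full(K, intensity.var())
    for _ in range(n_iter):
        # E-step: Gaussian likelihood times the atlas prior, normalized per voxel.
        lik = np.exp(-0.5 * (intensity[:, None] - mu) ** 2 / var) / np.sqrt(var)
        resp = lik * priors
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted mean and variance per class.
        w = resp.sum(axis=0)
        mu = (resp * intensity[:, None]).sum(axis=0) / w
        var = (resp * (intensity[:, None] - mu) ** 2).sum(axis=0) / w + 1e-6
    return resp.argmax(axis=1)   # hard labels (e.g. background vs. nucleus)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 100)])
p = np.column_stack([np.full(600, 0.8), np.full(600, 0.2)])  # toy priors
print(np.bincount(atlas_gmm(x, p)))
```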

  2. Mapping Cortical Laminar Structure in the 3D BigBrain.

    PubMed

    Wagstyl, Konrad; Lepage, Claude; Bludau, Sebastian; Zilles, Karl; Fletcher, Paul C; Amunts, Katrin; Evans, Alan C

    2018-07-01

    Histological sections offer high spatial resolution to examine laminar architecture of the human cerebral cortex; however, they are restricted by being 2D, hence only regions with sufficiently optimal cutting planes can be analyzed. Conversely, noninvasive neuroimaging approaches are whole brain but have relatively low resolution. Consequently, correct 3D cross-cortical patterns of laminar architecture have never been mapped in histological sections. We developed an automated technique to identify and analyze laminar structure within the high-resolution 3D histological BigBrain. We extracted white matter and pial surfaces, from which we derived histologically verified surfaces at the layer I/II boundary and within layer IV. Layer IV depth was strongly predicted by cortical curvature but varied between areas. This fully automated 3D laminar analysis is an important requirement for bridging high-resolution 2D cytoarchitecture and in vivo 3D neuroimaging. It lays the foundation for in-depth, whole-brain analyses of cortical layering.

  3. Shotgun Optical Maps of the Whole Escherichia coli O157:H7 Genome

    PubMed Central

    Lim, Alex; Dimalanta, Eileen T.; Potamousis, Konstantinos D.; Yen, Galex; Apodoca, Jennifer; Tao, Chunhong; Lin, Jieyi; Qi, Rong; Skiadas, John; Ramanathan, Arvind; Perna, Nicole T.; Plunkett, Guy; Burland, Valerie; Mau, Bob; Hackett, Jeremiah; Blattner, Frederick R.; Anantharaman, Thomas S.; Mishra, Bhubaneswar; Schwartz, David C.

    2001-01-01

    We have constructed NheI and XhoI optical maps of Escherichia coli O157:H7 solely from genomic DNA molecules to provide a uniquely valuable scaffold for contig closure and sequence validation. E. coli O157:H7 is a common pathogen found in contaminated food and water. Our approach obviated the need for the analysis of clones, PCR products, and hybridizations, because maps were constructed from ensembles of single DNA molecules. Shotgun sequencing of bacterial genomes remains labor-intensive, despite advances in sequencing technology. This is partly due to manual intervention required during the last stages of finishing. The applicability of optical mapping to this problem was enhanced by advances in machine vision techniques that improved mapping throughput and created a path to full automation of mapping. Comparisons were made between maps and sequence data that characterized sequence gaps and guided nascent assemblies. PMID:11544203

  4. Automated mapping of the ocean floor using the theory of intrinsic random functions of order k

    USGS Publications Warehouse

    David, M.; Crozel, D.; Robb, James M.

    1986-01-01

    High-quality contour maps can be computer-drawn from single-track echo-sounding data by combining Universal Kriging and the theory of intrinsic random functions of order k (IRF-k). These methods interpolate values among the closely spaced points that lie along relatively widely spaced lines. The technique provides a variance which can be contoured as a quantitative measure of map precision. The technique can be used to evaluate alternative survey trackline configurations and data collection intervals, and can be applied to other types of oceanographic data. © 1986 D. Reidel Publishing Company.
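
    The gridding problem itself, interpolating dense along-track soundings onto a regular grid, can be sketched as below. A thin-plate-spline RBF stands in for Universal Kriging with IRF-k, which would require a geostatistics package and, unlike this stand-in, also yields the error variance the abstract describes; the track data are synthetic.

```python
# Minimal sketch of gridding along-track depth soundings (RBF stand-in for kriging).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
# Dense points along a few widely spaced survey lines.
tracks = np.concatenate([np.column_stack([np.full(100, x0), np.linspace(0, 10, 100)])
                         for x0 in (0.0, 4.0, 8.0)])
depth = -50 + 3 * np.sin(tracks[:, 0]) + 0.5 * tracks[:, 1] + rng.normal(0, 0.1, 300)

interp = RBFInterpolator(tracks, depth, kernel="thin_plate_spline", smoothing=1.0)
gx, gy = np.meshgrid(np.linspace(0, 8, 50), np.linspace(0, 10, 50))
grid_depth = interp(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(grid_depth.shape)  # contour-ready 50x50 depth grid
```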

  5. Land use studies with Skylab data, August 1974. [Baltimore, Maryland and Washington, D.C.

    NASA Technical Reports Server (NTRS)

    Simonett, D. S. (Principal Investigator); Rohde, W. G.

    1974-01-01

    The author has identified the following significant results. Capabilities of Skylab photographic data suggest significant applications for: (1) identification and mapping of all primary, most secondary, and many tertiary land use classes; (2) stratification of the landscape for more detailed sampling; and (3) rapid updating of existing land use and vegetation maps at scales of 1:25,000 and smaller with manual interpretation techniques. Automated thematic mapping of land use categories with electronic data processing techniques is feasible with the S-192 multispectral scanner, despite the high noise levels in many channels.

  6. Towards an EO-based Landslide Web Mapping and Monitoring Service

    NASA Astrophysics Data System (ADS)

    Hölbling, Daniel; Weinke, Elisabeth; Albrecht, Florian; Eisank, Clemens; Vecchiotti, Filippo; Friedl, Barbara; Kociu, Arben

    2017-04-01

    National and regional authorities and infrastructure maintainers in mountainous regions require accurate knowledge of the location and spatial extent of landslides for hazard and risk management. Information on landslides is often collected by a combination of ground surveying and manual image interpretation following landslide triggering events. However, the high workload and limited time for data acquisition result in a trade-off between completeness, accuracy and detail. Remote sensing data offers great potential for mapping and monitoring landslides in a fast and efficient manner. While facing an increased availability of high-quality Earth Observation (EO) data and new computational methods, there is still a lack in science-policy interaction and in providing innovative tools and methods that can easily be used by stakeholders and users to support their daily work. Taking up this issue, we introduce an innovative and user-oriented EO-based web service for landslide mapping and monitoring. Three central design components of the service are presented: (1) the user requirements definition, (2) the semi-automated image analysis methods implemented in the service, and (3) the web mapping application with its responsive user interface. User requirements were gathered during semi-structured interviews with regional authorities. The potential users were asked if and how they employ remote sensing data for landslide investigation, and what their expectations of a landslide web mapping service are regarding reliability and usability. The interviews revealed the capability of our service for landslide documentation and mapping as well as monitoring of selected landslide sites, for example to complete and update landslide inventory maps. In addition, the users see a considerable potential for landslide rapid mapping. The user requirements analysis served as basis for the service concept definition. Optical satellite imagery from different high resolution (HR) and very high resolution (VHR) sensors, e.g. Landsat, Sentinel-2, SPOT-5, WorldView-2/3, was acquired for different study areas in the Alps. Object-based image analysis (OBIA) methods were used for semi-automated mapping of landslides. Selected mapping routines and results, including a step-by-step guidance, are integrated in the service by means of a web processing chain. This allows the user to gain insights into the service idea, the potential of semi-automated mapping methods, and the applicability of various satellite data for specific landslide mapping tasks. Moreover, an easy-to-use and guided classification workflow, which includes image segmentation, statistical classification and manual editing options, enables the user to perform his/her own analyses. For validation, the classification results can be downloaded or compared against uploaded reference data using the implemented tools. Furthermore, users can compare the classification results to freely available data such as OpenStreetMap to identify landslide-affected infrastructure (e.g. roads, buildings). They also can upload infrastructure data available at their organization for specific assessments, or monitor the evolution of selected landslides over time. Further actions will include the validation of the service in collaboration with stakeholders, decision makers and experts, which is essential to produce landslide information products that can assist the targeted management of natural hazards, and the evaluation of the potential towards the development of an operational Copernicus downstream service.

  7. Practical interpretation of CYP2D6 haplotypes: Comparison and integration of automated and expert calling.

    PubMed

    Ruaño, Gualberto; Kocherla, Mohan; Graydon, James S; Holford, Theodore R; Makowski, Gregory S; Goethe, John W

    2016-05-01

    We describe a population genetic approach to compare samples interpreted with expert calling (EC) versus automated calling (AC) for CYP2D6 haplotyping. The analysis represents 4812 haplotype calls based on signal data generated by the Luminex xMap analyzers from 2406 patients referred to a high-complexity molecular diagnostics laboratory for CYP450 testing. DNA was extracted from buccal swabs. We compared the results of expert calls (EC) and automated calls (AC) with regard to haplotype number and frequency. The ratio of EC to AC was 1:3. Haplotype frequencies from EC and AC samples were convergent across haplotypes, and their distribution was not statistically different between the groups. Most duplications required EC, as only expansions with homozygous or hemizygous haplotypes could be called automatically. High-complexity laboratories can offer interpretation equivalent to automated calling for non-expanded CYP2D6 loci, and superior interpretation for duplications. We have validated scientific expert calling specified by scoring rules as standard operating procedure, integrated with an automated calling algorithm. The integration of EC with AC is a practical strategy for CYP2D6 clinical haplotyping. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Automated adipose study for assessing cancerous human breast tissue using optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gan, Yu; Yao, Xinwen; Chang, Ernest W.; Bin Amir, Syed A.; Hibshoosh, Hanina; Feldman, Sheldon; Hendon, Christine P.

    2017-02-01

    Breast cancer is the third leading cause of death in women in the United States. In human breast tissue, adipose cells are infiltrated or replaced by cancer cells during the development of breast tumors. Therefore, an adipose map can be an indicator for identifying cancerous regions. We developed an automated classification method to generate adipose maps within the human breast. To facilitate the automated classification, we first mask the B-scans from OCT volumes by comparing the signal-to-noise ratio with a threshold. Then, the image was divided into multiple blocks with a size of 30 pixels by 30 pixels. In each block, we extracted texture features such as local standard deviation, entropy, homogeneity, and coarseness. The features of each block were input to a probabilistic model, a relevance vector machine (RVM), which was trained prior to the experiment, to classify tissue types. For each block within the B-scan, the RVM identified the regions with adipose tissue. We calculated the adipose ratio as the number of blocks identified as adipose over the total number of blocks within the B-scan. We obtained OCT images from patients (n = 19) at Columbia Medical Center. We automatically generated adipose maps from 24 B-scans, including normal samples (n = 16) and cancerous samples (n = 8). We found that adipose regions show an isolated pattern in cancerous tissue and a clustered pattern in normal tissue. Moreover, the adipose ratio in normal tissue (52.30 ± 29.42%) was higher than that in cancerous tissue (12.41 ± 10.07%).
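
    The block-wise feature extraction can be sketched as follows. The 30x30 block size follows the abstract; only two of the four texture features are computed here, and the final relevance vector machine (RVM) classifier is not included (a trained probabilistic classifier of any kind could consume these features), so everything beyond the block loop is an assumption.

```python
# Minimal sketch of block-wise texture features for adipose classification.
import numpy as np
from scipy.stats import entropy

def block_features(bscan, block=30):
    feats = []
    n_rows, n_cols = bscan.shape[0] // block, bscan.shape[1] // block
    for r in range(n_rows):
        for c in range(n_cols):
            patch = bscan[r*block:(r+1)*block, c*block:(c+1)*block]
            hist, _ = np.histogram(patch, bins=32, density=True)
            feats.append([patch.std(),            # local standard deviation
                          entropy(hist + 1e-12)]) # grey-level entropy
    return np.array(feats)

bscan = np.random.rand(300, 600)                  # toy masked B-scan
X = block_features(bscan)
# adipose_ratio = (clf.predict(X) == ADIPOSE).mean()  # with a trained classifier
print(X.shape)  # (n_blocks, n_features)
```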

  9. Whole-brain activity mapping onto a zebrafish brain atlas

    PubMed Central

    Randlett, Owen; Wee, Caroline L.; Naumann, Eva A.; Nnaemeka, Onyeka; Schoppik, David; Fitzgerald, James E.; Portugues, Ruben; Lacoste, Alix M.B.; Riegler, Clemens; Engert, Florian; Schier, Alexander F.

    2015-01-01

    In order to localize the neural circuits involved in generating behaviors, it is necessary to assign activity onto anatomical maps of the nervous system. Using brain registration across hundreds of larval zebrafish, we have built an expandable open source atlas containing molecular labels and anatomical region definitions, the Z-Brain. Using this platform and immunohistochemical detection of phosphorylated-Extracellular signal-regulated kinase (ERK/MAPK) as a readout of neural activity, we have developed a system to create and contextualize whole brain maps of stimulus- and behavior-dependent neural activity. This MAP-Mapping (Mitogen Activated Protein kinase – Mapping) assay is technically simple, fast, inexpensive, and data analysis is completely automated. Since MAP-Mapping is performed on fish that are freely swimming, it is applicable to nearly any stimulus or behavior. We demonstrate the utility of our high-throughput approach using hunting/feeding, pharmacological, visual and noxious stimuli. The resultant maps outline hundreds of areas associated with behaviors. PMID:26778924

  10. Hyperspectral image analysis for plant stress detection

    USDA-ARS?s Scientific Manuscript database

    Abiotic and disease-induced stress significantly reduces plant productivity. Automated on-the-go mapping of plant stress allows timely intervention and mitigating of the problem before critical thresholds are exceeded, thereby, maximizing productivity. A hyperspectral camera analyzed the spectral ...

  11. Self Consistent Bathymetric Mapping From Robotic Vehicles in the Deep Ocean

    DTIC Science & Technology

    2005-06-01

    …that have been aligned in a consistent manner. Experimental results from the fully automated processing of a multibeam survey over the TAG hydrothermal structure at the Mid-Atlantic Ridge are presented to validate the proposed method.

  12. From Open Geographical Data to Tangible Maps: Improving the Accessibility of Maps for Visually Impaired People

    NASA Astrophysics Data System (ADS)

    Ducasse, J.; Macé, M.; Jouffrais, C.

    2015-08-01

    Visual maps must be transcribed into (interactive) raised-line maps to be accessible for visually impaired people. However, these tactile maps suffer from several shortcomings: they are long and expensive to produce, they cannot display a large amount of information, and they are not dynamically modifiable. A number of methods have been developed to automate the production of raised-line maps, but there is not yet any tactile map editor on the market. Tangible interactions proved to be an efficient way to help a visually impaired user manipulate spatial representations. Contrary to raised-line maps, tangible maps can be autonomously constructed and edited. In this paper, we present the scenarios and the main expected contributions of the AccessiMap project, which is based on the availability of many sources of open spatial data: (1) facilitating the production of interactive tactile maps with the development of an open-source web-based editor; (2) investigating the use of tangible interfaces for the autonomous construction and exploration of a map by a visually impaired user.

  13. NASA Goddard Space Flight Center Robotic Processing System Program Automation Systems, volume 2

    NASA Technical Reports Server (NTRS)

    Dobbs, M. E.

    1991-01-01

    Topics related to robot-operated materials processing in space (RoMPS) are presented in view graph form. Some of the areas covered include: (1) mission requirements; (2) automation management system; (3) Space Transportation System (STS) Hitchhiker Payload; (4) Spacecraft Command Language (SCL) scripts; (5) SCL software components; (6) RoMPS EasyLab Command & Variable summary for rack stations and annealer module; (7) support electronics assembly; (8) SCL uplink packet definition; (9) SC-4 EasyLab System Memory Map; (10) Servo Axis Control Logic Suppliers; and (11) annealing oven control subsystem.

  14. On automating domain connectivity for overset grids

    NASA Technical Reports Server (NTRS)

    Chiu, Ing-Tsau

    1994-01-01

    An alternative method for domain connectivity among systems of overset grids is presented. Reference uniform Cartesian systems of points are used to achieve highly efficient domain connectivity and form the basis for a future fully automated system. The Cartesian systems are used to approximate body surfaces and to map the computational space of component grids. By exploiting the characteristics of Cartesian systems, Chimera-type hole-cutting and identification of donor elements for intergrid boundary points can be carried out very efficiently. The method is tested for a range of geometrically complex multiple-body overset grid systems.

  15. Pilot Weather Advisor System

    NASA Technical Reports Server (NTRS)

    Lindamood, Glenn; Martzaklis, Konstantinos Gus; Hoffler, Keith; Hill, Damon; Mehrotra, Sudhir C.; White, E. Richard; Fisher, Bruce D.; Crabill, Norman L.; Tucholski, Allen D.

    2006-01-01

    The Pilot Weather Advisor (PWA) system is an automated satellite radio-broadcasting system that provides nearly real-time weather data to pilots of aircraft in flight anywhere in the continental United States. The system was designed to enhance safety in two distinct ways: First, the automated receipt of information would relieve the pilot of the time-consuming and distracting task of obtaining weather information via voice communication with ground stations. Second, the presentation of the information would be centered around a map format, thereby making the spatial and temporal relationships in the surrounding weather situation much easier to understand.

  16. Mapping Eroded Areas on Mountain Grassland with Terrestrial Photogrammetry and Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Mayr, Andreas; Rutzinger, Martin; Bremer, Magnus; Geitner, Clemens

    2016-06-01

    In the Alps as well as in other mountain regions steep grassland is frequently affected by shallow erosion. Often small landslides or snow movements displace the vegetation together with soil and/or unconsolidated material. This results in bare earth surface patches within the grass covered slope. Close-range and remote sensing techniques are promising for both mapping and monitoring these eroded areas. This is essential for a better geomorphological process understanding, to assess past and recent developments, and to plan mitigation measures. Recent developments in image matching techniques make it feasible to produce high resolution orthophotos and digital elevation models from terrestrial oblique images. In this paper we propose to delineate the boundary of eroded areas for selected scenes of a study area, using close-range photogrammetric data. Striving for an efficient, objective and reproducible workflow for this task, we developed an approach for automated classification of the scenes into the classes grass and eroded. We propose an object-based image analysis (OBIA) workflow which consists of image segmentation and automated threshold selection for classification using the Excess Green Vegetation Index (ExG). The automated workflow is tested with ten different scenes. Compared to a manual classification, grass and eroded areas are classified with an overall accuracy between 90.7% and 95.5%, depending on the scene. The methods proved to be insensitive to differences in illumination of the scenes and greenness of the grass. The proposed workflow reduces user interaction and is transferable to other study areas. We conclude that close-range photogrammetry is a valuable low-cost tool for mapping this type of eroded areas in the field with a high level of detail and quality. In future, the output will be used as ground truth for an area-wide mapping of eroded areas in coarser resolution aerial orthophotos acquired at the same time.
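
    The classification core, an Excess Green index with an automatically selected threshold, can be sketched as below. Otsu's method on the per-pixel ExG values stands in for the paper's object-based (segment-level) threshold selection, so this is a pixel-wise simplification with illustrative data.

```python
# Minimal sketch of ExG-based grass/eroded classification with an automated threshold.
import numpy as np
from skimage.filters import threshold_otsu

def classify_grass(rgb):
    """rgb: float image in [0, 1]; returns True where vegetated (grass)."""
    total = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))   # chromatic coordinates
    exg = 2 * g - r - b                                 # Excess Green index
    return exg > threshold_otsu(exg)                    # grass vs. eroded soil

img = np.random.rand(200, 200, 3)                       # toy orthophoto patch
mask = classify_grass(img)
print(f"grass fraction: {mask.mean():.2f}")
```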

  17. Automated method to differentiate between native and mirror protein models obtained from contact maps.

    PubMed

    Kurczynska, Monika; Kotulska, Malgorzata

    2018-01-01

    Mirror protein structures are often considered as artifacts in modeling protein structures. However, they may soon become a new branch of biochemistry. Moreover, methods of protein structure reconstruction, based on their residue-residue contact maps, need methodology to differentiate between models of native and mirror orientation, especially regarding the reconstructed backbones. We analyzed 130 500 structural protein models obtained from contact maps of 1 305 SCOP domains belonging to all 7 structural classes. On average, the same numbers of native and mirror models were obtained among 100 models generated for each domain. Since their structural features are often not sufficient for differentiating between the two types of model orientations, we proposed to apply various energy terms (ETs) from PyRosetta to separate native and mirror models. To automate the procedure for differentiating these models, the k-means clustering algorithm was applied. Clustering on total energy alone did not yield appropriate clusters: the accuracy of the clustering for class A (all helices) was no more than 0.52. Therefore, we tested a series of different k-means clusterings based on various combinations of ETs. Applying the two most differentiating ETs for each class gave satisfactory results. To unify the method for differentiating between native and mirror models, independent of their structural class, the two best ETs for each class were considered. Finally, the k-means clustering algorithm used three common ETs: the probability of an amino acid assuming certain values of the dihedral angles Φ and Ψ, Ramachandran preferences, and Coulomb interactions. The accuracies of clustering with these ETs were in the range between 0.68 and 0.76, with sensitivity and selectivity in the range between 0.68 and 0.87, depending on the structural class. The method can be applied to all fully-automated tools for protein structure reconstruction based on contact maps, especially those analyzing big sets of models.
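
    The final clustering step reduces to k-means with k = 2 on the three selected energy terms. In the sketch below, the ET values that would come from PyRosetta scoring of each reconstructed model are replaced by random placeholders, so the data and separation are purely illustrative.

```python
# Minimal sketch of 2-cluster k-means on per-model energy terms.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Rows: models; columns: dihedral-probability, Ramachandran, Coulomb terms.
ets = np.vstack([rng.normal(0.0, 1.0, (50, 3)),     # native-like models (toy)
                 rng.normal(2.5, 1.0, (50, 3))])    # mirror-like models (toy)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ets)
# Which cluster is 'native' would be decided afterwards, e.g. from the
# lower-energy cluster centroid.
print(np.bincount(labels))
```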

  18. Automated method to differentiate between native and mirror protein models obtained from contact maps

    PubMed Central

    Kurczynska, Monika

    2018-01-01

    Mirror protein structures are often considered as artifacts in modeling protein structures. However, they may soon become a new branch of biochemistry. Moreover, methods of protein structure reconstruction, based on their residue-residue contact maps, need methodology to differentiate between models of native and mirror orientation, especially regarding the reconstructed backbones. We analyzed 130 500 structural protein models obtained from contact maps of 1 305 SCOP domains belonging to all 7 structural classes. On average, the same numbers of native and mirror models were obtained among 100 models generated for each domain. Since their structural features are often not sufficient for differentiating between the two types of model orientations, we proposed to apply various energy terms (ETs) from PyRosetta to separate native and mirror models. To automate the procedure for differentiating these models, the k-means clustering algorithm was applied. Clustering on total energy alone did not yield appropriate clusters: the accuracy of the clustering for class A (all helices) was no more than 0.52. Therefore, we tested a series of different k-means clusterings based on various combinations of ETs. Applying the two most differentiating ETs for each class gave satisfactory results. To unify the method for differentiating between native and mirror models, independent of their structural class, the two best ETs for each class were considered. Finally, the k-means clustering algorithm used three common ETs: the probability of an amino acid assuming certain values of the dihedral angles Φ and Ψ, Ramachandran preferences, and Coulomb interactions. The accuracies of clustering with these ETs were in the range between 0.68 and 0.76, with sensitivity and selectivity in the range between 0.68 and 0.87, depending on the structural class. The method can be applied to all fully-automated tools for protein structure reconstruction based on contact maps, especially those analyzing big sets of models. PMID:29787567

  19. Detection of myocardial ischemia by automated, motion-corrected, color-encoded perfusion maps compared with visual analysis of adenosine stress cardiovascular magnetic resonance imaging at 3 T: a pilot study.

    PubMed

    Doesch, Christina; Papavassiliu, Theano; Michaely, Henrik J; Attenberger, Ulrike I; Glielmi, Christopher; Süselbeck, Tim; Fink, Christian; Borggrefe, Martin; Schoenberg, Stefan O

    2013-09-01

    The purpose of this study was to compare automated, motion-corrected, color-encoded (AMC) perfusion maps with qualitative visual analysis of adenosine stress cardiovascular magnetic resonance imaging for the detection of flow-limiting stenoses. Myocardial perfusion measurements applying the standard adenosine stress imaging protocol and a saturation-recovery temporal generalized autocalibrating partially parallel acquisition (t-GRAPPA) turbo fast low angle shot (Turbo FLASH) magnetic resonance imaging sequence were performed in 25 patients using a 3.0-T MAGNETOM Skyra (Siemens Healthcare Sector, Erlangen, Germany). Perfusion studies were analyzed using AMC perfusion maps and qualitative visual analysis. Angiographically detected coronary artery (CA) stenoses greater than 75%, or of 50% or more with a myocardial perfusion reserve index less than 1.5, were considered hemodynamically relevant. Diagnostic performance and time requirements for both methods were compared. Interobserver and intraobserver reliability were also assessed. A total of 29 CA stenoses were included in the analysis. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for detection of ischemia on a per-patient basis were comparable between the AMC perfusion maps and visual analysis. On a per-CA-territory basis, the attribution of an ischemia to the respective vessel was facilitated by the AMC perfusion maps. Interobserver and intraobserver reliability were better for the AMC perfusion maps (concordance correlation coefficient, 0.94 and 0.93, respectively) than for visual analysis (concordance correlation coefficient, 0.73 and 0.79, respectively). In addition, compared to visual analysis, the AMC perfusion maps significantly reduced analysis time, from 7.7 (3.1) to 3.2 (1.9) minutes (P < 0.0001). The AMC perfusion maps yielded a diagnostic performance on a per-patient and on a per-CA-territory basis comparable with the visual analysis. Furthermore, this approach demonstrated higher interobserver and intraobserver reliability as well as better time efficiency when compared to visual analysis.

  20. Automated Database Mediation Using Ontological Metadata Mappings

    PubMed Central

    Marenco, Luis; Wang, Rixin; Nadkarni, Prakash

    2009-01-01

    Objective To devise an automated approach for integrating federated database information using database ontologies constructed from their extended metadata. Background One challenge of database federation is that the granularity of representation of equivalent data varies across systems. Dealing effectively with this problem is analogous to dealing with precoordinated vs. postcoordinated concepts in biomedical ontologies. Model Description The authors describe an approach based on ontological metadata mapping rules defined with elements of a global vocabulary, which allows a query specified at one granularity level to fetch data, where possible, from databases within the federation that use different granularities. This is implemented in OntoMediator, a newly developed production component of our previously described Query Integrator System. OntoMediator's operation is illustrated with a query that accesses three geographically separate, interoperating databases. An example based on SNOMED also illustrates the applicability of high-level rules to support the enforcement of constraints that can prevent inappropriate curator or power-user actions. Summary A rule-based framework simplifies the design and maintenance of systems where categories of data must be mapped to each other, for the purpose of either cross-database query or for curation of the contents of compositional controlled vocabularies. PMID:19567801

  1. An Automated Pipeline for Engineering Many-Enzyme Pathways: Computational Sequence Design, Pathway Expression-Flux Mapping, and Scalable Pathway Optimization.

    PubMed

    Halper, Sean M; Cetnar, Daniel P; Salis, Howard M

    2018-01-01

    Engineering many-enzyme metabolic pathways suffers from the design curse of dimensionality. There are an astronomical number of synonymous DNA sequence choices, though relatively few will express an evolutionarily robust, maximally productive pathway without metabolic bottlenecks. To solve this challenge, we have developed an integrated, automated computational-experimental pipeline that identifies a pathway's optimal DNA sequence without high-throughput screening or many cycles of design-build-test. The first step applies our Operon Calculator algorithm to design a host-specific, evolutionarily robust bacterial operon sequence with maximally tunable enzyme expression levels. The second step applies our RBS Library Calculator algorithm to systematically vary enzyme expression levels with the smallest-sized library. After characterizing a small number of constructed pathway variants, measurements are supplied to our Pathway Map Calculator algorithm, which then parameterizes a kinetic metabolic model that ultimately predicts the pathway's optimal enzyme expression levels and DNA sequences. Altogether, our algorithms provide the ability to efficiently map the pathway's sequence-expression-activity space and predict DNA sequences with desired metabolic fluxes. Here, we provide a step-by-step guide to applying the Pathway Optimization Pipeline to a desired multi-enzyme pathway in a bacterial host.

  2. Automated mapping of explosives particles in composition C-4 fingerprints.

    PubMed

    Verkouteren, Jennifer R; Coleman, Jessica L; Cho, Inho

    2010-03-01

    A method is described to perform automated mapping of hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) particles in C-4 fingerprints. The method employs polarized light microscopy and image analysis to map the entire fingerprint and the distribution of RDX particles. This method can be used to evaluate a large number of fingerprints to aid in the development of threat libraries that can be used to determine performance requirements of explosive trace detectors. A series of 50 C-4 fingerprints were characterized, and results show that the number of particles varies significantly from print to print, and within a print. The particle size distributions can be used to estimate the mass of RDX in the fingerprint. These estimates were found to be within ±26% (relative) of the results obtained from dissolution gas chromatography/micro-electron capture detection for four of six prints, which is quite encouraging for a particle counting approach. By evaluating the average mass and frequency of particles with respect to size for this series of fingerprints, we conclude that particles 10-20 μm in diameter could be targeted to improve detection of traces of C-4 explosives.
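
    The mass-from-size-distribution estimate can be sketched directly: treating each counted particle as a sphere of bulk RDX, the total mass follows from the measured diameters. The spherical-particle assumption, the density value, and the synthetic size distribution below are all illustrative simplifications, not the paper's calibration.

```python
# Minimal sketch of estimating fingerprint RDX mass from particle diameters.
import numpy as np

RDX_DENSITY = 1.82e-12   # grams per cubic micrometre (1.82 g/cm^3)

def mass_from_diameters(diam_um):
    volumes = (np.pi / 6.0) * np.asarray(diam_um, dtype=float) ** 3  # sphere volumes
    return RDX_DENSITY * volumes.sum()   # total mass in grams

diameters = np.random.lognormal(mean=2.3, sigma=0.5, size=500)  # ~10 um median (toy)
print(f"estimated RDX mass: {mass_from_diameters(diameters) * 1e9:.1f} ng")
```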

  3. Evaluating an Automated Approach for Monitoring Forest Disturbances in the Pacific Northwest from Logging, Fire and Insect Outbreaks with Landsat Time Series Data

    NASA Technical Reports Server (NTRS)

Neigh, Christopher S. R.; Bolton, Douglas K.; Williams, Jennifer J.; Diabate, Mouhamad

    2014-01-01

    Forests are the largest aboveground sink for atmospheric carbon (C), and understanding how they change through time is critical to reduce our C-cycle uncertainties. We investigated a strong decline in the Normalized Difference Vegetation Index (NDVI) from 1982 to 1991 in Pacific Northwest forests, observed with the National Oceanic and Atmospheric Administration's (NOAA) series of Advanced Very High Resolution Radiometers (AVHRRs). To understand the causal factors of this decline, we evaluated an automated classification method developed for Landsat time series stacks (LTSS) to map forest change. This method included: (1) multiple disturbance index thresholds; and (2) a spectral trajectory-based image analysis with multiple confidence thresholds. We produced 48 maps and verified their accuracy with air photos, Monitoring Trends in Burn Severity data, and insect aerial detection survey data. Area-based accuracy estimates for change in forest cover resulted in producer's and user's accuracies of 0.21 ± 0.06 to 0.38 ± 0.05 for insect disturbance, 0.23 ± 0.07 to 1 ± 0 for burned area, and 0.74 ± 0.03 to 0.76 ± 0.03 for logging. We believe that accuracy was low for insect disturbance because the air photo reference data were temporally sparse, hence missing some outbreaks, and the annual anniversary time step is not dense enough to track defoliation and progressive stand mortality. Producer's and user's accuracy for burned area was low due to the temporally abrupt nature of fire and harvest, and the similar response of the disturbance index and the normalized burn ratio. We conclude that the spectral trajectory approach also captures multi-year stress that could be caused by climate, acid deposition, pathogens, partial harvest, thinning, etc. Our study focused on understanding the transferability of previously successful methods to new ecosystems and found that this automated method does not perform with the same accuracy in Pacific Northwest forests. Using a robust accuracy assessment, we demonstrate the difficulty of transferring change attribution methods to other ecosystems, which has implications for the development of automated detection/attribution approaches. Widespread disturbance was found within AVHRR-negative anomalies, but identifying causal factors in LTSS with adequate mapping accuracy for fire and insects proved to be elusive. Our results provide a background framework for future studies to improve methods for the accuracy assessment of automated LTSS classifications.

  4. Instructor/Operator Station Design Study.

    DTIC Science & Technology

    1982-04-01

    components interact and are dependent one upon the other. A major issue in any design involving both hardware and software is establishing the proper...always begins at leg 1. The AUTOMATED TRAINING EXERCISE MAP display calls up a map of the gaming area for the selected exercise. The PRINTOUT...select the size of the gaming area in nautical miles. When the aircraft comes within 100 miles of an in-tune station, the approach display for the in

  5. Automated Identification of Coronal Holes from Synoptic EUV Maps

    NASA Astrophysics Data System (ADS)

    Hamada, Amr; Asikainen, Timo; Virtanen, Ilpo; Mursula, Kalevi

    2018-04-01

    Coronal holes (CHs) are regions of open magnetic field lines in the solar corona and the source of the fast solar wind. Understanding the evolution of coronal holes is critical for solar magnetism as well as for accurate space weather forecasts. We study the extreme ultraviolet (EUV) synoptic maps at three wavelengths (195 Å/193 Å, 171 Å and 304 Å) measured by the Solar and Heliospheric Observatory/Extreme Ultraviolet Imaging Telescope (SOHO/EIT) and the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) instruments. The two datasets are first homogenized by scaling the SDO/AIA data to the SOHO/EIT level by means of histogram equalization. We then develop a novel automated method to identify CHs from these homogenized maps by determining the intensity threshold of CH regions separately for each synoptic map. This is done by identifying the best location and size of an image segment, which optimally contains portions of coronal holes and the surrounding quiet Sun allowing us to detect the momentary intensity threshold. Our method is thus able to adjust itself to the changing scale size of coronal holes and to temporally varying intensities. To make full use of the information in the three wavelengths we construct a composite CH distribution, which is more robust than distributions based on one wavelength. Using the composite CH dataset we discuss the temporal evolution of CHs during the Solar Cycles 23 and 24.
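
    The per-map adaptive thresholding idea can be sketched as below: an EUV synoptic map is histogram-equalized (the step used to homogenize EIT and AIA levels), and an intensity cutoff is then taken from a window that mixes dark coronal hole and quiet-Sun pixels. Otsu's method on that window stands in for the paper's optimal-segment search, and the data are synthetic.

```python
# Minimal sketch of adaptive coronal hole thresholding on a synoptic map.
import numpy as np
from skimage import exposure
from skimage.filters import threshold_otsu

def coronal_hole_mask(synoptic_map, window):
    eq = exposure.equalize_hist(synoptic_map)       # homogenize intensity levels
    r0, r1, c0, c1 = window                         # segment containing CH + quiet Sun
    thresh = threshold_otsu(eq[r0:r1, c0:c1])
    return eq < thresh                              # dark pixels -> CH candidates

m = np.random.rand(180, 360)
m[40:80, 100:160] *= 0.2                            # synthetic dark CH region
mask = coronal_hole_mask(m, (20, 100, 80, 180))
print(f"CH area fraction: {mask.mean():.2%}")
```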

  6. A continuous scale-space method for the automated placement of spot heights on maps

    NASA Astrophysics Data System (ADS)

    Rocca, Luigi; Jenny, Bernhard; Puppo, Enrico

    2017-12-01

    Spot heights and soundings explicitly indicate terrain elevation on cartographic maps. Cartographers have developed design principles for the manual selection, placement, labeling, and generalization of spot height locations, but these processes are work-intensive and expensive. Finding an algorithmic criterion that matches the cartographers' judgment in ranking the significance of features on a terrain is a difficult endeavor. This article proposes a method for the automated selection of spot height locations representing natural features such as peaks, saddles and depressions. The lifespan of critical points in a continuous scale-space model is employed as the main measure of the importance of features, and an algorithm and a data structure for its computation are described. We also introduce a method for the comparison of algorithmically computed spot height locations with manually produced reference compilations. The new method is compared with two known techniques from the literature. Results show spot height locations that are closer to reference spot heights produced manually by swisstopo cartographers, compared to previous techniques. The introduced method can be applied to elevation models for the creation of topographic and bathymetric maps. It also ranks the importance of extracted spot height locations, which allows for a variation in the size of symbols and labels according to the significance of represented features. The importance ranking could also be useful for adjusting spot height density of zoomable maps in real time.

  7. The automated reference toolset: A soil-geomorphic ecological potential matching algorithm

    USGS Publications Warehouse

    Nauman, Travis; Duniway, Michael C.

    2016-01-01

    Ecological inventory and monitoring data need referential context for interpretation. Identification of appropriate reference areas of similar ecological potential for site comparison is demonstrated using a newly developed automated reference toolset (ART). Foundational to the identification of reference areas was a soil map of particle size in the control section (PSCS), a theme in US Soil Taxonomy. A 30-m resolution PSCS map of the Colorado Plateau (366,000 km2) was created by interpolating ∼5000 field soil observations using a random forest model and a suite of raster environmental spatial layers representing topography, climate, general ecological community, and satellite imagery ratios. The PSCS map had an overall out-of-bag accuracy of 61.8% (Kappa of 0.54, p < 0.0001) and an independent validation accuracy of 93.2% at a set of 356 field plots along the southern edge of Canyonlands National Park, Utah. The ART process was also tested at these plots, and matched plots with the same ecological sites (ESs) 67% of the time where sites fell within 2-km buffers of each other. These results show that the PSCS and ART have strong application for ecological monitoring and sampling design, as well as assessing impacts of disturbance and land management action using an ecological potential framework. Results also demonstrate that PSCS could be a key mapping layer for the USDA-NRCS provisional ES development initiative.

  8. FOLD-EM: automated fold recognition in medium- and low-resolution (4-15 Å) electron density maps.

    PubMed

    Saha, Mitul; Morais, Marc C

    2012-12-15

    Owing to the size and complexity of large multi-component biological assemblies, the most tractable approach to determining their atomic structure is often to fit high-resolution crystallographic or nuclear magnetic resonance structures of isolated components into lower resolution electron density maps of the larger assembly obtained using cryo-electron microscopy (cryo-EM). This hybrid approach to structure determination requires that an atomic resolution structure of each component, or a suitable homolog, is available. If neither is available, then the amount of structural information regarding that component is limited by the resolution of the cryo-EM map. However, even if a suitable homolog cannot be identified using sequence analysis, a search for structural homologs should still be performed, because structural homology often persists throughout evolution even when sequence homology is undetectable. As macromolecules can often be described as a collection of independently folded domains, one way of searching for structural homologs would be to systematically fit representative domain structures from a protein domain database into the medium/low resolution cryo-EM map and return the best fits. Taken together, the best fitting non-overlapping structures would constitute a 'mosaic' backbone model of the assembly that could aid map interpretation and illuminate biological function. Using the computational principles of the Scale-Invariant Feature Transform (SIFT), we have developed FOLD-EM, a computational tool that can identify folded macromolecular domains in medium- to low-resolution (4-15 Å) electron density maps and return a model of the constituent polypeptides in a fully automated fashion. As a by-product, FOLD-EM can also do flexible multi-domain fitting that may provide insight into conformational changes that occur in macromolecular assemblies.

  9. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    PubMed

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

    Current methods for the development of pelvic finite element (FE) models generally are based upon specimen specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R² = 0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. A new statistical methodology predicting chip failure probability considering electromigration

    NASA Astrophysics Data System (ADS)

    Sun, Ted

    In this research thesis, we present a new approach to analyzing chip reliability subject to electromigration (EM); the fundamental causes of EM and the EM phenomena that occur in different materials are presented as background. This new approach exploits the statistical nature of EM failure in order to assess overall EM risk. It includes within-die temperature variations from the chip's temperature map, extracted by an Electronic Design Automation (EDA) tool, to estimate the failure probability of a design. Both the power estimation and the thermal analysis are performed in the EDA flow. We first used the traditional EM approach to analyze the design, which involves 6 metal and 5 via layers, with a single temperature across the entire chip. Next, we used the same traditional approach but with a realistic temperature map. The traditional EM analysis approach, the same analysis coupled with a temperature map, and the comparison between the results with and without the temperature map are presented in this thesis. A comparison of these two results confirms that using a temperature map yields a less pessimistic estimation of the chip's EM risk. Finally, we employed the statistical methodology we developed, considering a temperature map and different use-condition voltages and frequencies, to estimate the overall failure probability of the chip. The statistical model incorporates scaling through the traditional Black equation and four major use conditions. The statistical results are within our expectations: they confirm that the chip-level failure probability is higher (i) at higher use-condition frequencies for all use-condition voltages, and (ii) when a single temperature, instead of a temperature map across the chip, is considered. The thesis begins with an overall review of current design types, common flows, and the verification and reliability-checking steps used in the IC design industry. It also describes in detail, with several examples, the scripting automation used to integrate the diverse EDA tools in this work; the completed scripts are included in the appendix for reference. This organization is intended to give readers a thorough understanding of the work, from EDA tool automation to statistical data generation, from the nature of EM to the construction of the statistical model, and through the comparison between the traditional and statistical EM analysis approaches.
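
    The weakest-link statistics involved can be sketched in a few lines: each interconnect segment receives a Black-equation median lifetime from its local temperature, and segments are treated as independent series elements. All constants below are illustrative assumptions, not the thesis' fitted values.

        import numpy as np
        from scipy.stats import lognorm

        K_B = 8.617e-5                    # Boltzmann constant (eV/K)
        A, N_EXP, EA = 1e6, 2.0, 0.9      # illustrative Black-equation constants

        def mttf_black(j, temp_k):
            """Black's equation: MTTF = A * J**-n * exp(Ea / (k*T))."""
            return A * j**(-N_EXP) * np.exp(EA / (K_B * temp_k))

        def chip_failure_prob(j, temps_k, t_hours, sigma=0.5):
            """Series (weakest-link) model with lognormal per-segment lifetimes."""
            p_segment = lognorm.cdf(t_hours, s=sigma, scale=mttf_black(j, temps_k))
            return 1.0 - np.prod(1.0 - p_segment)

        temps = np.random.uniform(330.0, 370.0, size=1000)   # per-segment temperature map (K)
        print(chip_failure_prob(2e6, temps, 1e5))            # estimate using the temperature map
        print(chip_failure_prob(2e6, np.full(1000, temps.max()), 1e5))  # pessimistic single-temperature estimate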

  11. Planetary Exploration Rebooted! New Ways of Exploring the Moon, Mars and Beyond

    NASA Technical Reports Server (NTRS)

    Fong, Terrence W.

    2010-01-01

    In this talk, I will summarize how the NASA Ames Intelligent Robotics Group has been developing and field testing planetary robots for human exploration, creating automated planetary mapping systems, and engaging the public as citizen scientists.

  12. Digital Map Requirements For Automatic Vehicle Location

    DOT National Transportation Integrated Search

    1998-12-01

    New Jersey Transit (NJT) is currently investigating acquisition of an automated vehicle locator (AVL) system. The purpose of the AVL system is to monitor the location of buses. Knowing the location of a bus enables the agency to manage the bus fleet ...

  13. A semi-automated algorithm for hypothalamus volumetry in 3 Tesla magnetic resonance images.

    PubMed

    Wolff, Julia; Schindler, Stephanie; Lucas, Christian; Binninger, Anne-Sophie; Weinrich, Luise; Schreiber, Jan; Hegerl, Ulrich; Möller, Harald E; Leitzke, Marco; Geyer, Stefan; Schönknecht, Peter

    2018-07-30

    The hypothalamus, a small diencephalic gray matter structure, is part of the limbic system. Volumetric changes of this structure occur in psychiatric diseases; therefore, there is increasing interest in precise volumetry. Based on our detailed volumetry algorithm for 7 Tesla magnetic resonance imaging (MRI), we developed a method for 3 Tesla MRI, adopting its anatomical landmarks and triplanar workflow. We overlaid T1-weighted MR images with gray matter tissue probability maps to combine anatomical information with tissue class segmentation. Then, we outlined regions of interest (ROIs) that covered potential hypothalamus voxels. Within these ROIs, a seed-growing technique defined the hypothalamic volume using gray matter probabilities from the tissue probability maps. This yielded a semi-automated method with short processing times of 20-40 min per hypothalamus. In the MRIs of ten subjects, reliabilities were determined as intraclass correlations (ICC) and volume overlaps in percent. Three raters achieved very good intra-rater reliabilities (ICC 0.82-0.97) and good inter-rater reliabilities (ICC 0.78 and 0.82). Overlaps of intra- and inter-rater runs were very good (≥ 89.7%). We present a fast, semi-automated method for in vivo hypothalamus volumetry in 3 Tesla MRI. Copyright © 2018 Elsevier B.V. All rights reserved.
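
    The seed-growing step can be illustrated with a simple breadth-first region grower over the gray matter probability volume. This is a sketch of the general technique under assumed details (6-connectivity, a fixed probability threshold), not the published implementation.

        import numpy as np
        from collections import deque

        def grow_region(gm_prob, roi_mask, seed, threshold=0.5):
            """Grow from a seed voxel while gray matter probability stays above threshold."""
            grown = np.zeros_like(roi_mask, dtype=bool)
            queue = deque([seed])
            while queue:
                z, y, x = queue.popleft()
                if grown[z, y, x] or not roi_mask[z, y, x] or gm_prob[z, y, x] < threshold:
                    continue
                grown[z, y, x] = True
                for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                    nz, ny, nx = z + dz, y + dy, x + dx
                    if (0 <= nz < gm_prob.shape[0] and 0 <= ny < gm_prob.shape[1]
                            and 0 <= nx < gm_prob.shape[2]):
                        queue.append((nz, ny, nx))
            return grown   # hypothalamic volume = grown.sum() * voxel volume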

  14. Automating the SMAP Ground Data System to Support Lights-Out Operations

    NASA Technical Reports Server (NTRS)

    Sanders, Antonio

    2014-01-01

    The Soil Moisture Active Passive (SMAP) Mission is a first tier mission in NASA's Earth Science Decadal Survey. SMAP will provide a global mapping of soil moisture and its freeze/thaw states. This mapping will be used to enhance the understanding of processes that link the terrestrial water, energy, and carbon cycles, and to enhance weather and forecast capabilities. NASA's Jet Propulsion Laboratory has been selected as the lead center for the development and operation of SMAP. The Jet Propulsion Laboratory (JPL) has an extensive history of successful deep space exploration. JPL missions have typically been large scale Class A missions with significant budget and staffing. SMAP represents a new area of JPL focus towards low cost Earth science missions. Success in this new area requires changes to the way that JPL has traditionally provided the Mission Operations System (MOS)/Ground Data System (GDS) functions. The operation of SMAP requires more routine operations activities and support for higher data rates and data volumes than have been achieved in the past. These activities must be addressed by a reduced operations team and support staff. To meet this challenge, the SMAP ground data system provides automation that will perform unattended operations, including automated commanding of the SMAP spacecraft.

  15. Comparison of Left Atrial Bipolar Voltage and Scar Using Multielectrode Fast Automated Mapping versus Point-by-Point Contact Electroanatomic Mapping in Patients With Atrial Fibrillation Undergoing Repeat Ablation.

    PubMed

    Liang, Jackson J; Elafros, Melissa A; Muser, Daniele; Pathak, Rajeev K; Santangeli, Pasquale; Supple, Gregory E; Schaller, Robert D; Frankel, David S; Dixit, Sanjay

    2017-03-01

    Bipolar voltage criteria to delineate left atrial (LA) scar have been derived using point-by-point (PBP) contact electroanatomical mapping. It remains unclear how PBP-derived LA scar correlates with multielectrode fast automated mapping (ME-FAM) derived scar. We aimed to correlate scar and bipolar voltages from LA maps created using PBP versus ME-FAM. In consecutive patients undergoing repeat AF ablation, 2 separate LA maps were created using PBP and ME-FAM during sinus rhythm before ablation. Contiguous areas in the LA with a bipolar voltage cutoff of ≤0.2 mV represented dense scar; LA scar percentage was calculated for each map. Each LA shell was divided into 9 regions and each region further subdivided into 4 quadrants for additional analysis; mean voltages of all points obtained using PBP versus ME-FAM in each region were compared. Forty maps (20 PBP: mean 228.5 ± 95.6 points; 20 ME-FAM: 923.0 ± 382.6 points) were created in 20 patients. Mapping time with ME-FAM was shorter compared with PBP (13.3 ± 5.3 vs. 34.4 ± 13.1 minutes; P < 0.001). Mean LA scar percentage was higher with PBP compared with ME-FAM (15.5 ± 17.1% vs. 12.8 ± 17.6%; P = 0.04). Mean PBP voltage distribution was lower (compared with ME-FAM) in the septum (0.95 ± 0.73 vs. 1.46 ± 0.99 mV; P = 0.009), posterior wall (0.84 ± 0.42 vs. 1.40 ± 0.83 mV; P = 0.0008), roof (0.78 ± 0.80 vs. 1.39 ± 1.09 mV; P = 0.0003), and right PV-LA junction (0.34 ± 0.25 vs. 0.59 ± 0.50 mV; P = 0.01) regions, while voltages were similar in all other LA regions (all P > 0.05). In AF patients undergoing repeat ablation, bipolar voltage is greater in certain LA segments with ME-FAM compared with PBP mapping. © 2016 Wiley Periodicals, Inc.

  16. Interoperability of Medication Classification Systems: Lessons Learned Mapping Established Pharmacologic Classes (EPCs) to SNOMED CT

    PubMed Central

    Nelson, Scott D; Parker, Jaqui; Lario, Robert; Winnenburg, Rainer; Erlbaum, Mark S.; Lincoln, Michael J.; Bodenreider, Olivier

    2018-01-01

    Interoperability among medication classification systems is known to be limited. We investigated the mapping of the Established Pharmacologic Classes (EPCs) to SNOMED CT. We compared lexical and instance-based methods to an expert-reviewed reference standard to evaluate contributions of these methods. Of the 543 EPCs, 284 had an equivalent SNOMED CT class, 205 were more specific, and 54 could not be mapped. Precision, recall, and F1 score were 0.416, 0.620, and 0.498 for lexical mapping and 0.616, 0.504, and 0.554 for instance-based mapping. Each automatic method has strengths, weaknesses, and unique contributions in mapping between medication classification systems. In our experience, it was beneficial to consider the mapping provided by both automated methods for identifying potential matches, gaps, inconsistencies, and opportunities for quality improvement between classifications. However, manual review by subject matter experts is still needed to select the most relevant mappings. PMID:29295234
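
    The reported F1 scores follow directly from the harmonic mean of precision and recall, which a two-line check reproduces:

        def f1(precision, recall):
            return 2 * precision * recall / (precision + recall)

        print(round(f1(0.416, 0.620), 3))   # 0.498  (lexical mapping)
        print(round(f1(0.616, 0.504), 3))   # 0.554  (instance-based mapping)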

  17. Automated multiplex genome-scale engineering in yeast

    PubMed Central

    Si, Tong; Chao, Ran; Min, Yuhao; Wu, Yuying; Ren, Wen; Zhao, Huimin

    2017-01-01

    Genome-scale engineering is indispensable in understanding and engineering microorganisms, but the current tools are mainly limited to bacterial systems. Here we report an automated platform for multiplex genome-scale engineering in Saccharomyces cerevisiae, an important eukaryotic model and widely used microbial cell factory. Standardized genetic parts encoding overexpression and knockdown mutations of >90% yeast genes are created in a single step from a full-length cDNA library. With the aid of CRISPR-Cas, these genetic parts are iteratively integrated into the repetitive genomic sequences in a modular manner using robotic automation. This system allows functional mapping and multiplex optimization on a genome scale for diverse phenotypes including cellulase expression, isobutanol production, glycerol utilization and acetic acid tolerance, and may greatly accelerate future genome-scale engineering endeavours in yeast. PMID:28469255

  18. Automated volumetric segmentation of retinal fluid on optical coherence tomography

    PubMed Central

    Wang, Jie; Zhang, Miao; Pechauer, Alex D.; Liu, Liang; Hwang, Thomas S.; Wilson, David J.; Li, Dengwang; Jia, Yali

    2016-01-01

    We propose a novel automated volumetric segmentation method to detect and quantify retinal fluid on optical coherence tomography (OCT). The fuzzy level set method was introduced for identifying the boundaries of fluid filled regions on B-scans (x and y-axes) and C-scans (z-axis). The boundaries identified from three types of scans were combined to generate a comprehensive volumetric segmentation of retinal fluid. Then, artefactual fluid regions were removed using morphological characteristics and by identifying vascular shadowing with OCT angiography obtained from the same scan. The accuracy of retinal fluid detection and quantification was evaluated on 10 eyes with diabetic macular edema. Automated segmentation had good agreement with manual segmentation qualitatively and quantitatively. The fluid map can be integrated with OCT angiogram for intuitive clinical evaluation. PMID:27446676

  19. STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.

    PubMed

    Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X

    2009-08-01

    This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.

  20. Advanced Earth Observation System Instrumentation Study (AEOSIS)

    NASA Technical Reports Server (NTRS)

    Var, R. E.

    1976-01-01

    The feasibility, practicality, and cost are investigated for establishing a national system or grid of artificial landmarks suitable for automated (near real time) recognition in the multispectral scanner imagery data from an earth observation satellite (EOS). The intended use of such landmarks, for orbit determination and improved mapping accuracy, is reviewed. The desirability of using xenon searchlight landmarks for this purpose is explored theoretically and by means of experimental results obtained with LANDSAT 1 and LANDSAT 2. These results are used, in conjunction with the demonstrated efficiency of an automated detection scheme, to determine the size and cost of a xenon searchlight that would be suitable for an EOS Searchlight Landmark Station (SLS), and to facilitate the development of a conceptual design for an automated and environmentally protected EOS SLS.

  1. Complex Genetics of Behavior: BXDs in the Automated Home-Cage.

    PubMed

    Loos, Maarten; Verhage, Matthijs; Spijker, Sabine; Smit, August B

    2017-01-01

    This chapter describes a use case for the genetic dissection and automated analysis of complex behavioral traits using the genetically diverse panel of BXD mouse recombinant inbred strains. Strains of the BXD resource differ widely in gene and protein expression in the brain, as well as in their behavioral repertoire. A large mouse resource opens the possibility of gene-finding studies underlying distinct behavioral phenotypes; however, such a resource poses a challenge for behavioral phenotyping. To address the specifics of large-scale screening, we describe: (1) how to systematically assess mouse behavior across a large genetic cohort, (2) how to dissect automation-derived longitudinal mouse behavior into quantitative parameters, and (3) how to map these quantitative traits to the genome, deriving loci underlying aspects of behavior.

  2. Wafer-level radiometric performance testing of uncooled microbolometer arrays

    NASA Astrophysics Data System (ADS)

    Dufour, Denis G.; Topart, Patrice; Tremblay, Bruno; Julien, Christian; Martin, Louis; Vachon, Carl

    2014-03-01

    A turn-key semi-automated test system was constructed to perform on-wafer testing of microbolometer arrays. The system allows for testing of several performance characteristics of ROIC-fabricated microbolometer arrays including NETD, SiTF, ROIC functionality, noise and matrix operability, both before and after microbolometer fabrication. The system accepts wafers up to 8 inches in diameter and performs automated wafer die mapping using a microscope camera. Once wafer mapping is completed, a custom-designed quick-insertion 8-12 μm AR-coated germanium viewport is placed and the chamber is pumped down to below 10⁻⁵ Torr, allowing for the evaluation of package-level focal plane array (FPA) performance. The probe card is electrically connected to an INO IRXCAM camera core, a versatile system that can be adapted to many types of ROICs using custom-built interface printed circuit boards (PCBs). We currently have the capability for testing 384×288 (35 μm pixel) and 160×120 (52 μm pixel) FPAs. For accurate NETD measurements, the system is designed to provide an F/1 view of two rail-mounted blackbodies seen through the germanium window by the die under test. A master control computer automates the alignment of the probe card to the dies, the positioning of the blackbodies, FPA image frame acquisition using IRXCAM, as well as data analysis and storage. Radiometric measurement precision has been validated by packaging dies measured by the automated probing system and re-measuring the SiTF and noise using INO's pre-existing benchtop system.
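
    The radiometric figures of merit named here have standard definitions that are easy to sketch. The code below is our illustration with an assumed operability criterion, not INO's test software: SiTF is the output swing per kelvin between the two blackbody views, and NETD is the temporal noise divided by SiTF.

        import numpy as np

        def sitf_and_netd(frames_hot, frames_cold, t_hot_k, t_cold_k):
            """frames_*: (n_frames, rows, cols) raw frame stacks at the two blackbody views."""
            sitf = (frames_hot.mean(axis=0) - frames_cold.mean(axis=0)) / (t_hot_k - t_cold_k)
            noise = frames_cold.std(axis=0)               # per-pixel temporal noise (counts)
            netd = noise / sitf                           # per-pixel NETD map (K)
            operable = np.isfinite(netd) & (netd < 0.1)   # assumed operability criterion
            return sitf, netd, operable.mean()            # last value: matrix operability fraction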

  3. Integrated approach using multi-platform sensors for enhanced high-resolution daily ice cover product

    NASA Astrophysics Data System (ADS)

    Bonev, George; Gladkova, Irina; Grossberg, Michael; Romanov, Peter; Helfrich, Sean

    2016-09-01

    The ultimate objective of this work is to improve the characterization of ice cover distribution in the polar areas, to improve sea ice mapping, and to develop a new automated, real-time, high spatial resolution, multi-sensor ice extent and ice edge product for use in operational applications. Despite a large number of currently available automated satellite-based sea ice extent datasets, analysts at the National Ice Center tend to rely on original satellite imagery (from optical, passive microwave and active microwave sensors), mainly because automated products derived from optical data have coverage gaps due to clouds and darkness, passive microwave products have poor spatial resolution, and automated ice identification from radar data is not fully reliable owing to the difficulty of discriminating between ice cover and wind-roughened, ice-free ocean surface. We have developed a multisensor algorithm that first extracts maximum information on the sea ice cover from the imaging instruments VIIRS and MODIS, including regions covered by thin, semi-transparent clouds, then supplements the output with the microwave measurements, and finally aggregates the results into a cloud-gap-free daily product. This ability to identify ice cover underneath thin clouds, which is usually masked out by traditional cloud detection algorithms, expands the effective coverage of the sea ice maps and thus allows more accurate and detailed delineation of the ice edge. We have also developed a web-based monitoring system that allows comparison of our daily ice extent product with several other independent operational daily products.
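
    At its simplest, the aggregation logic keeps optical retrievals where they are valid and falls back to passive microwave elsewhere. The toy sketch below shows that priority scheme under assumed inputs; the operational algorithm is considerably more involved.

        import numpy as np

        def daily_ice_composite(optical_ice, optical_valid, microwave_ice):
            """All inputs on a common grid; *_ice in {0 = water, 1 = ice}, optical_valid boolean."""
            return np.where(optical_valid, optical_ice, microwave_ice)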

  4. A conceptual model of the automated credibility assessment of the volunteered geographic information

    NASA Astrophysics Data System (ADS)

    Idris, N. H.; Jackson, M. J.; Ishak, M. H. I.

    2014-02-01

    The use of Volunteered Geographic Information (VGI) in collecting, sharing and disseminating geospatially referenced information on the Web is increasingly common. The potential of this localized and collective information has been seen to complement the maintenance process of authoritative mapping data sources and to support the development of Digital Earth. The main barrier to the use of these data in supporting this bottom-up approach is the credibility (trust), completeness, accuracy, and quality of both the data input and the outputs generated. The only feasible approach to assessing these data at scale is an automated process. This paper describes a conceptual model of indicators (parameters) and practical approaches to automatically assess the credibility of information contributed through VGI, including map mashups, Geo Web and crowd-sourced applications. Two main components are proposed for assessment in the conceptual model: metadata and data. The metadata component comprises indicators for the hosting websites and the sources of the data / information. The data component comprises indicators to assess absolute and relative data positioning, attribute, thematic, temporal and geometric correctness and consistency. This paper suggests approaches to assess these components. To assess the metadata component, automated text categorization using supervised machine learning is proposed. To assess correctness and consistency in the data component, we suggest a matching validation approach using emerging technologies from Linked Data infrastructures and third-party review validation. This study contributes to the research domain that focuses on the credibility, trust and quality of data contributed by web citizen providers.

  5. Automated estimation of image quality for coronary computed tomographic angiography using machine learning.

    PubMed

    Nakanishi, Rine; Sankaran, Sethuraman; Grady, Leo; Malpeso, Jenifer; Yousfi, Razik; Osawa, Kazuhiro; Ceponiene, Indre; Nazarat, Negin; Rahmani, Sina; Kissel, Kendall; Jayawardena, Eranthi; Dailing, Christopher; Zarins, Christopher; Koo, Bon-Kwon; Min, James K; Taylor, Charles A; Budoff, Matthew J

    2018-03-23

    Our goal was to evaluate the efficacy of a fully automated method for assessing the image quality (IQ) of coronary computed tomography angiography (CCTA). The machine learning method was trained using 75 CCTA studies by mapping features (noise, contrast, misregistration scores, and un-interpretability index) to an IQ score based on manual ground truth data. The automated method was validated on a set of 50 CCTA studies and subsequently tested on a new set of 172 CCTA studies against visual IQ scores on a 5-point Likert scale. The area under the curve in the validation set was 0.96. In the 172 CCTA studies, our method yielded a Cohen's kappa statistic for the agreement between automated and visual IQ assessment of 0.67 (p < 0.01). In the group where good to excellent (n = 163), fair (n = 6), and poor visual IQ scores (n = 3) were graded, 155, 5, and 2 of the patients received an automated IQ score > 50 %, respectively. Fully automated assessment of the IQ of CCTA data sets by machine learning was reproducible and provided similar results compared with visual analysis within the limits of inter-operator variability. • The proposed method enables automated and reproducible image quality assessment. • Machine learning and visual assessments yielded comparable estimates of image quality. • Automated assessment potentially allows for more standardised image quality. • Image quality assessment enables standardization of clinical trial results across different datasets.

  6. Human-human reliance in the context of automation.

    PubMed

    Lyons, Joseph B; Stokes, Charlene K

    2012-02-01

    The current study examined human-human reliance during a computer-based scenario where participants interacted with a human aid and an automated tool simultaneously. Reliance on others is complex, and few studies have examined human-human reliance in the context of automation. Past research found that humans are biased in their perceived utility of automated tools such that they view them as more accurate than humans. Prior reviews have postulated differences in human-human versus human-machine reliance, yet few studies have examined such reliance when individuals are presented with divergent information from different sources. Participants (N = 40) engaged in the Convoy Leader experiment. They selected a convoy route based on explicit guidance from a human aid and information from an automated map. Subjective and behavioral human-human reliance indices were assessed. Perceptions of risk were manipulated by creating three scenarios (low, moderate, and high) that varied in the amount of vulnerability (i.e., potential for attack) associated with the convoy routes. Results indicated that participants reduced their behavioral reliance on the human aid when faced with higher risk decisions (suggesting increased reliance on the automation); however, there were no reported differences in intentions to rely on the human aid relative to the automation. The current study demonstrated that when individuals are provided information from both a human aid and automation, their reliance on the human aid decreased during high-risk decisions. This study adds to a growing understanding of the biases and preferences that exist during complex human-human and human-machine interactions.

  7. Challenges and complications in neighborhood mapping: from neighborhood concept to operationalization

    NASA Astrophysics Data System (ADS)

    Deng, Yongxin

    2016-07-01

    This paper examines complications in neighborhood mapping and corresponding challenges for the GIS community, taking both a conceptual and a methodological perspective. It focuses on the social and spatial dimensions of the neighborhood concept and highlights their relationship in neighborhood mapping. Following a brief summary of neighborhood definitions, five interwoven factors are identified to be origins of neighborhood mapping difficulties: conceptual vagueness, uncertainty of various sources, GIS representation, scale, and neighborhood homogeneity or continuity. Existing neighborhood mapping methods are grouped into six categories to be assessed: perception based, physically based, inference based, preexisting, aggregated, and automated. Mapping practices in various neighborhood-related disciplines and applications are cited as examples to demonstrate how the methods work, as well as how they should be evaluated. A few mapping strategies for the improvement of neighborhood mapping are prescribed from a GIS perspective: documenting simplifications employed in the mapping procedure, addressing uncertainty sources, developing new data solutions, and integrating complementary mapping methods. Incorporation of high-resolution data and introduction of more GIS ideas and methods (such as fuzzy logic) are identified to be future opportunities.

  8. Revision of Primary Series Maps

    USGS Publications Warehouse

    ,

    2000-01-01

    In 1992, the U.S. Geological Survey (USGS) completed a 50-year effort to provide primary series map coverage of the United States. Many of these maps now need to be updated to reflect the construction of new roads and highways and other changes that have taken place over time. The USGS has formulated a graphic revision plan to help keep the primary series maps current. Primary series maps include 1:20,000-scale quadrangles of Puerto Rico, 1:24,000- or 1:25,000-scale quadrangles of the conterminous United States, Hawaii, and U.S. Territories, and 1:63,360-scale quadrangles of Alaska. The revision of primary series maps from new collection sources is accomplished using a variety of processes. The raster revision process combines the scanned content of paper maps with raster updating technologies. The vector revision process involves the automated plotting of updated vector files. Traditional processes use analog stereoplotters and manual scribing instruments on specially coated map separates. The ability to select from or combine these processes increases the efficiency of the National Mapping Division map revision program.

  9. The retention of manual flying skills in the automated cockpit.

    PubMed

    Casner, Stephen M; Geven, Richard W; Recker, Matthias P; Schooler, Jonathan W

    2014-12-01

    The aim of this study was to understand how the prolonged use of cockpit automation is affecting pilots' manual flying skills. There is an ongoing concern about a potential deterioration of manual flying skills among pilots who assume a supervisory role while cockpit automation systems carry out tasks that were once performed by human pilots. We asked 16 airline pilots to fly routine and nonroutine flight scenarios in a Boeing 747-400 simulator while we systematically varied the level of automation that they used, graded their performance, and probed them about what they were thinking about as they flew. We found pilots' instrument scanning and manual control skills to be mostly intact, even when pilots reported that they were infrequently practiced. However, when pilots were asked to manually perform the cognitive tasks needed for manual flight (e.g., tracking the aircraft's position without the use of a map display, deciding which navigational steps come next, recognizing instrument system failures), we observed more frequent and significant problems. Furthermore, performance on these cognitive tasks was associated with measures of how often pilots engaged in task-unrelated thought when cockpit automation was used. We found that while pilots' instrument scanning and aircraft control skills are reasonably well retained when automation is used, the retention of cognitive skills needed for manual flying may depend on the degree to which pilots remain actively engaged in supervising the automation.

  10. Harvester-based sensing system for cotton fiber-quality mapping

    USDA-ARS?s Scientific Manuscript database

    Precision agriculture in cotton production attempts to maximize profitability by exploiting information on field spatial variability to optimize the fiber yield and quality. For precision agriculture to be economically viable, collection of spatial variability data within a field must be automated a...

  11. Human Factors Research in Aircrew Performance and Training: 1986-1991

    DTIC Science & Technology

    1992-07-01

    significant loss of realism. Seventeen functions could not be performed at all, primarily because of three missing system features: an automated target...traditionally trained aviators. In 1979, Anacapa developed 13 cinematic exercises to provide supplemental training in map interpretation and terrain

  12. Automated and accurate bridge deck crack inspection and mapping.

    DOT National Transportation Integrated Search

    2012-10-01

    One of the important tasks for bridge maintenance is bridge deck crack inspection. Traditionally, a human inspector detects cracks using his/her eyes and finds the location of cracks manually. Thus the accuracy of the inspection result is low due to ...

  13. Automated mapping of pharmacy orders from two electronic health record systems to RxNorm within the STRIDE clinical data warehouse.

    PubMed

    Hernandez, Penni; Podchiyska, Tanya; Weber, Susan; Ferris, Todd; Lowe, Henry

    2009-11-14

    The Stanford Translational Research Integrated Database Environment (STRIDE) clinical data warehouse integrates medication information from two Stanford hospitals that use different drug representation systems. To merge this pharmacy data into a single, standards-based model supporting research, we developed an algorithm to map HL7 pharmacy orders to RxNorm concepts. A formal evaluation of this algorithm on 1.5 million pharmacy orders showed that the system could accurately assign pharmacy orders in over 96% of cases. This paper describes the algorithm and discusses some of the causes of failures in mapping to RxNorm.
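
    A mapping of this kind can be prototyped against the National Library of Medicine's public RxNav service. The sketch below is our illustration of the general approach, not the STRIDE algorithm, and the normalization shown is deliberately minimal.

        import re
        import requests

        def rxnorm_lookup(order_text):
            """Return candidate RxCUIs for a free-text pharmacy order description."""
            name = re.sub(r'\s+', ' ', order_text).strip().lower()   # light normalization
            resp = requests.get('https://rxnav.nlm.nih.gov/REST/rxcui.json',
                                params={'name': name}, timeout=10)
            resp.raise_for_status()
            return resp.json().get('idGroup', {}).get('rxnormId', [])

        print(rxnorm_lookup('Acetaminophen 325 MG Oral Tablet'))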

  14. Automated delineation and characterization of watersheds for more than 3,000 surface-water-quality monitoring stations active in 2010 in Texas

    USGS Publications Warehouse

    Archuleta, Christy-Ann M.; Gonzales, Sophia L.; Maltby, David R.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Texas Commission on Environmental Quality, developed computer scripts and applications to automate the delineation of watershed boundaries and compute watershed characteristics for more than 3,000 surface-water-quality monitoring stations in Texas that were active during 2010. Microsoft Visual Basic applications were developed using ArcGIS ArcObjects to format the source input data required to delineate watershed boundaries. Several automated scripts and tools were developed or used to calculate watershed characteristics using Python, Microsoft Visual Basic, and the RivEX tool. Automated methods were augmented by the use of manual methods, including those done using ArcMap software. Watershed boundaries delineated for the monitoring stations are limited to the extent of the Subbasin boundaries in the USGS Watershed Boundary Dataset, which may not include the total watershed boundary from the monitoring station to the headwaters.

  15. Challenges and Opportunities: One Stop Processing of Automatic Large-Scale Base Map Production Using Airborne LIDAR Data Within GIS Environment. Case Study: Makassar City, Indonesia

    NASA Astrophysics Data System (ADS)

    Widyaningrum, E.; Gorte, B. G. H.

    2017-05-01

    LiDAR data acquisition is recognized as one of the fastest solutions for providing base data for large-scale topographic base maps worldwide. Automatic LiDAR processing is believed to be one possible scheme for accelerating large-scale topographic base map provision by the Geospatial Information Agency in Indonesia. As a progressive advanced technology, Geographic Information Systems (GIS) open possibilities for automatic geospatial data processing and analysis. Considering further needs for spatial data sharing and integration, one-stop processing of LiDAR data in a GIS environment is considered a powerful and efficient approach for base map provision. The quality of the automated topographic base map is assessed and analysed based on its completeness, correctness, and quality, using a confusion matrix.

  16. GlobCorine - A Joint EEA-ESA Project for Operational Land Cover and Land Use Mapping at Pan-European Scale

    NASA Astrophysics Data System (ADS)

    Bontemps, S.; Defourny, P.; Van Bogaert, E.; Weber, J. L.; Arino, O.

    2010-12-01

    Regular and global land cover mapping contributes to evaluating the impact of human activities on the environment. Jointly supported by the European Space Agency and the European Environment Agency, the GlobCorine project builds on the GlobCover findings and aims at making full use of the MERIS time series for frequent land cover monitoring. The GlobCover automated classification approach has been tuned to the pan-European continent and adjusted towards a classification compatible with the Corine typology. The GlobCorine 2005 land cover map has been completed, validated and made available to a broad-level stakeholder community from the ESA website. A first version of the GlobCorine 2009 map has also been produced, demonstrating the feasibility of operational production of frequently updated global land cover maps.

  17. US Topo Maps 2014: Program updates and research

    USGS Publications Warehouse

    Fishburn, Kristin A.

    2014-01-01

    The U.S. Geological Survey (USGS) US Topo map program is now in year two of its second three-year update cycle. Since the program was launched in 2009, the product and the production system tools and processes have undergone enhancements that have made the US Topo maps a popular success story. Research and development continues with structural and content product enhancements, streamlined and more fully automated workflows, and the evaluation of a GIS-friendly US Topo GIS Packet. In addition, change detection methodologies are under evaluation to further streamline product maintenance and minimize resource expenditures for production in the future. The US Topo map program will continue to evolve in the years to come, providing traditional map users and Geographic Information System (GIS) analysts alike with a convenient, freely available product incorporating nationally consistent data that are quality assured to high standards.

  18. High-Dimensional Modeling for Cytometry: Building Rock Solid Models Using GemStone™ and Verity Cen-se'™ High-Definition t-SNE Mapping.

    PubMed

    Bruce Bagwell, C

    2018-01-01

    This chapter outlines how to approach the complex tasks associated with designing models for high-dimensional cytometry data. Unlike gating approaches, modeling lends itself to automation and accounts for measurement overlap among cellular populations. Designing these models is now easier because of a new technique called high-definition t-SNE mapping. Nontrivial examples are provided that serve as a guide to create models that are consistent with data.
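
    Verity Cen-se' itself is proprietary, but the underlying idea of projecting high-dimensional cytometry events onto a 2D map can be illustrated with a generic t-SNE on synthetic marker data:

        import numpy as np
        from sklearn.manifold import TSNE

        events = np.random.rand(2000, 10)   # 2000 cells x 10 markers (synthetic)
        embedding = TSNE(n_components=2, perplexity=30).fit_transform(events)
        print(embedding.shape)              # (2000, 2) map coordinates, one point per cell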

  19. An open-source java platform for automated reaction mapping.

    PubMed

    Crabtree, John D; Mehta, Dinesh P; Kouri, Tina M

    2010-09-27

    This article presents software applications that have been built upon a modular, open-source, reaction mapping library that can be used in both cheminformatics and bioinformatics research. We first describe the theoretical underpinnings and modular architecture of the core software library. We then describe two applications that have been built upon that core. The first is a generic reaction viewer and mapper, and the second classifies reactions according to rules that can be modified by end users with little or no programming skills.

  20. Object-based landslide mapping on satellite images from different sensors

    NASA Astrophysics Data System (ADS)

    Hölbling, Daniel; Friedl, Barbara; Eisank, Clemens; Blaschke, Thomas

    2015-04-01

    Several studies have proven that object-based image analysis (OBIA) is a suitable approach for landslide mapping using remote sensing data. Mostly, optical satellite images are utilized in combination with digital elevation models (DEMs) for semi-automated mapping. The ability of considering spectral, spatial, morphometric and contextual features in OBIA constitutes a significant advantage over pixel-based methods, especially when analysing non-uniform natural phenomena such as landslides. However, many of the existing knowledge-based OBIA approaches for landslide mapping are rather complex and tailored to specific data sets. These restraints lead to a lack of transferability of OBIA mapping routines. The objective of this study is to develop an object-based approach for landslide mapping that is robust against changing input data with different resolutions, i.e. optical satellite imagery from various sensors. Two study sites in Taiwan were selected for developing and testing the landslide mapping approach. One site is located around the Baolai village in the Huaguoshan catchment in the southern-central part of the island, the other one is a sub-area of the Taimali watershed in Taitung County near the south-eastern Pacific coast. Both areas are regularly affected by severe landslides and debris flows. A range of very high resolution (VHR) optical satellite images was used for the object-based mapping of landslides and for testing transferability across different sensors and resolutions: (I) SPOT-5, (II) Formosat-2, (III) QuickBird, and (IV) WorldView-2. Additionally, a digital elevation model (DEM) with 5 m spatial resolution and its derived products (e.g. slope, plan curvature) were used to support the semi-automated mapping, particularly for differentiating source areas and accumulation areas according to their morphometric characteristics. A focus was put on the identification of comparatively stable parameters (e.g. relative indices) that could be transferred to the different satellite images. The presence of bare ground was assumed to be evidence of the occurrence of landslides. To separate vegetated from non-vegetated areas, the Normalized Difference Vegetation Index (NDVI) was primarily used. Each image was divided into two parts based on an NDVI threshold value calculated automatically in eCognition (Trimble) software, combining the homogeneity criterion of multiresolution segmentation with histogram-based methods so that heterogeneity between the parts is maximized. Expert knowledge models, which capture the features and thresholds usually used by experts for digital landslide mapping, were considered for refining the classification. The results were compared to the respective results from visual image interpretation (i.e. manually digitized reference polygons for each image) produced by an independent local expert. In this way, spatial overlaps as well as under- and over-estimated areas were identified, and the performance of the approach in relation to each sensor was evaluated. The presented method can complement traditional manual mapping efforts. Moreover, it contributes to current developments for increasing the transferability of semi-automated OBIA approaches and for improving the efficiency of change detection across multi-sensor imagery.
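
    The vegetated/non-vegetated split can be condensed to a few lines, with plain Otsu thresholding standing in for the combined segmentation/histogram procedure used in eCognition:

        import numpy as np
        from skimage.filters import threshold_otsu

        def bare_ground_mask(red, nir):
            """red, nir: reflectance bands from any of the sensors listed above."""
            ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)   # sensor-agnostic index
            t = threshold_otsu(ndvi)                              # image-specific threshold
            return ndvi < t   # low NDVI = bare ground, the landslide evidence layer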

  1. Ultramap: the all in One Photogrammetric Solution

    NASA Astrophysics Data System (ADS)

    Wiechert, A.; Gruber, M.; Karner, K.

    2012-07-01

    This paper describes in detail the dense matcher developed over several years by Vexcel Imaging in Graz for Microsoft's Bing Maps project. This dense matcher was developed exclusively for, and used by, Microsoft for the production of the 3D city models of Virtual Earth. It will now be made available to the public with the UltraMap software release in mid-2012, which represents a revolutionary step in digital photogrammetry. The dense matcher automatically generates digital surface models (DSMs) and digital terrain models (DTMs) from a set of overlapping UltraCam images. The models have an outstanding point density of several hundred points per square meter and sub-pixel accuracy. The dense matcher consists of two steps. The first step rectifies overlapping image areas to speed up the dense image matching process; this rectification ensures very efficient processing and detects occluded areas by applying a back-matching step. In the dense image matching process, a cost function consisting of a matching score as well as a smoothness term is minimized. In the second step, the resulting range image patches are fused into a DSM by optimizing a global cost function. The whole process is optimized for multi-core CPUs and optionally uses GPUs if available. UltraMap 3.0 also features an additional step, presented in this paper: a completely automated true-ortho and ortho workflow, in which the UltraCam images are combined with the DSM or DTM in an automated rectification step, yielding high quality true-ortho or ortho images. The paper presents the new workflow and first results.
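
    The cost structure described above (a matching score plus a smoothness term) can be sketched as a tiny dynamic program over candidate depths along one image row. This is a toy illustration of the principle, not Vexcel's matcher.

        import numpy as np

        def best_depths(matching_cost, smooth_weight=0.1):
            """matching_cost: (n_pixels, n_depths) array, lower = better match."""
            n, d = matching_cost.shape
            depths = np.arange(d)
            total = matching_cost[0].copy()
            choice = np.zeros((n, d), dtype=int)
            for i in range(1, n):
                # Transition penalty nudges neighbouring pixels towards similar depths.
                trans = total[:, None] + smooth_weight * np.abs(depths[:, None] - depths[None, :])
                choice[i] = trans.argmin(axis=0)
                total = matching_cost[i] + trans.min(axis=0)
            path = [int(total.argmin())]            # best depth at the last pixel
            for i in range(n - 1, 0, -1):
                path.append(int(choice[i][path[-1]]))
            return path[::-1]                       # one depth index per pixel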

  2. Comparison of landmark-based and automatic methods for cortical surface registration

    PubMed Central

    Pantazis, Dimitrios; Joshi, Anand; Jiang, Jintao; Shattuck, David; Bernstein, Lynne E.; Damasio, Hanna; Leahy, Richard M.

    2009-01-01

    Group analysis of structure or function in cerebral cortex typically involves as a first step the alignment of the cortices. A surface based approach to this problem treats the cortex as a convoluted surface and coregisters across subjects so that cortical landmarks or features are aligned. This registration can be performed using curves representing sulcal fundi and gyral crowns to constrain the mapping. Alternatively, registration can be based on the alignment of curvature metrics computed over the entire cortical surface. The former approach typically involves some degree of user interaction in defining the sulcal and gyral landmarks while the latter methods can be completely automated. Here we introduce a cortical delineation protocol consisting of 26 consistent landmarks spanning the entire cortical surface. We then compare the performance of a landmark-based registration method that uses this protocol with that of two automatic methods implemented in the software packages FreeSurfer and BrainVoyager. We compare performance in terms of discrepancy maps between the different methods, the accuracy with which regions of interest are aligned, and the ability of the automated methods to correctly align standard cortical landmarks. Our results show similar performance for ROIs in the perisylvian region for the landmark based method and FreeSurfer. However, the discrepancy maps showed larger variability between methods in occipital and frontal cortex and also that automated methods often produce misalignment of standard cortical landmarks. Consequently, selection of the registration approach should consider the importance of accurate sulcal alignment for the specific task for which coregistration is being performed. When automatic methods are used, the users should ensure that sulci in regions of interest in their studies are adequately aligned before proceeding with subsequent analysis. PMID:19796696

  3. Computer-generated mineral commodity deposit maps

    USGS Publications Warehouse

    Schruben, Paul G.; Hanley, J. Thomas

    1983-01-01

    This report describes an automated method of generating deposit maps of mineral commodity information. In addition, it serves as a user's manual for the authors' mapping system. Procedures were developed that allow commodity specialists to enter deposit information, retrieve selected data, and plot deposit symbols in any geographic area within the conterminous United States. The mapping system uses both micro- and mainframe computers. The microcomputer is used to input and retrieve information, thus minimizing computing charges. The mainframe computer is used to generate map plots, which are printed by a Calcomp plotter. The Selector V database system is employed for input and retrieval on the microcomputer. A general mapping program (Genmap) was written in FORTRAN for use on the mainframe computer. Genmap can plot fifteen symbol types (for point locations) in three sizes. The user can assign symbol types to data items interactively. Individual map symbols can be labeled with a number or the deposit name. Genmap also provides several geographic boundary file and window options.

  4. Automated land-use mapping from spacecraft data. [Oakland County, Michigan

    NASA Technical Reports Server (NTRS)

    Chase, P. E. (Principal Investigator); Rogers, R. H.; Reed, L. E.

    1974-01-01

    The author has identified the following significant results. In response to the need for a faster, more economical means of producing land use maps, this study evaluated the suitability of using ERTS-1 computer compatible tape (CCT) data as a basis for automatic mapping. Significant findings are: (1) automatic classification accuracy greater than 90% is achieved on categories of deep and shallow water, tended grass, rangeland, extractive (bare earth), urban, forest land, and nonforested wet lands; (2) computer-generated printouts by target class provide a quantitative measure of land use; and (3) the generation of map overlays showing land use from ERTS-1 CCTs offers a significant breakthrough in the rate at which land use maps are generated. Rather than uncorrected classified imagery or computer line printer outputs, the processing results in geometrically-corrected computer-driven pen drawing of land categories, drawn on a transparent material at a scale specified by the operator. These map overlays are economically produced and provide an efficient means of rapidly updating maps showing land use.

  5. Words, concepts, or both: optimal indexing units for automated information retrieval.

    PubMed Central

    Hersh, W. R.; Hickam, D. H.; Leone, T. J.

    1992-01-01

    What is the best way to represent the content of documents in an information retrieval system? This study compares the retrieval effectiveness of five different methods for automated (machine-assigned) indexing using three test collections. The consistently best methods are those that use indexing based on the words that occur in the available text of each document. Methods used to map text into concepts from a controlled vocabulary showed no advantage over the word-based methods. This study also looked at an approach to relevance feedback which showed benefit for both word-based and concept-based methods. PMID:1482951

  6. LANDSAT demonstration/application and GIS integration in south central Alaska

    NASA Technical Reports Server (NTRS)

    Burns, A. W.; Derrenbacher, W.

    1981-01-01

    Automated geographic information systems were developed for two sites in Southcentral Alaska to serve as tests for both the process of integrating classified LANDSAT data into a comprehensive environmental data base and the process of using automated information in land capability/suitability analysis and environmental planning. The Big Lake test site, located approximately 20 miles north of the City of Anchorage, comprises an area of approximately 150 square miles. The Anchorage Hillside test site, lying approximately 5 miles southeast of the central part of the city, extends over an area of some 25 square miles. Map construction and content is described.

  7. Compliant Task Execution and Learning for Safe Mixed-Initiative Human-Robot Operations

    NASA Technical Reports Server (NTRS)

    Dong, Shuonan; Conrad, Patrick R.; Shah, Julie A.; Williams, Brian C.; Mittman, David S.; Ingham, Michel D.; Verma, Vandana

    2011-01-01

    We introduce a novel task execution capability that enhances the ability of in-situ crew members to function independently from Earth by enabling safe and efficient interaction with automated systems. This task execution capability provides the ability to (1) map goal-directed commands from humans into safe, compliant, automated actions, (2) quickly and safely respond to human commands and actions during task execution, and (3) specify complex motions through teaching by demonstration. Our results are applicable to future surface robotic systems, and we have demonstrated these capabilities on JPL's All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robot.

  8. Improvement of the banana "Musa acuminata" reference sequence using NGS data and semi-automated bioinformatics methods.

    PubMed

    Martin, Guillaume; Baurens, Franc-Christophe; Droc, Gaëtan; Rouard, Mathieu; Cenci, Alberto; Kilian, Andrzej; Hastie, Alex; Doležel, Jaroslav; Aury, Jean-Marc; Alberti, Adriana; Carreel, Françoise; D'Hont, Angélique

    2016-03-16

    Recent advances in genomics indicate functional significance of a majority of genome sequences and their long range interactions. As a detailed examination of genome organization and function requires very high quality genome sequence, the objective of this study was to improve reference genome assembly of banana (Musa acuminata). We have developed a modular bioinformatics pipeline to improve genome sequence assemblies, which can handle various types of data. The pipeline comprises several semi-automated tools. However, unlike classical automated tools that are based on global parameters, the semi-automated tools proposed an expert mode for a user who can decide on suggested improvements through local compromises. The pipeline was used to improve the draft genome sequence of Musa acuminata. Genotyping by sequencing (GBS) of a segregating population and paired-end sequencing were used to detect and correct scaffold misassemblies. Long insert size paired-end reads identified scaffold junctions and fusions missed by automated assembly methods. GBS markers were used to anchor scaffolds to pseudo-molecules with a new bioinformatics approach that avoids the tedious step of marker ordering during genetic map construction. Furthermore, a genome map was constructed and used to assemble scaffolds into super scaffolds. Finally, a consensus gene annotation was projected on the new assembly from two pre-existing annotations. This approach reduced the total Musa scaffold number from 7513 to 1532 (i.e. by 80%), with an N50 that increased from 1.3 Mb (65 scaffolds) to 3.0 Mb (26 scaffolds). 89.5% of the assembly was anchored to the 11 Musa chromosomes compared to the previous 70%. Unknown sites (N) were reduced from 17.3 to 10.0%. The release of the Musa acuminata reference genome version 2 provides a platform for detailed analysis of banana genome variation, function and evolution. Bioinformatics tools developed in this work can be used to improve genome sequence assemblies in other species.
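
    N50, the contiguity metric quoted above, is the scaffold length at which half of the total assembled bases lie in scaffolds of that length or longer. A standard computation (not the authors' code) is:

        def n50(scaffold_lengths):
            lengths = sorted(scaffold_lengths, reverse=True)
            half = sum(lengths) / 2.0
            running = 0
            for length in lengths:
                running += length
                if running >= half:
                    return length

        print(n50([3_000_000, 2_000_000, 1_000_000, 500_000]))   # 2000000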

  9. Hyperspectral image analysis for water stress detection of apple trees

    USDA-ARS?s Scientific Manuscript database

    Plant stress significantly reduces plant productivity. Automated on-the-go mapping of plant stress would allow for a timely intervention and mitigation of the problem before critical thresholds are exceeded, thereby maximizing productivity. The spectral signature of plant leaves was analyzed by a ...

  10. A Marker-Based Approach for the Automated Selection of a Single Segmentation from a Hierarchical Set of Image Segmentations

    NASA Technical Reports Server (NTRS)

    Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.

    2012-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performances for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracies are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.

  11. Imaging ATUM ultrathin section libraries with WaferMapper: a multi-scale approach to EM reconstruction of neural circuits

    PubMed Central

    Hayworth, Kenneth J.; Morgan, Josh L.; Schalek, Richard; Berger, Daniel R.; Hildebrand, David G. C.; Lichtman, Jeff W.

    2014-01-01

    The automated tape-collecting ultramicrotome (ATUM) makes it possible to collect large numbers of ultrathin sections quickly—the equivalent of a petabyte of high resolution images each day. However, even high throughput image acquisition strategies generate images far more slowly (at present ~1 terabyte per day). We therefore developed WaferMapper, a software package that takes a multi-resolution approach to mapping and imaging select regions within a library of ultrathin sections. This automated method selects and directs imaging of corresponding regions within each section of an ultrathin section library (UTSL) that may contain many thousands of sections. Using WaferMapper, it is possible to map thousands of tissue sections at low resolution and target multiple points of interest for high resolution imaging based on anatomical landmarks. The program can also be used to expand previously imaged regions, acquire data under different imaging conditions, or re-image after additional tissue treatments. PMID:25018701

  12. City model enrichment

    NASA Astrophysics Data System (ADS)

    Smart, Philip D.; Quinn, Jonathan A.; Jones, Christopher B.

    The combination of mobile communication technology with location and orientation aware digital cameras has introduced increasing interest in the exploitation of 3D city models for applications such as augmented reality and automated image captioning. The effectiveness of such applications is, at present, severely limited by the often poor quality of semantic annotation of the 3D models. In this paper, we show how freely available sources of georeferenced Web 2.0 information can be used for automated enrichment of 3D city models. Point-referenced names of prominent buildings and landmarks mined from Wikipedia articles and from the OpenStreetMap digital map and Geonames gazetteer have been matched to the 2D ground plan geometry of a 3D city model. In order to address the ambiguities that arise in the associations between these sources and the city model, we present procedures to merge potentially related buildings and implement fuzzy matching between reference points and building polygons. An experimental evaluation demonstrates the effectiveness of the presented methods.
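
    The fuzzy point-to-footprint matching can be sketched with standard computational geometry; the tolerance below is an assumed value and the footprint is a hypothetical example.

        from shapely.geometry import Point, Polygon

        def match_name_to_building(point_xy, footprints, tolerance_m=25.0):
            """footprints: list of (building_id, Polygon) pairs in a metric CRS."""
            p = Point(point_xy)
            dist, bid = min((poly.distance(p), bid) for bid, poly in footprints)
            return bid if dist <= tolerance_m else None   # distance 0 means the point is inside

        # Hypothetical footprint: a 20 m x 30 m building near the origin.
        hall = Polygon([(0, 0), (20, 0), (20, 30), (0, 30)])
        print(match_name_to_building((25, 15), [('city_hall', hall)]))   # 'city_hall'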

  13. Scalable and High-Throughput Execution of Clinical Quality Measures from Electronic Health Records using MapReduce and the JBoss® Drools Engine

    PubMed Central

    Peterson, Kevin J.; Pathak, Jyotishman

    2014-01-01

    Automated execution of electronic Clinical Quality Measures (eCQMs) from electronic health records (EHRs) on large patient populations remains a significant challenge, and the testability, interoperability, and scalability of measure execution are critical. The High Throughput Phenotyping (HTP; http://phenotypeportal.org) project aligns with these goals by using the standards-based HL7 Health Quality Measures Format (HQMF) and Quality Data Model (QDM) for measure specification, as well as Common Terminology Services 2 (CTS2) for semantic interpretation. The HQMF/QDM representation is automatically transformed into a JBoss® Drools workflow, enabling horizontal scalability via clustering and MapReduce algorithms. Using Project Cypress, automated verification metrics can then be produced. Our results show linear scalability for nine executed 2014 Center for Medicare and Medicaid Services (CMS) eCQMs for eligible professionals and hospitals for >1,000,000 patients, and verified execution correctness of 96.4% based on Project Cypress test data of 58 eCQMs. PMID:25954459
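
    The horizontal scaling described here follows the classic map/reduce pattern. The toy Python sketch below shows the shape of that computation for a single invented measure (a numerator predicate over patient records); it illustrates the pattern only and is not the project's Drools-based implementation.

      # Map: count numerator/denominator hits per chunk. Reduce: sum the counts.
      from functools import reduce
      from multiprocessing import Pool

      def evaluate_chunk(chunk):
          numerator = sum(1 for p in chunk if p["a1c"] > 9.0)  # invented predicate
          return (numerator, len(chunk))

      def combine(left, right):
          return (left[0] + right[0], left[1] + right[1])

      if __name__ == "__main__":
          patients = [{"a1c": 6.5 + (i % 8)} for i in range(100_000)]  # fake records
          chunks = [patients[i::8] for i in range(8)]
          with Pool(8) as pool:
              partials = pool.map(evaluate_chunk, chunks)   # map phase
          num, den = reduce(combine, partials)              # reduce phase
          print(f"measure proportion: {num / den:.3f}")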

  14. Review of edgematching procedures for digital cartographic data used in Geographic Information Systems (GIS)

    USGS Publications Warehouse

    Nebert, D.D.

    1989-01-01

    In the process of developing a continuous hydrographic data layer for water resources applications in the Pacific Northwest, map-edge discontinuities in the U.S. Geological Survey 1:100,000-scale digital data that required application of computer-assisted edgematching procedures were identified. The spatial data sets required by the project must have line features that match closely enough across map boundaries to ensure full line topology when adjacent files are joined by the computer. Automated edgematching techniques are evaluated as to their effects on positional accuracy. Interactive methods such as selective node-matching and on-screen editing are also reviewed. Interactive procedures complement automated methods by allowing supervision of edgematching in a cartographic and hydrologic context. Common edge conditions encountered in the preparation of the Northwest Rivers data base are described, as are recommended processing solutions. Suggested edgematching procedures for 1:100,000-scale hydrography data are included in an appendix to encourage consistent processing of this theme on a national scale. (USGS)
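
    The automated node-matching step that such procedures rely on can be sketched briefly: line endpoints from adjacent sheets are paired when they fall within a tolerance of each other and snapped to a common location so that line topology closes across the join. The tolerance and coordinates are invented.

      # Snap endpoints across a map-sheet join when within tolerance.
      import math

      def snap_edge_nodes(left_nodes, right_nodes, tolerance=5.0):
          """Pair each left-sheet endpoint with the nearest right-sheet
          endpoint within tolerance; the snapped location is the midpoint."""
          matches = []
          for lx, ly in left_nodes:
              best, best_d = None, tolerance
              for rx, ry in right_nodes:
                  d = math.hypot(lx - rx, ly - ry)
                  if d <= best_d:
                      best, best_d = (rx, ry), d
              if best is not None:
                  snapped = ((lx + best[0]) / 2.0, (ly + best[1]) / 2.0)
                  matches.append(((lx, ly), best, snapped))
          return matches

      left = [(1000.0, 203.2), (1000.0, 467.9)]    # endpoints at the sheet edge
      right = [(1001.8, 204.0), (1000.4, 890.1)]
      for old_left, old_right, snapped in snap_edge_nodes(left, right):
          print(old_left, "+", old_right, "->", snapped)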

  15. Mapping social behavior-induced brain activation at cellular resolution in the mouse

    PubMed Central

    Kim, Yongsoo; Venkataraju, Kannan Umadevi; Pradhan, Kith; Mende, Carolin; Taranda, Julian; Turaga, Srinivas C.; Arganda-Carreras, Ignacio; Ng, Lydia; Hawrylycz, Michael J.; Rockland, Kathleen; Seung, H. Sebastian; Osten, Pavel

    2014-01-01

    Understanding how brain activation mediates behaviors is a central goal of systems neuroscience. Here we apply an automated method for mapping brain activation in the mouse in order to probe how sex-specific social behaviors are represented in the male brain. Our method uses the immediate early gene c-fos, a marker of neuronal activation, visualized by serial two-photon tomography: the c-fos-GFP-positive neurons are computationally detected, their distribution is registered to a reference brain and a brain atlas, and their numbers are analyzed by statistical tests. Our results reveal distinct and shared female and male interaction-evoked patterns of male brain activation representing sex discrimination and social recognition. We also identify brain regions whose degree of activity correlates with specific features of social behaviors, and estimate the total numbers and densities of activated neurons per brain area. Our study opens the door to automated screening of behavior-evoked brain activation in the mouse. PMID:25558063

  16. Application of DEN refinement and automated model building to a difficult case of molecular-replacement phasing: the structure of a putative succinyl-diaminopimelate desuccinylase from Corynebacterium glutamicum.

    PubMed

    Brunger, Axel T; Das, Debanu; Deacon, Ashley M; Grant, Joanna; Terwilliger, Thomas C; Read, Randy J; Adams, Paul D; Levitt, Michael; Schröder, Gunnar F

    2012-04-01

    Phasing by molecular replacement remains difficult for targets that are far from the search model or in situations where the crystal diffracts only weakly or to low resolution. Here, the process of determining and refining the structure of Cgl1109, a putative succinyl-diaminopimelate desuccinylase from Corynebacterium glutamicum, at ∼3 Å resolution is described using a combination of homology modeling with MODELLER, molecular-replacement phasing with Phaser, deformable elastic network (DEN) refinement and automated model building using AutoBuild in a semi-automated fashion, followed by final refinement cycles with phenix.refine and Coot. This difficult molecular-replacement case illustrates the power of including DEN restraints derived from a starting model to guide the movements of the model during refinement. The resulting improved model phases provide better starting points for automated model building and produce more significant difference peaks in anomalous difference Fourier maps to locate anomalous scatterers than does standard refinement. This example also illustrates a current limitation of automated procedures that require manual adjustment of local sequence misalignments between the homology model and the target sequence.

  17. A Generalized Timeline Representation, Services, and Interface for Automating Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Johnston, Mark; Frank, Jeremy; Giuliano, Mark; Kavelaars, Alicia; Lenzen, Christoph; Policella, Nicola

    2012-01-01

    Numerous automated and semi-automated planning & scheduling systems have been developed for space applications. Most of these systems are model-based in that they encode the domain knowledge necessary to predict spacecraft state and resources based on initial conditions and a proposed activity plan. The spacecraft state and resources are often modeled as a series of timelines, with each timeline (or set of timelines) representing a state or resource that is key to the operation of the spacecraft. In this paper, we first describe a basic timeline representation that can capture a set of state, resource, timing, and transition constraints. We then describe a number of planning and scheduling systems designed for space applications (in many cases deployed for use on ongoing missions) and describe how they do and do not map onto this timeline model.
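
    A minimal sketch of such a timeline representation, reduced to a single capacity-constrained resource; the class and the values are illustrative and not drawn from any of the surveyed systems.

      # A resource timeline that rejects activities violating its capacity.
      from dataclasses import dataclass, field

      @dataclass
      class ResourceTimeline:
          capacity: float
          intervals: list = field(default_factory=list)  # (start, end, usage)

          def usage_at(self, t):
              return sum(u for s, e, u in self.intervals if s <= t < e)

          def add_activity(self, start, end, usage):
              """Add an activity iff the capacity holds over [start, end)."""
              # Usage is piecewise constant; maxima occur at interval starts.
              points = {start} | {s for s, e, u in self.intervals if start <= s < end}
              if any(self.usage_at(t) + usage > self.capacity for t in points):
                  return False
              self.intervals.append((start, end, usage))
              return True

      power = ResourceTimeline(capacity=100.0)
      print(power.add_activity(0, 10, 60.0))   # True
      print(power.add_activity(5, 15, 60.0))   # False: would hit 120 W at t=5
      print(power.add_activity(10, 20, 60.0))  # True: no overlap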

  18. A complete solution of cartographic displacement based on elastic beams model and Delaunay triangulation

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Guo, Q.; Sun, Y.

    2014-04-01

    In map production and generalization, spatial conflicts inevitably arise, but their detection and resolution still require manual operation; this has become a bottleneck hindering the development of automated cartographic generalization. Displacement is the most useful contextual operator for resolving conflicts that arise between two or more map objects. Automated generalization research has reported many displacement approaches, including sequential approaches and optimization approaches. As an optimization approach based on energy minimization principles, the elastic beams model has been used several times to resolve displacement problems for roads and buildings. However, a complete displacement solution must also take conflict detection and spatial context analysis into consideration. We therefore propose a complete displacement solution based on the combined use of the elastic beams model and a constrained Delaunay triangulation (CDT). The solution is designed as a cyclic, iterative process with two phases: a detection phase and a displacement phase. In the detection phase, a CDT of the map is used to detect proximity conflicts, identify spatial relationships and structures, and construct auxiliary structures, so as to support the displacement phase based on elastic beams. In addition, to improve the displacement algorithm, a method for adaptive parameter setting and a new iterative strategy are put forward. Finally, we implemented our solution on a map generalization testing platform and successfully tested it against two hand-generated test datasets of roads and buildings.
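
    The detection phase can be illustrated with an ordinary (unconstrained) Delaunay triangulation from SciPy standing in for the paper's CDT: any triangulation edge shorter than the minimum legible separation flags a proximity conflict. The threshold and coordinates are invented.

      # Flag proximity conflicts via short Delaunay edges.
      import numpy as np
      from scipy.spatial import Delaunay

      points = np.array([[0, 0], [12, 4], [80, 5], [85, 60], [10, 70]], float)
      tri = Delaunay(points)

      edges = set()
      for simplex in tri.simplices:            # each simplex: 3 point indices
          for i in range(3):
              a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
              edges.add((a, b))

      MIN_SEPARATION = 15.0                    # minimum legible distance
      conflicts = [(a, b, np.linalg.norm(points[a] - points[b]))
                   for a, b in edges
                   if np.linalg.norm(points[a] - points[b]) < MIN_SEPARATION]
      print(conflicts)   # points 0 and 1 are only ~12.6 apart -> conflict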

  19. A multifaceted comparison of ArcGIS and MapMarker for automated geocoding.

    PubMed

    Kumar, Sanjaya; Liu, Ming; Hwang, Syni-An

    2012-11-01

    Geocoding is increasingly being used for public health surveillance and spatial epidemiology studies. Public health departments in the United States of America (USA) often use this approach to investigate disease outbreaks and clusters or assign health records to appropriate geographic units. We evaluated two commonly used geocoding software packages, ArcGIS and MapMarker, for automated geocoding of a large number of residential addresses from health administrative data in New York State, USA to better understand their features, performance and limitations. The comparison was based on three metrics of evaluation: completeness (or match rate), geocode similarity and positional accuracy. Of the 551,798 input addresses, 318,302 (57.7%) were geocoded by MapMarker and 420,813 (76.3%) by the ArcGIS composite address locator. High similarity between the geocodes assigned by the two methods was found, especially in suburban and urban areas. Among addresses with a distance of greater than 100 m between the geocodes assigned by the two packages, the point assigned by ArcGIS was closer to the associated parcel centroid ("true" location) compared with that assigned by MapMarker. In addition, the composite address locator in ArcGIS allows users to fully utilise available reference data, which consequently results in better geocoding results. However, the positional differences found were minimal, and a large majority of addresses were placed on the same locations by both geocoding packages. Using both methods and combining the results can maximise match rates and save the time needed for manual geocoding.
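
    The two headline metrics, completeness and positional difference, reduce to simple computations once both geocoders' outputs are available; the sketch below uses invented records, with the haversine formula standing in for whichever distance computation the study used.

      # Match rate per geocoder plus haversine distance between point pairs.
      import math

      def haversine_m(lat1, lon1, lat2, lon2):
          """Great-circle distance in metres between two WGS84 points."""
          r = 6371000.0
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
          a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
          return 2 * r * math.asin(math.sqrt(a))

      results = [  # one entry per address; None means the geocoder failed
          {"arcgis": (42.6526, -73.7562), "mapmarker": (42.6530, -73.7570)},
          {"arcgis": (40.7128, -74.0060), "mapmarker": None},
      ]
      n = len(results)
      print("ArcGIS match rate:", sum(r["arcgis"] is not None for r in results) / n)
      print("MapMarker match rate:", sum(r["mapmarker"] is not None for r in results) / n)
      for r in results:
          if r["arcgis"] and r["mapmarker"]:
              print("difference: %.1f m" % haversine_m(*r["arcgis"], *r["mapmarker"]))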

  1. Updating National Topographic Data Base Using Change Detection Methods

    NASA Astrophysics Data System (ADS)

    Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.

    2016-06-01

    The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time, and the development of specialized procedures. In many National Mapping and Cadastre Agencies (NMCAs), the updating cycle takes a few years. Today, however, reality is dynamic and changes occur every day, so users expect the existing database to portray the current situation. Global mapping projects based on community volunteers, such as OSM, update their databases daily through crowdsourcing. To fulfil users' requirements for rapid updating, a new methodology is needed that maps major areas of interest while preserving associated decoding information. Until recently, automated processes did not yield satisfactory results; a typical process involved comparing images from different periods, success rates in identifying objects were low, and most detections were accompanied by a high percentage of false alarms. As a result, the automatic process required significant editorial work that made it uneconomical. In recent years, developments in mapping technologies, advances in image-processing algorithms and computer vision, and the arrival of digital aerial cameras with a NIR band and very-high-resolution satellites have made a cost-effective automated process possible. The automatic process is based on high-resolution digital surface model analysis, multispectral (MS) classification, MS segmentation, object analysis, and shape-forming algorithms. This article reviews the results of a novel change detection methodology as a first step towards updating the NTDB at the Survey of Israel.

  2. Application of the automated spatial surveillance program to birth defects surveillance data.

    PubMed

    Gardner, Bennett R; Strickland, Matthew J; Correa, Adolfo

    2007-07-01

    Although many birth defects surveillance programs incorporate georeferenced records into their databases, practical methods for routine spatial surveillance are lacking. We present a macroprogram written for the software package R designed for routine exploratory spatial analysis of birth defects data, the Automated Spatial Surveillance Program (ASSP), and present an application of this program using spina bifida prevalence data for metropolitan Atlanta. Birth defects surveillance data were collected by the Metropolitan Atlanta Congenital Defects Program. We generated ASSP maps for two groups of years that correspond roughly to the periods before (1994-1998) and after (1999-2002) folic acid fortification of flour. ASSP maps display census tract-specific spina bifida prevalence, smoothed prevalence contours, and locations of statistically elevated prevalence. We used these maps to identify areas of elevated prevalence for spina bifida. We identified a large area of potential concern in the years following fortification of grains and cereals with folic acid. This area overlapped census tracts containing large numbers of Hispanic residents. The potential utility of ASSP for spatial disease monitoring was demonstrated by the identification of areas of high prevalence of spina bifida and may warrant further study and monitoring. We intend to further develop ASSP so that it becomes practical for routine spatial monitoring of birth defects.

  3. Deriving pathway maps from automated text analysis using a grammar-based approach.

    PubMed

    Olsson, Björn; Gawronska, Barbara; Erlendsson, Björn

    2006-04-01

    We demonstrate how automated text analysis can be used to support the large-scale analysis of metabolic and regulatory pathways by deriving pathway maps from textual descriptions found in the scientific literature. The main assumption is that correct syntactic analysis combined with domain-specific heuristics provides a good basis for relation extraction. Our method uses an algorithm that searches through the syntactic trees produced by a parser based on a Referent Grammar formalism, identifies relations mentioned in the sentence, and classifies them with respect to their semantic class and epistemic status (facts, counterfactuals, hypotheses). The semantic categories used in the classification are based on the relation set used in KEGG (Kyoto Encyclopedia of Genes and Genomes), so that pathway maps using KEGG notation can be automatically generated. We present the current version of the relation extraction algorithm and an evaluation based on a corpus of abstracts obtained from PubMed. The results indicate that the method is able to combine a reasonable coverage with high accuracy. We found that 61% of all sentences were parsed, and 97% of the parse trees were judged to be correct. The extraction algorithm was tested on a sample of 300 parse trees and was found to produce correct extractions in 90.5% of the cases.

  4. Automated mineralogy based on micro-energy-dispersive X-ray fluorescence microscopy (µ-EDXRF) applied to plutonic rock thin sections in comparison to a mineral liberation analyzer

    NASA Astrophysics Data System (ADS)

    Nikonow, Wilhelm; Rammlmair, Dieter

    2017-10-01

    Recent developments in the application of micro-energy-dispersive X-ray fluorescence spectrometry mapping (µ-EDXRF) have opened up new opportunities for fast geoscientific analyses. Acquiring spatially resolved spectral and chemical information non-destructively for large samples of up to 20 cm length provides valuable information for geoscientific interpretation. Using supervised classification of the spectral information, mineral distribution maps can be obtained. In this work, thin sections of plutonic rocks are analyzed by µ-EDXRF and classified using the supervised classification algorithm spectral angle mapper (SAM). Based on the mineral distribution maps, it is possible to obtain quantitative mineral information, i.e., to calculate the modal mineralogy, search for and locate minerals of interest, and perform image analysis. The results are compared with automated mineralogy obtained from the mineral liberation analyzer (MLA) of a scanning electron microscope (SEM) and show good agreement, with deviations resulting mostly from the limited spatial resolution of the µ-EDXRF instrument. Given the short time needed for sample preparation and measurement, this method is suitable for fast sample overviews with valuable chemical, mineralogical and textural information. Additionally, it enables the researcher to make better and more targeted decisions for subsequent analyses.
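
    The spectral angle mapper itself is compact: each pixel spectrum is assigned to the reference spectrum subtending the smallest angle. A minimal NumPy sketch with invented three-band spectra follows.

      # Spectral angle mapper (SAM) over a tiny invented image cube.
      import numpy as np

      def spectral_angle(pixel, reference):
          """Angle in radians between two spectra treated as vectors."""
          cos = np.dot(pixel, reference) / (
              np.linalg.norm(pixel) * np.linalg.norm(reference))
          return np.arccos(np.clip(cos, -1.0, 1.0))

      def sam_classify(cube, endmembers):
          """cube: (rows, cols, bands); endmembers: {name: spectrum}."""
          names = list(endmembers)
          rows, cols, _ = cube.shape
          out = np.empty((rows, cols), dtype=object)
          for r in range(rows):
              for c in range(cols):
                  angles = [spectral_angle(cube[r, c], endmembers[n]) for n in names]
                  out[r, c] = names[int(np.argmin(angles))]
          return out

      endmembers = {"quartz": np.array([0.8, 0.7, 0.6]),
                    "feldspar": np.array([0.3, 0.5, 0.7])}
      cube = np.array([[[0.82, 0.69, 0.61], [0.28, 0.52, 0.71]]])
      print(sam_classify(cube, endmembers))   # [['quartz' 'feldspar']]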

  5. Automated mapping of mineral groups and green vegetation from Landsat Thematic Mapper imagery with an example from the San Juan Mountains, Colorado

    USGS Publications Warehouse

    Rockwell, Barnaby W.

    2013-01-01

    Multispectral satellite data acquired by the ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) and Landsat 7 Enhanced Thematic Mapper Plus (TM) sensors are being used to populate an online Geographic Information System (GIS) of the spatial occurrence of mineral groups and green vegetation across the western conterminous United States and Alaska. These geospatial data are supporting U.S. Geological Survey national-scale mineral deposit database development and other mineral resource and geoenvironmental research as a means of characterizing mineral exposures related to mined and unmined hydrothermally altered rocks and mine waste. This report introduces a new methodology for the automated analysis of Landsat TM data that has been applied to more than 180 scenes covering the western United States. A map of mineral groups and green vegetation produced using this new methodology that covers the western San Juan Mountains, Colorado, and the Four Corners Region is presented. The map is provided as a layered GeoPDF and in GIS-ready digital format. TM data analysis results from other well-studied and mineralogically characterized areas with strong hydrothermal alteration and (or) supergene weathering of near-surface sulfide minerals are also shown and compared with results derived from ASTER data analysis.

  6. SUGAR: graphical user interface-based data refiner for high-throughput DNA sequencing.

    PubMed

    Sato, Yukuto; Kojima, Kaname; Nariai, Naoki; Yamaguchi-Kabata, Yumi; Kawai, Yosuke; Takahashi, Mamoru; Mimori, Takahiro; Nagasaki, Masao

    2014-08-08

    Next-generation sequencers (NGSs) have become one of the main tools of current biology. To obtain useful insights from NGS data, it is essential to control low-quality portions of the data affected by technical errors such as air bubbles in the sequencing fluidics. We developed SUGAR (subtile-based GUI-assisted refiner), software that can handle ultra-high-throughput data with a user-friendly graphical user interface (GUI) and interactive analysis capability. SUGAR generates high-resolution quality heatmaps of the flowcell, enabling users to find possible signatures of technical errors that occurred during sequencing. Sequencing data generated from the error-affected regions of a flowcell can be selectively removed by automated analysis or GUI-assisted operations implemented in SUGAR. The automated data-cleaning function based on sequence read quality (Phred) scores was applied to public whole-human-genome sequencing data, and we show that the overall mapping quality improved. The detailed data evaluation and cleaning enabled by SUGAR reduce technical problems in sequence read mapping, improving subsequent variant analyses that require high-quality sequence data and mapping results. The software should therefore be especially useful for controlling the quality of variant calls from low-frequency cell populations, e.g., cancer cells, in samples affected by technical errors during sequencing.
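
    The quality summaries behind such flowcell heatmaps start from per-base Phred scores. Below is a minimal sketch that averages Phred+33 qualities per read position across a FASTQ file (assuming fixed-length reads); it illustrates the underlying computation and is not SUGAR's subtile-resolved implementation.

      # Mean Phred quality per read position from a FASTQ file.
      import sys
      import numpy as np

      def mean_phred_per_position(fastq_path):
          """Average Phred+33 score at each position (fixed-length reads)."""
          totals, n_reads = None, 0
          with open(fastq_path) as handle:
              for i, line in enumerate(handle):
                  if i % 4 != 3:        # quality string is line 4 of each record
                      continue
                  scores = np.frombuffer(line.rstrip("\n").encode(), np.uint8) - 33
                  totals = scores.astype(float) if totals is None else totals + scores
                  n_reads += 1
          return totals / n_reads

      if __name__ == "__main__":
          means = mean_phred_per_position(sys.argv[1])   # path to a FASTQ file
          print("positions with mean quality below Q30:", np.flatnonzero(means < 30))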

  7. Systems Operation Studies for Automated Guideway Transit Systems: Feeder Systems Model Functional Specification

    DOT National Transportation Integrated Search

    1981-01-01

    This document specifies the functional requirements for the AGT-SOS Feeder Systems Model (FSM), the type of hardware required, and the modeling techniques employed by the FSM. The objective of the FSM is to map the zone-to-zone transit patronage dema...

  8. NOAA Weather Radio - SAME

    Science.gov Websites

  9. LAND COVER MAPPING IN AN AGRICULTURAL SETTING USING MULTISEASONAL THEMATIC MAPPER DATA

    EPA Science Inventory

    A multiseasonal Landsat Thematic Mapper (TM) data set consisting of five image dates from a single year was used to characterize agricultural and related land cover in the Willamette River Basin (WRB) of western Oregon. Image registration was accomplished using an automated grou...

  10. Gbm.auto: A software tool to simplify spatial modelling and Marine Protected Area planning

    PubMed Central

    Officer, Rick; Clarke, Maurice; Reid, David G.; Brophy, Deirdre

    2017-01-01

    Boosted Regression Trees: excellent for data-poor spatial management, but hard to use. Marine resource managers and scientists often advocate spatial approaches to manage data-poor species. Existing spatial prediction and management techniques are either insufficiently robust, struggle with sparse input data, or make suboptimal use of multiple explanatory variables. Boosted Regression Trees (BRTs) feature excellent performance and are well suited to modelling the distribution of data-limited species, but are extremely complicated and time-consuming to learn and use, hindering access for a wide potential user base and therefore limiting uptake and usage. BRTs automated and simplified for accessible general use, with a rich feature set. We have built a software suite in R which integrates pre-existing functions with new tailor-made functions to automate the processing and predictive mapping of species abundance data: by automating and greatly simplifying Boosted Regression Tree spatial modelling, the gbm.auto R package suite makes this powerful statistical modelling technique more accessible to potential users in the ecological and modelling communities. The package and its documentation allow the user to generate maps of predicted abundance, visualise the representativeness of those abundance maps, and plot the relative influence of explanatory variables and their relationship to the response variables. Databases of the processed model objects and a report explaining all the steps taken within the model are also generated. The package includes a previously unavailable Decision Support Tool which combines estimated escapement biomass (the percentage of an exploited population which must be retained each year to conserve it) with the predicted abundance maps to generate maps showing the location and size of habitat that should be protected to conserve the target stocks (candidate MPAs), based on stakeholder priorities such as the minimisation of fishing-effort displacement. Gbm.auto for management in various settings. By bridging the gap between advanced statistical methods for species distribution modelling and conservation science, management, and policy, these tools can allow improved spatial abundance predictions and therefore better management, decision-making, and conservation. Although this package was built to support spatial management of a data-limited marine elasmobranch fishery, it should be equally applicable to spatial abundance modelling, area protection, and stakeholder engagement in various scenarios. PMID:29216310

  11. Developing a semi/automated protocol to post-process large volume, High-resolution airborne thermal infrared (TIR) imagery for urban waste heat mapping

    NASA Astrophysics Data System (ADS)

    Rahman, Mir Mustafizur

    In collaboration with The City of Calgary 2011 Sustainability Direction and as part of the HEAT (Heat Energy Assessment Technologies) project, the focus of this research is to develop a semi/automated 'protocol' to post-process large volumes of high-resolution (H-res) airborne thermal infrared (TIR) imagery to enable accurate urban waste heat mapping. HEAT is a free GeoWeb service designed to help Calgary residents improve their home energy efficiency by visualizing the amount and location of waste heat leaving their homes and communities, as easily as clicking on their house in Google Maps. HEAT metrics are derived from 43 flight lines of TABI-1800 (Thermal Airborne Broadband Imager) data acquired on May 13-14, 2012 at night (11:00 pm-5:00 am) over The City of Calgary, Alberta (~825 km²) at a 50 cm spatial resolution and 0.05°C thermal resolution. At present, the only way to generate a large-area, high-spatial-resolution TIR scene is to acquire separate airborne flight lines and mosaic them together. However, the ambient sensed temperature within and between flight lines naturally changes during acquisition (due to varying atmospheric and local microclimate conditions), resulting in mosaicked images with different temperatures for the same scene components (e.g. roads, buildings), while mosaic join-lines arbitrarily bisect many thousands of homes. In combination, these effects reduce utility and classification accuracy, producing poorly defined HEAT metrics, inaccurate hotspot detection, and raw imagery that is difficult to interpret. In an effort to minimize these effects, three new semi/automated post-processing algorithms (the protocol) are described, which are then used to generate a 43-flight-line mosaic of TABI-1800 data from which accurate Calgary waste heat maps and HEAT metrics can be derived. These algorithms, presented as four peer-reviewed papers, are: (a) Thermal Urban Road Normalization (TURN), used to mitigate the microclimatic variability within a thermal flight line based on varying road temperatures; (b) Automated Polynomial Relative Radiometric Normalization (RRN), which mitigates the between-flight-line radiometric variability; and (c) Object Based Mosaicking (OBM), which minimizes the geometric distortion along the mosaic edge between each flight line. A modified Emissivity Modulation technique is also described to correct H-res TIR images for emissivity. This combined radiometric and geometric post-processing protocol (i) increases the visual agreement between TABI-1800 flight lines, (ii) improves radiometric agreement within/between flight lines, (iii) produces a visually seamless mosaic, (iv) improves hot-spot detection and landcover classification accuracy, and (v) provides accurate data for thermal-based HEAT energy models. Keywords: Thermal Infrared, Post-Processing, High Spatial Resolution, Airborne, Thermal Urban Road Normalization (TURN), Relative Radiometric Normalization (RRN), Object Based Mosaicking (OBM), TABI-1800, HEAT, and Automation.
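
    Of the three algorithms, the polynomial relative radiometric normalization is the simplest to sketch: pixel values sampled from the overlap of two flight lines define a least-squares polynomial that maps the subject line onto the reference line's radiometry. The polynomial degree and the sample temperatures below are illustrative.

      # Polynomial relative radiometric normalization between flight lines.
      import numpy as np

      def fit_rrn(reference_overlap, subject_overlap, degree=2):
          """Least-squares polynomial mapping subject -> reference values."""
          return np.poly1d(np.polyfit(subject_overlap, reference_overlap, degree))

      # Pseudo-invariant pixels sampled from the overlap of two TIR lines (°C).
      subject = np.array([8.1, 9.4, 10.2, 11.7, 13.0, 14.6])
      reference = np.array([9.0, 10.1, 10.8, 12.1, 13.2, 14.5])

      normalize = fit_rrn(reference, subject)
      print(normalize(subject))   # subject line mapped onto the reference scale
      print(normalize(12.0))      # any pixel of the subject line can be corrected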

  12. Semi-automated extraction of longitudinal subglacial bedforms from digital terrain models - Two new methods

    NASA Astrophysics Data System (ADS)

    Jorge, Marco G.; Brennand, Tracy A.

    2017-07-01

    Relict drumlin and mega-scale glacial lineation (positive relief, longitudinal subglacial bedforms - LSBs) morphometry has been used as a proxy for paleo-ice-sheet dynamics. LSB morphometric inventories have relied on manual mapping, which is slow and subjective and thus potentially difficult to reproduce. Automated methods are faster and reproducible, but previous methods for semi-automated LSB mapping have not been highly successful. Here, two new object-based methods for the semi-automated extraction of LSBs (footprints) from digital terrain models are compared in a test area in the Puget Lowland, Washington, USA. As segmentation procedures to create LSB-candidate objects, the normalized closed contour method relies on the contouring of a normalized local relief model addressing LSBs on slopes, and the landform elements mask method relies on the classification of landform elements derived from the digital terrain model. For identifying which LSB-candidate objects correspond to LSBs, both methods use the same LSB operational definition: a ruleset encapsulating expert knowledge, published morphometric data, and the morphometric range of LSBs in the study area. The normalized closed contour method was separately applied to four different local relief models, two computed in moving windows and two hydrology-based. Overall, the normalized closed contour method outperformed the landform elements mask method, and it performed best when applied to a hydrology-based relief model derived from a multiple-direction flow-routing algorithm. To assess its transferability, the normalized closed contour method was evaluated on a second area, the Chautauqua drumlin field, Pennsylvania and New York, USA, where it performed better than in the Puget Lowland. A broad comparison to previous methods suggests that the normalized closed contour method may be the most capable method to date, but more development is required.

  13. Sci-Fri PM: Radiation Therapy, Planning, Imaging, and Special Techniques - 08: Retrospective Dose Accumulation Workflow in Head and Neck Cancer Patients Using RayStation 4.5.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Olive; Chan, Biu; Moseley, Joanne

    Purpose: We have developed a semi-automated dose accumulation workflow for Head and Neck Cancer (HNC) patients to evaluate volumetric and dosimetric changes that take place during radiotherapy. This work will be used to assess how dosimetric changes affect both toxicity and disease control, and hence inform the feasibility and design of a prospective HNC adaptive trial. Methods: RayStation 4.5.2 features deformable image registration (DIR), where structures already defined on the planning CT image set can be deformably mapped onto cone-beam computed tomography (CBCT) images, accounting for daily treatment set-up shifts and changes in patient anatomy. The daily delivered dose can be calculated on each CBCT and mapped back to the planning CT to allow dose accumulation. The process is partially automated using Python scripts developed in collaboration with RaySearch. Results: To date we have performed dose accumulation on 18 HNC patients treated at our institution during 2013–2015 under REB approval. Our semi-automated process establishes clinical feasibility. Generally, dose accumulation for the entire treatment course of one case takes 60–120 minutes: importing all CBCTs requires 20–30 minutes as each patient has 30 to 40 treated fractions; image registration and dose accumulation require 60–90 minutes. This is in contrast to the process without automated scripts, where dose accumulation alone would take 3–5 hours. Conclusions: We have developed a reliable workflow for retrospective dose tracking in HNC using RayStation. The process has been validated for HNC patients treated on both Elekta and Varian linacs with CBCTs acquired on XVI and OBI platforms respectively.

  14. A Python tool to set up relative free energy calculations in GROMACS

    PubMed Central

    Klimovich, Pavel V.; Mobley, David L.

    2015-01-01

    Free energy calculations based on molecular dynamics (MD) simulations have seen a tremendous growth in the last decade. However, it is still difficult and tedious to set them up in an automated manner, as the majority of the present-day MD simulation packages lack that functionality. Relative free energy calculations are a particular challenge for several reasons, including the problem of finding a common substructure and mapping the transformation to be applied. Here we present a tool, alchemical-setup.py, that automatically generates all the input files needed to perform relative solvation and binding free energy calculations with the MD package GROMACS. When combined with Lead Optimization Mapper [14], recently developed in our group, alchemical-setup.py allows fully automated setup of relative free energy calculations in GROMACS. Taking a graph of the planned calculations and a mapping, both computed by LOMAP, our tool generates the topology and coordinate files needed to perform relative free energy calculations for a given set of molecules, and provides a set of simulation input parameters. The tool was validated by performing relative hydration free energy calculations for a handful of molecules from the SAMPL4 challenge [16]. Good agreement with previously published results and the straightforward way in which free energy calculations can be conducted make alchemical-setup.py a promising tool for automated setup of relative solvation and binding free energy calculations. PMID:26487189

  15. Automated selection of brain regions for real-time fMRI brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Lührs, Michael; Sorger, Bettina; Goebel, Rainer; Esposito, Fabrizio

    2017-02-01

    Objective. Brain-computer interfaces (BCIs) implemented with real-time functional magnetic resonance imaging (rt-fMRI) use fMRI time-courses from predefined regions of interest (ROIs). To reach best performances, localizer experiments and on-site expert supervision are required for ROI definition. To automate this step, we developed two unsupervised computational techniques based on the general linear model (GLM) and independent component analysis (ICA) of rt-fMRI data, and compared their performances on a communication BCI. Approach. 3 T fMRI data of six volunteers were re-analyzed in simulated real-time. During a localizer run, participants performed three mental tasks following visual cues. During two communication runs, a letter-spelling display guided the subjects to freely encode letters by performing one of the mental tasks with a specific timing. GLM- and ICA-based procedures were used to decode each letter, respectively using compact ROIs and whole-brain distributed spatio-temporal patterns of fMRI activity, automatically defined from subject-specific or group-level maps. Main results. Letter-decoding performances were comparable to supervised methods. In combination with a similarity-based criterion, GLM- and ICA-based approaches successfully decoded more than 80% (average) of the letters. Subject-specific maps yielded optimal performances. Significance. Automated solutions for ROI selection may help accelerating the translation of rt-fMRI BCIs from research to clinical applications.

  16. Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.

    PubMed

    Han, Youkyung; Oh, Jaehong

    2018-05-17

    For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded-up robust features, to VHR multi-temporal images has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.

  17. Automated Glacier Mapping using Object Based Image Analysis. Case Studies from Nepal, the European Alps and Norway

    NASA Astrophysics Data System (ADS)

    Vatle, S. S.

    2015-12-01

    Frequent and up-to-date glacier outlines are needed for many applications of glaciology: not only glacier area change analysis, but also masks for volume or velocity analysis, estimation of water resources, and model input data. Remote sensing offers a good option for creating glacier outlines over large areas, but manual correction is frequently necessary, especially in areas containing supraglacial debris. We show three different workflows for mapping clean ice and debris-covered ice within Object Based Image Analysis (OBIA). By working at the object level as opposed to the pixel level, OBIA facilitates using contextual, spatial and hierarchical information when assigning classes, and additionally permits the handling of multiple data sources. Our first example shows mapping debris-covered ice in the Manaslu Himalaya, Nepal: SAR coherence data are used in combination with optical and topographic data to classify debris-covered ice, obtaining an accuracy of 91%. Our second example uses a high-resolution LiDAR-derived DEM over the Hohe Tauern National Park in Austria: breaks in surface morphology are used in creating image objects, and debris-covered ice is then classified using a combination of spectral, thermal and topographic properties. Lastly, we show a completely automated workflow for mapping glacier ice in Norway. The NDSI and the NIR/SWIR band ratio are used to map clean ice over the entire country, but the thresholds are calculated automatically from a histogram of each image subset. This means that, in principle, any Landsat scene can be input and the clean ice extracted automatically. Debris-covered ice can be included semi-automatically using contextual and morphological information.
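
    The automated clean-ice step reduces to a band ratio plus an image-specific threshold. The sketch below uses Otsu's method as the histogram-based threshold and invented band values; the paper derives its own threshold from the histogram of each image subset.

      # NDSI with an automatically derived, image-specific threshold.
      import numpy as np
      from skimage.filters import threshold_otsu

      green = np.array([[0.70, 0.65, 0.20], [0.68, 0.22, 0.18]])
      swir = np.array([[0.08, 0.10, 0.25], [0.09, 0.24, 0.22]])

      ndsi = (green - swir) / (green + swir + 1e-9)   # avoid division by zero
      threshold = threshold_otsu(ndsi)                # per-scene, not a fixed value
      ice_mask = ndsi > threshold
      print(threshold)
      print(ice_mask)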

  18. Generation and communication of dynamic maps using light projection

    NASA Astrophysics Data System (ADS)

    Busch, Steffen; Schlichting, Alexander; Brenner, Claus

    2018-05-01

    Many accidents are caused by miscommunication between traffic participants. Much research is being conducted in the area of car-to-car and car-to-infrastructure communication in order to eliminate this cause of accidents. However, less attention is paid to the question of how the behavior of a car can be communicated to pedestrians. Especially considering automated traffic, there is a lack of communication between cars and pedestrians. In this paper, we address the question of how an autonomously driving car can inform pedestrians about its intentions. Especially in the case of highly automated driving, making eye contact with a driver will give no clue about his or her intentions. We developed a prototype which continuously informs pedestrians about the intentions of the vehicle by projecting visual patterns onto the ground. Furthermore, the system communicates its interpretation of the observed situation to the pedestrians to warn them or to encourage them to perform a certain action. In order to communicate adaptively, the vehicle needs to develop an understanding of the dynamics of a city to know what to expect in certain situations and what speed is appropriate. To support this, we created a dynamic map which estimates the number of pedestrians and cyclists in a certain area and is then used to determine how 'hazardous' the area is. This dynamic map is obtained from measurement data from many time instances, in contrast to the static car navigation maps which are prevalent today. Apart from being used for communication purposes, the dynamic map can also influence the speed of a car, be it manually or autonomously driven. Adapting the speed in hazardous areas will help avoid accidents in which a car drives too fast for either a human or a computer-operated system to stop in time.

  19. Snow-Cover Variability in North America in the 2000-2001 Winter as Determined from MODIS Snow Products

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.; Salomonson, Vincent V.; Riggs, George A.; Chien, Janet Y. L.; Houser, Paul R. (Technical Monitor)

    2001-01-01

    Moderate Resolution Imaging Spectroradiometer (MODIS) snow-cover maps have been available since September 13, 2000. These products, at 500 m spatial resolution, are available through the National Snow and Ice Data Center Distributed Active Archive Center in Boulder, Colorado. By the 2001-02 winter, 5 km climate-modeling grid (CMG) products will be available for presentation of global views of snow cover and for use in climate models. All MODIS snow-cover products are produced from automated algorithms that map snow in an objective manner. In this paper, we describe the MODIS snow products, and show snow maps from the fall of 2000 in North America.

  1. Multipolarization radar images for geologic mapping and vegetation discrimination

    NASA Technical Reports Server (NTRS)

    Evans, D. L.; Farr, T. G.; Ford, J. P.; Thompson, T. W.; Werner, C. L.

    1986-01-01

    NASA has developed an airborne SAR that simultaneously yields image data in four linear polarizations in L-band with 10-m resolution over a swath of about 10 km. Signal data are recorded both optically and digitally and annotated in each of the channels to facilitate completely automated digital correlation. Comparison of the relative intensities of the different polarizations furnishes discriminatory mapping information. Local intensity variations in like-polarization images result from topographic effects, while strong cross polarization responses denote the effects of vegetation cover and, in some cases, possible scattering from the subsurface. In each of the areas studied, multiple polarization data led to the discrimination and mapping of unique surface unit features.

  2. A procedure for automated land use mapping using remotely sensed multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Whitley, S. L.

    1975-01-01

    A system of processing remotely sensed multispectral scanner data by computer programs to produce color-coded land use maps for large areas is described. The procedure is explained, the software and the hardware are described, and an analogous example of the procedure is presented. Detailed descriptions of the multispectral scanners currently in use are provided together with a summary of the background of current land use mapping techniques. The data analysis system used in the procedure and the pattern recognition software used are functionally described. Current efforts by the NASA Earth Resources Laboratory to evaluate operationally a less complex and less costly system are discussed in a separate section.

  3. Application of a simple cerebellar model to geologic surface mapping

    USGS Publications Warehouse

    Hagens, A.; Doveton, J.H.

    1991-01-01

    Neurophysiological research into the structure and function of the cerebellum has inspired computational models that simulate information processing associated with coordination and motor movement. The cerebellar model arithmetic computer (CMAC) has a design structure which makes it readily applicable as an automated mapping device that "senses" a surface, based on a sample of discrete observations of surface elevation. The model operates as an iterative learning process, where cell weights are continuously modified by feedback to improve surface representation. The storage requirements are substantially less than those of a conventional memory allocation, and the model is extended easily to mapping in multidimensional space, where the memory savings are even greater.
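
    A toy CMAC in this spirit: several offset tilings quantize (x, y), the surface estimate is the sum of the active cells' weights, and each observation feeds its error back into those weights. The tiling count, resolution, and learning rate are illustrative.

      # CMAC-style tile coding that learns a surface from scattered samples.
      import numpy as np

      class TinyCMAC:
          def __init__(self, n_tilings=8, resolution=10.0, lr=0.2):
              self.n, self.res, self.lr = n_tilings, resolution, lr
              self.weights = [dict() for _ in range(n_tilings)]

          def _cells(self, x, y):
              for t in range(self.n):
                  off = t * self.res / self.n          # staggered tilings
                  yield t, (int((x + off) // self.res), int((y + off) // self.res))

          def predict(self, x, y):
              return sum(self.weights[t].get(c, 0.0) for t, c in self._cells(x, y))

          def learn(self, x, y, z):
              error = z - self.predict(x, y)           # feedback step
              for t, c in self._cells(x, y):
                  self.weights[t][c] = self.weights[t].get(c, 0.0) + self.lr * error / self.n

      rng = np.random.default_rng(0)
      samples = [(x, y, 100 + 0.5 * x - 0.2 * y)       # "sensed" elevations
                 for x, y in rng.uniform(0, 100, size=(500, 2))]
      cmac = TinyCMAC()
      for _ in range(20):                              # iterative learning passes
          for x, y, z in samples:
              cmac.learn(x, y, z)
      print(cmac.predict(50, 50))                      # should land near 115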

  4. Multiwavelength Characteristics of Microflares

    NASA Astrophysics Data System (ADS)

    Poduval, Bala; Schmelz, J. T.

    2016-10-01

    We present the multiwavelength characteristics of microflares detected in SDO/AIA and IRIS images using the Automated Microevent-finding Code (AMC). We have catalogued independent events with information such as location on the disk, size, lifetime, and peak flux, and obtained their frequency distribution. Using the location information, we mapped these events to other wavelengths to study their associated features and to infer their temperature characteristics and evolution. We also obtained their magnetic topologies by mapping the microflare locations onto HMI photospheric magnetic field synoptic maps. Further, we analyzed the filtered brightness profiles and light curves of each event to classify them. Finally, we carried out a differential emission measure (DEM) analysis to study their temperature characteristics.

  5. Application of the 1:2,000,000-scale data base: A National Atlas sectional prototype

    USGS Publications Warehouse

    Dixon, Donna M.

    1985-01-01

    A study of the potential to produce a National Atlas sectional prototype from the 1:2,000,000-scale data base was concluded recently by the National Mapping Division, U. S. Geological Survey. This paper discusses the specific digital cartographic production procedures involved in the preparation of the prototype map, as well as the theoretical and practical cartographic framework for the study. Such items as data organization, data classification, digital techniques, data conversions, and modification of traditional design specifications for an automated environment are discussed. The bulk of the cartographic work for the production of the prototype was carried out in raster format on the Scitex Response-250 mapping system.

  6. Experimental philosophy leading to a small scale digital data base of the conterminous United States for designing experiments with remotely sensed data

    NASA Technical Reports Server (NTRS)

    Labovitz, M. L.; Masuoka, E. J.; Broderick, P. W.; Garman, T. R.; Ludwig, R. W.; Beltran, G. N.; Heyman, P. J.; Hooker, L. K.

    1983-01-01

    Research using satellite remotely sensed data, even within any single scientific discipline, often lacked a unifying principle or strategy with which to plan or integrate studies conducted over an area so large that exhaustive examination is infeasible, e.g., the U.S.A. However, such a series of studies would seem to be at the heart of what makes satellite remote sensing unique, that is the ability to select for study from among remotely sensed data sets distributed widely over the U.S., over time, where the resources do not exist to examine all of them. Using this philosophical underpinning and the concept of a unifying principle, an operational procedure for developing a sampling strategy and formal testable hypotheses was constructed. The procedure is applicable across disciplines, when the investigator restates the research question in symbolic form, i.e., quantifies it. The procedure is set within the statistical framework of general linear models. The dependent variable is any arbitrary function of remotely sensed data and the independent variables are values or levels of factors which represent regional climatic conditions and/or properties of the Earth's surface. These factors are operationally defined as maps from the U.S. National Atlas (U.S.G.S., 1970). Eighty-five maps from the National Atlas, representing climatic and surface attributes, were automated by point counting at an effective resolution of one observation every 17.6 km (11 miles) yielding 22,505 observations per map. The maps were registered to one another in a two step procedure producing a coarse, then fine scale registration. After registration, the maps were iteratively checked for errors using manual and automated procedures. The error free maps were annotated with identification and legend information and then stored as card images, one map to a file. A sampling design will be accomplished through a regionalization analysis of the National Atlas data base (presently being conducted). From this analysis a map of homogeneous regions of the U.S.A. will be created and samples (LANDSAT scenes) assigned by region.

  7. Enhancing the usability and performance of structured association mapping algorithms using automation, parallelization, and visualization in the GenAMap software system

    PubMed Central

    2012-01-01

    Background: Structured association mapping is proving to be a powerful strategy to find genetic polymorphisms associated with disease. However, these algorithms are often distributed as command-line implementations that require expertise and effort to customize and put into practice. Because of the difficulty required to use these cutting-edge techniques, geneticists often revert to simpler, less powerful methods. Results: To make structured association mapping more accessible to geneticists, we have developed an automatic processing system called Auto-SAM. Auto-SAM enables geneticists to run structured association mapping algorithms automatically, using parallelization. Auto-SAM includes algorithms to discover gene-networks and find population structure. Auto-SAM can also run popular association mapping algorithms, in addition to five structured association mapping algorithms. Conclusions: Auto-SAM is available through GenAMap, a front-end desktop visualization tool. GenAMap and Auto-SAM are implemented in JAVA; binaries for GenAMap can be downloaded from http://sailing.cs.cmu.edu/genamap. PMID:22471660

  8. Automated strip-mine and reclamation mapping from ERTS

    NASA Technical Reports Server (NTRS)

    Rogers, R. H. (Principal Investigator); Reed, L. E.; Pettyjohn, W. A.

    1974-01-01

    The author has identified the following significant results. Computer processing techniques were applied to ERTS-1 computer-compatible tape (CCT) data acquired in August 1972 on the Ohio Power Company's coal mining operation in Muskingum County, Ohio. Processing results succeeded in automatically classifying, with an accuracy greater than 90%: (1) stripped earth and major sources of erosion; (2) partially reclaimed areas and minor sources of erosion; (3) water with sedimentation; (4) water without sedimentation; and (5) vegetation. Computer-generated tables listing the area in acres and square kilometers were produced for each target category. Processing results also included geometrically corrected map overlays, one for each target category, drawn on a transparent material by a pen under computer control. Each target category is assigned a distinctive color on the overlay to facilitate interpretation. The overlays, drawn at a scale of 1:250,000 when placed over an AMS map of the same area, immediately provided map locations for each target. These mapping products were generated at a tenth of the cost of conventional mapping techniques.

  9. Automating variable rate irrigation management prescriptions for center pivots from field data maps

    USDA-ARS's Scientific Manuscript database

    Variable rate irrigation (VRI) enables center pivot systems to match irrigation application to non-uniform field needs. This technology has potential to improve application and water-use efficiency while reducing environmental impacts from excess runoff and poor water quality. Proper management of V...

  10. Wireless tracking of cotton modules Part II: automatic machine identification and system testing

    USDA-ARS's Scientific Manuscript database

    Mapping the harvest location of cotton modules is essential to practical understanding and utilization of spatial-variability information in fiber quality. A wireless module-tracking system was recently developed, but automation of the system is required before it will find practical use on the far...

  11. NOAA Weather Radio

    Science.gov Websites

  12. NOAA Weather Radio - All Hazards

    Science.gov Websites

  13. Automated methodology for selecting hot and cold pixel for remote sensing based evapotranspiration mapping

    USDA-ARS?s Scientific Manuscript database

    Surface energy fluxes, especially the latent heat flux from evapotranspiration (ET), determine exchanges of energy and mass between the hydrosphere, atmosphere, and biosphere. There are numerous remote sensing-based energy balance approaches such as METRIC and SEBAL that use hot and cold pixels from...

  14. Procyon LLC: From Music Recommendations to Preference Mapping

    ERIC Educational Resources Information Center

    Chinn, Susan J.

    2011-01-01

    Procyon LLC had re-launched and renamed their music discovery site, Electra, as Capella in 2008. Its core strength had originated from Electra's proprietary technology, which used music libraries from real people, its members, to generate "automated word-of-mouth" recommendations, targeted advertising and editorial content. With the re-launch,…

  15. Mining a Web Citation Database for Author Co-Citation Analysis.

    ERIC Educational Resources Information Center

    He, Yulan; Hui, Siu Cheung

    2002-01-01

    Proposes a mining process to automate author co-citation analysis based on the Web Citation Database, a data warehouse for storing citation indices of Web publications. Describes the use of agglomerative hierarchical clustering for author clustering and multidimensional scaling for displaying author cluster maps, and explains PubSearch, a…

  16. Semi-automated surface mapping via unsupervised classification

    NASA Astrophysics Data System (ADS)

    D'Amore, M.; Le Scaon, R.; Helbert, J.; Maturilli, A.

    2017-09-01

    Due to the increasing volume of data returned from space missions, human search for correlations and identification of interesting features is becoming increasingly unfeasible. Statistical extraction of features via machine learning methods can increase the scientific output of remote sensing missions and aid the discovery of as-yet-unknown features hidden in datasets. These methods exploit algorithms trained on features from multiple instruments, returning classification maps that explore intra-dataset correlation and allow for the discovery of unknown features. We present two applications, one for Mercury and one for Vesta.
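
    For illustration, a minimal sketch of the kind of unsupervised classification described above, assuming per-pixel features from several instruments have been co-registered and stacked into a single array; the array shapes, class count and use of k-means are illustrative, not the authors' exact pipeline:

        # Unsupervised surface classification by clustering per-pixel
        # feature vectors (synthetic data stands in for real channels).
        import numpy as np
        from sklearn.cluster import KMeans

        def classify_surface(cube, n_classes=6, seed=0):
            """Cluster per-pixel feature vectors; return a class-label map."""
            rows, cols, bands = cube.shape
            pixels = cube.reshape(-1, bands)
            labels = KMeans(n_clusters=n_classes, random_state=seed,
                            n_init=10).fit_predict(pixels)
            return labels.reshape(rows, cols)

        cube = np.random.rand(100, 120, 5)   # (rows, cols, stacked bands)
        class_map = classify_surface(cube)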

  17. Geometric principles for constructing radar panoramas of the surface of Venus: Hypsometric features of the Moon and terrestrial planets

    NASA Technical Reports Server (NTRS)

    Rzhiga, O. N.; Tyuflin, Y. S.; Belenkiy, Y. G.; Rodionova, Z. F.; Dekhtyareva, K. I.

    1986-01-01

    The physiographic curves of the Moon and terrestrial planets, drawn both for the entire surface as a whole and for individual hemispheres, were compared to discover the common regularities and individual features in the distribution of hypsometric levels. In 1983 to 1984 the automated interplanetary stations (AMS) Venera 15 and 16 made radar maps of the planet Venus. The synthesized images are the basic initial material for photogrammetric and cartographic processing to create maps of the Venus surface. These principles are discussed.

  18. Digital images in the map revision process

    NASA Astrophysics Data System (ADS)

    Newby, P. R. T.

    Progress towards the adoption of digital (or softcopy) photogrammetric techniques for database and map revision is reviewed. Particular attention is given to the Ordnance Survey of Great Britain, the author's former employer, where digital processes are under investigation but have not yet been introduced for routine production. Developments which may lead to increasing automation of database update processes appear promising, but because of the cost and practical problems associated with managing as well as updating large digital databases, caution is advised when considering the transition to softcopy photogrammetry for revision tasks.

  19. U. S. GEOLOGICAL SURVEY LAND REMOTE SENSING ACTIVITIES.

    USGS Publications Warehouse

    Frederick, Doyle G.

    1983-01-01

    USGS uses all types of remotely sensed data, in combination with other sources of data, to support geologic analyses, hydrologic assessments, land cover mapping, image mapping, and applications research. Survey scientists use all types of remotely sensed data with ground verifications and digital topographic and cartographic data. A considerable amount of research is being done by Survey scientists on developing automated geographic information systems that can handle a wide variety of digital data. The Survey is also investigating the use of microprocessor computer systems for accessing, displaying, and analyzing digital data.

  20. Neurodegenerative changes in Alzheimer's disease: a comparative study of manual, semi-automated, and fully automated assessment using MRI

    NASA Astrophysics Data System (ADS)

    Fritzsche, Klaus H.; Giesel, Frederik L.; Heimann, Tobias; Thomann, Philipp A.; Hahn, Horst K.; Pantel, Johannes; Schröder, Johannes; Essig, Marco; Meinzer, Hans-Peter

    2008-03-01

    Objective quantification of disease-specific neurodegenerative changes can facilitate diagnosis and therapeutic monitoring in several neuropsychiatric disorders. Reproducibility and easy-to-perform assessment are essential to ensure applicability in clinical environments. The aim of this comparative study is the evaluation of a fully automated approach that assesses atrophic changes in Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI). 21 healthy volunteers (mean age 66.2), 21 patients with MCI (66.6), and 10 patients with AD (65.1) were enrolled. Subjects underwent extensive neuropsychological testing and MRI was conducted on a 1.5 Tesla clinical scanner. Atrophic changes were measured automatically by a series of image processing steps including state-of-the-art brain mapping techniques. Results were compared with two reference approaches: a manual segmentation of the hippocampal formation and a semi-automated estimation of temporal horn volume, which is based upon interactive selection of two to six landmarks in the ventricular system. All approaches separated controls and AD patients significantly (10^-5 < p < 10^-4) and showed a slight but not significant increase of neurodegeneration for subjects with MCI compared to volunteers. The automated approach correlated significantly with the manual (r = -0.65, p < 10^-6) and semi-automated (r = -0.83, p < 10^-13) measurements. It achieved high accuracy while maximizing observer independence and time reduction, and thus usefulness for clinical routine.

  1. Legacy Code Modernization

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most porting projects focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction, and 6) machine-specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. parallelizing tools and compiler evaluation; 2. code cleanup and serial optimization using automated scripts; 3. development of a code generator for performance prediction; 4. automated partitioning; 5. automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved in porting and tuning a legacy code application for a new architecture.

  2. Remote sensing of evapotranspiration using automated calibration: Development and testing in the state of Florida

    NASA Astrophysics Data System (ADS)

    Evans, Aaron H.

    Thermal remote sensing is a powerful tool for measuring the spatial variability of evapotranspiration due to the cooling effect of vaporization. The residual method is a popular technique which calculates evapotranspiration by subtracting sensible heat from available energy. Estimating sensible heat requires aerodynamic surface temperature, which is difficult to retrieve accurately. Methods such as SEBAL/METRIC correct for this problem by calibrating the relationship between sensible heat and retrieved surface temperature. The disadvantages of these calibrations are that 1) the user must manually identify extremely dry and wet pixels in the image, and 2) each calibration is only applicable over a limited spatial extent. Producing larger maps is operationally limited by the time required to manually calibrate multiple spatial extents over multiple days. This dissertation develops techniques which automatically detect dry and wet pixels. LANDSAT imagery is used because it resolves dry pixels. Calibrations using 1) only dry pixels and 2) dry and wet pixels are developed. Snapshots of retrieved evaporative fraction and actual evapotranspiration are compared to eddy covariance measurements for five study areas in Florida: 1) Big Cypress, 2) Disney Wilderness, 3) Everglades, 4) near Gainesville, FL, and 5) Kennedy Space Center. The sensitivity of evaporative fraction to temperature, available energy, roughness length and wind speed is tested. A technique for temporally interpolating evapotranspiration by fusing LANDSAT and MODIS is developed and tested. The automated algorithm is successful at detecting wet and dry pixels (if they exist). Including wet pixels in the calibration and assuming constant atmospheric conductance significantly improved results for all study areas but Big Cypress and Gainesville. Evaporative fraction is not very sensitive to instantaneous available energy, but it is sensitive to temperature when wet pixels are included because temperature is required for estimating wet-pixel evapotranspiration. Data fusion techniques only slightly outperformed linear interpolation. Eddy covariance comparison and temporal interpolation produced acceptable bias error for most cases, suggesting automated calibration and interpolation could be used to predict monthly or annual ET. Maps demonstrating spatial patterns of evapotranspiration at field scale were successfully produced, but only for limited spatial extents. A framework has been established for producing larger maps by creating a mosaic of smaller individual maps.
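
    A minimal sketch of automated dry/wet (hot/cold) pixel detection in this spirit, assuming co-registered surface temperature and NDVI rasters; the percentile and NDVI cutoffs are illustrative, not the dissertation's calibrated values:

        import numpy as np

        def find_calibration_pixels(ts, ndvi, dry_pct=99.5, wet_pct=0.5):
            """Return boolean masks of candidate dry (hot) and wet (cold) pixels."""
            # Dry candidates: hottest pixels with sparse vegetation.
            dry = (ts >= np.nanpercentile(ts, dry_pct)) & (ndvi < 0.2)
            # Wet candidates: coldest pixels with dense vegetation or water.
            wet = (ts <= np.nanpercentile(ts, wet_pct)) & (ndvi > 0.5)
            return dry, wet

        ts = np.random.normal(305.0, 5.0, (200, 200))    # surface temperature, K
        ndvi = np.random.uniform(-0.1, 0.9, (200, 200))
        dry_mask, wet_mask = find_calibration_pixels(ts, ndvi)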

  3. Mapping Snow Depth with Automated Terrestrial Laser Scanning - Investigating Potential Applications

    NASA Astrophysics Data System (ADS)

    Adams, M. S.; Gigele, T.; Fromm, R.

    2017-11-01

    This contribution presents an automated terrestrial laser scanning (ATLS) setup, which was used during the winter 2016/17 to monitor the snow depth distribution on a NW-facing slope at a high-alpine study site. We collected data at high temporal [(sub-)daily] and spatial resolution (decimetre-range) over 0.8 km² with a Riegl LPM-321, set in a weather-proof glass fibre enclosure. Two potential ATLS-applications are investigated here: monitoring medium-sized snow avalanche events, and tracking snow depth change caused by snow drift. The results show the ATLS data's high explanatory power and versatility for different snow research questions.

  4. Applying machine learning to pattern analysis for automated in-design layout optimization

    NASA Astrophysics Data System (ADS)

    Cain, Jason P.; Fakhry, Moutaz; Pathak, Piyush; Sweis, Jason; Gennari, Frank; Lai, Ya-Chieh

    2018-04-01

    Building on previous work for cataloging unique topological patterns in an integrated circuit physical design, a new process is defined in which a risk scoring methodology is used to rank patterns by manufacturing risk. Patterns with high risk are then mapped to functionally equivalent patterns with lower risk, and the higher-risk patterns are replaced in the design with their lower-risk equivalents. The pattern selection and replacement is fully automated and suitable for use on full-chip designs. Results from 14nm product designs show that the approach can identify and replace risk patterns with a quantifiable positive impact on the risk score distribution after replacement.
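
    A toy sketch of the rank-and-replace idea, assuming each catalogued pattern carries a risk score and, where known, a functionally equivalent lower-risk pattern; the names and scores below are invented for illustration:

        def plan_replacements(patterns, threshold):
            """Select high-risk patterns that have a lower-risk equivalent."""
            plan = {}
            for name, info in patterns.items():
                equiv = info.get("equivalent")
                if info["risk"] > threshold and equiv is not None:
                    if patterns[equiv]["risk"] < info["risk"]:
                        plan[name] = equiv
            return plan

        catalog = {
            "P17": {"risk": 0.92, "equivalent": "P05"},
            "P05": {"risk": 0.31, "equivalent": None},
        }
        print(plan_replacements(catalog, threshold=0.8))  # {'P17': 'P05'}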

  5. Automated seamline detection along skeleton for remote sensing image mosaicking

    NASA Astrophysics Data System (ADS)

    Zhang, Hansong; Chen, Jianyu; Liu, Xin

    2015-08-01

    The automatic generation of a seamline along the overlap-region skeleton is a key problem for the mosaicking of Remote Sensing (RS) images. As RS image resolution improves, it is necessary to ensure rapid and accurate processing under complex conditions. We therefore introduce an automated seamline detection method for RS image mosaicking based on image objects and overlap-region contour contraction, which ensures both the universality and the efficiency of mosaicking. Experiments show that this method can select seamlines in RS images with great speed and high accuracy over arbitrary overlap regions, and enables rapid RS image mosaicking in surveying and mapping production.

  6. Automated identification and indexing of dislocations in crystal interfaces

    DOE PAGES

    Stukowski, Alexander; Bulatov, Vasily V.; Arsenlis, Athanasios

    2012-10-31

    Here, we present a computational method for identifying partial and interfacial dislocations in atomistic models of crystals with defects. Our automated algorithm is based on a discrete Burgers circuit integral over the elastic displacement field and is not limited to specific lattices or dislocation types. Dislocations in grain boundaries and other interfaces are identified by mapping atomic bonds from the dislocated interface to an ideal template configuration of the coherent interface to reveal incompatible displacements induced by dislocations and to determine their Burgers vectors. Additionally, the algorithm generates a continuous line representation of each dislocation segment in the crystal and also identifies dislocation junctions.

  7. A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image

    NASA Astrophysics Data System (ADS)

    Barat, Christian; Phlypo, Ronald

    2010-12-01

    We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.

  8. Advances in Domain Connectivity for Overset Grids Using the X-Rays Approach

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Kim, Noah; Pandya, Shishir A.

    2012-01-01

    Advances in automation and robustness of the X-rays approach to domain connectivity for overset grids are presented. Given the surface definition for each component that makes up a complex configuration, the determination of hole points with appropriate hole boundaries is automatically and efficiently performed. Improvements made to the original X-rays approach for identifying the minimum hole include an automated closure scheme for hole-cutters with open boundaries, automatic determination of grid points to be considered for blanking by each hole-cutter, and an adaptive X-ray map to economically handle components in close proximity. Furthermore, an automated spatially varying offset of the hole boundary from the minimum hole is achieved using a dual wall-distance function and an orphan point removal iteration process. Results using the new scheme are presented for a number of static and relative motion test cases on a variety of aerospace applications.

  9. Automated deep-phenotyping of the vertebrate brain

    PubMed Central

    Allalou, Amin; Wu, Yuelong; Ghannad-Rezaie, Mostafa; Eimon, Peter M; Yanik, Mehmet Fatih

    2017-01-01

    Here, we describe an automated platform suitable for large-scale deep-phenotyping of zebrafish mutant lines, which uses optical projection tomography to rapidly image brain-specific gene expression patterns in 3D at cellular resolution. Registration algorithms and correlation analysis are then used to compare 3D expression patterns, to automatically detect all statistically significant alterations in mutants, and to map them onto a brain atlas. Automated deep-phenotyping of a mutation in the master transcriptional regulator fezf2 not only detects all known phenotypes but also uncovers important novel neural deficits that were overlooked in previous studies. In the telencephalon, we show for the first time that fezf2 mutant zebrafish have significant patterning deficits, particularly in glutamatergic populations. Our findings reveal unexpected parallels between fezf2 function in zebrafish and mice, where mutations cause deficits in glutamatergic neurons of the telencephalon-derived neocortex. DOI: http://dx.doi.org/10.7554/eLife.23379.001 PMID:28406399

  10. Translation from the collaborative OSM database to cartography

    NASA Astrophysics Data System (ADS)

    Hayat, Flora

    2018-05-01

    The OpenStreetMap (OSM) database includes original items that are very useful for geographical analysis and for creating thematic maps. Contributors record in the open database various themes regarding amenities, leisure, transport, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. A research project is in development to translate the OSM database structure into a database structure fitted to Michelin graphic guidelines; it aims at defining the right structure for Michelin's uses. The project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on the processed data, paper and Web maps can be displayed. Two prototypes are described in this article: a vector tile web map and a mapping method to produce paper maps on a regional scale. The vector tile mapping method offers easy navigation within the map and within graphic and thematic guidelines. Paper maps can be partly automatically drawn: drawing automation and data management are part of the map creation, as is the final hand-drawing phase. Both prototypes have been set up using the OSM technical ecosystem.
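
    As an illustration of the kind of translation involved, a toy sketch mapping OSM key/value tags to the symbol classes of a target cartographic charter; the class names are invented for illustration, not Michelin's actual specification:

        # Translate an OSM feature's tags into a rendering symbol class.
        OSM_TO_SYMBOL = {
            ("highway", "motorway"): "road_major",
            ("highway", "residential"): "road_minor",
            ("amenity", "hospital"): "poi_health",
            ("leisure", "park"): "area_green",
        }

        def symbol_class(tags):
            """Return the first matching symbol class for a tag dict."""
            for key, value in tags.items():
                cls = OSM_TO_SYMBOL.get((key, value))
                if cls:
                    return cls
            return "default"

        print(symbol_class({"highway": "motorway", "ref": "A10"}))  # road_major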

  11. Development of management information system for land in mine area based on MapInfo

    NASA Astrophysics Data System (ADS)

    Wang, Shi-Dong; Liu, Chuang-Hua; Wang, Xin-Chuang; Pan, Yan-Yu

    2008-10-01

    MapInfo is currently a popular GIS software package. This paper introduces the characteristics of MapInfo and the GIS secondary-development methods it offers, which include three approaches based on MapBasic, OLE Automation, and the MapX control, respectively. Taking the development of a land management information system for a mining area as an example, the paper discusses the method of developing GIS applications based on MapX and describes the development of the system in detail, including the development environment, overall design, design and realization of every function module, and simple application of the system. The system uses MapX 5.0 and Visual Basic 6.0 as the development platform, takes SQL Server 2005 as the back-end database, and adopts Matlab 6.5 for back-end numerical computation. On the basis of an integrated design, the system comprises eight modules: start-up, layer control, spatial query, spatial analysis, data editing, application models, document management, and results output. The system can be used in mining areas for cadastral management, land-use structure optimization, land reclamation, land evaluation, analysis and forecasting of land and environmental disruption, thematic mapping, and so on.

  12. Toward an operational framework for fine-scale urban land-cover mapping in Wallonia using submeter remote sensing and ancillary vector data

    NASA Astrophysics Data System (ADS)

    Beaumont, Benjamin; Grippa, Tais; Lennert, Moritz; Vanhuysse, Sabine; Stephenne, Nathalie; Wolff, Eléonore

    2017-07-01

    Encouraged by the EU INSPIRE directive requirements and recommendations, the Walloon authorities, like other EU regional or national authorities, want to develop operational land-cover (LC) and land-use (LU) mapping methods using existing geodata. Urban planners and environmental monitoring stakeholders of Wallonia currently have to rely on outdated, mixed, and incomplete LC and LU information; the current reference map is 10 years old. Two object-based classification methods, i.e., a rule-based and a classifier-based method, for detailed regional urban LC mapping are compared. The added value of using the different existing geospatial datasets in the process is assessed. This includes the comparison between satellite and aerial optical data in terms of mapping accuracies, visual quality of the map, costs, processing, data availability, and property rights. The combination of spectral, tridimensional, and vector data provides accuracy values close to 0.90 for mapping the LC into nine categories with a minimum mapping unit of 15 m2. Such a detailed LC map offers opportunities for fine-scale environmental and spatial planning activities. Still, the regional application poses challenges regarding automation, big data handling, and processing time, which are discussed.

  13. Mapping land cover through time with the Rapid Land Cover Mapper—Documentation and user manual

    USGS Publications Warehouse

    Cotillon, Suzanne E.; Mathis, Melissa L.

    2017-02-15

    The Rapid Land Cover Mapper is an Esri ArcGIS® Desktop add-in, which was created as an alternative to automated or semiautomated mapping methods. Based on a manual photo interpretation technique, the tool facilitates mapping over large areas and through time, and produces time-series raster maps and associated statistics that characterize the changing landscapes. The Rapid Land Cover Mapper add-in can be used with any imagery source to map various themes (for instance, land cover, soils, or forest) at any chosen mapping resolution. The user manual contains all essential information for the user to make full use of the Rapid Land Cover Mapper add-in. This manual includes a description of the add-in functions and capabilities, and step-by-step procedures for using the add-in. The Rapid Land Cover Mapper add-in was successfully used by the U.S. Geological Survey West Africa Land Use Dynamics team to accurately map land use and land cover in 17 West African countries through time (1975, 2000, and 2013).

  14. Precise Ortho Imagery as the Source for Authoritative Airport Mapping

    NASA Astrophysics Data System (ADS)

    Howard, H.; Hummel, P.

    2016-06-01

    As the aviation industry moves from paper maps and charts to the digital cockpit and electronic flight bag, producers of these products need current and accurate data to ensure flight safety. The FAA (Federal Aviation Administration) and ICAO (International Civil Aviation Organization) require certified suppliers to follow a defined protocol to produce authoritative map data for the aerodrome. Typical airport maps have been produced to meet 5 m accuracy requirements; the new digital aviation world is moving to 1 m accuracy maps to provide better situational awareness on the aerodrome. The commercial availability of 0.5 m satellite imagery combined with accurate ground control is enabling the production of avionics-certified 0.85 m orthophotos of airports around the globe. CompassData maintains an archive of more than 400 airports as source data to support producers of 1 m certified Aerodrome Mapping Databases (AMDB) critical to flight safety and automated situational awareness. CompassData is a DO-200A certified supplier of authoritative orthoimagery, and attendees will learn how to utilize current airport imagery to build digital aviation mapping products.

  15. Sodium 3D COncentration MApping (COMA 3D) using 23Na and proton MRI

    NASA Astrophysics Data System (ADS)

    Truong, Milton L.; Harrington, Michael G.; Schepkin, Victor D.; Chekmenev, Eduard Y.

    2014-10-01

    Functional changes in sodium 3D MRI signals were converted into millimolar concentration changes using an open-source, fully automated MATLAB toolbox. These concentration changes are visualized via 3D sodium concentration maps, overlaid on conventional 3D proton images to provide high-resolution co-registration for easy correlation of functional changes to anatomical regions. Nearly 5000 concentration maps per hour were generated on a personal computer (ca. 2012) using 21.1 T 3D sodium MRI brain images of live rats with a spatial resolution of 0.8 × 0.8 × 0.8 mm3 and imaging matrices of 60 × 60 × 60. The produced concentration maps allowed for non-invasive quantitative measurement of in vivo sodium concentration in the normal rat brain as a functional response to migraine-like conditions. The presented work can also be applied to sodium-associated changes in migraine, cancer, and other metabolic abnormalities that can be sensed by molecular imaging. The MATLAB toolbox allows for automated image analysis of 3D images acquired on the Bruker platform and can be extended to other imaging platforms. The resulting images are presented as series of 2D slices in all three dimensions in native MATLAB and PDF formats. The following are provided: (a) MATLAB source code for image processing, (b) the detailed processing procedures, (c) a description of the code and all sub-routines, and (d) example data sets of initial and processed data. The toolbox can be downloaded at: http://www.vuiis.vanderbilt.edu/~truongm/COMA3D/.
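
    A minimal sketch of the underlying signal-to-concentration conversion, assuming a linear calibration through reference signals of known concentration; the phantom values below are invented, and the toolbox's actual procedure is described in the paper:

        import numpy as np

        def signal_to_concentration(img, s_ref, c_ref):
            """Map image intensities to concentrations via a least-squares
            linear fit through reference signals of known concentration."""
            slope, intercept = np.polyfit(s_ref, c_ref, 1)
            return slope * img + intercept

        # Two reference phantoms bracketing the expected range (invented).
        phantom_signal = np.array([120.0, 480.0])
        phantom_conc = np.array([30.0, 150.0])      # millimolar
        volume = np.random.rand(60, 60, 60) * 500   # synthetic 3D image
        conc_map = signal_to_concentration(volume, phantom_signal, phantom_conc)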

  16. Mapping Urban Ecosystem Services Using High Resolution Aerial Photography

    NASA Astrophysics Data System (ADS)

    Pilant, A. N.; Neale, A.; Wilhelm, D.

    2010-12-01

    Ecosystem services (ES) are the many life-sustaining benefits we receive from nature: e.g., clean air and water, food and fiber, cultural-aesthetic-recreational benefits, pollination and flood control. The ES concept is emerging as a means of integrating complex environmental and economic information to support informed environmental decision making. The US EPA is developing a web-based National Atlas of Ecosystem Services, with a component for urban ecosystems. Currently, the only wall-to-wall, national-scale land cover data suitable for this analysis is the National Land Cover Data (NLCD) at 30 m spatial resolution with 5 and 10 year updates. However, aerial photography is acquired at higher spatial resolution (0.5-3 m) and more frequently (1-5 years, typically) for most urban areas. Land cover was mapped in Raleigh, NC using freely available USDA National Agricultural Imagery Program (NAIP) imagery with 1 m ground sample distance to test the suitability of aerial photography for urban ES analysis. Automated feature extraction techniques were used to extract five land cover classes, and an accuracy assessment was performed using standard techniques. Results will be presented that demonstrate applications to mapping ES in urban environments: greenways, corridors, fragmentation, habitat, impervious surfaces, dark and light pavement (urban heat island).
    [Figure: automated feature extraction results mapped over a NAIP color aerial photograph of downtown Raleigh, NC; red = impervious surface, dark green = trees, light green = grass, tan = soil. At this scale (2-10 m), small features such as individual trees and sidewalks are visible and mappable.]

  17. Forest Cover Mapping in Iskandar Malaysia Using Satellite Data

    NASA Astrophysics Data System (ADS)

    Kanniah, K. D.; Mohd Najib, N. E.; Vu, T. T.

    2016-09-01

    Malaysia has suffered the third largest loss of forest cover in the world. Timely information on forest cover is therefore required to help the government ensure that the remaining forest resources are managed in a sustainable manner. This study aims to map and detect changes of forest cover (deforestation and disturbance) in the Iskandar Malaysia region in the south of Peninsular Malaysia between the years 1990 and 2010 using Landsat satellite images. The Carnegie Landsat Analysis System-Lite (CLASlite) programme was used to classify forest cover from Landsat images. This software is able to mask out clouds, cloud shadows, terrain shadows, and water bodies, and to atmospherically correct the images using the 6S radiative transfer model. The Automated Monte Carlo Unmixing technique embedded in CLASlite was used to unmix each Landsat pixel into fractions of photosynthetic vegetation (PV), non-photosynthetic vegetation (NPV) and soil surface (S). Forest and non-forest areas were produced from the fractional cover images using appropriate threshold values of PV, NPV and S. CLASlite was found able to classify forest cover in Iskandar Malaysia with differences of only 14% (1990) and 5% (2010) relative to the forest land use map produced by the Department of Agriculture, Malaysia. Nevertheless, the automated CLASlite software was found not to exclude other vegetation types, especially rubber and oil palm, which have reflectance similar to forest; in this study, rubber and oil palm were discriminated from forest manually using land use maps. The CLASlite workflow therefore needs further adjustment to exclude these vegetation types and classify only forest cover.
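
    A minimal sketch of the final thresholding step, assuming per-pixel PV/NPV/S fractional covers are already available from the unmixing; the threshold values are illustrative only, not CLASlite's:

        import numpy as np

        def classify_forest(pv, s, pv_min=0.80, s_max=0.15):
            """Label pixels as forest (1) or non-forest (0) from fractions."""
            return ((pv >= pv_min) & (s <= s_max)).astype(np.uint8)

        # Synthetic PV/NPV/S fractions that sum to 1 for each pixel.
        fracs = np.random.dirichlet((4, 1, 1), size=(50, 50))
        forest_mask = classify_forest(fracs[..., 0], fracs[..., 2])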

  18. QuickRNASeq lifts large-scale RNA-seq data analyses to the next level of automation and interactive visualization.

    PubMed

    Zhao, Shanrong; Xi, Li; Quan, Jie; Xi, Hualin; Zhang, Ying; von Schack, David; Vincent, Michael; Zhang, Baohong

    2016-01-08

    RNA sequencing (RNA-seq), a next-generation sequencing technique for transcriptome profiling, is being increasingly used, in part driven by the decreasing cost of sequencing. Nevertheless, the analysis of the massive amounts of data generated by large-scale RNA-seq remains a challenge. Multiple algorithms pertinent to basic analyses have been developed, and there is an increasing need to automate the use of these tools so as to obtain results in an efficient and user-friendly manner. Increased automation and improved visualization of the results will help make the results and findings of the analyses readily available to experimental scientists. By combining the best open source tools developed for RNA-seq data analyses and the most advanced web 2.0 technologies, we have implemented QuickRNASeq, a pipeline for large-scale RNA-seq data analyses and visualization. The QuickRNASeq workflow consists of three main steps. In Step #1, each individual sample is processed, including mapping RNA-seq reads to a reference genome, counting the numbers of mapped reads, quality control of the aligned reads, and SNP (single nucleotide polymorphism) calling. Step #1 is computationally intensive, and can be processed in parallel. In Step #2, the results from individual samples are merged, and an integrated and interactive project report is generated. All analysis results in the report are accessible via a single HTML entry webpage. Step #3 is the data interpretation and presentation step. The rich visualization features implemented here allow end users to interactively explore the results of RNA-seq data analyses, and to gain more insights into RNA-seq datasets. In addition, we used a real-world dataset to demonstrate the simplicity and efficiency of QuickRNASeq in RNA-seq data analyses and interactive visualizations. The seamless integration of automated capabilities with interactive visualizations in QuickRNASeq is not available in other published RNA-seq pipelines. The high degree of automation and interactivity in QuickRNASeq leads to a substantial reduction in the time and effort required prior to further downstream analyses and interpretation of the findings. QuickRNASeq advances primary RNA-seq data analyses to the next level of automation, and is mature for public release and adoption.
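
    A generic sketch of the Step #1 pattern, fanning independent per-sample work out to parallel workers; process_sample here is a stand-in for the real align/count/QC/SNP-calling chain, not QuickRNASeq's own API:

        # Parallel, embarrassingly-independent per-sample processing.
        from multiprocessing import Pool

        def process_sample(sample_id):
            # ... run alignment, read counting, QC and SNP calling here ...
            return sample_id, "ok"

        if __name__ == "__main__":
            samples = ["S01", "S02", "S03", "S04"]
            with Pool(processes=4) as pool:
                results = dict(pool.map(process_sample, samples))
            print(results)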

  19. Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping.

    PubMed

    Jaakkola, Anttoni; Hyyppä, Juha; Hyyppä, Hannu; Kukko, Antero

    2008-09-01

    Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use in road environment modelling. Since the scanning geometry and point density differ from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying road marking and kerbstone points and for modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic methods for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.

  20. Low Cost Multi-Sensor Robot Laser Scanning System and its Accuracy Investigations for Indoor Mapping Application

    NASA Astrophysics Data System (ADS)

    Chen, C.; Zou, X.; Tian, M.; Li, J.; Wu, W.; Song, Y.; Dai, W.; Yang, B.

    2017-11-01

    In order to solve the automation of the 3D indoor mapping task, a low-cost multi-sensor robot laser scanning system is proposed in this paper. The system includes a panorama camera, a laser scanner, and an inertial measurement unit, among other sensors, which are calibrated and synchronized to achieve simultaneous collection of 3D indoor data. Experiments were undertaken in a typical indoor scene, and the data generated by the proposed system were compared with ground-truth data collected by a TLS scanner, showing that 99.2% of points deviate by less than 0.25 m, which demonstrates the applicability and precision of the system for indoor mapping applications.

  1. Design and Applications of Rapid Image Tile Producing Software Based on Mosaic Dataset

    NASA Astrophysics Data System (ADS)

    Zha, Z.; Huang, W.; Wang, C.; Tang, D.; Zhu, L.

    2018-04-01

    Map tile technology is widely used in web geographic information services, and efficient map tile production is a key technology for serving images rapidly on the web. In this paper, rapid image tile producing software based on a mosaic dataset is designed, and the tile production workflow is given. Key technologies such as cluster processing, map representation, tile checking, tile conversion and in-memory compression are discussed. Implemented in software and tested on actual image data, the results show that this software has a high degree of automation, effectively reduces the number of I/O operations, and improves tile production efficiency. Moreover, manual operations are reduced significantly.
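
    For context, a minimal sketch of the tile arithmetic such software relies on, computing the XYZ tile indices covering a geographic extent in the standard Web Mercator tiling scheme; the coordinates below are arbitrary:

        import math

        def lonlat_to_tile(lon, lat, zoom):
            """Convert WGS84 lon/lat to XYZ tile indices (Web Mercator)."""
            n = 2 ** zoom
            x = int((lon + 180.0) / 360.0 * n)
            lat_r = math.radians(lat)
            y = int((1.0 - math.asinh(math.tan(lat_r)) / math.pi) / 2.0 * n)
            return x, y

        # Tiles covering an image footprint at zoom level 12.
        x0, y0 = lonlat_to_tile(114.05, 30.65, 12)  # upper-left corner
        x1, y1 = lonlat_to_tile(114.45, 30.45, 12)  # lower-right corner
        tiles = [(x, y) for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)]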

  2. EVALUATING HYDROLOGICAL RESPONSE TO FORECASTED LAND-USE CHANGE: SCENARIO TESTING WITH THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA) TOOL

    EPA Science Inventory

    Envisioning and evaluating future scenarios has emerged as a critical component of both science and social decision-making. The ability to assess, report, map, and forecast the life support functions of ecosystems is absolutely critical to our capacity to make informed decisions...

  3. Automated Network Mapping and Topology Verification

    DTIC Science & Technology

    2016-06-01

    The collection of information includes amplifying data about the networked devices, such as hardware details, logical addressing schemes, and operating ... The current military reliance on computer networks for operational missions and administrative duties makes network ...

  4. NOAA Weather Radio - Voice of NWS

    Science.gov Websites

    NWR broadcasts information 24 hours a day. Known as the "voice of the National Weather Service," NWR is provided as a public service by NOAA.

  5. Spaces of Surveillance: Indexicality and Solicitation on the Internet.

    ERIC Educational Resources Information Center

    Elmer, Greg

    1997-01-01

    Investigates significance of the index in the process of mapping and formatting sites, spaces, and words on the Internet as well as diagnosing, tracking, and soliciting users. Argues that indexical technologies are increasingly called upon by commercial interests to automate the solicitation process whereby entry into an Internet site triggers the…

  6. Inventorying national forest resources...for planning-programing-budgeting system

    Treesearch

    Miles R. Hill; Elliot L. Amidon

    1968-01-01

    New systems for analyzing resource management problems, such as Planning-Programing-Budgeting, will require automated procedures to collect and assemble resource inventory data. A computer-oriented system called the Map Information Assembly and Display System, developed for this purpose, was tested on a National Forest in California. It provided information on eight forest...

  7. Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs

    NASA Technical Reports Server (NTRS)

    Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen

    2015-01-01

    An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
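
    A sketch of the matching step using off-the-shelf computer-vision tools, assuming the sonar point clouds have already been rasterized into 8-bit pseudo-images; ORB is used here as a stand-in for whichever detector the authors chose:

        import cv2
        import numpy as np

        def match_pseudo_images(img_a, img_b, ratio=0.75):
            """Match keypoints between two pseudo-images; RANSAC rejects
            matches inconsistent with a single global transform."""
            orb = cv2.ORB_create(nfeatures=2000)
            kp_a, des_a = orb.detectAndCompute(img_a, None)
            kp_b, des_b = orb.detectAndCompute(img_b, None)
            if des_a is None or des_b is None:
                return None
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
            good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
                    if m.distance < ratio * n.distance]
            if len(good) < 4:
                return None
            src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return H, int(inliers.sum())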

  8. Deep SOMs for automated feature extraction and classification from big data streaming

    NASA Astrophysics Data System (ADS)

    Sakkari, Mohamed; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    In this paper, we propose a deep self-organizing map model (Deep-SOMs) for automated feature extraction and learning from big data streaming, which benefits from the Spark framework for real-time stream processing and highly parallel data processing. The deep SOM architecture is based on the notion of abstraction (patterns are automatically extracted from the raw data, from less to more abstract). The proposed model consists of three hidden self-organizing layers, an input and an output layer. Each layer is made up of a multitude of SOMs, each map focusing only on a local sub-region of the input image. Each layer then aggregates the local information to generate more global information in the higher layer. The proposed Deep-SOMs model is unique in terms of its layer architecture and its SOM sampling and learning methods. During the learning stage we use a set of unsupervised SOMs for feature extraction. We validate the effectiveness of our approach on large data sets such as the Leukemia and SRBCT datasets. Comparison results show that the Deep-SOMs model performs better than many existing algorithms for image classification.
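
    A minimal numpy sketch of training one SOM layer of the kind stacked in this architecture; the grid size and the learning-rate and neighbourhood schedules are illustrative, and the paper's exact sampling scheme is not reproduced here:

        import numpy as np

        def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
            """Train a single self-organizing map on row-vector samples."""
            rng = np.random.default_rng(seed)
            h, w = grid
            weights = rng.random((h, w, data.shape[1]))
            coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                          indexing="ij"), -1)
            for t in range(epochs):
                lr = lr0 * (1 - t / epochs)              # decaying learning rate
                sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighbourhood
                for x in data[rng.permutation(len(data))]:
                    # Best-matching unit: closest weight vector on the grid.
                    bmu = np.unravel_index(
                        np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
                    d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
                    g = np.exp(-d2 / (2 * sigma ** 2))[..., None]
                    weights += lr * g * (x - weights)    # pull neighbours toward x
            return weights

        som = train_som(np.random.rand(200, 16))  # 16-dim features, 8x8 map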

  9. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel fully automated 3D reconstruction approach based on images from low-altitude unmanned aerial vehicle systems (UAVs) is presented, which requires neither previous camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images to be matched. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
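
    A minimal sketch of the topology-analysis idea, assuming camera positions from the flight-control log are available: only image pairs whose exposure stations lie within a distance threshold are passed on to feature matching (the threshold and data are illustrative):

        import numpy as np

        def candidate_pairs(cam_xy, max_dist=40.0):
            """Return index pairs of images close enough to plausibly overlap."""
            n = len(cam_xy)
            d = np.linalg.norm(cam_xy[:, None, :] - cam_xy[None, :, :], axis=-1)
            return [(i, j) for i in range(n) for j in range(i + 1, n)
                    if d[i, j] <= max_dist]

        positions = np.random.rand(30, 2) * 200.0  # metres, from flight log
        pairs = candidate_pairs(positions)         # feed only these to matching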

  10. Open Standards in Practice: An OGC China Forum Initiative

    NASA Astrophysics Data System (ADS)

    Yue, Peng; Zhang, Mingda; Taylor, Trevor; Xie, Jibo; Zhang, Hongping; Tong, Xiaochong; Yu, Jinsongdi; Huang, Juntao

    2016-11-01

    Open standards like OGC standards can be used to improve interoperability and support machine-to-machine interaction over the Web. In the Big Data era, standard-based data and processing services from various vendors can be combined to automate the extraction of information and knowledge from heterogeneous and large volumes of geospatial data. This paper introduces an ongoing OGC China Forum initiative, which demonstrates how OGC standards can benefit the interaction among multiple organizations in China. The ability to share data and processing functions across organizations using standard services could change the traditionally manual interactions in their business processes, and provide on-demand decision support by on-line service integration. In the initiative, six organizations are involved in two "MashUp" scenarios on disaster management: one derives flood maps of Poyang Lake, Jiangxi, and the other generates on-demand turbidity maps of the East Lake, Wuhan, China. The two scenarios engage different organizations from the Chinese community by integrating their sensor observations, data, and processing services, and improve the automation of the data analysis process using open standards.

  11. a Model Study of Small-Scale World Map Generalization

    NASA Astrophysics Data System (ADS)

    Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.

    2018-04-01

    With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics, and there is a surging demand worldwide for small-scale world maps in large formats. Further study of automated mapping technology, especially the realization of small-scale production of large-format global maps, is a key problem the cartographic field needs to solve. In light of this, this paper adopts an improved map generalization model in which mapping data are separated from geographic data, mainly comprising a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, the map symbols and the physical symbols in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1086 subtypes, 21845 basic algorithms and over 2500 relevant functional modules. To evaluate the accuracy and visual effect of our model on topographic and thematic maps, we take small-scale world map generalization as an example. After the generalization process, combining and simplifying the scattered islands makes the map more explicit at the 1 : 2.1 billion scale, and the map features more complete and accurate. The model not only enhances map generalization at various scales significantly, but also achieves integration among map-making at various scales, suggesting that it provides a reference for cartographic generalization across scales.

  12. What is missing? An operational inundation mapping framework by SAR data

    NASA Astrophysics Data System (ADS)

    Shen, X.; Anagnostou, E. N.; Zeng, Z.; Kettner, A.; Hong, Y.

    2017-12-01

    Compared to optical sensors, synthetic aperture radar (SAR) works all day, in all weather. In addition, its spatial resolution does not decrease with the height of the platform, making it applicable to a range of important studies. However, existing studies have not addressed the operational demands of real-time inundation mapping; the direct proof is that no water-body product exists for any SAR-based satellite. What, then, is missing between science and products? Automation and quality. What makes it so difficult to develop an operational inundation mapping technique based on SAR data? Spectrum-wise, unlike optical water indices such as MNDWI, AWEI, etc., where a relatively constant threshold may apply across image acquisitions, regions and sensors, the threshold that separates water from non-water pixels has to be chosen individually for each SAR image; optimizing this threshold is the first obstacle to the automation of SAR algorithms. Morphologically, the quality and reliability of the results are compromised by over-detection caused by smooth surfaces and shadowed areas, by noise-like speckle, and by under-detection caused by strong-scatterer disturbance. In this study, we propose a three-step framework that addresses all the aforementioned issues of operational inundation mapping with SAR data. The framework consists of 1) optimization of Wishart distribution parameters for single-, dual- and fully-polarized SAR data, 2) morphological removal of over-detection, and 3) machine-learning-based removal of under-detection. The framework utilizes not only the SAR data but also the synergy of a digital elevation model (DEM) and fine-resolution optical products, including a water probability map, a land cover classification map (optional), and river widths. The framework has been validated in multiple areas in different parts of the world using different satellite SAR data and globally available ancillary data products. It therefore has the potential to serve as an operational inundation mapping algorithm for any SAR mission, such as SWOT, ALOS, or Sentinel. Selected results using ALOS/PALSAR-1 L-band dual-polarized data around the Connecticut River are provided in the attached figure.
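
    A minimal sketch of the per-image threshold selection problem, using Otsu's criterion on backscatter values as one common automatic choice; the framework itself optimizes Wishart distribution parameters, which is not reproduced here:

        import numpy as np

        def otsu_threshold(img, bins=256):
            """Pick the threshold maximizing between-class variance."""
            hist, edges = np.histogram(img, bins=bins)
            p = hist.astype(float) / hist.sum()
            centers = 0.5 * (edges[:-1] + edges[1:])
            w0 = np.cumsum(p)             # cumulative class weight
            m0 = np.cumsum(p * centers)   # cumulative class mean mass
            mt = m0[-1]                   # global mean
            # Between-class variance for every candidate split point.
            var = (mt * w0 - m0) ** 2 / (w0 * (1 - w0) + 1e-12)
            return centers[np.argmax(var)]

        sigma0_db = np.random.normal(-12, 3, (400, 400))  # synthetic backscatter
        water_mask = sigma0_db < otsu_threshold(sigma0_db)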

  13. Automated customized retrieval of radiotherapy data for clinical trials, audit and research.

    PubMed

    Romanchikova, Marina; Harrison, Karl; Burnet, Neil G; Hoole, Andrew Cf; Sutcliffe, Michael Pf; Parker, Michael Andrew; Jena, Rajesh; Thomas, Simon James

    2018-02-01

    To enable fast and customizable automated collection of radiotherapy (RT) data from tomotherapy storage. Human-readable data maps (TagMaps) were created to generate DICOM-RT (Digital Imaging and Communications in Medicine standard for Radiation Therapy) data from tomotherapy archives, and provided access to "hidden" information comprising delivery sinograms, positional corrections and adaptive-RT doses. 797 data sets totalling 25,000 scans were batch-exported in 31.5 h. All archived information was restored, including the data not available via commercial software. The exported data were DICOM-compliant and compatible with major commercial tools including RayStation, Pinnacle and ProSoma. The export ran without operator interventions. The TagMap method for DICOM-RT data modelling produced software that was many times faster than the vendor's solution, required minimal operator input and delivered high volumes of vendor-identical DICOM data. The approach is applicable to many clinical and research data processing scenarios and can be adapted to recover DICOM-RT data from other proprietary storage types such as Elekta, Pinnacle or ProSoma. Advances in knowledge: A novel method to translate data from proprietary storage to DICOM-RT is presented. It provides access to the data hidden in electronic archives, offers a working solution to the issues of data migration and vendor lock-in and paves the way for large-scale imaging and radiomics studies.

  14. Spatially resolved proteome mapping of laser capture microdissected tissue with automated sample transfer to nanodroplets.

    PubMed

    Zhu, Ying; Dou, Maowei; Piehowski, Paul D; Liang, Yiran; Wang, Fangjun; Chu, Rosalie K; Chrisler, Will; Smith, Jordan N; Schwarz, Kaitlynn C; Shen, Yufeng; Shukla, Anil K; Moore, Ronald J; Smith, Richard D; Qian, Wei-Jun; Kelly, Ryan T

    2018-06-24

    Current mass spectrometry (MS)-based proteomics approaches are ineffective for mapping protein expression in tissue sections with high spatial resolution due to the limited overall sensitivity of conventional workflows. Here we report an integrated and automated method to advance spatially resolved proteomics by seamlessly coupling laser capture microdissection (LCM) with a recently developed nanoliter-scale sample preparation system termed nanoPOTS (Nanodroplet Processing in One pot for Trace Samples). The workflow is enabled by prepopulating nanowells with DMSO, which serves as a sacrificial capture liquid for microdissected tissues. The DMSO droplets efficiently collect laser-pressure catapulted LCM tissues as small as 20 µm in diameter with success rates >87%. We also demonstrate that tissue treatment with DMSO can significantly improve proteome coverage, likely due to its ability to dissolve lipids from tissue and enhance protein extraction efficiency. The LCM-nanoPOTS platform was able to identify 180, 695, and 1827 protein groups on average from 12-µm-thick rat brain cortex tissue sections with diameters of 50, 100, and 200 µm, respectively. We also analyzed 100-µm-diameter sections corresponding to 10-18 cells from three different regions of rat brain and comparatively quantified ~1000 proteins, demonstrating the potential utility for high-resolution spatially resolved mapping of protein expression in tissues. Published under license by The American Society for Biochemistry and Molecular Biology, Inc.

  15. A grid matrix-based Raman spectroscopic method to characterize different cell milieu in biopsied axillary sentinel lymph nodes of breast cancer patients.

    PubMed

    Som, Dipasree; Tak, Megha; Setia, Mohit; Patil, Asawari; Sengupta, Amit; Chilakapati, C Murali Krishna; Srivastava, Anurag; Parmar, Vani; Nair, Nita; Sarin, Rajiv; Badwe, R

    2016-01-01

    Raman spectroscopy, which is based upon the inelastic scattering of photons, has the potential to emerge as a noninvasive bedside in vivo or ex vivo molecular diagnostic tool, but its sensitivity and predictability need to be improved. We developed a grid matrix-based tissue mapping protocol to acquire cell-specific spectra, which also involved digital microscopy for localizing malignant and lymphocytic cells in sentinel lymph node biopsy samples. Biosignals acquired from specific cellular milieus were subjected to advanced supervised analytical methods, i.e., cross-correlation and peak-to-peak ratio, in addition to PCA and PC-LDA. We observed decreased spectral intensity as well as shifts in the spectral peaks of the amide and lipid bands in completely metastatic (cancer cell) lymph nodes with high cellular density. A spectral library of normal lymphocytes and metastatic cancer cells created using this cell-specific mapping technique can be utilized to develop an automated smart diagnostic tool for bench-side screening of sampled lymph nodes, supported by ongoing global research in developing better technology and signal and big data processing algorithms.

  16. The effects of AVIRIS atmospheric calibration methodology on identification and quantitative mapping of surface mineralogy, Drum Mountains, Utah

    NASA Technical Reports Server (NTRS)

    Kruse, Fred A.; Dwyer, John L.

    1993-01-01

    The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures reflected light in 224 contiguous spectral bands in the 0.4 to 2.45 micron region of the electromagnetic spectrum. Numerous studies have used these data for mineralogic identification and mapping based on the presence of diagnostic spectral features. Quantitative mapping requires conversion of the AVIRIS data to physical units (usually reflectance) so that analysis results can be compared and validated with field and laboratory measurements. This study evaluated two different AVIRIS calibration techniques to ground reflectance, an empirically-based method and an atmospheric-model-based method, to determine their effects on quantitative scientific analyses. Expert system analysis and linear spectral unmixing were applied to both calibrated data sets to determine the effect of the calibration on the mineral identification and quantitative mapping results. Comparison of the image-map results and image reflectance spectra indicates that the model-based calibrated data can be used with automated mapping techniques to produce accurate maps showing the spatial distribution and abundance of surface mineralogy. This has positive implications for future operational mapping using AVIRIS or similar imaging spectrometer data sets without requiring a priori knowledge.
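
    A minimal sketch of linear spectral unmixing as applied here, assuming a library of endmember reflectance spectra is available; the plain least-squares solve with clipping and renormalization is illustrative, not the study's exact constrained solver:

        import numpy as np

        def unmix(pixel, endmembers):
            """Solve pixel ~= endmembers @ fractions in the least-squares
            sense, then clip and renormalize fractions to the unit simplex."""
            f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
            f = np.clip(f, 0, None)
            return f / f.sum() if f.sum() > 0 else f

        bands, m = 224, 3
        E = np.abs(np.random.rand(bands, m))   # stand-in endmember spectra
        true_f = np.array([0.6, 0.3, 0.1])
        pix = E @ true_f + np.random.normal(0, 0.001, bands)
        print(unmix(pix, E))                   # recovers roughly [0.6, 0.3, 0.1]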

  17. Global mapping of infectious disease

    PubMed Central

    Hay, Simon I.; Battle, Katherine E.; Pigott, David M.; Smith, David L.; Moyes, Catherine L.; Bhatt, Samir; Brownstein, John S.; Collier, Nigel; Myers, Monica F.; George, Dylan B.; Gething, Peter W.

    2013-01-01

    The primary aim of this review was to evaluate the state of knowledge of the geographical distribution of all infectious diseases of clinical significance to humans. A systematic review was conducted to enumerate cartographic progress, with respect to the data available for mapping and the methods currently applied. The results helped define the minimum information requirements for mapping infectious disease occurrence, and a quantitative framework for assessing the mapping opportunities for all infectious diseases. This revealed that of 355 infectious diseases identified, 174 (49%) have a strong rationale for mapping and of these only 7 (4%) had been comprehensively mapped. A variety of ambitions, such as the quantification of the global burden of infectious disease, international biosurveillance, assessing the likelihood of infectious disease outbreaks and exploring the propensity for infectious disease evolution and emergence, are limited by these omissions. An overview of the factors hindering progress in disease cartography is provided. It is argued that rapid improvement in the landscape of infectious diseases mapping can be made by embracing non-conventional data sources, automation of geo-positioning and mapping procedures enabled by machine learning and information technology, respectively, in addition to harnessing labour of the volunteer ‘cognitive surplus’ through crowdsourcing. PMID:23382431

  18. Interactive visualization of Earth and Space Science computations

    NASA Technical Reports Server (NTRS)

    Hibbard, William L.; Paul, Brian E.; Santek, David A.; Dyer, Charles R.; Battaiola, Andre L.; Voidrot-Martinez, Marie-Francoise

    1994-01-01

    Computers have become essential tools for scientists simulating and observing nature. Simulations are formulated as mathematical models but are implemented as computer algorithms to simulate complex events. Observations are also analyzed and understood in terms of mathematical models, but the number of these observations usually dictates that we automate analyses with computer algorithms. In spite of their essential role, computers are also barriers to scientific understanding. Unlike hand calculations, automated computations are invisible and, because of the enormous numbers of individual operations in automated computations, the relation between an algorithm's input and output is often not intuitive. This problem is illustrated by the behavior of meteorologists responsible for forecasting weather. Even in this age of computers, many meteorologists manually plot weather observations on maps, then draw isolines of temperature, pressure, and other fields by hand (special pads of maps are printed for just this purpose). Similarly, radiologists use computers to collect medical data but are notoriously reluctant to apply image-processing algorithms to that data. To these scientists with life-and-death responsibilities, computer algorithms are black boxes that increase rather than reduce risk. The barrier between scientists and their computations can be bridged by techniques that make the internal workings of algorithms visible and that allow scientists to experiment with their computations. Here we describe two interactive systems developed at the University of Wisconsin-Madison Space Science and Engineering Center (SSEC) that provide these capabilities to Earth and space scientists.

  19. The evolution of internet-based map server applications in the United States Department of Agriculture, Veterinary Services.

    PubMed

    Maroney, Susan A; McCool, Mary Jane; Geter, Kenneth D; James, Angela M

    2007-01-01

    The internet is used increasingly as an effective means of disseminating information. For the past five years, the United States Department of Agriculture (USDA) Veterinary Services (VS) has published animal health information in internet-based map server applications, each oriented to a specific surveillance or outbreak response need. Using internet-based technology allows users to create dynamic, customised maps and perform basic spatial analysis without the need to buy or learn desktop geographic information systems (GIS) software. At the same time, access can be restricted to authorised users. The VS internet mapping applications to date are as follows: Equine Infectious Anemia Testing 1972-2005, National Tick Survey tick distribution maps, the Emergency Management Response System-Mapping Module for disease investigations and emergency outbreaks, and the Scrapie mapping module to assist with the control and eradication of this disease. These services were created using Environmental Systems Research Institute (ESRI)'s internet map server technology (ArcIMS). Other leading technologies for spatial data dissemination are ArcGIS Server, ArcEngine, and ArcWeb Services. VS is prototyping applications using these technologies, including the VS Atlas of Animal Health Information using ArcGIS Server technology and the Map Kiosk using ArcEngine for automating standard map production in the case of an emergency.

  20. Superpixel-based and boundary-sensitive convolutional neural network for automated liver segmentation

    NASA Astrophysics Data System (ADS)

    Qin, Wenjian; Wu, Jia; Han, Fei; Yuan, Yixuan; Zhao, Wei; Ibragimov, Bulat; Gu, Jia; Xing, Lei

    2018-05-01

    Segmentation of the liver in abdominal computed tomography (CT) is an important step in radiation therapy planning for hepatocellular carcinoma. In practice, fully automatic segmentation of the liver remains challenging because of low soft-tissue contrast between the liver and its surrounding organs, and because of its highly deformable shape. The purpose of this work is to develop a novel superpixel-based and boundary-sensitive convolutional neural network (SBBS-CNN) pipeline for automated liver segmentation. The CT images were first partitioned into superpixel regions, in which nearby pixels with similar CT numbers were aggregated. Secondly, we converted the conventional binary segmentation into a multinomial classification by labeling the superpixels into three classes: interior liver, liver boundary, and non-liver background. By doing this, the boundary region of the liver was explicitly identified and highlighted for the subsequent classification. Thirdly, we computed an entropy-based saliency map for each CT volume and leveraged this map to guide the sampling of image patches over the superpixels. In this way, more patches were extracted from informative regions (e.g. the liver boundary with irregular changes) and fewer patches were extracted from homogeneous regions. Finally, a deep CNN pipeline was built and trained to predict the probability map of the liver boundary. We tested the proposed algorithm in a cohort of 100 patients. With 10-fold cross validation, the SBBS-CNN achieved a mean Dice similarity coefficient of 97.31 ± 0.36% and an average symmetric surface distance of 1.77 ± 0.49 mm. Moreover, it showed superior performance in comparison with state-of-the-art methods, including U-Net, pixel-based CNN, active contour, level-sets and graph-cut algorithms. SBBS-CNN provides an accurate and effective tool for automated liver segmentation. It is also envisioned that the proposed framework is directly applicable in other medical image segmentation scenarios.
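
    The two preprocessing steps described above, superpixel partitioning and entropy-guided patch sampling, can be sketched roughly as follows; SLIC superpixels stand in for the paper's unspecified superpixel method, and the slice, segment count, and neighborhood radius are assumptions.

        # Assumed preprocessing sketch: SLIC superpixels plus a local-entropy
        # saliency map used as a sampling distribution for patch centers.
        import numpy as np
        from skimage.segmentation import slic
        from skimage.filters.rank import entropy
        from skimage.morphology import disk
        from skimage.util import img_as_ubyte

        ct_slice = np.random.rand(256, 256)                  # placeholder CT slice in [0, 1]
        superpixels = slic(ct_slice, n_segments=500, channel_axis=None)

        saliency = entropy(img_as_ubyte(ct_slice), disk(5))  # entropy-based saliency
        prob = saliency.ravel() / saliency.sum()             # normalize to a distribution
        centers = np.random.choice(ct_slice.size, size=200, p=prob)  # patch centers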

  1. Scan-rescan precision of subchondral bone curvature maps from routine 3D DESS water excitation sequences: Data from the Osteoarthritis Initiative.

    PubMed

    Farber, Joshua M; Totterman, Saara M S; Martinez-Torteya, Antonio; Tamez-Peña, Jose G

    2016-02-01

    Subchondral bone (SCB) undergoes changes in the shape of the articulating bone surfaces and is currently recognized as a key target in osteoarthritis (OA) treatment. The aim of this study was to present an automated system that determines the curvature of the SCB regions of the knee and to evaluate its cross-sectional and longitudinal scan-rescan precision. Six subjects with OA and six control subjects were selected from the Osteoarthritis Initiative (OAI) pilot study database. As per OAI protocol, these subjects underwent 3T MRI at baseline and every twelve months thereafter, including a 3D DESS WE sequence. We analyzed the baseline and twenty-four month images. Each subject was scanned twice at these visits, thus generating scan-rescan information. Images were segmented with an automated multi-atlas framework platform and 3D renderings of the bone structure were then created from the segmentations. Curvature maps were extracted from the 3D renderings and morphed into a reference atlas to determine precision, to generate population statistics, and to visualize cross-sectional and longitudinal curvature changes. The baseline scan-rescan root mean square error values ranged from 0.006 mm⁻¹ to 0.013 mm⁻¹ for the SCB of the femur and from 0.007 mm⁻¹ to 0.018 mm⁻¹ for that of the tibia. The standardized response of the mean of the longitudinal changes in curvature in these regions ranged from -0.09 to 0.02 and from -0.016 to 0.015, respectively. The fully automated system produces accurate and precise curvature maps of femoral and tibial SCB, and will provide a valuable tool for the analysis of curvature changes of articulating bone surfaces during the course of knee OA. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Sensitivity of subjective questionnaires to cognitive loading while driving with navigation aids: a pilot study.

    PubMed

    Smyth, Christopher C

    2007-05-01

    Developers of future forces are implementing automated aiding for driving tasks. In designing such systems, the effect of cognitive task interference on driving performance is important. The crew of such vehicles may occasionally have to perform communication and planning tasks while driving. Subjective questionnaires may help researchers parse out the sources of task interference in crew station designs. In this preliminary study, sixteen participants drove a vehicle simulator with automated road-turn cues (i.e., visual, audio, combined, or neither) along a course marked on a map display while replying to spoken test questions (i.e., repeating sentences, math and logical puzzles, route planning, or none) and reporting other vehicles in the scenario. Following each trial, a battery of subjective questionnaires was administered to determine the perceived effects of the loading on their cognitive functionality. In terms of performance, the participants drove significantly faster with the road-turn cues than with just the map. They recalled fewer vehicle sightings with the cognitive tests than without them. Questionnaire results showed that their reasoning was more straightforward, the quantity of information for understanding higher, and their trust greater with the combined cues than with the map alone. They reported higher perceived workload with the cognitive tests. The capacity for maintaining situational awareness was reduced with the cognitive tests because of the increased division of attention and the increase in the instability, variability, and complexity of the demands. The association and intuitiveness of cognitive processing were lowest, and the subjective stress highest, for the route planning test. Finally, the confusability in reasoning was greater for the auditory cue with the route planning task than for the auditory cue without the cognitive tests. The subjective questionnaires are thus sensitive to the effects of cognitive loading and may be useful for guiding the development of automated aid designs.

  3. A population-based tissue probability map-driven level set method for fully automated mammographic density estimations.

    PubMed

    Kim, Youngwoo; Hong, Byung Woo; Kim, Seung Ja; Kim, Jong Hyo

    2014-07-01

    A major challenge when distinguishing glandular tissues on mammograms, especially for area-based estimations, lies in determining a boundary on the hazy transition zone from adipose to glandular tissue. This stems from the nature of mammography, which is a projection of superimposed tissues consisting of different structures. In this paper, the authors present a novel segmentation scheme which incorporates the learned prior knowledge of experts into a level set framework for fully automated mammographic density estimation. The authors modeled the learned knowledge as a population-based tissue probability map (PTPM) designed to capture the classification behavior of experts' visual systems. The PTPM was constructed using an image database of a selected population consisting of 297 cases. Three mammography experts extracted regions of dense and fatty tissue on digital mammograms from an independent subset, which was used to create a tissue probability map for each ROI based on its local statistics. This tissue class probability was taken as a prior in the Bayesian formulation and incorporated into a level set framework as an additional term controlling the evolution, alongside an energy surface designed to reflect the experts' knowledge as well as the regional statistics inside and outside the evolving contour. A subset of 100 digital mammograms, not used in constructing the PTPM, was used to validate the performance. The energy was minimized when the initial contour reached the boundary between dense and fatty tissue, as defined by experts. The correlation coefficient between mammographic density measurements made by experts and measurements by the proposed method was 0.93, while that with the conventional level set was 0.47. The proposed method showed a marked improvement over the conventional level set method in terms of accuracy and reliability. This result suggests that the proposed method successfully incorporated the learned knowledge of the experts' visual systems and has potential as an automated and quantitative tool for estimating mammographic breast density levels.
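
    One plausible form of such a prior-augmented level set energy, written here as an assumption for illustration rather than as the paper's exact functional, combines Chan-Vese-style regional terms with a term that rewards keeping the contour inside high-probability PTPM tissue:

        % Illustrative functional; H is the Heaviside function, I the image,
        % c1/c2 the region means, and P_PTPM the tissue probability prior.
        \begin{align*}
        E(\phi) ={} & \mu \int_\Omega |\nabla H(\phi)|\,dx
          + \lambda_1 \int_\Omega |I - c_1|^2\, H(\phi)\,dx \\
          & + \lambda_2 \int_\Omega |I - c_2|^2\, \bigl(1 - H(\phi)\bigr)\,dx
          - \gamma \int_\Omega \log P_{\mathrm{PTPM}}(x)\, H(\phi)\,dx
        \end{align*}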

  4. The Advanced Rapid Imaging and Analysis (ARIA) Project: Status of SAR products for Earthquakes, Floods, Volcanoes and Groundwater-related Subsidence

    NASA Astrophysics Data System (ADS)

    Owen, S. E.; Yun, S. H.; Hua, H.; Agram, P. S.; Liu, Z.; Sacco, G. F.; Manipon, G.; Linick, J. P.; Fielding, E. J.; Lundgren, P.; Farr, T. G.; Webb, F.; Rosen, P. A.; Simons, M.

    2017-12-01

    The Advanced Rapid Imaging and Analysis (ARIA) project for Natural Hazards is focused on rapidly generating high-level geodetic imaging products and placing them in the hands of the solid earth science and local, national, and international natural hazard communities by providing science product generation, exploration, and delivery capabilities at an operational level. Space-based geodetic measurement techniques, including Interferometric Synthetic Aperture Radar (InSAR), differential Global Positioning System, and SAR-based change detection, have become critical additions to our toolset for understanding and mapping the damage and deformation caused by earthquakes, volcanic eruptions, floods, landslides, and groundwater extraction. Until recently, processing of these data sets was handcrafted for each study or event and did not generate products rapidly and reliably enough for response to natural disasters or for timely analysis of large data sets. The ARIA project, a joint venture co-sponsored by the California Institute of Technology and by NASA through the Jet Propulsion Laboratory, has been capturing the knowledge applied to these responses and building it into an automated infrastructure to generate imaging products in near real-time that can improve situational awareness for disaster response. In addition to supporting the growing science and hazard response communities, the ARIA project has developed the capabilities to provide the automated imaging and analysis necessary to keep up with the influx of raw SAR data from geodetic imaging missions such as ESA's Sentinel-1A/B, now operating with repeat intervals as short as 6 days, and the upcoming NASA NISAR mission. We will present the progress and results we have made on automating the analysis of Sentinel-1A/B SAR data for hazard monitoring and response, with emphasis on recent developments and end user engagement in flood extent mapping and deformation time series for both volcano monitoring and mapping of groundwater-related subsidence.

  5. Quantifying Mesoscale Neuroanatomy Using X-Ray Microtomography

    PubMed Central

    Gray Roncal, William; Prasad, Judy A.; Fernandes, Hugo L.; Gürsoy, Doga; De Andrade, Vincent; Fezzaa, Kamel; Xiao, Xianghui; Vogelstein, Joshua T.; Jacobsen, Chris; Körding, Konrad P.

    2017-01-01

    Methods for resolving the three-dimensional (3D) microstructure of the brain typically start by thinly slicing and staining the brain, followed by imaging numerous individual sections with visible light photons or electrons. In contrast, X-rays can be used to image thick samples, providing a rapid approach for producing large 3D brain maps without sectioning. Here we demonstrate the use of synchrotron X-ray microtomography (µCT) for producing mesoscale (∼1 µm³ resolution) brain maps from millimeter-scale volumes of mouse brain. We introduce a pipeline for µCT-based brain mapping that develops and integrates methods for sample preparation, imaging, and automated segmentation of cells, blood vessels, and myelinated axons, in addition to statistical analyses of these brain structures. Our results demonstrate that X-ray tomography achieves rapid quantification of large brain volumes, complementing other brain mapping and connectomics efforts. PMID:29085899

  6. Identifying UMLS concepts from ECG Impressions using KnowledgeMap

    PubMed Central

    Denny, Joshua C.; Spickard, Anderson; Miller, Randolph A; Schildcrout, Jonathan; Darbar, Dawood; Rosenbloom, S. Trent; Peterson, Josh F.

    2005-01-01

    Electrocardiogram (ECG) impressions represent a wealth of medical information for potential decision support and drug-effect discovery. Much of this information is inaccessible to automated methods in the free-text portion of the ECG report. We studied the application of the KnowledgeMap concept identifier (KMCI) to map Unified Medical Language System (UMLS) concepts from ECG impressions. ECGs were processed by KMCI and the results scored for accuracy by multiple raters. Reviewers also recorded unidentified concepts through the scoring interface. Overall, KMCI correctly identified 1059 out of 1171 concepts for a recall of 0.90. Precision, the proportion of identified concepts that were correct, was 0.94. KMCI was particularly effective at identifying ECG rhythms (330/333), perfusion changes (65/66), and noncardiac medical concepts (11/11). In conclusion, KMCI is an effective method for mapping ECG impressions to UMLS concepts. PMID:16779029

  7. Infrastructure-Free Mapping and Localization for Tunnel-Based Rail Applications Using 2D Lidar

    NASA Astrophysics Data System (ADS)

    Daoust, Tyler

    This thesis presents an infrastructure-free mapping and localization framework for rail vehicles using only a lidar sensor. The method was designed to handle modern underground tunnels, which have narrow, parallel, and relatively smooth concrete walls. A sliding-window algorithm was developed to estimate the train's motion, using a Renyi's Quadratic Entropy (RQE)-based point-cloud alignment system. The method was tested with datasets gathered on a subway train travelling at high speeds, with 75 km of data across 14 runs, simulating 500 km of localization. The system was capable of mapping with an average error of less than 0.6% by distance. It was capable of continuously localizing, relative to the map, to within 10 cm in stations and at crossovers, and to within 2.3 m in pathological sections of tunnel. This work has the potential to improve train localization in tunnels, which can be used to increase capacity and for automation purposes.
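
    RQE-based alignment is typically driven by a Gaussian kernel cross-term between the two point clouds, maximized over the candidate transform. The sketch below shows such a cost under assumed 2D clouds and an assumed kernel width; it is a schematic of the general technique, not the thesis implementation.

        # Assumed RQE-style alignment cost: Gaussian kernel correlation between
        # a transformed scan and a reference cloud (2D, illustrative only).
        import numpy as np

        def rqe_cross_term(a, b, sigma=0.5):
            """Mean Gaussian affinity between clouds a (N, 2) and b (M, 2)."""
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2)).mean()

        def alignment_cost(theta, tx, ty, scan, ref):
            """Negative log cross-term after rotating/translating the scan."""
            c, s = np.cos(theta), np.sin(theta)
            moved = scan @ np.array([[c, -s], [s, c]]).T + np.array([tx, ty])
            return -np.log(rqe_cross_term(moved, ref) + 1e-12)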

  8. Automated selection of synthetic biology parts for genetic regulatory networks.

    PubMed

    Yaman, Fusun; Bhatia, Swapnil; Adler, Aaron; Densmore, Douglas; Beal, Jacob

    2012-08-17

    Raising the level of abstraction for synthetic biology design requires solving several challenging problems, including mapping abstract designs to DNA sequences. In this paper we present the first formalism and algorithms to address this problem. The key steps of this transformation are feature matching, signal matching, and part matching. Feature matching ensures that the mapping satisfies the regulatory relationships in the abstract design. Signal matching ensures that the expression levels of functional units are compatible. Finally, part matching finds a DNA part sequence that can implement the design. Our software tool MatchMaker implements these three steps.

  9. Automated recognition of microcalcification clusters in mammograms

    NASA Astrophysics Data System (ADS)

    Bankman, Isaac N.; Christens-Barry, William A.; Kim, Dong W.; Weinberg, Irving N.; Gatewood, Olga B.; Brody, William R.

    1993-07-01

    The widespread and increasing use of mammographic screening for early breast cancer detection is placing a significant strain on clinical radiologists. Large numbers of radiographic films have to be visually interpreted in fine detail to determine the subtle hallmarks of cancer that may be present. We developed an algorithm for detecting microcalcification clusters, the most common and useful signs of early, potentially curable breast cancer. We describe this algorithm, which utilizes contour map representations of digitized mammographic films, and discuss its benefits in overcoming difficulties often encountered in algorithmic approaches to radiographic image processing. We present experimental analyses of mammographic films employing this contour-based algorithm and discuss practical issues relevant to its use in an automated film interpretation instrument.
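
    A contour-map representation of the kind described can be sketched as follows: extract closed iso-intensity contours at several gray levels and flag small closed contours as candidate calcifications. The image, levels, and size threshold are invented for illustration and do not reproduce the authors' algorithm.

        # Assumed contour-map sketch: nested iso-intensity contours, with small
        # closed contours treated as candidate microcalcification specks.
        import numpy as np
        from skimage import measure

        film = np.random.rand(512, 512)            # placeholder digitized film
        candidates = []
        for level in np.linspace(0.70, 0.95, 6):   # a stack of intensity levels
            for contour in measure.find_contours(film, level):
                closed = np.allclose(contour[0], contour[-1])
                if closed and len(contour) < 40:   # small, bright, closed region
                    candidates.append((level, contour.mean(axis=0)))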

  10. Normalized-Difference Snow Index (NDSI)

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.; Riggs, George A.

    2010-01-01

    The Normalized-Difference Snow Index (NDSI) has a long history. The use of ratioing visible (VIS) and near-infrared (NIR) or short-wave infrared (SWIR) channels to separate snow and clouds was documented in the literature beginning in the mid-1970s. A considerable amount of work on this subject was conducted at, and published by, the Air Force Geophysics Laboratory (AFGL). The objective of the AFGL work was to discriminate snow cover from cloud cover using an automated algorithm to improve global cloud analyses. Later, automated methods that relied on the VIS/NIR ratio were refined substantially using satellite data. In this section we provide a brief history of the use of the NDSI for mapping snow cover.
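
    The index itself is a simple band ratio, sketched below with placeholder reflectance values; the 0.4 threshold is the commonly cited snow cutoff, used here as an assumption rather than a value taken from this record.

        # NDSI sketch: normalized difference of VIS and SWIR reflectance.
        def ndsi(vis, swir):
            """NDSI; snow is bright in VIS and dark in SWIR, so values are high."""
            return (vis - swir) / (vis + swir)

        value = ndsi(vis=0.80, swir=0.10)  # placeholder reflectances
        print(value, value > 0.4)          # 0.4 is a commonly used snow threshold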

  11. On automating domain connectivity for overset grids

    NASA Technical Reports Server (NTRS)

    Chiu, Ing-Tsau; Meakin, Robert L.

    1995-01-01

    An alternative method for domain connectivity among systems of overset grids is presented. Reference uniform Cartesian systems of points are used to achieve highly efficient domain connectivity and form the basis for a future fully automated system. The Cartesian systems are used to approximate body surfaces and to map the computational space of component grids. By exploiting the characteristics of Cartesian systems, Chimera-type hole-cutting and identification of donor elements for intergrid boundary points can be carried out very efficiently. The method is tested for a range of geometrically complex multiple-body overset grid systems. A dynamic hole expansion/contraction algorithm is also implemented to obtain optimum domain connectivity; however, it is tested only on generic geometries.

  12. Digital soils survey map of the Patagonia Mountains, Arizona

    USGS Publications Warehouse

    Norman, Laura; Wissler, Craig; Guertin, D. Phillip; Gray, Floyd

    2002-01-01

    The ‘Soil Survey of Santa Cruz and Parts of Cochise and Pima Counties, Arizona,' a product of the USDA's Soil Conservation Service and the Forest Service in cooperation with the Arizona Agricultural Experiment Station, released in 1979, was created according to site conditions in 1971, when soil scientists identified soil types on aerial photographs. These maps were published at a scale of 1:20,000. The soil maps were automated for incorporation into hydrologic modeling within a GIS. The aerial photos onto which the soil units were drawn had not been orthorectified and contained distortion. A total of 15 maps composed the study area. These maps were scanned into TIFF format using an 8-bit black-and-white drum scanner at 100 dpi. The images were imported into ERDAS IMAGINE and the white borders were removed through subset decollaring processes. Five CD-ROMs containing Digital Orthophoto Quarter Quads (DOQQs) were used to register and rectify the scanned soil maps. The polygonal data were then attributed according to the datasets.

  13. Local search for optimal global map generation using mid-decadal landsat images

    USGS Publications Warehouse

    Khatib, L.; Gasch, J.; Morris, Robert; Covington, S.

    2007-01-01

    NASA and the US Geological Survey (USGS) are seeking to generate a map of the entire globe using Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor data from the "mid-decadal" period of 2004 through 2006. The global map comprises thousands of scene locations and, for each location, tens of different images of varying quality to choose from. Furthermore, it is desirable for images of adjacent scenes to be acquired close together in time, to avoid obvious discontinuities due to seasonal changes. These characteristics make it desirable to formulate an automated solution to the problem of generating the complete map. This paper formulates the Global Map Generator problem as a Constraint Optimization Problem (GMG-COP) and describes an approach to solving it using local search. Preliminary results of running the algorithm on image data sets are summarized. The results suggest a significant improvement in map quality using constraint-based solutions. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
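
    A toy version of such a local search, written under invented data and cost weights rather than the paper's formulation, picks one candidate image per scene and iteratively re-chooses single scenes whenever that lowers a combined quality-plus-seasonal-gap cost:

        # Toy local search over per-scene image choices (invented data/weights).
        import random

        random.seed(0)
        N_LOC, N_IMG = 20, 10
        # each candidate image: (cloud_fraction, acquisition_day)
        candidates = {loc: [(random.random(), random.randint(0, 730))
                            for _ in range(N_IMG)] for loc in range(N_LOC)}
        choice = {loc: 0 for loc in range(N_LOC)}

        def cost():
            c = sum(candidates[l][choice[l]][0] for l in range(N_LOC))  # image quality
            c += sum(abs(candidates[l][choice[l]][1] -
                         candidates[l + 1][choice[l + 1]][1]) / 365.0
                     for l in range(N_LOC - 1))                         # seasonal gaps
            return c

        for _ in range(500):                     # greedy single-scene moves
            loc = random.randrange(N_LOC)
            best_k, best_c = choice[loc], cost()
            for k in range(N_IMG):
                choice[loc] = k
                if cost() < best_c:
                    best_k, best_c = k, cost()
            choice[loc] = best_k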

  14. Semi-automated extraction of landslides in Taiwan based on SPOT imagery and DEMs

    NASA Astrophysics Data System (ADS)

    Eisank, Clemens; Hölbling, Daniel; Friedl, Barbara; Chen, Yi-Chin; Chang, Kang-Tsung

    2014-05-01

    The vast availability and improved quality of optical satellite data and digital elevation models (DEMs), as well as the need for complete and up-to-date landslide inventories at various spatial scales, have fostered the development of semi-automated landslide recognition systems. Among the tested approaches for designing such systems, object-based image analysis (OBIA) has stood out as a highly promising methodology. OBIA offers a flexible, spatially enabled framework for effective landslide mapping. Most object-based landslide mapping systems, however, have been tailored to specific, mainly small-scale study areas or even to single landslides only. Even though reported mapping accuracies tend to be higher than for pixel-based approaches, accuracy values are still relatively low and depend on the particular study. There is still room to improve the applicability and objectivity of object-based landslide mapping systems. The presented study aims at developing a knowledge-based landslide mapping system implemented in an OBIA environment, i.e. Trimble eCognition. In comparison to previous knowledge-based approaches, the classification of segmentation-derived multi-scale image objects relies on digital landslide signatures. These signatures hold the common operational knowledge on digital landslide mapping, as reported by 25 Taiwanese landslide experts during personal semi-structured interviews. Specifically, the signatures include information on commonly used data layers, spectral and spatial features, and feature thresholds. The signatures guide the selection and implementation of mapping rules that were finally encoded in Cognition Network Language (CNL). Multi-scale image segmentation is optimized by using the improved Estimation of Scale Parameter (ESP) tool. The approach described above is developed and tested for mapping landslides in a sub-region of the Baichi catchment in Northern Taiwan based on SPOT imagery and a high-resolution DEM. An object-based accuracy assessment is conducted by quantitatively comparing extracted landslide objects with landslide polygons that were visually interpreted by local experts. The applicability and transferability of the mapping system are evaluated by comparing initial accuracies with those achieved for the following two tests: first, usage of a SPOT image from the same year, but for a different area within the Baichi catchment; second, usage of SPOT images from multiple years for the same region. The integration of common knowledge via digital landslide signatures is new in object-based landslide studies. In combination with strategies to optimize image segmentation, this may lead to a more objective, transferable and stable knowledge-based system for the mapping of landslides from optical satellite data and DEMs.

  15. An automated approach to mapping ecological sites using hyper-temporal remote sensing and SVM classification

    USDA-ARS?s Scientific Manuscript database

    The development of ecological sites as management units has emerged as a highly effective land management framework, but its utility has been limited by spatial ambiguity of ecological site locations in the U.S., lack of ecological site concepts in many other parts of the world, and the inability to...

  16. QTL examination of a bi-parental mapping population segregating for “short-stature” in hop (Humulus lupulus L.)

    USDA-ARS?s Scientific Manuscript database

    Increasing labor costs and reduced labor pools for hop production have resulted in the necessity to develop strategies to improve efficiency and automate hop production and harvest. One solution for reducing labor inputs is the use and production of “low-trellis” hop varieties optimized for mechani...

  17. U.S. NIC

    Science.gov Websites

    Website listing snow and ice mapping resources: IMS Ice Extent data and charts (total ice and sea ice only), Northern Hemisphere automated snow and ice mapping, NOHRSC satellite products, NCEP MMAB sea ice analyses, and the National Snow and Ice Data Center (NSIDC) Multisensor Analyzed Sea Ice Extent.

  18. Proposal for a Spatial Organization Model in Soil Science (The Example of the European Communities Soil Map).

    ERIC Educational Resources Information Center

    King, D.; And Others

    1994-01-01

    Discusses the computational problems of automating paper-based spatial information. A new relational structure for soil science information, based on the main concepts used during conventional cartographic work, is proposed. This model is a computerized framework for coherent description of the geographical variability of soils, combined…

  19. Proceeding of the ACM/IEEE-CS Joint Conference on Digital Libraries (1st, Roanoke, Virginia, June 24-28, 2001).

    ERIC Educational Resources Information Center

    Association for Computing Machinery, New York, NY.

    Papers in this Proceedings of the ACM/IEEE-CS Joint Conference on Digital Libraries (Roanoke, Virginia, June 24-28, 2001) discuss: automatic genre analysis; text categorization; automated name authority control; automatic event generation; linked active content; designing e-books for legal research; metadata harvesting; mapping the…

  20. Comparison of Tasseled Cap-based Landsat data structures for use in forest disturbance detection.

    Treesearch

    Sean P. Healey; Warren B. Cohen; Yang Zhiqiang; Olga N. Krankina

    2005-01-01

    Landsat satellite data has become ubiquitous in regional-scale forest disturbance detection. The Tasseled Cap (TC) transformation for Landsat data has been used in several disturbance-mapping projects because of its ability to highlight relevant vegetation changes. We used an automated composite analysis procedure to test four multi-date variants of the TC...

  1. GEBCO-NF Alumni Team's entry for Shell Ocean Discovery XPRIZE. An innovative seafloor mapping system of an AUV integrated with the newly designed USV SEA-KIT.

    NASA Astrophysics Data System (ADS)

    Wigley, R. A.; Anderson, R.; Bazhenova, E.; Falconer, R. K. H.; Kearns, T.; Martin, T.; Minami, H.; Roperez, J.; Rosedee, A.; Ryzhov, I.; Sade, H.; Seeboruth, S.; Simpson, B.; Sumiyoshi, M.; Tinmouth, N.; Zarayskaya, Y.; Zwolak, K.

    2017-12-01

    The international team of Nippon Foundation/GEBCO Alumni was formed to compete in the Shell Ocean Discovery XPRIZE competition. The aim of the Team is to build an innovative seafloor mapping system, not only to compete successfully in the XPRIZE challenge, but also to make a step towards autonomously mapping the complex global seafloor at resolutions not achievable by standard surface mapping systems. This new technology is linked to the goals of the recently announced Nippon Foundation-GEBCO Seabed 2030 Project, which aims at the highest possible resolution bathymetric mapping of the global ocean floor by 2030. The mapping system is composed of three main elements: an Unmanned Surface Vessel (USV), an Autonomous Underwater Vehicle (AUV) and an on-shore control station. A newly designed USV, called SEA-KIT, was built to interact with any AUV, acting as remote surface access to the deep ocean. The major functions of the SEA-KIT in the system design are 1) the potential transportation of a commercially available AUV to and from the survey site and 2) the deployment and recovery of the AUV. In further development stages, options for AUV charging and data transfer are being considered. Additionally, the SEA-KIT will offer a positioning solution during AUV operations, utilizing an Ultra Short Base Line (USBL) acoustic system. The data acquisition platform (AUV) is equipped with a high-end interferometric sonar with synthetic aperture options, providing the possibility of collecting bathymetric data co-registered with seafloor object imagery. An automated data processing workflow is highly desirable due to the large amount of data collected during each mission. The processing workflow is being designed to be as autonomous as possible, and algorithms for automated data processing onboard are being considered to reduce processing time and make final products available as soon as possible after the completion of data collection. No human intervention on site is required for data collection using the integrated USV and AUV mapping system. The on-shore control station plays only a supervisory role and is able to assess the USV performance, while the AUV works autonomously according to a previously set survey plan. This leads to lower-risk, lower-effort deep ocean mapping.

  2. Automated Purification of Recombinant Proteins: Combining High-throughput with High Yield

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Chiann Tso; Moore, Priscilla A.; Auberry, Deanna L.

    2006-05-01

    Protein crystallography, mapping protein interactions, and other approaches of current functional genomics require not only purifying large numbers of proteins but also obtaining sufficient yield and homogeneity for downstream high-throughput applications. There is a need for the development of robust automated high-throughput protein expression and purification processes to meet these requirements. We developed and compared two alternative workflows for automated purification of recombinant proteins based on expression of bacterial genes in Escherichia coli: first, a filtration separation protocol based on expression in 800 ml E. coli cultures followed by filtration purification using Ni2+-NTA Agarose (Qiagen); second, a smaller-scale magnetic separation method based on expression in 25 ml cultures of E. coli followed by 96-well purification on MagneHis Ni2+ Agarose (Promega). Both workflows provided comparable average yields of about 8 ug of purified protein per unit of OD at 600 nm of bacterial culture. We discuss advantages and limitations of the automated workflows, which can provide proteins more than 90% pure in the range of 100 ug to 45 mg per purification run, as well as strategies for optimization of these protocols.

  3. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods, and a multi-atlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates, since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-and-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates, and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support the potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
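
    The dose bookkeeping itself reduces to applying each segmentation mask to a voxel dose map, as in the sketch below; the arrays, organ region, and reference value are placeholders, not the study's Monte Carlo data.

        # Assumed sketch: mean/peak organ dose from a voxel dose map and a mask.
        import numpy as np

        dose_map = np.random.rand(128, 128, 64) * 20.0   # placeholder dose grid (mGy)
        organ = np.zeros(dose_map.shape, dtype=bool)
        organ[40:80, 40:80, 20:40] = True                # placeholder organ mask

        mean_dose = dose_map[organ].mean()
        peak_dose = dose_map[organ].max()
        rel_error = abs(mean_dose - 10.0) / 10.0         # vs. an expert-mask estimate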

  4. Failure detection in high-performance clusters and computers using chaotic map computations

    DOEpatents

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
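
    The detection idea can be illustrated with a logistic map: because chaotic iteration amplifies tiny perturbations exponentially, a single bit-level arithmetic fault quickly drives a node's trajectory away from the reference. The map choice, seed, and fault model below are assumptions for illustration, not the patented implementation.

        # Toy chaotic-map failure detector (assumed logistic map, invented fault).
        import numpy as np

        def trajectory(x0, n=50, r=3.9, fault_at=None):
            xs, x = [], x0
            for i in range(n):
                x = r * x * (1.0 - x)            # logistic map iteration
                if i == fault_at:
                    x += 1e-9                    # simulated arithmetic fault
                xs.append(x)
            return np.array(xs)

        reference = trajectory(0.123456)
        node = trajectory(0.123456, fault_at=20)
        print("failure detected:", not np.allclose(reference, node))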

  5. Reaction schemes visualized in network form: the syntheses of strychnine as an example.

    PubMed

    Proudfoot, John R

    2013-05-24

    Representation of synthesis sequences in network form provides an effective method for comparing multiple reaction schemes and an opportunity to emphasize features, such as reaction scale, that are often relegated to experimental sections. An example of data formatting that allows construction of network maps in Cytoscape is presented, along with maps that illustrate the comparison of multiple reaction sequences, the comparison of scaffold changes within sequences, and consolidation to highlight common key intermediates used across sequences. The 17 different synthetic routes reported for strychnine are used as an example basis set. The reaction maps presented required significant data extraction and curation; a standardized tabular format for reporting reaction information, if applied consistently, could allow the automated combination of reaction information across different sources.

  6. Improving the MODIS Global Snow-Mapping Algorithm

    NASA Technical Reports Server (NTRS)

    Klein, Andrew G.; Hall, Dorothy K.; Riggs, George A.

    1997-01-01

    An algorithm (Snowmap) is under development to produce global snow maps at 500 meter resolution on a daily basis using data from the NASA MODIS instrument. MODIS, the Moderate Resolution Imaging Spectroradiometer, will be launched as part of the first Earth Observing System (EOS) platform in 1998. Snowmap is a fully automated, computationally frugal algorithm that will be ready to implement at launch. Forests represent a major limitation to the global mapping of snow cover, as a forest canopy both obscures and shadows the snow underneath. Landsat Thematic Mapper (TM) and MODIS Airborne Simulator (MAS) data are used to investigate the changes in reflectance that occur as a forest stand becomes snow covered and to propose changes to the Snowmap algorithm that will improve snow classification accuracy in forested areas.

  7. NaviCell Web Service for network-based data visualization.

    PubMed

    Bonnet, Eric; Viara, Eric; Kuperstein, Inna; Calzone, Laurence; Cohen, David P A; Barillot, Emmanuel; Zinovyev, Andrei

    2015-07-01

    Data visualization is an essential element of biological research, required for obtaining insights and formulating new hypotheses on mechanisms of health and disease. NaviCell Web Service is a tool for network-based visualization of 'omics' data which implements several data visual representation methods and utilities for combining them together. NaviCell Web Service uses Google Maps and semantic zooming to browse large biological network maps, represented in various formats, together with different types of the molecular data mapped on top of them. For achieving this, the tool provides standard heatmaps, barplots and glyphs as well as the novel map staining technique for grasping large-scale trends in numerical values (such as whole transcriptome) projected onto a pathway map. The web service provides a server mode, which allows automating visualization tasks and retrieving data from maps via RESTful (standard HTTP) calls. Bindings to different programming languages are provided (Python and R). We illustrate the purpose of the tool with several case studies using pathway maps created by different research groups, in which data visualization provides new insights into molecular mechanisms involved in systemic diseases such as cancer and neurodegenerative diseases. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. NaviCell Web Service for network-based data visualization

    PubMed Central

    Bonnet, Eric; Viara, Eric; Kuperstein, Inna; Calzone, Laurence; Cohen, David P. A.; Barillot, Emmanuel; Zinovyev, Andrei

    2015-01-01

    Data visualization is an essential element of biological research, required for obtaining insights and formulating new hypotheses on mechanisms of health and disease. NaviCell Web Service is a tool for network-based visualization of ‘omics’ data which implements several data visual representation methods and utilities for combining them together. NaviCell Web Service uses Google Maps and semantic zooming to browse large biological network maps, represented in various formats, together with different types of the molecular data mapped on top of them. For achieving this, the tool provides standard heatmaps, barplots and glyphs as well as the novel map staining technique for grasping large-scale trends in numerical values (such as whole transcriptome) projected onto a pathway map. The web service provides a server mode, which allows automating visualization tasks and retrieving data from maps via RESTful (standard HTTP) calls. Bindings to different programming languages are provided (Python and R). We illustrate the purpose of the tool with several case studies using pathway maps created by different research groups, in which data visualization provides new insights into molecular mechanisms involved in systemic diseases such as cancer and neurodegenerative diseases. PMID:25958393

  9. Automated Snow Extent Mapping Based on Orthophoto Images from Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Niedzielski, Tomasz; Spallek, Waldemar; Witek-Kasprzak, Matylda

    2018-04-01

    The paper presents the application of k-means clustering to automated snow extent mapping using orthophoto images generated with the Structure-from-Motion (SfM) algorithm from oblique aerial photographs taken by unmanned aerial vehicles (UAVs). A simple classification approach has been implemented to discriminate between snow-free and snow-covered terrain. The procedure uses k-means clustering and classifies orthophoto images based on the three-dimensional space of red-green-blue (RGB), near-infrared-red-green (NIRRG), or near-infrared-green-blue (NIRGB) bands. To test the method, several field experiments were carried out, both when snow cover was continuous and when it was patchy. The experiments were conducted using three fixed-wing UAVs (swinglet CAM by senseFly, eBee by senseFly, and Birdie by FlyTech UAV) on 10/04/2015, 23/03/2016, and 16/03/2017 within three test sites in the Izerskie Mountains in southwestern Poland. The resulting snow extent maps, produced automatically by the classification method, were validated against snow extents delineated through visual analysis and interpretation by human analysts. For the simplest classification setup, which assumes two classes in the k-means clustering, the extent of snow patches was estimated accurately, with an areal underestimation of 4.6% (RGB) and overestimation of 5.5% (NIRGB). For continuous snow cover with sparse discontinuities at places where trees or bushes protruded from the snow, the agreement between automatically produced snow extent maps and observations was better, i.e. 1.5% (underestimation with RGB) and 0.7-0.9% (overestimation, either with RGB or with NIRRG). Shadows on snow were found to be mainly responsible for the misclassification.
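
    A two-class version of this pixel clustering can be sketched as follows; the image array is a stand-in for an orthophoto, and identifying the snow cluster by centroid brightness is an assumption rather than the authors' stated rule.

        # Assumed two-class k-means snow classifier over RGB pixel vectors.
        import numpy as np
        from sklearn.cluster import KMeans

        rgb = np.random.rand(200, 200, 3)                  # placeholder orthophoto
        pixels = rgb.reshape(-1, 3)

        km = KMeans(n_clusters=2, n_init=10).fit(pixels)
        snow_label = km.cluster_centers_.sum(axis=1).argmax()  # brighter centroid
        snow_mask = (km.labels_ == snow_label).reshape(200, 200)
        print("snow fraction:", snow_mask.mean())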

  10. Comparison of Object-Based Image Analysis Approaches to Mapping New Buildings in Accra, Ghana Using Multi-Temporal QuickBird Satellite Imagery

    PubMed Central

    Tsai, Yu Hsin; Stow, Douglas; Weeks, John

    2013-01-01

    The goal of this study was to map and quantify the number of newly constructed buildings in Accra, Ghana between 2002 and 2010 based on high spatial resolution satellite image data. Two semi-automated feature detection approaches for detecting and mapping newly constructed buildings based on QuickBird very high spatial resolution satellite imagery were analyzed: (1) post-classification comparison; and (2) bi-temporal layerstack classification. Feature Analyst software based on a spatial contextual classifier and ENVI Feature Extraction that uses a true object-based image analysis approach of image segmentation and segment classification were evaluated. Final map products representing new building objects were compared and assessed for accuracy using two object-based accuracy measures, completeness and correctness. The bi-temporal layerstack method generated more accurate results compared to the post-classification comparison method due to less confusion with background objects. The spectral/spatial contextual approach (Feature Analyst) outperformed the true object-based feature delineation approach (ENVI Feature Extraction) due to its ability to more reliably delineate individual buildings of various sizes. Semi-automated, object-based detection followed by manual editing appears to be a reliable and efficient approach for detecting and enumerating new building objects. A bivariate regression analysis was performed using neighborhood-level estimates of new building density regressed on a census-derived measure of socio-economic status, yielding an inverse relationship with R2 = 0.31 (n = 27; p = 0.00). The primary utility of the new building delineation results is to support spatial analyses of land cover and land use and demographic change. PMID:24415810

  11. Mapping slope movements in Alpine environments using TerraSAR-X interferometric methods

    NASA Astrophysics Data System (ADS)

    Barboux, Chloé; Strozzi, Tazio; Delaloye, Reynald; Wegmüller, Urs; Collet, Claude

    2015-11-01

    Mapping slope movements in Alpine environments is an increasingly important task in the context of climate change and natural hazard management. We propose the detection, mapping and inventorying of slope movements using different interferometric methods based on TerraSAR-X satellite images. Differential SAR interferometry (DInSAR), Persistent Scatterer Interferometry (PSI), Short-Baseline Interferometry (SBAS) and a semi-automated texture image analysis are presented and compared in order to determine their contribution to the automatic detection and mapping of slope movements of the various velocity rates encountered in Alpine environments. Investigations were conducted in a study region of about 6 km × 6 km located in the Western Swiss Alps using a unique large data set of 140 DInSAR scenes computed from 51 summer TerraSAR-X (TSX) acquisitions from 2008 to 2012. We found that PSI is able to precisely detect only points moving with velocities below 3.5 cm/yr in the LOS, with a root mean squared error of about 0.58 cm/yr compared to DGPS records. SBAS employed with 11-day summer interferograms increases the range of detectable movements to rates up to 35 cm/yr in the LOS with a root mean squared error of 6.36 cm/yr, but inaccurate measurements due to phase unwrapping are already possible for velocity rates larger than 20 cm/yr. With the semi-automated texture image analysis, the rough estimation of velocity rates over an outlined moving zone is accurate for rates of "cm/day", "dm/month" and "cm/month", but due to the decorrelation of yearly TSX interferograms this method fails for the observation of slow movements in the "cm/yr" range.

  12. An assessment of the Height Above Nearest Drainage terrain descriptor for the thematic enhancement of automatic SAR-based flood monitoring services

    NASA Astrophysics Data System (ADS)

    Chow, Candace; Twele, André; Martinis, Sandro

    2016-10-01

    Flood extent maps derived from Synthetic Aperture Radar (SAR) data can communicate spatially-explicit information in a timely and cost-effective manner to support disaster management. Automated processing chains for SAR-based flood mapping have the potential to substantially reduce the critical time delay between the delivery of post-event satellite data and the subsequent provision of satellite-derived crisis information to emergency management authorities. However, the accuracy of SAR-based flood mapping can vary drastically with the prevalent land cover and topography of a given scene. While expert-based image interpretation with the consideration of contextual information can effectively isolate flood surface features, a fully-automated feature differentiation algorithm based mainly on the grey levels of a given pixel is comparatively more limited for features with similar SAR-backscattering characteristics. The inclusion of ancillary data in the automatic classification procedure can effectively reduce instances of misclassification. In this work, a near-global 'Height Above Nearest Drainage' (HAND) index [10] was calculated with digital elevation data and drainage directions from the HydroSHEDS mapping project [2]. The index can be used to separate flood-prone regions from areas with a low probability of flood occurrence. Based on the HAND index, an exclusion mask was computed to reduce water look-alikes with respect to the hydrologic-topographic setting. The applicability of this near-global ancillary data set for the thematic improvement of Sentinel-1 and TerraSAR-X based services for flood and surface water monitoring has been validated both qualitatively and quantitatively. Application of a HAND-based exclusion mask resulted in improvements to the classification accuracy of SAR scenes with high amounts of water look-alikes and considerable elevation differences.
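
    Operationally, such an exclusion mask is just a threshold on the HAND raster applied to the SAR water class, as in the sketch below; the arrays and the 15 m cutoff are illustrative assumptions, not values from the paper.

        # Assumed HAND-based exclusion mask over a SAR-derived water class.
        import numpy as np

        hand = np.random.rand(1000, 1000) * 50.0       # placeholder HAND raster (m)
        sar_water = np.random.rand(1000, 1000) > 0.8   # placeholder SAR water mask

        exclusion = hand > 15.0                        # low flood probability zones
        flood = sar_water & ~exclusion                 # suppress water look-alikes
        print("retained water pixels:", int(flood.sum()))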

  13. Evaluating the efficacy of fully automated approaches for the selection of eye blink ICA components

    PubMed Central

    Pontifex, Matthew B.; Miskovic, Vladimir; Laszlo, Sarah

    2017-01-01

    Independent component analysis (ICA) offers a powerful approach for the isolation and removal of eye blink artifacts from EEG signals. Manual identification of the eye blink ICA component by inspection of scalp map projections, however, is prone to error, particularly when non-artifactual components exhibit topographic distributions similar to the blink. The aim of the present investigation was to determine the extent to which automated approaches for selecting eye blink related ICA components could be utilized to replace manual selection. We evaluated popular blink selection methods relying on spatial features [EyeCatch()], combined stereotypical spatial and temporal features [ADJUST()], and a novel method relying on time-series features alone [icablinkmetrics()] using both simulated and real EEG data. The results of this investigation suggest that all three methods of automatic component selection are able to accurately identify eye blink related ICA components at or above the level of trained human observers. However, icablinkmetrics(), in particular, appears to provide an effective means of automating ICA artifact rejection while at the same time eliminating human errors inevitable during manual component selection and false positive component identifications common in other automated approaches. Based upon these findings, best practices for 1) identifying artifactual components via automated means and 2) reducing the accidental removal of signal-related ICA components are discussed. PMID:28191627
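
    A purely time-series selection criterion of the kind icablinkmetrics() relies on can be approximated by correlating each component's activation with a vertical EOG channel, as in this sketch; the data, blink model, and single-correlation criterion are assumptions for illustration, not the toolbox's actual metrics.

        # Assumed time-series blink-component selector: flag the ICA component
        # whose activation correlates most strongly with the EOG channel.
        import numpy as np

        rng = np.random.default_rng(1)
        eog = rng.normal(size=10000)
        eog[::500] += 8.0                          # crude blink spikes
        components = rng.normal(size=(20, 10000))  # placeholder ICA activations
        components[3] += 0.9 * eog                 # planted blink component

        corr = [abs(np.corrcoef(c, eog)[0, 1]) for c in components]
        print("flagged component:", int(np.argmax(corr)))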

  14. Failure mode and effect analysis oriented to risk-reduction interventions in intraoperative electron radiation therapy: the specific impact of patient transportation, automation, and treatment planning availability.

    PubMed

    López-Tarjuelo, Juan; Bouché-Babiloni, Ana; Santos-Serra, Agustín; Morillo-Macías, Virginia; Calvo, Felipe A; Kubyshin, Yuri; Ferrer-Albiach, Carlos

    2014-11-01

    Industrial companies use failure mode and effect analysis (FMEA) to improve quality. Our objective was to describe an FMEA and the subsequent interventions for an automated intraoperative electron radiotherapy (IOERT) procedure with computed tomography simulation, pre-planning, and a fixed conventional linear accelerator. A process map, an FMEA, and a fault tree analysis are reported. The equipment considered was the radiance treatment planning system (TPS), the Elekta Precise linac, and TN-502RDM-H metal-oxide-semiconductor field-effect transistor in vivo dosimeters. Computerized order-entry and treatment automation were also analyzed. Fifty-seven potential failure modes and effects were identified and classified into 'treatment cancellation' and 'delivering an unintended dose'. They were graded from 'inconvenience' or 'suboptimal treatment' to 'total cancellation' or 'potentially wrong' or 'very wrong administered dose', although these latter effects were never experienced. Risk priority numbers (RPNs) ranged from 3 to 324 and totaled 4804. After interventions such as double checking, interlocking, automation, and structural changes, the final total RPN was reduced to 1320. FMEA is crucial for prioritizing risk-reduction interventions. In a semi-surgical procedure like IOERT, double checking has the potential to reduce risk and improve quality. Interlocks and automation should also be implemented to increase the safety of the procedure. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
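
    RPN bookkeeping in an FMEA is conventionally the product of severity, occurrence, and detection ratings on 1-10 scales, as sketched below; the failure modes and ratings are invented placeholders, not entries from the study's worksheet.

        # Conventional RPN computation (invented example failure modes).
        failure_modes = [
            ("wrong applicator docked", 9, 3, 4),          # (name, sev, occ, det)
            ("in vivo dosimeter misplaced", 6, 4, 5),
            ("plan assigned to wrong patient", 10, 2, 2),
        ]
        for name, sev, occ, det in failure_modes:
            print(f"{name}: RPN = {sev * occ * det}")
        print("total RPN:", sum(s * o * d for _, s, o, d in failure_modes))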

  15. End-to-end workflow for finite element analysis of tumor treating fields in glioblastomas

    NASA Astrophysics Data System (ADS)

    Timmons, Joshua J.; Lok, Edwin; San, Pyay; Bui, Kevin; Wong, Eric T.

    2017-11-01

    Tumor Treating Fields (TTFields) therapy is an approved modality of treatment for glioblastoma. Patient anatomy-based finite element analysis (FEA) has the potential to reveal not only how these fields affect tumor control but also how to improve efficacy. While automated segmentation tools speed up the generation of FEA models, multi-step manual corrections are required, including removal of disconnected voxels, incorporation of unsegmented structures, and the addition of 36 electrodes plus gel layers matching the TTFields transducers. Existing approaches are also not scalable for high-throughput analysis of large patient volumes. A semi-automated workflow was developed to prepare FEA models for TTFields mapping in the human brain. Magnetic resonance imaging (MRI) pre-processing, segmentation, electrode and gel placement, and post-processing were all automated. The material properties of each tissue were applied to their corresponding mask in silico using COMSOL Multiphysics (COMSOL, Burlington, MA, USA). The fidelity of the segmentations with and without post-processing was compared against the full semi-automated segmentation workflow using Dice coefficient analysis. The average relative differences in the electric fields generated by COMSOL were calculated, in addition to observed differences in electric field-volume histograms. Furthermore, the MPHTXT and NASTRAN mesh file formats were compared using the differences in the electric field-volume histogram. The Dice coefficient was lower for auto-segmentation without post-processing than with it, indicating convergence on a manually corrected model. A marginal but measurable relative difference between electric field maps from models with and without manual correction was identified, and a clear advantage of using the NASTRAN mesh file format was found. The software and workflow outlined in this article may be used to accelerate the investigation of TTFields in glioblastoma patients by facilitating the creation of FEA models derived from patient MRI datasets.

  16. Comparison of human septal nuclei MRI measurements using automated segmentation and a new manual protocol based on histology

    PubMed Central

    Butler, Tracy; Zaborszky, Laszlo; Pirraglia, Elizabeth; Li, Jinyu; Wang, Xiuyuan Hugh; Li, Yi; Tsui, Wai; Talos, Delia; Devinsky, Orrin; Kuchna, Izabela; Nowicki, Krzysztof; French, Jacqueline; Kuzniecky, Rubin; Wegiel, Jerzy; Glodzik, Lidia; Rusinek, Henry; DeLeon, Mony J.; Thesen, Thomas

    2014-01-01

    Septal nuclei, located in basal forebrain, are strongly connected with hippocampi and important in learning and memory, but have received limited research attention in human MRI studies. While probabilistic maps for estimating septal volume on MRI are now available, they have not been independently validated against manual tracing of MRI, typically considered the gold standard for delineating brain structures. We developed a protocol for manual tracing of the human septal region on MRI based on examination of neuroanatomical specimens. We applied this tracing protocol to T1 MRI scans (n=86) from subjects with temporal lobe epilepsy and healthy controls to measure septal volume. To assess the inter-rater reliability of the protocol, a second tracer used the same protocol on 20 scans that were randomly selected from the 72 healthy controls. In addition to measuring septal volume, maximum septal thickness between the ventricles was measured and recorded. The same scans (n=86) were also analysed using septal probabilistic maps and the DARTEL toolbox in SPM. Results show that our manual tracing algorithm is reliable, and that septal volume measurements obtained via manual and automated methods correlate significantly with each other (p<.001). Both manual and automated methods detected significantly enlarged septal nuclei in patients with temporal lobe epilepsy in accord with a proposed compensatory neuroplastic process related to the strong connections between septal nuclei and hippocampi. Septal thickness, which was simple to measure with excellent inter-rater reliability, correlated well with both manual and automated septal volume, suggesting it could serve as an easy-to-measure surrogate for septal volume in future studies. Our results call attention to the important though understudied human septal region, confirm its enlargement in temporal lobe epilepsy, and provide a reliable new manual delineation protocol that will facilitate continued study of this critical region. PMID:24736183

  17. Comparison of human septal nuclei MRI measurements using automated segmentation and a new manual protocol based on histology.

    PubMed

    Butler, Tracy; Zaborszky, Laszlo; Pirraglia, Elizabeth; Li, Jinyu; Wang, Xiuyuan Hugh; Li, Yi; Tsui, Wai; Talos, Delia; Devinsky, Orrin; Kuchna, Izabela; Nowicki, Krzysztof; French, Jacqueline; Kuzniecky, Rubin; Wegiel, Jerzy; Glodzik, Lidia; Rusinek, Henry; deLeon, Mony J; Thesen, Thomas

    2014-08-15

    Septal nuclei, located in basal forebrain, are strongly connected with hippocampi and important in learning and memory, but have received limited research attention in human MRI studies. While probabilistic maps for estimating septal volume on MRI are now available, they have not been independently validated against manual tracing of MRI, typically considered the gold standard for delineating brain structures. We developed a protocol for manual tracing of the human septal region on MRI based on examination of neuroanatomical specimens. We applied this tracing protocol to T1 MRI scans (n=86) from subjects with temporal lobe epilepsy and healthy controls to measure septal volume. To assess the inter-rater reliability of the protocol, a second tracer used the same protocol on 20 scans that were randomly selected from the 72 healthy controls. In addition to measuring septal volume, maximum septal thickness between the ventricles was measured and recorded. The same scans (n=86) were also analyzed using septal probabilistic maps and the DARTEL toolbox in SPM. Results show that our manual tracing algorithm is reliable, and that septal volume measurements obtained via manual and automated methods correlate significantly with each other (p<.001). Both manual and automated methods detected significantly enlarged septal nuclei in patients with temporal lobe epilepsy in accord with a proposed compensatory neuroplastic process related to the strong connections between septal nuclei and hippocampi. Septal thickness, which was simple to measure with excellent inter-rater reliability, correlated well with both manual and automated septal volume, suggesting it could serve as an easy-to-measure surrogate for septal volume in future studies. Our results call attention to the important though understudied human septal region, confirm its enlargement in temporal lobe epilepsy, and provide a reliable new manual delineation protocol that will facilitate continued study of this critical region. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. A Practical and Automated Approach to Large Area Forest Disturbance Mapping with Remote Sensing

    PubMed Central

    Ozdogan, Mutlu

    2014-01-01

    In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using an n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of mis-classification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for forest cover change analysis over very large regions. PMID:24717283
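
    Step ii) amounts to a local-statistics threshold on the SWIR difference image. A minimal sketch, with an assumed window size and threshold multiplier rather than the paper's settings:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def training_pixels(swir_diff, window=101, k=2.0):
          # Flag 'disturbed' pixels that exceed the local-window mean by k
          # standard deviations and 'stable' pixels that stay close to it.
          mean = uniform_filter(swir_diff, size=window)
          sq_mean = uniform_filter(swir_diff ** 2, size=window)
          std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
          disturbed = swir_diff > mean + k * std   # forest loss brightens SWIR
          stable = np.abs(swir_diff - mean) < 0.5 * std
          return disturbed, stable

      disturbed, stable = training_pixels(np.random.randn(500, 500))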

  19. A practical and automated approach to large area forest disturbance mapping with remote sensing.

    PubMed

    Ozdogan, Mutlu

    2014-01-01

    In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using an n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of mis-classification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for forest cover change analysis over very large regions.

  20. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Gao, Yaozong; Shi, Feng

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating 3D models for the diagnosis and treatment planning of patients with CMF deformities. However, due to image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment CBCT images. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both the appearance features from CBCTs and the context features from the initial probability maps to train the first layer of random forest classifier that can select discriminative features for segmentation. Based on the first layer of trained classifier, the probability maps are updated, which will be employed to further train the next layer of random forest classifier. By iteratively training the subsequent random forest classifier using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors' method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method for CBCT segmentation.
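
    The layered classifier is essentially an auto-context scheme: each layer appends the previous layer's probability map to the feature set. A minimal scikit-learn sketch with random stand-in features in place of CBCT data:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def sequential_rf(appearance, prior_prob, labels, n_layers=3):
          # Train a chain of random forests; each layer sees the appearance
          # features plus the probability map produced by the previous layer.
          prob, layers = prior_prob.copy(), []
          for _ in range(n_layers):
              X = np.column_stack([appearance, prob])
              rf = RandomForestClassifier(n_estimators=100).fit(X, labels)
              prob = rf.predict_proba(X)[:, 1]   # updated probability map
              layers.append(rf)
          return layers

      n = 1000   # stand-in voxel count
      layers = sequential_rf(np.random.randn(n, 8), np.random.rand(n),
                             np.random.randint(0, 2, n))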

  1. Predictive Sea State Estimation for Automated Ride Control and Handling - PSSEARCH

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance L.; Howard, Andrew B.; Aghazarian, Hrand; Rankin, Arturo L.

    2012-01-01

    PSSEARCH provides predictive sea state estimation, coupled with closed-loop feedback control, for automated ride control. It enables a manned or unmanned watercraft to determine the 3D map and sea state conditions in its vicinity in real time. Adaptive path-planning/replanning software and a control surface management system then use this information to choose the best settings and heading relative to the seas for the watercraft. PSSEARCH looks ahead and anticipates the potential impact of waves on the boat and is used in a tight control loop to adjust trim tabs, course, and throttle settings. The software uses sensory inputs including IMU (Inertial Measurement Unit), stereo, radar, etc. to determine the sea state and wave conditions (wave height, frequency, wave direction) in the vicinity of a rapidly moving boat. This information can then be used to plot a safe path through the oncoming waves. The main issues in determining a safe path for sea surface navigation are: (1) deriving a 3D map of the surrounding environment, (2) extracting hazards and the sea surface state from the imaging sensors/map, and (3) planning a path and control surface settings that avoid the hazards, accomplish the mission navigation goals, and mitigate crew injuries from excessive heave, pitch, and roll accelerations while taking into account the dynamics of the sea surface state. The first part is solved using a wide baseline stereo system, where 3D structure is determined from two calibrated pairs of visual imagers. Once the 3D map is derived, anything above the sea surface is classified as a potential hazard, and a surface analysis gives a static snapshot of the waves. Dynamics of the wave features are obtained from a frequency analysis of motion vectors derived from the orientation of the waves over a sequence of inputs. Fusion of the dynamic wave patterns with the 3D maps and the IMU outputs enables efficient safe path planning.

  2. Application of the Deformation Information System for automated analysis and mapping of mining terrain deformations - case study from SW Poland

    NASA Astrophysics Data System (ADS)

    Blachowski, Jan; Grzempowski, Piotr; Milczarek, Wojciech; Nowacka, Anna

    2015-04-01

    Monitoring, mapping and modelling of mining-induced terrain deformations are important tasks for quantifying and minimising the threats that arise from underground extraction of useful minerals and affect surface infrastructure, human safety, the environment and the security of the mining operation itself. The range of methods and techniques used for monitoring and analysis of mining terrain deformations is wide and expanding with the progress in geographical information technologies. These include, for example: terrestrial geodetic measurements, Global Navigation Satellite Systems, remote sensing, GIS-based modelling and spatial statistics, finite element method modelling, geological modelling, empirical modelling using e.g. the Knothe theory, artificial neural networks, fuzzy logic calculations and others. The presentation shows the results of numerical modelling and mapping of mining terrain deformations for two underground mining sites in SW Poland, an abandoned hard coal mine and an active copper ore mine, using the functionalities of the Deformation Information System (DIS) (Blachowski et al, 2014 @ http://meetingorganizer.copernicus.org/EGU2014/EGU2014-7949.pdf). The functionalities of the spatial data modelling module of DIS are presented, and its application to modelling, mapping and visualising mining terrain deformations based on processing of measurement data (geodetic and GNSS) for these two cases is characterised and compared. This includes self-developed procedures, implemented in DIS, that automate calculation of mining terrain subsidence with different interpolation techniques, calculation of other mining deformation parameters (i.e. tilt, horizontal displacement, horizontal strain and curvature), and mapping of mining terrain categories based on classification of the values of these parameters as used in Poland. Acknowledgments. This work has been financed from the National Science Centre Project "Development of a numerical method of mining ground deformation modelling in complex geological and mining conditions" UMO-2012/07/B/ST10/04297 executed at the Faculty of Geoengineering, Mining and Geology of the Wroclaw University of Technology (Poland).
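
    The derived parameters follow from spatial derivatives of the interpolated subsidence surface. A minimal sketch assuming a gridded subsidence raster and an illustrative cell size; DIS's own procedures are not reproduced here:

      import numpy as np

      def deformation_parameters(subsidence, cell=25.0):
          # Tilt is the magnitude of the subsidence gradient; curvature is
          # approximated by the Laplacian. The 25 m cell size is assumed.
          dwdy, dwdx = np.gradient(subsidence, cell)
          tilt = np.hypot(dwdx, dwdy)
          curvature = (np.gradient(dwdx, cell, axis=1) +
                       np.gradient(dwdy, cell, axis=0))
          return tilt, curvature

      tilt, curvature = deformation_parameters(np.random.rand(200, 200))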

  3. Planning Robotic Manipulation Strategies for Sliding Objects

    NASA Astrophysics Data System (ADS)

    Peshkin, Michael A.

    Automated planning of grasping or manipulation requires an understanding of both the physics and the geometry of manipulation, and a representation of that knowledge which facilitates the search for successful strategies. We consider manipulation on a level conveyor belt or tabletop, on which a part may slide when touched by a robot. Manipulation plans for a given part must succeed in the face of two types of uncertainty: that of the details of surfaces in contact, and that of the initial configuration of the part. In general the points of contact between the part and the surface it slides on will be unknown, so the motion of the part in response to a push cannot be predicted exactly. Using a simple variational principle (which is derived), we find the set of possible motions of a part for a given push, for all collections of points of contact. The answer emerges as a locus of centers of rotation (CORs). Manipulation plans made using this locus will succeed despite unknown details of contact. Results of experimental tests of the COR loci are presented. Uncertainty in the initial configuration of a part is usually also present. To plan in the presence of uncertainty, configuration maps are defined, which map all configurations of a part before an elementary operation to all possible outcomes, thus encapsulating the physics and geometry of the operation. The configuration map for an operation sequence is a product of configuration maps of elementary operations. Using COR loci we compute configuration maps for elementary sliding operations. Appropriate search techniques are applied to find operation sequences which succeed in the presence of uncertainty in the initial configuration and unknown details of contact. Such operation sequences may be used as parts feeder designs or as manipulation or grasping strategies for robots. As an example we demonstrate the automated design of a class of passive parts feeders consisting of multiple sequential fences across a conveyor belt.

  4. The role of failure modes and effects analysis in showing the benefits of automation in the blood bank.

    PubMed

    Han, Tae Hee; Kim, Moon Jung; Kim, Shinyoung; Kim, Hyun Ok; Lee, Mi Ae; Choi, Ji Seon; Hur, Mina; St John, Andrew

    2013-05-01

    Failure modes and effects analysis (FMEA) is a risk management tool used by the manufacturing industry but now being applied in laboratories. Teams from six South Korean blood banks used this tool to map their manual and automated blood grouping processes and determine the risk priority numbers (RPNs) as a total measure of error risk. The RPNs determined by each of the teams consistently showed that the use of automation dramatically reduced the RPN compared to manual processes. In addition, FMEA showed where the major risks occur in each of the manual processes and where attention should be prioritized to improve the process. Despite no previous experience with FMEA, the teams found the technique relatively easy to use and the subjectivity associated with assigning risk numbers did not affect the validity of the data. FMEA should become a routine technique for improving processes in laboratories. © 2012 American Association of Blood Banks.

  5. Automation process for morphometric analysis of volumetric CT data from pulmonary vasculature in rats.

    PubMed

    Shingrani, Rahul; Krenz, Gary; Molthen, Robert

    2010-01-01

    With advances in medical imaging scanners, it has become commonplace to generate large multidimensional datasets. These datasets require tools for a rapid, thorough analysis. To address this need, we have developed an automated algorithm for morphometric analysis incorporating A Visualization Workshop computational and image processing libraries for three-dimensional segmentation, vascular tree generation and structural hierarchical ordering with a two-stage numeric optimization procedure for estimating vessel diameters. We combine this new technique with our mathematical models of pulmonary vascular morphology to quantify structural and functional attributes of lung arterial trees. Our physiological studies require repeated measurements of vascular structure to determine differences in vessel biomechanical properties between animal models of pulmonary disease. Automation provides many advantages including significantly improved speed and minimized operator interaction and biasing. The results are validated by comparison with previously published rat pulmonary arterial micro-CT data analysis techniques, in which vessels were manually mapped and measured using intense operator intervention. Published by Elsevier Ireland Ltd.

  6. Automated alignment of a reconfigurable optical system using focal-plane sensing and Kalman filtering.

    PubMed

    Fang, Joyce; Savransky, Dmitry

    2016-08-01

    Automation of alignment tasks can provide improved efficiency and greatly increase the flexibility of an optical system. Current optical systems with automated alignment capabilities are typically designed to include a dedicated wavefront sensor. Here, we demonstrate a self-aligning method for a reconfigurable system using only focal plane images. We define a two-lens optical system with 8 degrees of freedom. Images are simulated for given misalignment parameters using ZEMAX software. We perform a principal component analysis on the simulated data set to obtain Karhunen-Loève modes, which form the basis set whose weights are the system measurements. A model function, which maps the state to the measurement, is learned using nonlinear least-squares fitting and serves as the measurement function for the nonlinear estimators (extended and unscented Kalman filters) used to calculate control inputs to align the system. We present and discuss simulated and experimental results of the full system in operation.
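
    The measurement model reduces each focal-plane image to a handful of Karhunen-Loève weights. A minimal sketch of that reduction using scikit-learn PCA with random stand-in images (the ZEMAX simulation and Kalman filtering stages are omitted):

      import numpy as np
      from sklearn.decomposition import PCA

      # Rows are flattened focal-plane images over many misalignment states.
      images = np.random.rand(500, 64 * 64)    # stand-in simulated images
      pca = PCA(n_components=10).fit(images)   # Karhunen-Loève modes

      def measure(image):
          # Project one image onto the KL basis; the weights form the
          # measurement vector consumed by the Kalman filter.
          return pca.transform(image.reshape(1, -1))[0]

      z = measure(np.random.rand(64 * 64))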

  7. Shallow water benthic imaging and substrate characterization using recreational-grade sidescan-sonar

    USGS Publications Warehouse

    Buscombe, Daniel D.

    2017-01-01

    In recent years, lightweight, inexpensive, vessel-mounted ‘recreational grade’ sonar systems have rapidly grown in popularity among aquatic scientists for swath imaging of benthic substrates. To promote an ongoing ‘democratization’ of acoustical imaging of shallow water environments, methods to carry out geometric and radiometric correction and georectification of sonar echograms are presented, based on simplified models for sonar-target geometry and acoustic backscattering and attenuation in shallow water. Procedures are described for automated removal of acoustic shadows, identification of the bed-water interface in situations where the water is too turbid or turbulent for reliable depth echosounding, and automated bed substrate classification based on single-beam full-waveform analysis. These methods are encoded in an open-source and freely available software package, which should further facilitate the use of recreational-grade sidescan sonar in a fully automated and objective manner. The sequential correction, mapping, and analysis steps are demonstrated using a data set from a shallow freshwater environment.

  8. 1990 censuses to increase use of automation.

    PubMed

    Ward, S E

    1988-12-01

    This article summarizes information from selected reports presented at the 12th Population Census Conference. Ward reports that plans for the 1990 census in many countries of Asia and the Pacific call for increased use of automation, with applications ranging from the use of computer-generated maps of enumeration areas and optical mark readers for data processing to desktop publishing and electronic mail for disseminating the results. Recent advances in automation offer opportunities for improved accuracy and speed of census operations while reducing the need for clerical personnel. Most of the technologies discussed at the 12th Population Census Conference are designed to make the planning, editing, processing, analysis, and publication of census data more reliable and efficient. However, technology alone cannot overcome high rates of illiteracy that preclude having respondents complete the census forms themselves. But it enables even China, India, Indonesia and Pakistan - countries with huge populations and limited financial resources - to make significant improvements in their forthcoming censuses.

  9. Drawing road networks with focus regions.

    PubMed

    Haunert, Jan-Henrik; Sering, Leon

    2011-12-01

    Mobile users of maps typically need detailed information about their surroundings plus some context information about remote places. To keep parts of the map from becoming too dense, cartographers have designed mapping functions that enlarge a user-defined focus region--such functions are sometimes called fish-eye projections. The extra map space occupied by the enlarged focus region is compensated by distorting other parts of the map. We argue that, in a map showing a network of roads relevant to the user, distortion should preferably take place in those areas where the network is sparse. Therefore, we do not apply a predefined mapping function. Instead, we consider the road network as a graph whose edges are the road segments. We compute a new spatial mapping with a graph-based optimization approach, minimizing the sum of squared distortions at edges. Our optimization method is based on a convex quadratic program (CQP); CQPs can be solved in polynomial time. Important requirements on the output map are expressed as linear inequalities. In particular, we show how to forbid edge crossings. We have implemented our method in a prototype tool. For instances of different sizes, our method generated output maps that were far less distorted than those generated with a predefined fish-eye projection. Future work is needed to automate the selection of roads relevant to the user. Furthermore, we aim at fast heuristics for application in real-time systems. © 2011 IEEE.
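
    The core of the method is a convex QP over new vertex positions with squared per-edge distortion as the objective. A toy sketch using cvxpy on a hypothetical 4-vertex network; the crossing-prevention inequalities described in the paper are omitted:

      import cvxpy as cp
      import numpy as np

      V = np.array([[0., 0.], [1., 0.], [2., 0.], [2., 1.]])   # toy road network
      edges = [(0, 1), (1, 2), (2, 3)]
      scale = [2.0, 1.0, 1.0]       # edge 0 lies in the enlarged focus region

      x = cp.Variable((4, 2))       # new vertex positions
      cost = 0
      for s, (i, j) in zip(scale, edges):
          target = s * (V[j] - V[i])                    # desired edge vector
          cost += cp.sum_squares((x[j] - x[i]) - target)
      problem = cp.Problem(cp.Minimize(cost), [x[0] == V[0]])  # pin one vertex
      problem.solve()
      print(x.value)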

  10. Automation of Endmember Pixel Selection in SEBAL/METRIC Model

    NASA Astrophysics Data System (ADS)

    Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.

    2015-12-01

    The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
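
    A rule-of-thumb version of endmember selection picks the coldest pixel among dense, well-watered vegetation and the hottest among dry bare soil. The sketch below uses illustrative thresholds and random stand-in rasters; it is not the machine-learning and search procedure the authors propose:

      import numpy as np

      def select_endmembers(lst, ndvi, albedo):
          # Candidate pools defined by assumed NDVI/albedo thresholds.
          cold_pool = (ndvi > 0.7) & (albedo < 0.25)
          hot_pool = (ndvi < 0.2) & (albedo > 0.2)
          cold = np.nanargmin(np.where(cold_pool, lst, np.nan))
          hot = np.nanargmax(np.where(hot_pool, lst, np.nan))
          return (np.unravel_index(cold, lst.shape),
                  np.unravel_index(hot, lst.shape))

      shape = (300, 300)   # stand-in Landsat chip
      cold_px, hot_px = select_endmembers(290 + 30 * np.random.rand(*shape),
                                          np.random.rand(*shape),
                                          0.4 * np.random.rand(*shape))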

  11. Automated liver elasticity calculation for 3D MRE

    NASA Astrophysics Data System (ADS)

    Dzyubak, Bogdan; Glaser, Kevin J.; Manduca, Armando; Ehman, Richard L.

    2017-03-01

    Magnetic Resonance Elastography (MRE) is a phase-contrast MRI technique which calculates quantitative stiffness images, called elastograms, by imaging the propagation of acoustic waves in tissues. It is used clinically to diagnose liver fibrosis. Automated analysis of MRE is difficult as the corresponding MRI magnitude images (which contain anatomical information) are affected by intensity inhomogeneity, motion artifact, and poor tissue and edge contrast. Additionally, areas with low wave amplitude must be excluded. An automated algorithm has already been successfully developed and validated for clinical 2D MRE. 3D MRE acquires substantially more data and, due to accelerated acquisition, has exacerbated image artifacts. Also, the current 3D MRE processing does not yield a confidence map to indicate MRE wave quality and guide ROI selection, as is the case in 2D. In this study, an extension of the 2D automated method, incorporating a simple wave-amplitude metric, was developed and validated against an expert reader in a set of 57 patient exams with both 2D and 3D MRE. The stiffness discrepancy with the expert for 3D MRE was -0.8% +/- 9.45%, better than the discrepancy with the same reader for 2D MRE (-3.2% +/- 10.43%) and better than the inter-reader discrepancy observed in previous studies. There were no automated processing failures in this dataset. Thus, the automated liver elasticity calculation (ALEC) algorithm is able to calculate stiffness from 3D MRE data with minimal bias and good precision, while enabling stiffness measurements to be fully reproducible and easily performed on large 3D MRE datasets.

  12. Matching disease and phenotype ontologies in the ontology alignment evaluation initiative.

    PubMed

    Harrow, Ian; Jiménez-Ruiz, Ernesto; Splendiani, Andrea; Romacker, Martin; Woollard, Peter; Markel, Scott; Alam-Faruque, Yasmin; Koch, Martin; Malone, James; Waaler, Arild

    2017-12-02

    The disease and phenotype track was designed to evaluate the relative performance of ontology matching systems that generate mappings between source ontologies. Disease and phenotype ontologies are important for applications such as data mining, data integration and knowledge management to support translational science in drug discovery and understanding the genetics of disease. Eleven systems (out of 21 OAEI participating systems) were able to cope with at least one of the tasks in the Disease and Phenotype track. The AML, FCA-Map, LogMap(Bio) and PhenoMF systems produced the top results for ontology matching in comparison to consensus alignments. The results against manually curated mappings proved more difficult to match, most likely because these mapping sets comprised mostly subsumption rather than equivalence relationships. Manual assessment of unique equivalence mappings showed that the AML, LogMap(Bio) and PhenoMF systems had the highest precision. Four systems gave the highest performance for matching disease and phenotype ontologies. These systems coped well with the detection of equivalence matches, but struggled to detect semantic similarity. This deserves more attention in the future development of ontology matching systems. The findings of this evaluation show that such systems could help to automate equivalence matching in the workflow of curators, who maintain ontology mapping services in numerous domains such as disease and phenotype.

  13. Automated Plantation Mapping in Indonesia Using Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Karpatne, A.; Jia, X.; Khandelwal, A.; Kumar, V.

    2017-12-01

    Plantation mapping is critical for understanding and addressing deforestation, a key driver of climate change and ecosystem degradation. Unfortunately, most plantation maps are limited to small areas for specific years because they rely on visual inspection of imagery. In this work, we propose a data-driven approach which automatically generates yearly plantation maps for large regions using MODIS multi-spectral data. While traditional machine learning algorithms face manifold challenges in this task, e.g. imperfect training labels, spatio-temporal data heterogeneity, noisy and high-dimensional data, lack of evaluation data, etc., we introduce a novel deep learning-based framework that combines existing imperfect plantation products as training labels and models the spatio-temporal relationships of land covers. We also explore post-processing steps based on a hidden Markov model (HMM) that further improve the detection accuracy. We then conduct an extensive evaluation of the generated plantation maps. Specifically, by randomly sampling and comparing with high-resolution Digital Globe imagery, we demonstrate that the generated plantation maps achieve both high precision and high recall. When compared with existing plantation mapping products, our detection can avoid both false positives and false negatives. Finally, we utilize the generated plantation maps in analyzing the relationship between forest fires and the growth of plantations, which assists in better understanding the causes of deforestation in Indonesia.
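
    The HMM post-processing can be pictured as Viterbi smoothing of each pixel's yearly class probabilities under a sticky transition prior. A minimal sketch with assumed transition probabilities (not the authors' settings):

      import numpy as np

      def viterbi(obs_prob, transition):
          # obs_prob: T x K per-year class probabilities for one pixel;
          # transition: K x K prior that penalises implausible label flips.
          T, K = obs_prob.shape
          logp, logA = np.log(obs_prob + 1e-9), np.log(transition + 1e-9)
          score, back = logp[0].copy(), np.zeros((T, K), int)
          for t in range(1, T):
              cand = score[:, None] + logA   # K x K candidate scores
              back[t] = cand.argmax(axis=0)
              score = cand.max(axis=0) + logp[t]
          path = [int(score.argmax())]
          for t in range(T - 1, 0, -1):      # backtrack the best path
              path.append(int(back[t][path[-1]]))
          return path[::-1]

      A = np.array([[0.95, 0.05],            # assumed sticky prior:
                    [0.02, 0.98]])           # plantations rarely revert
      labels = viterbi(np.random.dirichlet([1, 1], size=15), A)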

  14. MareyMap Online: A User-Friendly Web Application and Database Service for Estimating Recombination Rates Using Physical and Genetic Maps.

    PubMed

    Siberchicot, Aurélie; Bessy, Adrien; Guéguen, Laurent; Marais, Gabriel A B

    2017-10-01

    Given the importance of meiotic recombination in biology, there is a need to develop robust methods to estimate meiotic recombination rates. A popular approach, called the Marey map approach, relies on comparing the genetic and physical maps of a chromosome to estimate local recombination rates. In the past, we implemented this approach in an R package called MareyMap, which includes many functionalities useful for obtaining reliable recombination rate estimates in a semi-automated way. MareyMap has been used repeatedly in studies looking at the effect of recombination on genome evolution. Here, we propose a simpler, user-friendly web service version of MareyMap, called MareyMap Online, which allows users to obtain recombination rates in a few clicks, either from their own data or from a publicly available database that we provide. When the analysis is done, users are asked whether their curated data can be placed in the database and shared with other users, which we hope will make future meta-analyses of recombination rates across many species easy. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
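
    The Marey map idea reduces to estimating the local slope of genetic position against physical position. A minimal sliding-window sketch on synthetic marker data; MareyMap itself offers several more careful estimators:

      import numpy as np

      def recombination_rate(physical_mb, genetic_cm, grid, span=5.0):
          # Slope of a windowed linear fit of the Marey map (cM vs. Mb)
          # approximates the local recombination rate in cM/Mb.
          rates = []
          for x in grid:
              w = np.abs(physical_mb - x) < span / 2
              if w.sum() >= 2:
                  slope = np.polyfit(physical_mb[w], genetic_cm[w], 1)[0]
                  rates.append(max(slope, 0.0))   # rates cannot be negative
              else:
                  rates.append(np.nan)
          return np.array(rates)

      phys = np.sort(50 * np.random.rand(200))    # marker positions, Mb
      gen = np.sort(120 * np.random.rand(200))    # map positions, cM
      rates = recombination_rate(phys, gen, np.linspace(0, 50, 26))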

  15. Predictive Modeling and Mapping of Fish Distributions in Small Streams of the Canadian Rocky Mountain Foothills

    NASA Astrophysics Data System (ADS)

    McCleary, R. J.; Hassan, M. A.

    2006-12-01

    An automated procedure was developed to model spatial fish distributions within small streams in the Foothills of Alberta. Native fish populations and their habitats are susceptible to impacts arising from both industrial forestry and rapid development of petroleum resources in the region. Knowledge of fish distributions and the effects of industrial activities on their habitats is required to help conserve native fish populations. Resource selection function (RSF) models were used to explain presence/absence of fish in small streams. Target species were bull trout, rainbow trout and non-native brook trout. Using GIS, the drainage network was divided into reaches with uniform slope and drainage area, and polygons were created for each reach. Predictor variables described stream size, stream energy, climate and land use. We identified a set of candidate models and selected the best model using a standard Akaike Information Criterion approach. The best models were validated with two external data sets. Drainage area and basin slope parameters were included in all best models. This finding emphasizes the importance of controlling for the energy dimension at the basin scale in investigations into the effects of land use on aquatic resources in this transitional landscape between the mountains and plains. The best model for bull trout indicated a relation between the presence of artificial migration barriers in downstream areas and the extirpation of the species from headwater reaches. We produced reach-scale maps by species and summarized this information within all small catchments across the 12,000 km2 study area. These maps included three categories based on the predicted probability of capture for individual reaches. The high-probability category had 78 percent accuracy in correctly predicting both fish-present and fish-absent reaches. Basin-scale maps highlight specific watersheds likely to support both native bull trout and invasive brook trout, while reach-scale maps indicate specific reaches where interactions between these two species are likely to occur. With regional calibration, this automated modeling and mapping procedure could apply in headwater catchments throughout the Rocky Mountain Foothills and other areas where sporadic waterfalls or other natural migration barriers are not an important feature limiting fish distribution.

  16. Development of an expert analysis tool based on an interactive subsidence hazard map for urban land use in the city of Celaya, Mexico

    NASA Astrophysics Data System (ADS)

    Alloy, A.; Gonzalez Dominguez, F.; Nila Fonseca, A. L.; Ruangsirikulchai, A.; Gentle, J. N., Jr.; Cabral, E.; Pierce, S. A.

    2016-12-01

    Land subsidence as a result of groundwater extraction in central Mexico's larger urban centers began in the 1980s as a result of population and economic growth. The city of Celaya has undergone subsidence for a few decades, and one consequence is the development of an active normal fault system that affects its urban infrastructure and residential areas. To facilitate analysis and land-use decision-making, we created an online interactive map enabling users to easily obtain information associated with land subsidence. Geological and socioeconomic data for the city were collected, including fault locations and population data; other important infrastructure and structural data were obtained from fieldwork as part of a study-abroad undergraduate exchange course. The subsidence and associated faulting hazard map was created using an InSAR-derived subsidence velocity map and population data from INEGI, identifying hazard zones through a spatial analysis based on a subsidence gradient and population risk matrix. This interactive map provides a simple perspective on the different vulnerable urban elements. As an accessible visualization tool, it will enhance communication between scientific and socio-economic disciplines. Our project also lays the groundwork for a future expert analysis system: an open-source, easily accessible, Python-coded, SQLite-database-driven website that archives fault and subsidence data along with visual documentation of damage to civil structures. The database takes field notes through an entry form that enforces uniform datasets, which are used to generate JSON output. Such a database is useful because it gives geoscientists a centralized repository and access to their observations over time. Because the subsidence phenomenon is widespread across cities in central Mexico, the spatial analysis has been automated using the open-source software R. The raster, rgeos, shapefiles, and rgdal libraries were used to develop the script, which generates raster maps of horizontal gradient and population density. An advantage is that this analysis can be automated for periodic updates or repurposed for similar analyses in other cities, providing an easily accessible tool for land subsidence hazard assessments.
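
    The published workflow scripts this analysis in R with the raster, rgeos, shapefiles and rgdal libraries; the sketch below restates the gradient-times-population crossing in Python with placeholder class breaks, purely to illustrate the risk matrix:

      import numpy as np

      def hazard_classes(subsidence_velocity, population, cell=30.0):
          # Cross a subsidence-gradient raster with a population raster via
          # a 3x3 risk matrix; breaks and cell size are placeholders.
          gy, gx = np.gradient(subsidence_velocity, cell)
          gradient = np.hypot(gx, gy)
          g_cls = np.digitize(gradient, [0.001, 0.005])   # low/medium/high
          p_cls = np.digitize(population, [50, 200])      # persons per cell
          risk_matrix = np.array([[0, 1, 1],
                                  [1, 2, 2],
                                  [1, 2, 3]])             # 0 = low ... 3 = critical
          return risk_matrix[g_cls, p_cls]

      risk = hazard_classes(0.05 * np.random.rand(100, 100),
                            400 * np.random.rand(100, 100))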

  17. A Python tool to set up relative free energy calculations in GROMACS.

    PubMed

    Klimovich, Pavel V; Mobley, David L

    2015-11-01

    Free energy calculations based on molecular dynamics (MD) simulations have seen a tremendous growth in the last decade. However, it is still difficult and tedious to set them up in an automated manner, as the majority of the present-day MD simulation packages lack that functionality. Relative free energy calculations are a particular challenge for several reasons, including the problem of finding a common substructure and mapping the transformation to be applied. Here we present a tool, alchemical-setup.py, that automatically generates all the input files needed to perform relative solvation and binding free energy calculations with the MD package GROMACS. When combined with Lead Optimization Mapper (LOMAP; Liu et al. in J Comput Aided Mol Des 27(9):755-770, 2013), recently developed in our group, alchemical-setup.py allows fully automated setup of relative free energy calculations in GROMACS. Taking a graph of the planned calculations and a mapping, both computed by LOMAP, our tool generates the topology and coordinate files needed to perform relative free energy calculations for a given set of molecules, and provides a set of simulation input parameters. The tool was validated by performing relative hydration free energy calculations for a handful of molecules from the SAMPL4 challenge (Mobley et al. in J Comput Aided Mol Des 28(4):135-150, 2014). Good agreement with previously published results and the straightforward way in which free energy calculations can be conducted make alchemical-setup.py a promising tool for automated setup of relative solvation and binding free energy calculations.

  18. Towards automated mapping of lake ice using RADARSAT-2 and simulated RCM compact polarimetric data

    NASA Astrophysics Data System (ADS)

    Duguay, Claude

    2016-04-01

    The Canadian Ice Service (CIS) produces a weekly ice fraction product (a text file with a single lake-wide ice fraction value, in tenths, estimated for about 140 large lakes across Canada and the northern United States) created from the visual interpretation of RADARSAT-2 ScanSAR dual-polarization (HH and HV) imagery, complemented by optical satellite imagery (AVHRR, MODIS and VIIRS). The weekly ice product is generated in support of the Canadian Meteorological Centre (CMC) needs for lake ice coverage in their operational numerical weather prediction model. CIS is interested in moving from its current (manual) way of generating the ice fraction product to a largely automated process. With support from the Canadian Space Agency, a project was recently initiated to assess the potential of polarimetric SAR data for lake ice cover mapping in light of the upcoming RADARSAT Constellation Mission (to be launched in 2018). The main objectives of the project are to evaluate: 1) state-of-the-art image segmentation algorithms and 2) RADARSAT-2 polarimetric and simulated RADARSAT Constellation Mission (RCM) compact polarimetric SAR data for ice/open water discrimination. The goal is to identify the best segmentation algorithm and non-polarimetric/polarimetric parameters for automated lake ice monitoring at CIS. In this talk, we will present the background and context of the study as well as initial results from the analysis of RADARSAT-2 Standard Quad-Pol data acquired during the break-up and freeze-up periods of 2015 on Great Bear Lake, Northwest Territories.

  19. Automated and model-based assembly of an anamorphic telescope

    NASA Astrophysics Data System (ADS)

    Holters, Martin; Dirks, Sebastian; Stollenwerk, Jochen; Loosen, Peter

    2018-02-01

    Since the first use of optical glasses there has been an increasing demand for optical systems which are highly customized for a wide field of applications. To meet the challenge of producing so many unique systems, the development of new techniques and approaches has risen in importance. However, the assembly of precision optical systems with lot sizes of one up to a few tens of systems is still dominated by manual labor. In contrast, highly adaptive and model-based approaches may offer a solution for manufacturing with a high degree of automation and high throughput while maintaining high precision. In this work a model-based automated assembly approach based on ray-tracing is presented. The process runs autonomously and accounts for a wide range of functionality. It first identifies the sequence for an optimized assembly and, second, generates and matches intermediate figures of merit to predict the overall optical functionality of the optical system. The process also generates a digital twin of the optical system by mapping key performance indicators, such as the first and second moments of intensity, into the optical model. This approach is verified by the automatic assembly of an anamorphic telescope within an assembly cell. By continuously measuring and mapping the key performance indicators into the optical model, the quality of the digital twin is determined. Moreover, by measuring the optical quality and geometrical parameters of the telescope, the precision of this approach is determined. Finally, the productivity of the process is evaluated by monitoring the speed of the different steps of the process.
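
    The first and second moments of intensity named as key performance indicators are one-liners to compute from a camera frame. A minimal sketch on a random stand-in image:

      import numpy as np

      def intensity_moments(img):
          # First moment: beam centroid; second central moment: beam widths.
          y, x = np.indices(img.shape)
          total = img.sum()
          cx, cy = (x * img).sum() / total, (y * img).sum() / total
          sx = np.sqrt((((x - cx) ** 2) * img).sum() / total)
          sy = np.sqrt((((y - cy) ** 2) * img).sum() / total)
          return (cx, cy), (sx, sy)

      centroid, widths = intensity_moments(np.random.rand(128, 128))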

  20. Hybrid geomorphological maps as the basis for assessing geoconservation potential in Lech, Vorarlberg (Austria)

    NASA Astrophysics Data System (ADS)

    Seijmonsbergen, Harry; de Jong, Mat; Anders, Niels; de Graaff, Leo; Cammeraat, Erik

    2013-04-01

    Geoconservation potential is, in our approach, closely linked to the spatial distribution of geomorphological sites and thus to geomorphological inventories. Detailed geomorphological maps are translated, using a standardized workflow, into polygonal maps showing the potential geoconservation value of landforms. A new development is to semi-automatically extract geomorphological information in a GIS from high-resolution topographical data, such as LiDAR, and combine this with conventional data types (e.g. airphotos, geological maps) into geomorphological maps. Such hybrid digital geomorphological maps are also easily translated into digital information layers which show the geoconservation potential of an area. We present a protocol for digital geomorphological mapping, illustrated with an example for the municipality of Lech in Vorarlberg (Austria). The protocol consists of 5 steps: 1. data preparation, 2. generating training and validation samples, 3. parameterization, 4. feature extraction, and 5. assessing classification accuracy. The resulting semi-automated digital geomorphological map is then further validated in two ways. Firstly, the map is manually checked with the help of a series of digital datasets (e.g. airphotos) in a digital 3D environment, such as ArcScene. The second validation is a field visit, which preferably occurs in parallel with the digital evaluation so that updates are achieved quickly. The final digital and coded geomorphological information layer is converted into a potential geoconservation map by weighting and ranking the landforms based on four criteria: scientific relevance, frequency of occurrence, disturbance, and environmental vulnerability. The criteria, with predefined scores for the various landform types, are stored in a separate GIS attribute table, which is joined to the attribute table of the hybrid geomorphological information layer in an automated procedure. The results of the assessment can be displayed as the potential geoconservation map or as a GeoPDF in a separate information layer. The Lech example highlights the problems ski resorts in a fragile high-alpine mountain environment are facing. The ongoing development poses a challenge to the communities. What place do the high-ranking potential geoconservation sites get in landscape planning and management? Must they be sacrificed to the economic benefits of winter tourism or, conversely, can their value be exploited in summer tourism - or is their intrinsic value enough to justify protection? Our method is transparent, takes the total landscape into account, and allows for rapid updating of the geodatabase. Evaluating the change in geoconservation potential over time, as a consequence of expansion of infrastructure or changes in the intensity of natural processes, is possible. In addition, model scenarios can be run to assess the impact of man-induced change on the potential geoconservation value of landforms.

  1. High throughput light absorber discovery, Part 2: Establishing structure–band gap energy relationships

    DOE PAGES

    Suram, Santosh K.; Newhouse, Paul F.; Zhou, Lan; ...

    2016-09-23

    Combinatorial materials science strategies have accelerated materials development in a variety of fields, and we extend these strategies to enable structure-property mapping for light absorber materials, particularly in high order composition spaces. High throughput optical spectroscopy and synchrotron X-ray diffraction are combined to identify the optical properties of Bi-V-Fe oxides, leading to the identification of Bi4V1.5Fe0.5O10.5 as a light absorber with direct band gap near 2.7 eV. Here, the strategic combination of experimental and data analysis techniques includes automated Tauc analysis to estimate band gap energies from the high throughput spectroscopy data, providing an automated platform for identifying new optical materials.
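
    Automated Tauc analysis for a direct-allowed transition fits the linear region of (αhν)² against hν and extrapolates to zero. A simplified sketch on a synthetic absorption edge; the pipeline's actual edge detection is more robust than this single-window fit:

      import numpy as np

      def tauc_band_gap(energy_ev, alpha, fit_window=0.3):
          # Direct-gap Tauc plot: y = (alpha*h*nu)^2 is linear above Eg;
          # fit the steepest segment and take its x-intercept as Eg.
          y = (alpha * energy_ev) ** 2
          i = np.gradient(y, energy_ev).argmax()   # steepest point of the edge
          w = (energy_ev >= energy_ev[i]) & (energy_ev <= energy_ev[i] + fit_window)
          slope, intercept = np.polyfit(energy_ev[w], y[w], 1)
          return -intercept / slope

      e = np.linspace(1.5, 3.5, 400)
      alpha = np.sqrt(np.clip(e - 2.7, 0, None)) / e   # synthetic edge, Eg = 2.7 eV
      print(tauc_band_gap(e, alpha))                   # ~2.7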

  2. High throughput light absorber discovery, Part 2: Establishing structure–band gap energy relationships

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suram, Santosh K.; Newhouse, Paul F.; Zhou, Lan

    Combinatorial materials science strategies have accelerated materials development in a variety of fields, and we extend these strategies to enable structure-property mapping for light absorber materials, particularly in high order composition spaces. High throughput optical spectroscopy and synchrotron X-ray diffraction are combined to identify the optical properties of Bi-V-Fe oxides, leading to the identification of Bi4V1.5Fe0.5O10.5 as a light absorber with direct band gap near 2.7 eV. Here, the strategic combination of experimental and data analysis techniques includes automated Tauc analysis to estimate band gap energies from the high throughput spectroscopy data, providing an automated platform for identifying new optical materials.

  3. High Throughput Light Absorber Discovery, Part 2: Establishing Structure-Band Gap Energy Relationships.

    PubMed

    Suram, Santosh K; Newhouse, Paul F; Zhou, Lan; Van Campen, Douglas G; Mehta, Apurva; Gregoire, John M

    2016-11-14

    Combinatorial materials science strategies have accelerated materials development in a variety of fields, and we extend these strategies to enable structure-property mapping for light absorber materials, particularly in high order composition spaces. High throughput optical spectroscopy and synchrotron X-ray diffraction are combined to identify the optical properties of Bi-V-Fe oxides, leading to the identification of Bi4V1.5Fe0.5O10.5 as a light absorber with direct band gap near 2.7 eV. The strategic combination of experimental and data analysis techniques includes automated Tauc analysis to estimate band gap energies from the high throughput spectroscopy data, providing an automated platform for identifying new optical materials.

  4. Proceedings of the international conference on cybernetics and society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-01-01

    This book presents the papers given at a conference on artificial intelligence, expert systems and knowledge bases. Topics considered at the conference included automating expert system development, modeling expert systems, causal maps, data covariances, robot vision, image processing, multiprocessors, parallel processing, VLSI structures, man-machine systems, human factors engineering, cognitive decision analysis, natural language, computerized control systems, and cybernetics.

  5. Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy.

    PubMed

    Welikala, R A; Fraz, M M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A

    2015-07-01

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier. The system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per patch basis and 1.000 and 0.975, respectively, on a per image basis. Copyright © 2015 Elsevier Ltd. All rights reserved.
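
    A toy version of the GA-driven search over 21-D feature vectors fits in a few lines; the sketch below uses random stand-in data, deliberately minimal selection/crossover/mutation operators and a default SVM, so it illustrates only the search loop, not the paper's configuration:

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      def ga_select(X, y, pop=20, gens=15, seed=0):
          # Evolve boolean feature masks scored by cross-validated accuracy.
          rng = np.random.default_rng(seed)
          n = X.shape[1]
          masks = rng.random((pop, n)) < 0.5

          def fitness(m):
              return cross_val_score(SVC(), X[:, m], y, cv=3).mean() if m.any() else 0.0

          for _ in range(gens):
              scores = np.array([fitness(m) for m in masks])
              parents = masks[np.argsort(scores)[-pop // 2:]]      # selection
              cuts = rng.integers(1, n, pop // 2)
              children = np.array([np.concatenate([parents[i % len(parents)][:c],
                                                   parents[(i + 1) % len(parents)][c:]])
                                   for i, c in enumerate(cuts)])   # 1-point crossover
              children ^= rng.random(children.shape) < 0.05        # bit-flip mutation
              masks = np.vstack([parents, children])
          return masks[np.argmax([fitness(m) for m in masks])]

      X, y = np.random.randn(120, 21), np.random.randint(0, 2, 120)
      best_mask = ga_select(X, y)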

  6. HRTEMFringeAnalyzer a free python module for an automated analysis of fringe pattern in transmission electron micrographs.

    PubMed

    Alxneit, Ivo

    2018-03-30

    A Python module (HRTEMFringeAnalyzer) is reported that evaluates the local crystallinity of samples from high-resolution transmission electron microscopy images in a mostly automated fashion. The user only selects the size of a square analyser window and a step size by which the window is translated across the micrograph. Together they define the resolution of the results obtained. Regions where fringe patterns are visible are identified, and their lattice spacing d and direction ϕ, as well as the corresponding mean errors σd and σϕ, are determined. 1/σd is proportional to the coherence length of the structure, whereas σϕ is a measure of how well the direction of the fringes is defined. Maps of these four indicators are computed. The performance of the program is demonstrated on two very different samples: ill-crystalline carbon deposits on a coked Ni/LFNO (reduced LaFe0.8Ni0.2O3±δ) catalyst and well-crystallized nanoparticles of zinc-doped ceria. In the latter case, the automatic segmentation of large aggregates into individual crystalline domains is achieved using the ϕ maps. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.
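
    The per-window measurement is a 2D FFT peak: its radius gives the lattice spacing d and its angle the fringe direction ϕ. A minimal sketch on synthetic fringes, with an assumed pixel scale (the general idea, not HRTEMFringeAnalyzer's exact implementation):

      import numpy as np

      def fringe_d_phi(window, px_per_nm):
          # FFT of a Hann-apodised window; the strongest non-DC peak gives
          # the fringe frequency, hence spacing d (nm) and direction phi.
          taper = np.outer(np.hanning(window.shape[0]), np.hanning(window.shape[1]))
          f = np.fft.fftshift(np.abs(np.fft.fft2(window * taper)))
          cy, cx = f.shape[0] // 2, f.shape[1] // 2
          f[cy - 2:cy + 3, cx - 2:cx + 3] = 0   # suppress the DC region
          ky, kx = np.unravel_index(f.argmax(), f.shape)
          fy, fx = (ky - cy) / window.shape[0], (kx - cx) / window.shape[1]
          freq = np.hypot(fx, fy) * px_per_nm   # cycles per nm
          return 1.0 / freq, np.degrees(np.arctan2(fy, fx))

      yy, xx = np.mgrid[0:64, 0:64]
      fringes = np.cos(2 * np.pi * 0.2 * xx)           # synthetic 5-pixel fringes
      d, phi = fringe_d_phi(fringes, px_per_nm=10.0)   # assumed pixel scale
      print(d, phi)   # d ~ 0.5 nm; phi ~ 0 or 180 deg (conjugate peaks)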

  7. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    NASA Astrophysics Data System (ADS)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.
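
    Independent of the MapReduce parallelization, the strong classifier itself is an alpha-weighted vote over the 15 networks' outputs. A minimal sketch with stand-in outputs and weights:

      import numpy as np

      def adaboost_vote(weak_outputs, alphas):
          # Combine {-1, +1} weak-classifier outputs with Adaboost weights;
          # the strong classifier is the sign of the weighted sum.
          return np.sign(np.dot(alphas, weak_outputs))

      outputs = np.sign(np.random.randn(15, 1000))   # stand-in network outputs
      alphas = np.random.rand(15)                    # stand-in Adaboost weights
      labels = adaboost_vote(outputs, alphas)        # exact ties map to 0 here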

  8. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification.

    PubMed

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.

  9. Synthetic Aperture Radar (SAR)-based paddy rice monitoring system: Development and application in key rice producing areas in Tropical Asia

    NASA Astrophysics Data System (ADS)

    Setiyono, T. D.; Holecz, F.; Khan, N. I.; Barbieri, M.; Quicho, E.; Collivignarelli, F.; Maunahan, A.; Gatti, L.; Romuga, G. C.

    2017-01-01

    Reliable and regular rice information is an essential part of many countries' national accounting processes, but existing systems may not be sufficient to meet the information demand in the context of food security and policy. Synthetic Aperture Radar (SAR) imagery is highly suitable for detecting lowland paddy rice, especially in tropical regions where pervasive cloud cover in the rainy seasons limits the use of optical imagery. This study uses multi-temporal X-band and C-band SAR imagery, automated image processing, rule-based classification and field observations to classify rice in multiple locations across Tropical Asia and assimilate the information into the ORYZA crop growth simulation model (CGSM) to generate high-resolution yield maps. The resulting cultivated rice area maps had classification accuracies above 85%, and yield estimates were within 81-93% agreement with district-level reported yields. The study sites capture much of the diversity in water management, crop establishment and rice maturity durations, and the study demonstrates the feasibility of rice detection, yield monitoring and damage assessment in the case of climate disasters at national and supra-national scales using multi-temporal SAR imagery combined with a CGSM and automated methods.

  10. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    PubMed Central

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520

  11. Active Site Mapping of Xylan-Deconstructing Enzymes with Arabinoxylan Oligosaccharides Produced by Automated Glycan Assembly.

    PubMed

    Senf, Deborah; Ruprecht, Colin; de Kruijff, Goswinus H M; Simonetti, Sebastian O; Schuhmacher, Frank; Seeberger, Peter H; Pfrengle, Fabian

    2017-03-02

    Xylan-degrading enzymes are crucial for the deconstruction of hemicellulosic biomass, making the hydrolysis products available for various industrial applications such as the production of biofuel. To determine the substrate specificities of these enzymes, we prepared a collection of complex xylan oligosaccharides by automated glycan assembly. Seven differentially protected building blocks provided the basis for the modular assembly of 2-substituted, 3-substituted, and 2-/3-substituted arabino- and glucuronoxylan oligosaccharides. Elongation of the xylan backbone relied on iterative additions of C4-fluorenylmethoxylcarbonyl (Fmoc) protected xylose building blocks to a linker-functionalized resin. Arabinofuranose and glucuronic acid residues have been selectively attached to the backbone using fully orthogonal 2-(methyl)naphthyl (Nap) and 2-(azidomethyl)benzoyl (Azmb) protecting groups at the C2 and C3 hydroxyls of the xylose building blocks. The arabinoxylan oligosaccharides are excellent tools to map the active site of glycosyl hydrolases involved in xylan deconstruction. The substrate specificities of several xylanases and arabinofuranosidases were determined by analyzing the digestion products after incubation of the oligosaccharides with glycosyl hydrolases. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Rouleaux red blood cells splitting in microscopic thin blood smear images via local maxima, circles drawing, and mapping with original RBCs.

    PubMed

    Rehman, Amjad; Abbas, Naveed; Saba, Tanzila; Mahmood, Toqeer; Kolivand, Hoshang

    2018-04-10

    Splitting rouleaux RBCs into single RBCs and their further subdivision is a challenging area in computer-assisted diagnosis of blood. This phenomenon is applied in complete blood count, anemia, leukemia, and malaria tests. Several automated techniques are reported in the state of the art for this task but face either under- or over-splitting problems. The current research presents a novel approach to precisely split rouleaux red blood cells (chains of RBCs), which are frequently observed in thin blood smear images. Accordingly, this research addresses the rouleaux splitting problem in a realistic, efficient and automated way by considering the distance transform and local maxima of the rouleaux RBCs. Rouleaux RBCs are split by taking their local maxima as the centres for drawing circles with the midpoint circle algorithm. The resulting circles are further mapped onto the single RBCs in the rouleaux to preserve their original shape. The results of the proposed approach on a standard data set are presented and analyzed statistically against ground truth obtained by visual inspection, achieving an average recall of 0.059, an average precision of 0.067 and an F-measure of 0.063. © 2018 Wiley Periodicals, Inc.
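
    The midpoint circle algorithm named above is a standard integer rasterization; a self-contained sketch that returns the raster points of one splitting circle centred on a local maximum:

      def midpoint_circle(cx, cy, r):
          """Raster points of a circle of radius r centred at (cx, cy)."""
          points = set()
          x, y, d = r, 0, 1 - r
          while x >= y:
              # the eight octant-symmetric points for the current (x, y)
              for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                             (x, -y), (y, -x), (-x, -y), (-y, -x)]:
                  points.add((cx + px, cy + py))
              y += 1
              if d < 0:
                  d += 2 * y + 1
              else:
                  x -= 1
                  d += 2 * (y - x) + 1
          return points

      # e.g. a circle of radius 5 around a local maximum detected at (12, 12)
      print(sorted(midpoint_circle(12, 12, 5))[:5])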

  13. A Free Database of Auto-detected Full-sun Coronal Hole Maps

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Downs, C.; Linker, J.

    2016-12-01

    We present a 4-yr (06/10/2010 to 08/18/2014 at 6-hr cadence) database of full-sun synchronic EUV and coronal hole (CH) maps made available on a dedicated web site (http://www.predsci.com/chd). The maps are generated using STEREO/EUVI A&B 195 Å and SDO/AIA 193 Å images through an automated pipeline (Caplan et al. (2016), ApJ 823, 53). Specifically, the original data is preprocessed with PSF deconvolution, a nonlinear limb-brightening correction, and a nonlinear inter-instrument intensity normalization. Coronal holes are then detected in the preprocessed images using a GPU-accelerated region-growing segmentation algorithm. The final results from all three instruments are then merged and projected to form full-sun sine-latitude maps. All the software used in processing the maps is provided and can easily be adapted for use with other instruments and channels. We describe the data pipeline and show examples from the database. We also detail recent CH-detection validation experiments using synthetic EUV emission images produced from global thermodynamic MHD simulations.
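
    A minimal sketch of intensity-based region growing of the kind used for the CH detection step (threshold values are illustrative placeholders, and the pipeline's GPU acceleration is omitted):

      import numpy as np
      from collections import deque

      def region_grow(img, seed_thresh, grow_thresh):
          """Grow from dark seed pixels while 4-neighbours stay below grow_thresh."""
          mask = np.zeros(img.shape, dtype=bool)
          queue = deque(zip(*np.where(img < seed_thresh)))   # dark seeds
          while queue:
              i, j = queue.popleft()
              if mask[i, j]:
                  continue
              mask[i, j] = True
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ni, nj = i + di, j + dj
                  if (0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]
                          and not mask[ni, nj] and img[ni, nj] < grow_thresh):
                      queue.append((ni, nj))
          return mask

      # synthetic EUV-like image: a dark disk (coronal hole) on a brighter corona
      img = np.full((100, 100), 200.0)
      yy, xx = np.ogrid[:100, :100]
      img[(yy - 50) ** 2 + (xx - 50) ** 2 < 15 ** 2] = 40.0
      ch_mask = region_grow(img, seed_thresh=50, grow_thresh=80)
      print(ch_mask.sum(), "pixels flagged as coronal hole")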

  14. Nondestructive Evaluation of Concrete Bridge Decks with Automated Acoustic Scanning System and Ground Penetrating Radar.

    PubMed

    Sun, Hongbin; Pashoutani, Sepehr; Zhu, Jinying

    2018-06-16

    Delaminations and reinforcement corrosion are two common problems in concrete bridge decks. No single nondestructive testing (NDT) method is able to provide a comprehensive characterization of these defects. In this work, two NDT methods, acoustic scanning and Ground Penetrating Radar (GPR), were used to image a straight concrete bridge deck and a curved intersection ramp bridge. An acoustic scanning system has been developed for rapid delamination mapping. The system consists of metal-ball excitation sources, air-coupled sensors, and a GPS positioning system. The acoustic scanning results are presented as a two-dimensional image based on the energy map in the frequency range of 0.5-5 kHz. The GPR scanning results are expressed as a GPR signal attenuation map to characterize concrete deterioration and reinforcement corrosion. Signal processing algorithms for both methods are discussed. Delamination maps from the acoustic scanning are compared with deterioration maps from the GPR scanning on both bridges. The results demonstrate that combining the acoustic and GPR scanning results provides a complementary and comprehensive evaluation of concrete bridge decks.
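
    The acoustic energy map can be understood as one band-limited energy value per scan location; a sketch under that assumption, using a sampled impact response and the 0.5-5 kHz band quoted above:

      import numpy as np

      def band_energy(signal, fs, f_lo=500.0, f_hi=5000.0):
          """Spectral energy of `signal` (sampled at fs Hz) within [f_lo, f_hi]."""
          spec = np.abs(np.fft.rfft(signal)) ** 2
          freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
          band = (freqs >= f_lo) & (freqs <= f_hi)
          return spec[band].sum()

      # e.g. a 2 kHz flexural-mode ring-down falls squarely inside the band
      fs = 50_000
      t = np.arange(0, 0.02, 1 / fs)
      ping = np.sin(2 * np.pi * 2000 * t) * np.exp(-t / 0.005)
      print(band_energy(ping, fs))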

  15. Object-based image analysis for cadastral mapping using satellite images

    NASA Astrophysics Data System (ADS)

    Kohli, D.; Crommelinck, S.; Bennett, R.; Koeva, M.; Lemmen, C.

    2017-10-01

    Cadasters, together with the land registry, form a core ingredient of any land administration system. Cadastral maps comprise the extent, ownership and value of land, which are essential for recording and updating land records. Traditional methods for cadastral surveying and mapping often prove to be labor-, cost- and time-intensive: alternative approaches are thus being researched for creating such maps. With the advent of very high resolution (VHR) imagery, satellite remote sensing offers a tremendous opportunity for the (semi-)automated detection of cadastral boundaries. In this paper, we explore the potential of the object-based image analysis (OBIA) approach for this purpose by applying two segmentation methods, i.e. MRS (multi-resolution segmentation) and ESP (estimation of scale parameter), to identify visible cadastral boundaries. Results show that a balance between a high percentage of completeness and correctness is hard to achieve: a low error of commission often comes with a high error of omission. However, we conclude that the resulting segments/land-use polygons can potentially be used as a base for further aggregation into tenure polygons using participatory mapping.

  16. Applying data fusion techniques for benthic habitat mapping and monitoring in a coral reef ecosystem

    NASA Astrophysics Data System (ADS)

    Zhang, Caiyun

    2015-06-01

    Accurate mapping and effective monitoring of benthic habitat in the Florida Keys are critical to developing management strategies for this valuable coral reef ecosystem. For this study, a framework was designed for automated benthic habitat mapping by combining multiple data sources (hyperspectral, aerial photography, and bathymetry data) and four contemporary image processing techniques (data fusion, Object-based Image Analysis (OBIA), machine learning, and ensemble analysis). In the framework, the 1-m digital aerial photograph was first merged with 17-m hyperspectral imagery and 10-m bathymetry data using a pixel/feature-level fusion strategy. The fused dataset was then preclassified by three machine learning algorithms (Random Forest, Support Vector Machines, and k-Nearest Neighbor). Final object-based habitat maps were produced through ensemble analysis of the outcomes from the three classifiers. The framework was tested for classifying group-level (3-class) and code-level (9-class) habitats in a portion of the Florida Keys. Informative and accurate habitat maps were achieved, with overall accuracies of 88.5% and 83.5% for the group-level and code-level classifications, respectively.
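
    A compact sketch of the pre-classification and ensemble step, assuming scikit-learn implementations of the three named classifiers and majority voting as one plausible form of the paper's ensemble analysis (inputs are synthetic stand-ins for the fused image features):

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier, VotingClassifier
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC

      # synthetic stand-in for fused hyperspectral/photo/bathymetry features
      X, y = make_classification(n_samples=300, n_features=12, n_classes=3,
                                 n_informative=6, random_state=1)
      ensemble = VotingClassifier(
          estimators=[("rf", RandomForestClassifier(random_state=1)),
                      ("svm", SVC(random_state=1)),
                      ("knn", KNeighborsClassifier())],
          voting="hard")                  # majority vote across the 3 classifiers
      habitat_labels = ensemble.fit(X, y).predict(X)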

  17. Digitizing zone maps, using modified LARSYS program. [computer graphics and computer techniques for mapping

    NASA Technical Reports Server (NTRS)

    Giddings, L.; Boston, S.

    1976-01-01

    A method for digitizing zone maps is presented, starting with colored images and producing a final one-channel digitized tape. This method automates the work previously done interactively on the Image-100 and Data Analysis System computers of the Johnson Space Center (JSC) Earth Observations Division (EOD). A color-coded map was digitized through color filters on a scanner to form a digital tape in LARSYS-2 or JSC Universal format. The taped image was classified by the EOD LARSYS program on the basis of training fields included in the image. Numerical values were assigned to all pixels in a given class, and the resulting coded zone map was written on a LARSYS or Universal tape. A unique spatial filter option permitted zones to be made homogeneous and edges of zones to be abrupt transitions from one zone to the next. A zoom option allowed the output image to have arbitrary dimensions in terms of number of lines and number of samples on a line. Printouts of the computer program are given and the images that were digitized are shown.

  18. Mineral Potential in India Using Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) Data

    NASA Astrophysics Data System (ADS)

    Oommen, T.; Chatterjee, S.

    2017-12-01

    NASA and the Indian Space Research Organization (ISRO) are generating Earth surface feature data using the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) within the 380 to 2500 nm spectral range. This research focuses on the utilization of such data to better understand the mineral potential in India and to demonstrate the application of spectral data in rock type discrimination and mapping for mineral exploration using automated mapping techniques. The primary focus area of this research is the Hutti-Maski greenstone belt, located in Karnataka, India. The AVIRIS-NG data were integrated with field-analyzed data (laboratory-scale compositional analysis, mineralogy, and spectral libraries) to characterize minerals and rock types. An expert system was developed to produce mineral maps from AVIRIS-NG data automatically. The ground truth data from the study areas were obtained from the existing literature and collaborators in India. A Bayesian spectral unmixing algorithm was applied to the AVIRIS-NG data for endmember selection. The classification maps of the minerals and rock types were developed using a support vector machine algorithm. The ground truth data were used to verify the mineral maps.

  19. A regional ionospheric TEC mapping technique over China and adjacent areas on the basis of data assimilation

    NASA Astrophysics Data System (ADS)

    Aa, Ercha; Huang, Wengeng; Yu, Shimei; Liu, Siqing; Shi, Liqin; Gong, Jiancun; Chen, Yanhong; Shen, Hua

    2015-06-01

    In this paper, a regional total electron content (TEC) mapping technique over China and adjacent areas (70°E-140°E and 15°N-55°N) is developed on the basis of a Kalman filter data assimilation scheme driven by Global Navigation Satellite Systems (GNSS) data from the Crustal Movement Observation Network of China and International GNSS Service. The regional TEC maps can be generated accordingly with the spatial and temporal resolution being 1°×1° and 5 min, respectively. The accuracy and quality of the TEC mapping technique have been validated through the comparison with GNSS observations, the International Reference Ionosphere model values, the global ionosphere maps from Center for Orbit Determination of Europe, and the Massachusetts Institute of Technology Automated Processing of GPS TEC data from Madrigal database. The verification results indicate that great systematic improvements can be obtained when data are assimilated into the background model, which demonstrates the effectiveness of this technique in providing accurate regional specification of the ionospheric TEC over China and adjacent areas.
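
    The assimilation step can be sketched as a single Kalman analysis update, in which GNSS-derived TEC observations y correct a background model state xb through an observation operator H; the dimensions and covariances B and R below are illustrative, not those of the operational scheme:

      import numpy as np

      def kalman_update(xb, B, y, H, R):
          """xb: background TEC grid (n,); B: background covariance (n, n);
          y: observations (m,); H: observation operator (m, n); R: obs covariance."""
          S = H @ B @ H.T + R                      # innovation covariance
          K = B @ H.T @ np.linalg.inv(S)           # Kalman gain
          xa = xb + K @ (y - H @ xb)               # analysis state
          Pa = (np.eye(len(xb)) - K @ H) @ B       # analysis covariance
          return xa, Pa

      n, m = 16, 4                                 # tiny grid, few observations
      rng = np.random.default_rng(2)
      xb, B = np.full(n, 20.0), np.eye(n) * 4.0    # background: 20 TECU everywhere
      H = np.zeros((m, n))
      H[np.arange(m), rng.choice(n, m, replace=False)] = 1.0
      y = H @ xb + rng.normal(0, 1, m) + 3.0       # obs say TEC is ~3 TECU higher
      xa, _ = kalman_update(xb, B, y, H, np.eye(m))
      print(xb.mean(), "->", xa.mean())            # analysis drawn toward the obs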

  20. Pathview: an R/Bioconductor package for pathway-based data integration and visualization.

    PubMed

    Luo, Weijun; Brouwer, Cory

    2013-07-15

    Pathview is a novel tool set for pathway-based data integration and visualization. It maps and renders user data on relevant pathway graphs. Users only need to supply their data and specify the target pathway. Pathview automatically downloads the pathway graph data, parses the data file, maps and integrates user data onto the pathway and renders pathway graphs with the mapped data. Although built as a stand-alone program, Pathview may seamlessly integrate with pathway and functional analysis tools for large-scale and fully automated analysis pipelines. The package is freely available under the GPLv3 license through Bioconductor and R-Forge, at http://bioconductor.org/packages/release/bioc/html/pathview.html and at http://Pathview.r-forge.r-project.org/. Contact: luo_weijun@yahoo.com. Supplementary data are available at Bioinformatics online.

  1. High Throughput T Epitope Mapping and Vaccine Development

    PubMed Central

    Li Pira, Giuseppina; Ivaldi, Federico; Moretti, Paolo; Manca, Fabrizio

    2010-01-01

    Mapping of antigenic peptide sequences from proteins of relevant pathogens recognized by T helper (Th) and by cytolytic T lymphocytes (CTL) is crucial for vaccine development. In fact, mapping of T-cell epitopes provides useful information for the design of peptide-based vaccines and of peptide libraries to monitor specific cellular immunity in protected individuals, patients and vaccinees. Nevertheless, epitope mapping is a challenging task. In fact, large panels of overlapping peptides need to be tested with lymphocytes to identify the sequences that induce a T-cell response. Since numerous peptide panels from antigenic proteins are to be screened, lymphocytes available from human subjects are a limiting factor. To overcome this limitation, high throughput (HTP) approaches based on miniaturization and automation of T-cell assays are needed. Here we consider the most recent applications of the HTP approach to T epitope mapping. The alternative or complementary use of in silico prediction and experimental epitope definition is discussed in the context of the recent literature. The currently used methods are described with special reference to the possibility of applying the HTP concept to make epitope mapping an easier procedure in terms of time, workload, reagents, cells and overall cost. PMID:20617148

  2. High resolution critical habitat mapping and classification of tidal freshwater wetlands in the ACE Basin

    NASA Astrophysics Data System (ADS)

    Strickland, Melissa Anne

    In collaboration with the South Carolina Department of Natural Resources ACE Basin National Estuarine Research Reserve (ACE Basin NERR), the tidal freshwater ecosystems along the South Edisto River in the ACE Basin are being accurately mapped and classified using a LIDAR-Remote Sensing Fusion technique that integrates LAS LIDAR data into texture images and then merges the elevation textures and multispectral imagery for very high resolution mapping. This project discusses the development and refinement of an ArcGIS Toolbox capable of automating protocols and procedures for marsh delineation and microhabitat identification. The result is a high resolution habitat and land use map used for the identification of threatened habitat. Tidal freshwater wetlands are also a critical habitat for colonial wading birds and an accurate assessment of community diversity and acreage of this habitat type in the ACE Basin will support SCDNR's conservation and protection efforts. The maps developed by this study will be used to better monitor the freshwater/saltwater interface and establish a baseline for an ACE NERR monitoring program to track the rates and extent of alterations due to projected environmental stressors. Preliminary ground-truthing in the field will provide information about the accuracy of the mapping tool.

  3. Topographic mapping data semantics through data conversion and enhancement: Chapter 7

    USGS Publications Warehouse

    Varanka, Dalia; Carter, Jonathan; Usery, E. Lynn; Shoberg, Thomas; Edited by Ashish, Naveen; Sheth, Amit P.

    2011-01-01

    This paper presents research on the semantics of topographic data for triples and ontologies to blend the capabilities of the Semantic Web and The National Map of the U.S. Geological Survey. Automated conversion of relational topographic data of several geographic sample areas to the triple data model standard resulted in relatively poor semantic associations. Further research employed vocabularies of feature type and spatial relation terms. A user interface was designed to model the capture of non-standard terms relevant to public users and to map those terms to existing data models of The National Map through the use of ontology. Server access for the study area triple stores was made publicly available, illustrating how the development of linked data may transform institutional policies to open government data resources to the public. This paper presents these data conversion and research techniques that were tested as open linked data concepts leveraged through a user-centered interface and open USGS server access to the public.

  4. Effective electron-density map improvement and structure validation on a Linux multi-CPU web cluster: The TB Structural Genomics Consortium Bias Removal Web Service.

    PubMed

    Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard

    2003-12-01

    Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.

  5. Column ratio mapping: a processing technique for atomic resolution high-angle annular dark-field (HAADF) images.

    PubMed

    Robb, Paul D; Craven, Alan J

    2008-12-01

    An image processing technique is presented for atomic resolution high-angle annular dark-field (HAADF) images that have been acquired using scanning transmission electron microscopy (STEM). This technique is termed column ratio mapping and involves the automated measurement of atomic column intensity ratios in high-resolution HAADF images. It was developed to provide a fuller analysis of HAADF images than the usual method of drawing single intensity line profiles across a few areas of interest. For instance, column ratio mapping reveals the compositional distribution across the whole HAADF image and allows a statistical analysis and an estimation of errors. This has proven to be a very valuable technique, as it can provide a more detailed assessment of the sharpness of interfacial structures from HAADF images. The technique of column ratio mapping is demonstrated on a [110]-oriented zinc-blende-structured AlAs/GaAs superlattice using the 1 Å-scale resolution capability of the aberration-corrected SuperSTEM 1 instrument.

  6. Case study of rotating sonar sensor application in unmanned automated guided vehicle

    NASA Astrophysics Data System (ADS)

    Chandak, Pravin; Cao, Ming; Hall, Ernest L.

    2001-10-01

    A single rotating sonar element is used with a restricted angle of sweep to obtain readings and develop a range map of the unobstructed path of an autonomous guided vehicle (AGV). A Polaroid ultrasound transducer element is mounted on a micromotor with encoder feedback. The motion of this motor is controlled using a Galil DMC 1000 motion control board. The encoder is interfaced with the DMC 1000 board using an intermediate IMC 1100 break-out board. By adjusting the parameters of the Polaroid element, it is possible to obtain range readings at known angles with respect to the center of the robot. The readings are mapped to obtain a range map of the unobstructed path in front of the robot. The idea can be extended to 360-degree mapping by changing the assembly-level programming on the Galil motion control board. Such a system would be compact and reliable over a range of environments and AGV applications.
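
    Converting the swept readings into a range map is a polar-to-Cartesian transformation; a minimal sketch, assuming one range reading per encoder step over the restricted sweep:

      import math

      def range_map(readings, sweep_start_deg, step_deg):
          """readings: ranges (m) taken every step_deg starting at sweep_start_deg,
          measured from the robot centre; returns (x, y) points of the free path."""
          points = []
          for k, r in enumerate(readings):
              theta = math.radians(sweep_start_deg + k * step_deg)
              points.append((r * math.cos(theta), r * math.sin(theta)))
          return points

      # a 90-degree restricted sweep in 15-degree steps
      print(range_map([2.0, 2.1, 1.8, 3.5, 3.4, 3.3, 2.9], -45, 15))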

  7. G2S: a web-service for annotating genomic variants on 3D protein structures.

    PubMed

    Wang, Juexin; Sheridan, Robert; Sumer, S Onur; Schultz, Nikolaus; Xu, Dong; Gao, Jianjiong

    2018-06-01

    Accurately mapping and annotating genomic locations on 3D protein structures is a key step in structure-based analysis of genomic variants detected by recent large-scale sequencing efforts. There are several mapping resources currently available, but none of them provides a web API (Application Programming Interface) that supports programmatic access. We present G2S, a real-time web API that provides automated mapping of genomic variants on 3D protein structures. G2S can align genomic locations of variants, protein locations, or protein sequences to protein structures and retrieve the mapped residues from structures. The G2S API uses a REST-inspired design and can be used by various clients such as web browsers, command terminals, programming languages and other bioinformatics tools for bringing 3D structures into genomic variant analysis. The webserver and source code are freely available at https://g2s.genomenexus.org. Contact: g2s@genomenexus.org. Supplementary data are available at Bioinformatics online.

  8. Automated geo/ortho registered aerial imagery product generation using the mapping system interface card (MSIC)

    NASA Astrophysics Data System (ADS)

    Bratcher, Tim; Kroutil, Robert; Lanouette, André; Lewis, Paul E.; Miller, David; Shen, Sylvia; Thomas, Mark

    2013-05-01

    The development concept paper for the MSIC system was first introduced in August 2012 by these authors. This paper describes the final assembly, testing, and commercial availability of the Mapping System Interface Card (MSIC). The 2.3kg MSIC is a self-contained, compact variable configuration, low cost real-time precision metadata annotator with embedded INS/GPS designed specifically for use in small aircraft. The MSIC was specifically designed to convert commercial-off-the-shelf (COTS) digital cameras and imaging/non-imaging spectrometers with Camera Link standard data streams into mapping systems for airborne emergency response and scientific remote sensing applications. COTS digital cameras and imaging/non-imaging spectrometers covering the ultraviolet through long-wave infrared wavelengths are important tools now readily available and affordable for use by emergency responders and scientists. The MSIC will significantly enhance the capability of emergency responders and scientists by providing a direct transformation of these important COTS sensor tools into low-cost real-time aerial mapping systems.

  9. Rapid Automated Quantification of Cerebral Leukoaraiosis on CT Images: A Multicenter Validation Study.

    PubMed

    Chen, Liang; Carlton Jones, Anoma Lalani; Mair, Grant; Patel, Rajiv; Gontsarova, Anastasia; Ganesalingam, Jeban; Math, Nikhil; Dawson, Angela; Aweid, Basaam; Cohen, David; Mehta, Amrish; Wardlaw, Joanna; Rueckert, Daniel; Bentley, Paul

    2018-05-15

    Purpose To validate a random forest method for segmenting cerebral white matter lesions (WMLs) on computed tomographic (CT) images in a multicenter cohort of patients with acute ischemic stroke, by comparison with fluid-attenuated inversion recovery (FLAIR) magnetic resonance (MR) images and expert consensus. Materials and Methods A retrospective sample of 1082 acute ischemic stroke cases was obtained, composed of unselected patients who were treated with thrombolysis or who underwent contemporaneous MR imaging and CT, and a subset of International Stroke Thrombolysis-3 trial participants. Automated delineations of WML were validated relative to experts' manual tracings on CT images and co-registered FLAIR MR imaging, and ratings were performed using two conventional ordinal scales. Analyses included correlations between CT and MR imaging volumes, and agreements between automated and expert ratings. Results Automated WML volumes correlated strongly with expert-delineated WML volumes at MR imaging and CT (r² = 0.85 and 0.71, respectively; P < .001). Spatial similarity of automated maps, relative to WML MR imaging, was not significantly different from that of expert WML tracings on CT images. Individual expert WML volumes at CT correlated well with each other (r² = 0.85) but varied widely (range, 91% of mean estimate; median estimate, 11 mL; range of estimated ranges, 0.2-68 mL). Agreements (κ) between automated ratings and consensus ratings were 0.60 (Wahlund system) and 0.64 (van Swieten system), compared with agreements between individual pairs of experts of 0.51 and 0.67, respectively, for the two rating systems (P < .01 for the Wahlund system comparison of agreements). Accuracy was unaffected by established infarction, acute ischemic changes, or atrophy (P > .05). The automated preprocessing failure rate was 4%; rating errors occurred in a further 4%. Total automated processing time averaged 109 seconds (range, 79-140 seconds). Conclusion An automated method for quantifying CT cerebral white matter lesions achieves a similar accuracy to experts in unselected and multicenter cohorts. © RSNA, 2018. Online supplemental material is available for this article.

  10. Automating the selection of standard parallels for conic map projections

    NASA Astrophysics Data System (ADS)

    Šavrič, Bojan; Jenny, Bernhard

    2016-05-01

    Conic map projections are appropriate for mapping regions at medium and large scales with east-west extents at intermediate latitudes. Conic projections are appropriate for these cases because they show the mapped area with less distortion than other projections. In order to minimize the distortion of the mapped area, the two standard parallels of conic projections need to be selected carefully. Rules of thumb exist for placing the standard parallels based on the width-to-height ratio of the map. These rules of thumb are simple to apply, but do not result in maps with minimum distortion. More sophisticated methods also exist that determine standard parallels such that distortion in the mapped area is minimized, but they are computationally expensive and cannot be used for real-time web mapping and GIS applications where the projection is adjusted automatically to the displayed area. This article presents a polynomial model that quickly provides the standard parallels for the three most common conic map projections: the Albers equal-area, the Lambert conformal, and the equidistant conic projection. The model defines the standard parallels with polynomial expressions based on the spatial extent of the mapped area. The spatial extent is defined by the length of the mapped central meridian segment, the central latitude of the displayed area, and the width-to-height ratio of the map. The polynomial model was derived from 3825 maps, each with a different spatial extent and with computationally determined standard parallels that minimize the mean scale distortion index. The resulting model is computationally simple and can be used for the automatic selection of the standard parallels of conic map projections in GIS software and web mapping applications.
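
    The paper's fitted polynomial coefficients are not reproduced in the abstract, so the sketch below shows only the simpler rule of thumb it improves upon: placing the standard parallels one sixth of the latitude range inside the map's top and bottom edges.

      def standard_parallels_rule_of_thumb(lat_south, lat_north):
          """Classic 1/6 rule for the two standard parallels of a conic projection."""
          span = lat_north - lat_south
          return lat_south + span / 6.0, lat_north - span / 6.0

      # e.g. a conterminous-US extent (~24N-49N) gives roughly 28.2N and 44.8N,
      # close to the conventional Albers parallels of 29.5N and 45.5N
      print(standard_parallels_rule_of_thumb(24.0, 49.0))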

  11. An Open Source approach to automated hydrological analysis of ungauged drainage basins in Serbia using R and SAGA

    NASA Astrophysics Data System (ADS)

    Zlatanovic, Nikola; Milovanovic, Irina; Cotric, Jelena

    2014-05-01

    Drainage basins are for the most part ungauged or poorly gauged, not only in Serbia but in most parts of the world, usually due to insufficient funds but also to the decommissioning of river gauges in upland catchments to focus on downstream areas, which are more populated. Very often, design discharges are needed for these streams or rivers where no streamflow data are available, for various applications. Examples include river training works for flood protection measures or erosion control, design of culverts, water supply facilities, small hydropower plants etc. The estimation of discharges in ungauged basins is most often performed using rainfall-runoff models, whose parameters rely heavily on geomorphometric attributes of the basin (e.g. catchment area, elevation, slopes of channels and hillslopes etc.). The calculation of these, as well as other parameters, is most often done in GIS (Geographic Information System) software environments. This study deals with the application of freely available and open source software and datasets to automating rainfall-runoff analysis of ungauged basins using methodologies currently in use in hydrological practice. The R programming language was used for scripting and automating the hydrological calculations, coupled with SAGA GIS (System for Automated Geoscientific Analyses) for geocomputing functions and terrain analysis. Datasets used in the analyses include the freely available SRTM (Shuttle Radar Topography Mission) terrain data, CORINE (Coordination of Information on the Environment) Land Cover data, as well as soil maps and rainfall data. The choice of free and open source software and datasets makes the project ideal for academic and research purposes and cross-platform projects. The geomorphometric module was tested on more than 100 catchments throughout Serbia and compared to manually calculated values (using topographic maps). The discharge estimation module was tested on 21 catchments where data were available and compared to results obtained by frequency analysis of annual maximum discharges. The geomorphometric module of the calculation system showed excellent results, saving a great deal of time that would otherwise have been spent on manual processing of geospatial data. The type of automated analysis presented in this study will enable much quicker hydrologic analysis of multiple watersheds, providing a platform for further research into the spatial variability of runoff.

  12. Automated aortic calcification detection in low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    The extent of aortic calcification has been shown to be a risk indicator for vascular events, including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose, non-contrast, non-ECG-gated chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then, based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume scores are 98.46% and 98.28% correlated with the reference mass and volume scores, respectively.
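
    A sketch of Agatston-style scoring as described above, assuming scipy connected-component labelling for candidate detection and the elevated 160 HU detection threshold quoted in the abstract (130 HU is the conventional value); the density weights are the standard Agatston bands:

      import numpy as np
      from scipy import ndimage

      def agatston_score(slice_hu, pixel_area_mm2, thresh=160):
          """Per-slice Agatston-style score from a HU image and pixel area."""
          labels, n = ndimage.label(slice_hu >= thresh)    # candidate lesions
          score = 0.0
          for lesion in range(1, n + 1):
              region = labels == lesion
              peak = slice_hu[region].max()
              # standard density weight from the lesion's peak HU
              w = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
              score += region.sum() * pixel_area_mm2 * w
          return score

      slice_hu = np.zeros((64, 64))
      slice_hu[10:13, 10:13] = 350.0                       # toy calcified lesion
      print(agatston_score(slice_hu, pixel_area_mm2=0.5))  # 9 px * 0.5 mm2 * 3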

  13. Automated segmentation of cardiac visceral fat in low-dose non-contrast chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Liang, Mingzhu; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.

    2015-03-01

    Cardiac visceral fat was segmented from low-dose non-contrast chest CT images using a fully automated method. Cardiac visceral fat is defined as the fatty tissues surrounding the heart region, enclosed by the lungs and posterior to the sternum. It is measured by constraining the heart region with an Anatomy Label Map that contains robust segmentations of the lungs and other major organs and estimating the fatty tissue within this region. The algorithm was evaluated on 124 low-dose and 223 standard-dose non-contrast chest CT scans from two public datasets. Based on visual inspection, 343 cases had good cardiac visceral fat segmentation. For quantitative evaluation, manual markings of cardiac visceral fat regions were made in 3 image slices for 45 low-dose scans and the Dice similarity coefficient (DSC) was computed. The automated algorithm achieved an average DSC of 0.93. Cardiac visceral fat volume (CVFV), heart region volume (HRV) and their ratio were computed for each case. The correlation between cardiac visceral fat measurement and coronary artery and aortic calcification was also evaluated. Results indicated the automated algorithm for measuring cardiac visceral fat volume may be an alternative method to the traditional manual assessment of thoracic region fat content in the assessment of cardiovascular disease risk.
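
    The evaluation metric used above is the Dice similarity coefficient, twice the overlap divided by the summed mask sizes; a minimal sketch:

      import numpy as np

      def dice(a, b):
          """Dice similarity between automated mask a and manual mask b."""
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      a = np.zeros((8, 8)); a[2:6, 2:6] = 1
      b = np.zeros((8, 8)); b[3:7, 3:7] = 1
      print(dice(a, b))   # 9 overlapping pixels of 16 + 16 -> 0.5625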

  14. Automated kidney morphology measurements from ultrasound images using texture and edge analysis

    NASA Astrophysics Data System (ADS)

    Ravishankar, Hariharan; Annangi, Pavan; Washburn, Michael; Lanning, Justin

    2016-04-01

    In a typical ultrasound scan, a sonographer measures kidney morphology to assess renal abnormalities. Kidney morphology can also help to discriminate between chronic and acute kidney failure. The caliper placements and volume measurements are often time consuming, and an automated solution will help to improve accuracy, repeatability and throughput. In this work, we developed an automated kidney morphology measurement solution for long-axis ultrasound scans. Automated kidney segmentation is challenging due to the wide variability in kidney shape and size, the weak contrast of the kidney boundaries, and the presence of strong edges such as the diaphragm and fat layers. To address these challenges and accurately localize and detect kidney regions, we present a two-step algorithm that makes use of edge and texture information in combination with anatomical cues. First, we use an edge analysis technique to localize the kidney region by matching the edge map with predefined templates. To accurately estimate the kidney morphology, we then use textural information in a machine-learning framework using Haar features and a gradient boosting classifier. We have tested the algorithm on 45 unseen cases, and the performance against ground truth is measured by computing the Dice overlap and the percentage error in the major and minor axes of the kidney. The algorithm shows successful performance on 80% of cases.

  15. Remote sensing applied to land-use studies in Wyoming

    NASA Technical Reports Server (NTRS)

    Breckenridge, R. M.; Marrs, R. W.; Murphy, D. J.

    1973-01-01

    Impending development of Wyoming's vast fuel resources requires a quick and efficient method of land use inventory and evaluation. Preliminary evaluations of ERTS-1 imagery have shown that physiographic and land use inventory maps can be compiled by using a combination of visual and automated interpretation techniques. Test studies in the Powder River Basin showed that ERTS image interpretations can provide much of the needed physiographic and land use information. Water impoundments as small as one acre were detected and water bodies larger than five acres could be mapped and their acreage estimated. Flood plains and irrigated lands were successfully mapped, and some individual crops were identified and mapped. Coniferous and deciduous trees were mapped separately using color additive analysis on the ERTS multispectral imagery. Gross soil distinctions were made with the ERTS imagery, and were found to be closely related to the bedrock geology. Several broad unstable areas were identified. These were related to specific geologic and slope conditions and generally extended through large regions. Some new oil fields and all large open-cut coal mines were mapped. The most difficult task accomplished was that of mapping urban areas. Work in the urban areas provides a striking example of snow enhancement and the detail available from a snow enhanced image.

  16. USGS standard quadrangle maps for emergency response

    USGS Publications Warehouse

    Moore, Laurence R.

    2009-01-01

    The 1:24,000-scale topographic quadrangle was the primary product of the U.S. Geological Survey's (USGS) National Mapping Program from 1947 to 1992. This map series includes about 54,000 map sheets for the conterminous United States, and is the only uniform map series ever produced that covers this area at such a large scale. The series was partially revised under several programs, starting as early as 1968, but these programs were not adequate to keep it current. Through the 1990s, the emphasis of the USGS mapping program shifted away from topographic maps and toward more specialized digital data products. Topographic map revision dropped off rapidly after 1999, and stopped completely by 2004. Since 2001, emergency-response and homeland security requirements have revived the question of whether a standard national topographic series is needed. Emergencies such as Hurricane Katrina in 2005 and the California wildfires in 2007-08 demonstrated that familiar maps are important to first responders. Maps that have a standard scale, extent, and grids help reduce confusion and save time in emergencies. Traditional maps are designed to allow the human brain to quickly process large amounts of information, and depend on artistic layout and design that cannot be fully automated. In spite of technical advances, creating a traditional, general-purpose topographic map is still expensive. Although the content and layout of traditional topographic maps probably are still desirable, the preferred packaging and delivery of maps have changed. Digital image files are now desired by most users, but to be useful to the emergency-response community, these files must be easy to view and easy to print without specialized geographic information system expertise or software.

  17. Human Factors and Information Operation for a Nuclear Power Space Vehicle

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Brown-VanHoozer, S. Alenka

    2002-01-01

    This paper describes human-interactive systems needed for a crewed nuclear-enabled space mission. A synthesis of aircraft engine and nuclear power plant displays, biofeedback of sensory input, virtual control, brain mapping for control process and manipulation, and so forth are becoming viable solutions. These aspects must maintain the crew's situation awareness and performance, which entails a delicate function allocation between crew and automation.

  18. Automated pupil remapping with binary optics

    DOEpatents

    Neal, Daniel R.; Mansell, Justin

    1999-01-01

    Methods and apparatuses for pupil remapping employing non-standard lenslet shapes in arrays; divergence of lenslet focal spots from on-axis arrangements; use of lenslet arrays to resize two-dimensional inputs to the array; and use of lenslet arrays to map an aperture shape to a different detector shape. Applications include wavefront sensing, astronomical applications, optical interconnects, keylocks, and other binary optics and diffractive optics applications.

  19. Computational Gene Mapping to Analyze Continuous Automated Real-Time Vital Signs Monitoring Data

    DTIC Science & Technology

    2013-09-23

    [Only table-of-contents fragments of this report were extracted; they concern identifying, from the first 12 hours of continuous vital-signs monitoring data, the measures most likely to predict eventual outcome (GOSE at 3 months and at 6 weeks post discharge).]

  20. Automation of the in vitro micronucleus and chromosome aberration assay for the assessment of the genotoxicity of the particulate and gas-vapor phase of cigarette smoke.

    PubMed

    Roemer, Ewald; Zenzen, Volker; Conroy, Lynda L; Luedemann, Kathrin; Dempsey, Ruth; Schunck, Christian; Sticken, Edgar Trelles

    2015-01-01

    Total particulate matter (TPM) and the gas-vapor phase (GVP) of mainstream smoke from the Reference Cigarette 3R4F were assayed in the cytokinesis-block in vitro micronucleus (MN) assay and the in vitro chromosome aberration (CA) assay, both using V79-4 Chinese hamster lung fibroblasts exposed for up to 24 h. The Metafer image analysis platform was adapted, resulting in a fully automated evaluation system for the MN assay covering the detection, identification and reporting of cells with micronuclei, together with the determination of the cytokinesis-block proliferation index (CBPI) to quantify the treatment-related cytotoxicity. In the CA assay, the same platform was used to identify, map and retrieve metaphases for subsequent CA evaluation by a trained evaluator. In both assays, TPM and GVP provoked a significant genotoxic effect: up to 6-fold more micronucleated target cells than in the negative control and up to 10-fold increases in aberrant metaphases. Data variability was lower in the automated version of the MN assay than in the non-automated version. It can be estimated that two test substances that differ in their genotoxicity by approximately 30% can be statistically distinguished in the automated MN and CA assays. Time savings, based on man-hours, due to the automation were approximately 70% in the MN assay and 25% in the CA assay. The turn-around time of the evaluation phase could be shortened by 35% and 50%, respectively. Although only cigarette smoke-derived test material was applied, the technical improvements should be of value for other test substances.
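
    The CBPI mentioned above weights each scored cell by its nucleus count, following the standard formula from OECD Test Guideline 487; a minimal sketch with illustrative counts:

      def cbpi(mono, bi, multi):
          """Cytokinesis-block proliferation index from counts of mono-, bi- and
          multinucleate cells: (mono + 2*bi + 3*multi) / total scored cells."""
          total = mono + bi + multi
          return (mono + 2 * bi + 3 * multi) / total

      # an untreated culture vs. a cytotoxic treatment (illustrative counts)
      print(cbpi(120, 350, 30))   # ~1.82, proliferating normally
      print(cbpi(400, 90, 10))    # ~1.22, cytostasis under treatment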
