Thematic and positional accuracy assessment of digital remotely sensed data
Russell G. Congalton
2007-01-01
Accuracy assessment or validation has become a standard component of any land cover or vegetation map derived from remotely sensed data. Knowing the accuracy of the map is vital to any decision-making performed using that map. The process of assessing map accuracy is time consuming and expensive. It is very important that the procedure be well thought out and...
30 CFR 75.1200-2 - Accuracy and scale of mine maps.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 CFR Mineral Resources: SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES, Maps. § 75.1200-2 Accuracy and scale of mine maps. (a) The scale of mine maps submitted to the Secretary shall not be less than 100 or...
MapEdit: solution to continuous raster map creation
NASA Astrophysics Data System (ADS)
Rančić, Dejan; Djordjević-Kajan, Slobodanka
2003-03-01
The paper describes MapEdit, MS Windows software for georeferencing and rectification of scanned paper maps. The software produces continuous raster maps that can be used as backgrounds in geographical information systems. The process of continuous raster map creation using the MapEdit "mosaicking" function is also described, as are the georeferencing and rectification algorithms used in MapEdit. Our approach to georeferencing and rectification, which uses four control points and two linear transformations for each scanned map part together with nearest-neighbor resampling, is a low-cost, high-speed solution that produces continuous raster maps of satisfactory quality for many purposes (±1 pixel). Quality assessment of several continuous raster maps at different scales created with our software and methodology was undertaken, and the results are presented in the paper. For quality control of the produced raster maps we referred to three widely adopted standards: the US Standard for Digital Cartographic Data, the National Standard for Spatial Data Accuracy, and the US National Map Accuracy Standard. The results obtained during the quality assessment show that our maps meet all three standards.
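As a sketch of the georeferencing step described above, a linear (affine) transformation can be estimated from control points by least squares. The control-point values below are hypothetical, and this is a minimal illustration of the technique, not MapEdit's actual implementation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping pixel coords (src) to
    map coords (dst). Needs >= 3 non-collinear control points;
    MapEdit-style workflows use 4 per scanned map part."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # Design matrix: one row [x, y, 1] per point, solved per output axis.
    A = np.hstack([src, np.ones((len(src), 1))])
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coef  # 3x2 matrix: x-term, y-term, constant per axis

# Four hypothetical control points (pixel -> map metres)
pix = [(0, 0), (1000, 0), (1000, 1000), (0, 1000)]
geo = [(500000, 4500000), (500500, 4500000),
       (500500, 4499500), (500000, 4499500)]
M = fit_affine(pix, geo)

# Georeference an arbitrary pixel
pt = np.array([250, 250, 1.0]) @ M
```

With nearest-neighbor resampling, the inverse of this transform is evaluated at each output grid cell and the nearest source pixel value is copied, which is what keeps the method fast.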
Geometric accuracy of Landsat-4 and Landsat-5 Thematic Mapper images.
Borgeson, W.T.; Batson, R.M.; Kieffer, H.H.
1985-01-01
The geometric accuracy of the Landsat Thematic Mappers was assessed by a linear least-square comparison of the positions of conspicuous ground features in digital images with their geographic locations as determined from 1:24 000-scale maps. For a Landsat-5 image, the single-dimension standard deviations of the standard digital product, and of this image with additional linear corrections, are 11.2 and 10.3 m, respectively (0.4 pixel). An F-test showed that skew and affine distortion corrections are not significant. At this level of accuracy, the granularity of the digital image and the probable inaccuracy of the 1:24 000 maps began to affect the precision of the comparison. The tested image, even with a moderate accuracy loss in the digital-to-graphic conversion, meets National Horizontal Map Accuracy standards for scales of 1:100 000 and smaller. Two Landsat-4 images, obtained with the Multispectral Scanner on and off, and processed by an interim software system, contain significant skew and affine distortions. -Authors
NASA Astrophysics Data System (ADS)
Peterson, James Preston, II
Unmanned Aerial Systems (UAS) are rapidly blurring the lines between traditional and close-range photogrammetry, and between surveying and photogrammetry. UAS provide an economical platform for performing aerial surveying on small projects. The focus of this research was to describe traditional photogrammetric imagery and Light Detection and Ranging (LiDAR) geospatial products, describe close-range photogrammetry (CRP), introduce UAS and computer vision (CV), and investigate whether industry mapping standards for accuracy can be met using UAS collection and CV processing. A 120-acre site was selected and 97 aerial targets were surveyed for evaluation purposes. Four UAS flights of varying heights above ground level (AGL) were executed, and three different target patterns of varying distances between targets were analyzed for compliance with American Society for Photogrammetry and Remote Sensing (ASPRS) and National Standard for Spatial Data Accuracy (NSSDA) mapping standards. This analysis resulted in twelve datasets. Error patterns were evaluated and reasons for these errors were determined. This research exploits the relationship among the AGL, ground sample distance, target spacing, and the root mean square error of the targets to develop guidelines that use the ASPRS and NSSDA map standards as a template. These guidelines allow the user to select the desired mapping accuracy and determine what target spacing and AGL are required to produce it. They also address how UAS/CV phenomena affect map accuracy. General guidelines and recommendations are presented that give the user helpful information for planning a UAS flight using CV technology.
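The NSSDA statistic referenced above is derived from checkpoint RMSE. A minimal sketch follows, with hypothetical residuals; the 1.7308 factor is the NSSDA's standard conversion to 95%-confidence horizontal (circular) accuracy, valid when RMSE_x is approximately equal to RMSE_y.

```python
import math

def nssda_horizontal(errors_xy):
    """NSSDA horizontal accuracy at 95% confidence from independent
    checkpoint residuals (dx, dy), all in the same units.
    Uses Accuracy_r = 1.7308 * RMSE_r (circular-error case)."""
    n = len(errors_xy)
    rmse_x = math.sqrt(sum(dx * dx for dx, _ in errors_xy) / n)
    rmse_y = math.sqrt(sum(dy * dy for _, dy in errors_xy) / n)
    rmse_r = math.sqrt(rmse_x**2 + rmse_y**2)
    return 1.7308 * rmse_r

# Hypothetical checkpoint residuals in metres
errs = [(0.03, -0.02), (-0.05, 0.04), (0.02, 0.01), (-0.01, -0.03)]
acc95 = nssda_horizontal(errs)
```

In a target-spacing study like the one above, this statistic would be recomputed per dataset (per AGL and target pattern) and compared against the chosen map-class threshold.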
Improved Topographic Mapping Through Multi-Baseline SAR Interferometry with MAP Estimation
NASA Astrophysics Data System (ADS)
Dong, Yuting; Jiang, Houjun; Zhang, Lu; Liao, Mingsheng; Shi, Xuguo
2015-05-01
There is an inherent contradiction between the sensitivity of height measurement and the accuracy of phase unwrapping for SAR interferometry (InSAR) over rough terrain. This contradiction can be resolved by multi-baseline InSAR analysis, which exploits multiple phase observations with different normal baselines to improve phase unwrapping accuracy, or even avoid phase unwrapping. In this paper we propose a maximum a posteriori (MAP) estimation method assisted by SRTM DEM data for multi-baseline InSAR topographic mapping. Based on our method, a data processing flow is established and applied in processing multi-baseline ALOS/PALSAR dataset. The accuracy of resultant DEMs is evaluated by using a standard Chinese national DEM of scale 1:10,000 as reference. The results show that multi-baseline InSAR can improve DEM accuracy compared with single-baseline case. It is noteworthy that phase unwrapping is avoided and the quality of multi-baseline InSAR DEM can meet the DTED-2 standard.
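Under Gaussian assumptions, a MAP height estimate that fuses an SRTM prior with multiple baseline-derived observations reduces to a precision-weighted mean. The toy sketch below uses hypothetical numbers; the paper's actual estimator operates on interferometric phase observations, not pre-computed heights.

```python
def map_height(prior_h, prior_var, obs):
    """MAP height estimate under Gaussian assumptions: combine an
    SRTM prior (prior_h, prior_var) with multi-baseline InSAR height
    observations obs = [(h_i, var_i), ...]. With a Gaussian prior and
    Gaussian likelihoods, the MAP solution is the precision-weighted
    mean of prior and observations."""
    num = prior_h / prior_var + sum(h / v for h, v in obs)
    den = 1.0 / prior_var + sum(1.0 / v for _, v in obs)
    return num / den

# Hypothetical: SRTM gives 1200 m (sigma 10 m); two baselines
# observe 1195 m (sigma 3 m) and 1193 m (sigma 5 m).
h_map = map_height(1200.0, 10.0**2, [(1195.0, 3.0**2), (1193.0, 5.0**2)])
```

Note how the short-sigma (more precise) baseline dominates the estimate while the SRTM prior keeps the solution anchored, which is the mechanism that lets multi-baseline analysis sidestep phase unwrapping.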
NASA Astrophysics Data System (ADS)
Lewis, Donna L.; Phinn, Stuart
2011-01-01
Aerial photography interpretation is the most common mapping technique in the world. However, unlike an algorithm-based classification of satellite imagery, the accuracy of maps generated by aerial photography interpretation is rarely assessed. Vegetation communities covering an area of 530 km2 on Bullo River Station, Northern Territory, Australia, were mapped using interpretation of 1:50,000 color aerial photography. Manual stereoscopic line-work was delineated at 1:10,000 and thematic maps were generated at 1:25,000 and 1:100,000. Multivariate and intuitive analysis techniques were employed to identify 22 vegetation communities within the study area. The accuracy assessment was based on 50% of a field dataset collected over a 4-year period (2006 to 2009); the remaining 50% of sites were used for map attribution. The overall accuracy and Kappa coefficient for both thematic maps were 66.67% and 0.63, respectively, calculated from standard error matrices. Our findings highlight the need for appropriate scales of mapping and for accuracy assessment of vegetation community maps generated by aerial photography interpretation.
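The overall accuracy and Kappa coefficient quoted above come from a standard error (confusion) matrix. A minimal sketch, using a hypothetical 3-class matrix:

```python
import numpy as np

def overall_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square error
    (confusion) matrix: rows = map classes, cols = reference classes."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2    # chance agreement
    return po, (po - pe) / (1 - pe)

# Hypothetical 3-class error matrix (150 reference sites)
cm = [[50, 5, 5],
      [4, 40, 6],
      [6, 4, 30]]
oa, kappa = overall_and_kappa(cm)
```

Kappa discounts the agreement expected by chance, which is why it is routinely reported alongside overall accuracy for thematic maps.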
Thematic Accuracy Assessment of the 2011 National Land Cover Database (NLCD)
Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment o...
Khan, Wajahat Ali; Khattak, Asad Masood; Hussain, Maqbool; Amin, Muhammad Bilal; Afzal, Muhammad; Nugent, Christopher; Lee, Sungyoung
2014-08-01
Heterogeneity in the management of complex medical data obstructs the attainment of data-level interoperability among Health Information Systems (HIS). This diversity stems from the compliance of HISs with different healthcare standards. Its solution demands a mediation system that accurately interprets data in different heterogeneous formats to achieve data interoperability. We propose an adaptive mediation system, the AdapteR Interoperability ENgine (ARIEN), that arbitrates between HISs compliant with different healthcare standards to achieve accurate and seamless information exchange. ARIEN stores the semantic mapping information between different standards in the Mediation Bridge Ontology (MBO) using ontology-matching techniques. These mappings are provided by our System for Parallel Heterogeneity (SPHeRe) matching system and the Personalized-Detailed Clinical Model (P-DCM) approach to guarantee mapping accuracy. The effectiveness of the mappings stored in the MBO is realized by evaluating the accuracy of the transformation process among different standard formats. We evaluated the proposed system on the transformation of medical records between the Clinical Document Architecture (CDA) and Virtual Medical Record (vMR) standards. The transformation process achieved over 90% accuracy in conversion between the CDA and vMR standards using a pattern-oriented approach drawing on the MBO. The proposed mediation system improves the overall communication process between HISs, providing accurate and seamless medical information exchange to ensure data interoperability and timely healthcare services to patients.
The accuracy of selected land use and land cover maps at scales of 1:250,000 and 1:100,000
Fitzpatrick-Lins, Katherine
1980-01-01
Land use and land cover maps produced by the U.S. Geological Survey are found to meet or exceed the established standard of accuracy. When analyzed using a point sampling technique and binomial probability theory, several maps, illustrative of those produced for different parts of the country, were found to meet or exceed accuracies of 85 percent. Those maps tested were Tampa, Fla., Portland, Me., Charleston, W. Va., and Greeley, Colo., published at a scale of 1:250,000, and Atlanta, Ga., and Seattle and Tacoma, Wash., published at a scale of 1:100,000. For each map, the values were determined by calculating the ratio of the total number of points correctly interpreted to the total number of points sampled. Six of the seven maps tested have accuracies of 85 percent or better at the 95-percent lower confidence limit. When the sample data for predominant categories (those sampled with a significant number of points) were grouped together for all maps, accuracies of those predominant categories met the 85-percent accuracy criterion, with one exception. One category, Residential, had less than 85-percent accuracy at the 95-percent lower confidence limit. Nearly all residential land sampled was mapped correctly, but some areas of other land uses were mapped incorrectly as Residential.
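The 95-percent lower confidence limit used in this test can be approximated from binomial sampling theory. A sketch using the normal approximation (the original USGS work applied binomial probability theory directly; the sample counts below are hypothetical):

```python
import math

def lower_conf_limit(correct, n, z=1.645):
    """One-sided lower confidence limit for map accuracy from a
    point sample: normal approximation to the binomial, with
    z = 1.645 for a 95% lower limit. Adequate for large n."""
    p = correct / n
    return p - z * math.sqrt(p * (1 - p) / n)

# Hypothetical: 178 of 200 sampled points correctly interpreted
lcl = lower_conf_limit(178, 200)
passes = lcl >= 0.85   # the 85-percent accuracy criterion
```

A map can thus have a sample accuracy above 85 percent yet still fail the criterion if the sample is too small for the lower confidence limit to clear the threshold.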
Cartographic quality of ERTS-1 images
NASA Technical Reports Server (NTRS)
Welch, R. I.
1973-01-01
Analyses of simulated and operational ERTS images have provided initial estimates of resolution, ground resolution, detectability thresholds and other measures of image quality of interest to earth scientists and cartographers. Based on these values, including an approximate ground resolution of 250 meters for both RBV and MSS systems, the ERTS-1 images appear suited to the production and/or revision of planimetric and photo maps of 1:500,000 scale and smaller for which map accuracy standards are compatible with the imaged detail. Thematic mapping, although less constrained by map accuracy standards, will be influenced by measurement thresholds and errors which have yet to be accurately determined for ERTS images. This study also indicates the desirability of establishing a quantitative relationship between image quality values and map products which will permit both engineers and cartographers/earth scientists to contribute to the design requirements of future satellite imaging systems.
NASA Technical Reports Server (NTRS)
Wilson, C.; Dye, R.; Reed, L.
1982-01-01
The errors associated with planimetric mapping of the United States using satellite remote sensing techniques are analyzed. Assumptions concerning the state of the art achievable for satellite mapping systems and platforms in the 1995 time frame are made. An analysis of these performance parameters is made using an interactive cartographic satellite computer model, after first validating the model using LANDSAT 1 through 3 performance parameters. An investigation of current large scale (1:24,000) US National mapping techniques is made. Using the results of this investigation, and current national mapping accuracy standards, the 1995 satellite mapping system is evaluated for its ability to meet US mapping standards for planimetric and topographic mapping at scales of 1:24,000 and smaller.
Regulations in the field of Geo-Information
NASA Astrophysics Data System (ADS)
Felus, Y.; Keinan, E.; Regev, R.
2013-10-01
The geomatics profession has gone through a major revolution during the last two decades with the emergence of advanced GNSS, GIS and Remote Sensing technologies. These technologies have changed the core principles and working procedures of geomatics professionals. For this reason, surveying and mapping regulations, standards and specifications should be updated to reflect these changes. In Israel, the "Survey Regulations" is the principal document that regulates professional activities in four key areas: geodetic control, mapping, cadastre and geographic information systems. Licensed surveyors and mapping professionals in Israel are required to work according to these regulations. This year a new set of regulations was published, including a few major amendments, as follows. In the Geodesy chapter, horizontal control is officially based on the Israeli network of Continuously Operating GNSS Reference Stations (CORS). The regulations were phrased in a manner that will allow minor datum changes to the CORS stations due to Earth crustal movements. Moreover, the regulations permit the use of GNSS for low-accuracy height measurements. In the Cadastre chapter, the most critical change is the move to Coordinate-Based Cadastre (CBC). Each parcel corner point is ranked according to its quality (accuracy and clarity of definition). The highest ranking for a parcel corner is 1; a point with a rank of 1 is defined by its coordinates alone, and any contradicting evidence is inferior to the coordinate values. Cadastral information is stored and managed via the national cadastral databases. In the Mapping and GIS chapter, the traditional paper maps (ranked by scale) are replaced by digital maps or spatial databases. These spatial databases are ranked by their quality level. Quality level is determined (similarly to the ISO 19157 standard) by logical consistency, completeness, positional accuracy, attribute accuracy, temporal accuracy and usability.
Metadata is another critical component of any spatial database. Every component in a map should have a metadata identification, even if the map was compiled from multiple resources. The regulations permit the use of advanced sensors and mapping techniques, including LIDAR and digital cameras that have been certified and meet the defined criteria. The article reviews these new regulations and the decisions that led to them.
Scoping of Flood Hazard Mapping Needs for Merrimack County, New Hampshire
2006-01-01
DOQ Digital Orthophoto Quadrangle DOQQ Digital Ortho Quarter Quadrangle DTM Digital Terrain Model FBFM Flood Boundary and Floodway Map FEMA Federal...discussed available data and coverages within New Hampshire (for example, 2003 National Agriculture Imagery Program (NAIP) color Digital Orthophoto ... orthophotos providing improved base map accuracy. NH GRANIT is presently converting the standard, paper FIRMs and Flood Boundary and Floodway maps (FBFMs
Neuhaus, Philipp; Doods, Justin; Dugas, Martin
2015-01-01
Automatic coding of medical terms is an important, but highly complicated and laborious task. To compare and evaluate different strategies a framework with a standardized web-interface was created. Two UMLS mapping strategies are compared to demonstrate the interface. The framework is a Java Spring application running on a Tomcat application server. It accepts different parameters and returns results in JSON format. To demonstrate the framework, a list of medical data items was mapped by two different methods: similarity search in a large table of terminology codes versus search in a manually curated repository. These mappings were reviewed by a specialist. The evaluation shows that the framework is flexible (due to standardized interfaces like HTTP and JSON), performant and reliable. Accuracy of automatically assigned codes is limited (up to 40%). Combining different semantic mappers into a standardized Web-API is feasible. This framework can be easily enhanced due to its modular design.
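The "similarity search in a large table of terminology codes" strategy can be sketched with a fuzzy string match. The tiny lookup table below is purely illustrative (codes and terms are stand-ins); real systems search large UMLS tables with indexed methods rather than a Python dictionary.

```python
import difflib

# Hypothetical terminology table: term -> code (stands in for a
# large UMLS-style lookup table).
codes = {
    "myocardial infarction": "C0027051",
    "diabetes mellitus": "C0011849",
    "hypertension": "C0020538",
}

def map_term(item, table, cutoff=0.6):
    """Similarity-search mapping: return (term, code) for the closest
    table entry scoring above the cutoff, else None."""
    hits = difflib.get_close_matches(item.lower(), table, n=1, cutoff=cutoff)
    return (hits[0], table[hits[0]]) if hits else None

result = map_term("Myocardial infarct", codes)
```

The cutoff parameter is the knob that trades recall against the roughly 40% accuracy ceiling the abstract reports for automatic assignment: lowering it maps more items but admits more wrong codes for specialist review.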
von Bary, Christian; Fredersdorf-Hahn, Sabine; Heinicke, Norbert; Jungbauer, Carsten; Schmid, Peter; Riegger, Günter A; Weber, Stefan
2011-08-01
Recently, new catheter technologies have been developed for atrial fibrillation (AF) ablation. We investigated the diagnostic accuracy of a circular mapping and pulmonary vein ablation catheter (PVAC) compared with a standard circular mapping catheter (Orbiter), and the influence of filter settings on signal quality. After reconstruction of the left atrium by three-dimensional atriography, baseline PV potentials (PVPs) were recorded consecutively with the PVAC and the Orbiter in 20 patients with paroxysmal AF. PVPs were compared and attributed to predefined anatomical PV segments. Ablation was performed in 80 PVs using the PVAC. When isolation of the PVs was assumed, signal assessment of each PV was repeated with the Orbiter. If residual PV potentials could be uncovered, different filter settings were tested to improve the mapping quality of the PVAC. Ablation was continued until complete PV isolation (PVI) was confirmed with the Orbiter. Baseline mapping demonstrated a good correlation between the Orbiter and the PVAC. Mapping accuracy using the PVAC for mapping and ablation was 94% (74 of 79 PVs). Additional mapping with the Orbiter improved the PV isolation rate to 99%. Adjustment of filter settings failed to improve the quality of the PV signals compared with standard filter settings. When using the PVAC as a stand-alone strategy for mapping and ablation, one should be aware that in some cases a different signal morphology can mimic PVI. Adjustment of filter settings failed to improve signal quality. The use of an additional mapping catheter is recommended until one is familiar with the particular signal morphology during the first PVAC cases, or whenever there is doubt about successful isolation of the pulmonary veins.
Vegetation classification and distribution mapping report Mesa Verde National Park
Thomas, Kathryn A.; McTeague, Monica L.; Ogden, Lindsay; Floyd, M. Lisa; Schulz, Keith; Friesen, Beverly A.; Fancher, Tammy; Waltermire, Robert G.; Cully, Anne
2009-01-01
The classification and distribution mapping of the vegetation of Mesa Verde National Park (MEVE) and surrounding environment was achieved through a multi-agency effort between 2004 and 2007. The National Park Service’s Southern Colorado Plateau Network facilitated the team that conducted the work, which comprised the U.S. Geological Survey’s Southwest Biological Science Center, Fort Collins Research Center, and Rocky Mountain Geographic Science Center; Northern Arizona University; Prescott College; and NatureServe. The project team described 47 plant communities for MEVE, 34 of which were described from quantitative classification based on field-relevé data collected in 1993 and 2004. The team derived 13 additional plant communities from field observations during the photointerpretation phase of the project. The National Vegetation Classification Standard served as a framework for classifying these plant communities to the alliance and association level. Eleven of the 47 plant communities were classified as “park specials;” that is, plant communities with insufficient data to describe them as new alliances or associations. The project team also developed a spatial vegetation map database representing MEVE, with three different map-class schemas: base, group, and management map classes. The base map classes represent the finest level of spatial detail. Initial polygons were developed using Definiens Professional (at the time of our use, this software was called eCognition), assisted by interpretation of 1:12,000 true-color digital orthophoto quarter quadrangles (DOQQs). These polygons (base map classes) were labeled using manual photo interpretation of the DOQQs and 1:12,000 true-color aerial photography. Field visits verified interpretation concepts.
The vegetation map database includes 46 base map classes, which consist of associations, alliances, and park specials classified with quantitative analysis, additional associations and park specials noted during photointerpretation, and non-vegetated land cover, such as infrastructure, land use, and geological land cover. The base map classes consist of 5,007 polygons in the project area. A field-based accuracy assessment of the base map classes showed overall accuracy to be 43.5%. Seven map classes comprise 89.1% of the park vegetated land cover. The group map classes represent aggregations of the base map classes, approximating the group level of the National Vegetation Classification Standard, version 2 (Federal Geographic Data Committee 2007), and reflecting physiognomy and floristics. Terrestrial ecological systems, as described by NatureServe (Comer et al. 2003), were used as the first approximation of the group level. The project team identified 14 group map classes for this project. The overall accuracy of the group map classes was determined using the same accuracy assessment data as for the base map classes. The overall accuracy of the group representation of vegetation was 80.3%. In consultation with park staff, the team developed management map classes, consisting of park-defined groupings of base map classes intended to represent a balance between maintaining required accuracy and providing a focus on vegetation of particular interest or import to park managers. The 23 management map classes had an overall accuracy of 73.3%. While the main products of this project are the vegetation classification and the vegetation map database, a number of ancillary digital geographic information system and database products were also produced that can be used independently or to augment the main products. These products include shapefiles of the locations of field-collected data and relational databases of field-collected data.
Turkers in Africa: A Crowdsourcing Approach to Improving Agricultural Landcover Maps
NASA Astrophysics Data System (ADS)
Estes, L. D.; Caylor, K. K.; Choi, J.
2012-12-01
In the coming decades a substantial portion of Africa is expected to be transformed to agriculture. The scale of this conversion may match or exceed that which occurred in the Brazilian Cerrado and Argentinian Pampa in recent years. Tracking the rate and extent of this conversion will depend on having an accurate baseline of the current extent of croplands. Continent-wide baseline data do exist, but the accuracy of these relatively coarse resolution, remotely sensed assessments is suspect in many regions. To develop more accurate maps of the distribution and nature of African croplands, we develop a distributed "crowdsourcing" approach that harnesses human eyeballs and image interpretation capabilities. Our initial goal is to assess the accuracy of existing agricultural land cover maps, but ultimately we aim to generate "wall-to-wall" cropland maps that can be revisited and updated to track agricultural transformation. Our approach utilizes the freely available, high-resolution satellite imagery provided by Google Earth, combined with Amazon.com's Mechanical Turk platform, an online service that provides a large, global pool of workers (known as "Turkers") who perform "Human Intelligence Tasks" (HITs) for a fee. Using open-source R and python software, we select a random sample of 1 km2 cells from a grid placed over our study area, stratified by field density classes drawn from one of the coarse-scale land cover maps, and send these in batches to Mechanical Turk for processing. Each Turker is required to conduct an initial training session, on the basis of which they are assigned an accuracy score that determines whether the Turker is allowed to proceed with mapping tasks. Completed mapping tasks are automatically retrieved and processed on our server, and subject to two further quality control measures.
The first of these is a measure of the spatial accuracy of Turker-mapped areas compared to "gold standard" maps from selected locations that are randomly inserted (at relatively low frequency, ~1/100) into batches sent to Mechanical Turk. This check provides a measure of overall map accuracy and is used to update individual Turkers' accuracy scores, which are the basis for determining pay rates. The second measure compares the area of each Turker's mapped results with the expected area derived from existing land cover data, accepting or rejecting each Turker's batch based on how closely the two distributions match, with accuracy scores adjusted accordingly. These two checks balance the need to ensure mapping quality with the overall cost of the project. Our initial study is developed for South Africa, where an existing dataset of hand-digitized fields commissioned by the South African Department of Agriculture provides our validation and gold standard data. We compare our Turker-produced results with these existing maps, and with the coarser-scale land cover datasets, providing insight into their relative accuracies, classified according to cropland type (e.g. small-scale/subsistence cropping; large-scale commercial farms), and provide information on the cost effectiveness of our approach.
NASA Astrophysics Data System (ADS)
Sisay, Z. G.; Besha, T.; Gessesse, B.
2017-05-01
This study used in-situ GPS data to validate the accuracy of horizontal coordinates and the orientation of linear features in an orthophoto and line map of Bahir Dar city. GPS data were processed using GAMIT/GLOBK and Leica Geo Office (LGO) in a least-squares sense, with ties to local and regional GPS reference stations, to predict horizontal coordinates at five checkpoints. A Real-Time Kinematic (RTK) GPS measurement technique was used to collect the coordinates of a road centerline to test the accuracy of the orientation of the photogrammetric line map. The accuracy of the orthophoto was evaluated by comparison with in-situ GPS coordinates. Against GPS coordinates from GAMIT/GLOBK it agrees well, with root mean square errors (RMSE) of 12.45 cm in x and 13.97 cm in y, and 6.06 cm at the 95% confidence level. Against GPS data processed by LGO and tied to the local GPS network, the horizontal coordinates of the orthophoto agree with in-situ GPS coordinates at an accuracy of 16.71 cm and 18.98 cm in the x and y directions, respectively, and 11.07 cm at the 95% confidence level. Similarly, the linear features fit the in-situ GPS measurements well: the GPS coordinates of the road centerline deviate from the corresponding line-map coordinates by a mean of 9.18 cm in the x direction and -14.96 cm in the y direction. It can therefore be concluded that the accuracy of the orthophoto and line map is within the national standard error budget (±25 cm).
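Checkpoint validation of this kind reduces to differencing map-derived and in-situ GPS coordinates and summarizing per-axis statistics. A minimal sketch with hypothetical checkpoints (offsets in map units):

```python
import math

def axis_stats(map_xy, gps_xy):
    """Per-axis (mean offset, RMSE) between map-derived and in-situ
    GPS coordinates at matched checkpoints."""
    dx = [m[0] - g[0] for m, g in zip(map_xy, gps_xy)]
    dy = [m[1] - g[1] for m, g in zip(map_xy, gps_xy)]

    def stats(d):
        mean = sum(d) / len(d)
        rmse = math.sqrt(sum(v * v for v in d) / len(d))
        return mean, rmse

    return stats(dx), stats(dy)

# Hypothetical checkpoints: map coordinates vs GPS coordinates
map_pts = [(100.10, 200.05), (300.02, 400.12), (500.08, 600.00)]
gps_pts = [(100.00, 200.00), (300.00, 400.00), (500.00, 600.10)]
(mx, rx), (my, ry) = axis_stats(map_pts, gps_pts)
```

The mean offset flags a systematic shift (such as the centerline bias reported above), while the RMSE feeds the 95%-confidence statistic compared against the error budget.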
Wilson, Gary L.; Richards, Joseph M.
2006-01-01
Because of the increasing use and importance of lakes for water supply to communities, a repeatable and reliable procedure to determine lake bathymetry and capacity is needed. A method to determine the accuracy of the procedure will help ensure proper collection and use of the data and resulting products. It is important to clearly define the intended products and desired accuracy before conducting the bathymetric survey to ensure proper data collection. A survey-grade echo sounder and differential global positioning system receivers were used to collect water-depth and position data in December 2003 at Sugar Creek Lake near Moberly, Missouri. Data were collected along planned transects, with an additional set of quality-assurance data collected for use in accuracy computations. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and area/capacity tables were created from the geographic information system database. An accuracy assessment was completed on the collected data, bathymetric surface model, area/capacity table, and contour map products. Using established vertical accuracy standards, the accuracy of the collected data, bathymetric surface model, and contour map product was 0.67 foot, 0.91 foot, and 1.51 feet at the 95 percent confidence level. By comparing results from different transect intervals with the quality-assurance transect data, it was determined that a transect interval of 1 percent of the longitudinal length of Sugar Creek Lake produced nearly as good results as 0.5 percent transect interval for the bathymetric surface model, area/capacity table, and contour map products.
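Vertical accuracy at the 95-percent confidence level, as used above, conventionally follows the NSSDA formula 1.9600 × RMSE_z, assuming normally distributed, bias-free errors. A sketch with hypothetical depths:

```python
import math

def vertical_accuracy_95(map_z, check_z):
    """NSSDA vertical accuracy at the 95% confidence level:
    1.9600 * RMSE_z of map-minus-checkpoint elevation differences
    (assumes normally distributed, bias-free errors)."""
    d = [m - c for m, c in zip(map_z, check_z)]
    rmse = math.sqrt(sum(v * v for v in d) / len(d))
    return 1.9600 * rmse

# Hypothetical depths (feet) at QA points: surface model vs
# quality-assurance echo-sounder checks
model = [12.3, 15.1, 9.8, 20.4]
check = [12.0, 15.5, 9.6, 20.8]
acc = vertical_accuracy_95(model, check)
```

In a bathymetric survey this statistic would be computed separately for the raw soundings, the interpolated surface model, and the derived contour map, which is why the three reported values differ.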
Drach-Zahavy, Anat; Broyer, Chaya; Dagan, Efrat
2017-09-01
Shared mental models are crucial for constructing mutual understanding of the patient's condition during a clinical handover. Yet scant research, if any, has empirically explored the mental models of the parties involved in a clinical handover. This study aimed to examine the similarities among the mental models of incoming and outgoing nurses, and to test their accuracy by comparing them with the mental models of expert nurses. A cross-sectional study explored nurses' mental models via the concept-mapping technique across 40 clinical handovers. Data were collected via concept mapping of the incoming, outgoing, and expert nurses' mental models (120 concept maps in total). Similarity and accuracy indexes for concepts and associations were calculated to compare the different maps. About one fifth of the concepts emerged in both outgoing and incoming nurses' concept maps (concept similarity = 23% ± 10.6). Concept accuracy indexes were 35% ± 18.8 for incoming and 62% ± 19.6 for outgoing nurses' maps. Although incoming nurses absorbed fewer concepts and associations (23% and 12%, respectively), they partially closed the gap (35% and 22%, respectively) relative to the expert nurses' maps. The correlations between concept similarity and the concept accuracy of incoming as well as outgoing nurses were significant (r=0.43, p<0.01; r=0.68, p<0.01, respectively). Finally, in 90% of the maps, outgoing nurses added information concerning the processes enacted during the shift, beyond the expert nurses' gold standard. Two seemingly contradictory processes in the handover were identified: "information loss", captured by the low similarity indexes between the mental models of incoming and outgoing nurses, and "information restoration", based on the accuracy indexes of the incoming nurses' mental models. Based on mental model theory, we propose possible explanations for these processes and derive implications for improving clinical handover.
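A concept-similarity index like the 23% reported above can be computed as the share of concepts common to two maps. One plausible formulation is Jaccard-style overlap (the study's exact index may differ, and the concept sets below are hypothetical):

```python
def concept_similarity(map_a, map_b):
    """Concept-level similarity between two concept maps: size of the
    shared concept set divided by the size of the combined set
    (Jaccard index)."""
    a, b = set(map_a), set(map_b)
    return len(a & b) / len(a | b)

# Hypothetical handover concept maps
outgoing = {"pain", "fever", "antibiotics", "mobility", "family"}
incoming = {"pain", "fever", "discharge", "diet"}
sim = concept_similarity(outgoing, incoming)
```

Accuracy indexes follow the same pattern, with the expert nurse's concept map standing in as the reference set.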
Influence of neighbourhood information on 'Local Climate Zone' mapping in heterogeneous cities
NASA Astrophysics Data System (ADS)
Verdonck, Marie-Leen; Okujeni, Akpona; van der Linden, Sebastian; Demuzere, Matthias; De Wulf, Robert; Van Coillie, Frieke
2017-10-01
Local climate zone (LCZ) mapping is an emerging field in urban climate research. LCZs potentially provide an objective framework to assess urban form and function worldwide. The scheme is currently being used to globally map LCZs as a part of the World Urban Database and Access Portal Tools (WUDAPT) initiative. So far, most LCZ maps lack proper quantitative assessment, challenging the generic character of the WUDAPT workflow. With the standard method introduced by the WUDAPT community, difficulties arose concerning the built zones due to high levels of heterogeneity. To overcome this problem, a contextual classifier is adopted in the mapping process. This paper quantitatively assesses the influence of neighbourhood information on the LCZ mapping result of three cities in Belgium: Antwerp, Brussels and Ghent. Overall accuracies for the standard maps were 85.7 ± 0.5, 79.6 ± 0.9, and 90.2 ± 0.4%, respectively. The approach presented here results in overall accuracies of 93.6 ± 0.2, 92.6 ± 0.3 and 95.6 ± 0.3% for Antwerp, Brussels and Ghent. The results thus indicate a positive influence of neighbourhood information for all study areas, with an increase in overall accuracies of 7.9, 13.0 and 5.4 percentage points. This paper reaches two main conclusions. Firstly, evidence was introduced on the relevance of a quantitative accuracy assessment in LCZ mapping, showing that the accuracies reported in previous papers are not easily achieved. Secondly, the method presented in this paper proves to be highly effective in Belgian cities, and given its open character shows promise for application in other heterogeneous cities worldwide.
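The overall accuracies quoted above come from confusion matrices of map versus reference labels, and the ± terms suggest repeated classification runs. A minimal sketch (assumptions: rows are map labels, columns are reference labels, and the spread is the sample standard deviation over runs):

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy (%) from a confusion matrix (rows: map, cols: reference)."""
    cm = np.asarray(cm, dtype=float)
    return 100.0 * np.trace(cm) / cm.sum()

def accuracy_mean_std(cms):
    """Mean and sample standard deviation over repeated classification runs."""
    accs = [overall_accuracy(c) for c in cms]
    return float(np.mean(accs)), float(np.std(accs, ddof=1))
```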
Lane Level Localization; Using Images and HD Maps to Mitigate the Lateral Error
NASA Astrophysics Data System (ADS)
Hosseinyalamdary, S.; Peter, M.
2017-05-01
In urban canyons where GNSS signals are blocked by buildings, the accuracy of the measured position deteriorates significantly. GIS databases have frequently been utilized to improve the accuracy of the measured position using map matching approaches, in which the measured position is projected onto the road links (centerlines) and the lateral error is thereby reduced. With advances in data acquisition, high definition maps containing extra information, such as road lanes, are now generated. These road lanes can be utilized to mitigate the positional error and improve the accuracy of the position. In this paper, the image content of a camera mounted on the platform is utilized to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in the GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark: the position is measured by a smartphone's GPS receiver, images are taken with the smartphone's camera, and ground truth is provided using the Real-Time Kinematic (RTK) technique. Results show that the proposed approach significantly improves the accuracy of the measured GPS position: the error in the measured GPS position, with an average and standard deviation of 11.323 and 11.418 meters, is reduced to an average and standard deviation of 6.725 and 5.899 meters in the estimated position.
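The homography between image and global coordinates of the road boundaries can be estimated from point correspondences. The paper does not state which solver it uses; a standard choice is the direct linear transform (DLT), sketched here with NumPy:

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct Linear Transform: 3x3 homography H mapping src points to dst.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]
```

In practice the correspondences come from the Hough-fitted boundary lines and the HD-map lane geometry, and a robust wrapper (e.g. RANSAC) would guard against outliers.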
Thematic accuracy assessment of the 2011 National Land Cover Database (NLCD)
Wickham, James; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Sorenson, Daniel G.; Granneman, Brian J.; Poss, Richard V.; Baer, Lori Anne
2017-01-01
Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment of agreement between map and reference labels for the three, single-date NLCD land cover products at Level II and Level I of the classification hierarchy, and agreement for 17 land cover change reporting themes based on Level I classes (e.g., forest loss; forest gain; forest, no change) for three change periods (2001–2006, 2006–2011, and 2001–2011). The single-date overall accuracies were 82%, 83%, and 83% at Level II and 88%, 89%, and 89% at Level I for 2011, 2006, and 2001, respectively. Many class-specific user's accuracies met or exceeded a previously established nominal accuracy benchmark of 85%. Overall accuracies for 2006 and 2001 land cover components of NLCD 2011 were approximately 4% higher (at Level II and Level I) than the overall accuracies for the same components of NLCD 2006. The high Level I overall, user's, and producer's accuracies for the single-date eras in NLCD 2011 did not translate into high class-specific user's and producer's accuracies for many of the 17 change reporting themes. User's accuracies were high for the no change reporting themes, commonly exceeding 85%, but were typically much lower for the reporting themes that represented change. Only forest loss, forest gain, and urban gain had user's accuracies that exceeded 70%. Lower user's accuracies for the other change reporting themes may be attributable to the difficulty in determining the context of grass (e.g., open urban, grassland, agriculture) and between the components of the forest-shrubland-grassland gradient at either the mapping phase, reference label assignment phase, or both. 
NLCD 2011 user's accuracies for forest loss, forest gain, and urban gain compare favorably with results from other land cover change accuracy assessments.
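The user's and producer's accuracies reported for NLCD follow the standard confusion-matrix definitions. A small sketch (rows are map labels, columns are reference labels, both in the same class order):

```python
import numpy as np

def class_accuracies(cm):
    """Per-class user's and producer's accuracy from a confusion matrix.

    User's accuracy     = diagonal / row sum     (commission-error side)
    Producer's accuracy = diagonal / column sum  (omission-error side)
    """
    cm = np.asarray(cm, dtype=float)
    users = np.diag(cm) / cm.sum(axis=1)
    producers = np.diag(cm) / cm.sum(axis=0)
    overall = np.trace(cm) / cm.sum()
    return users, producers, overall
```

For the change reporting themes, the same computation applies with the 17 change/no-change themes as the classes.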
Ralston, Barbara E.; Davis, Philip A.; Weber, Robert M.; Rundall, Jill M.
2008-01-01
A vegetation database of the riparian vegetation located within the Colorado River ecosystem (CRE), a subsection of the Colorado River between Glen Canyon Dam and the western boundary of Grand Canyon National Park, was constructed using four-band image mosaics acquired in May 2002. A digital line scanner was flown over the Colorado River corridor in Arizona by ISTAR Americas, using a Leica ADS-40 digital camera to acquire a digital surface model and four-band image mosaics (blue, green, red, and near-infrared) for vegetation mapping. The primary objective of this mapping project was to develop a digital inventory map of vegetation to enable patch- and landscape-scale change detection, and to establish randomized sampling points for ground surveys of terrestrial fauna (principally, but not exclusively, birds). The vegetation base map was constructed through a combination of ground surveys to identify vegetation classes, image processing, and automated supervised classification procedures. Analysis of the imagery and subsequent supervised classification involved multiple steps to evaluate band quality, band ratios, and vegetation texture and density. Identification of vegetation classes involved collection of cover data throughout the river corridor and subsequent analysis using two-way indicator species analysis (TWINSPAN). Vegetation was classified into six vegetation classes, following the National Vegetation Classification Standard, based on cover dominance. This analysis indicated that total area covered by all vegetation within the CRE was 3,346 ha. Considering the six vegetation classes, the sparse shrub (SS) class accounted for the greatest amount of vegetation (627 ha) followed by Pluchea (PLSE) and Tamarix (TARA) at 494 and 366 ha, respectively. The wetland (WTLD) and Prosopis-Acacia (PRGL) classes both had similar areal cover values (227 and 213 ha, respectively). Baccharis-Salix (BAXX) was the least represented at 94 ha. 
Accuracy assessment of the supervised classification determined that accuracies varied among vegetation classes from 90% to 49%. The cause of the low accuracies was similar spectral signatures among vegetation classes. Fuzzy accuracy assessment improved classification accuracies such that Federal mapping standards of 80% accuracy for all classes were met. The scale used to quantify vegetation adequately meets the needs of the stakeholder group. Increasing the scale to meet the U.S. Geological Survey (USGS)-National Park Service (NPS) National Mapping Program's minimum mapping unit of 0.5 ha is unwarranted because this scale would reduce the resolution of some classes (e.g., seep willow/coyote willow would likely be combined with tamarisk). While this would undoubtedly improve classification accuracies, it would not provide the community-level information about vegetation change that would benefit stakeholders. The identification of vegetation classes should follow NPS mapping approaches to complement the national effort and should incorporate the alternative analysis for community identification that is being incorporated into newer NPS mapping efforts. National Vegetation Classification is followed in this report for association- to formation-level categories. Accuracies could be improved by including more environmental variables, such as stage elevation, in the classification process and by incorporating object-based classification methods. Another approach that may address the heterogeneous species issue is to use spectral mixing analysis to estimate the fractional cover of species within each pixel and better quantify the cover of individual species that compose a cover class. Varying flights to capture vegetation at different times of the year might also help separate some vegetation classes, though the cost may be prohibitive. Lastly, photointerpretation instead of automated mapping could be tried.
Photointerpretation would likely not improve accuracies in this case, however.
Operational shoreline mapping with high spatial resolution radar and geographic processing
Rangoonwala, Amina; Jones, Cathleen E; Chi, Zhaohui; Ramsey, Elijah W.
2017-01-01
A comprehensive mapping technology was developed utilizing standard image processing and available GIS procedures to automate shoreline identification and mapping from 2 m synthetic aperture radar (SAR) HH amplitude data. The development used four NASA Uninhabited Aerial Vehicle SAR (UAVSAR) data collections between summer 2009 and 2012 and a fall 2012 collection of wetlands dominantly fronted by vegetated shorelines along the Mississippi River Delta that are beset by severe storms, toxic releases, and relative sea-level rise. In comparison to shorelines interpreted from 0.3 m and 1 m orthophotography, the automated GIS 10 m alongshore sampling found SAR shoreline mapping accuracy to be ±2 m, well within the lower range of reported shoreline mapping accuracies. The high comparability was obtained even though water levels differed between the SAR and photography image pairs and included all shorelines regardless of complexity. The SAR mapping technology is highly repeatable and extendable to other SAR instruments with similar operational functionality.
Accuracy of MRI-based Magnetic Susceptibility Measurements
NASA Astrophysics Data System (ADS)
Russek, Stephen; Erdevig, Hannah; Keenan, Kathryn; Stupic, Karl
Magnetic Resonance Imaging (MRI) is increasingly used to map tissue susceptibility to identify microbleeds associated with brain injury and pathologic iron deposits associated with neurologic diseases such as Parkinson's and Alzheimer's disease. Field distortions with a resolution of a few parts per billion can be measured using MRI phase maps. The field distortion map can be inverted to obtain a quantitative susceptibility map. To determine the accuracy of MRI-based susceptibility measurements, a set of phantoms with paramagnetic salts and nano-iron gels were fabricated. The shapes and orientations of features were varied. The measured susceptibility of a 1.0 mM GdCl3 solution in water as a function of temperature agreed well with the theoretical prediction, assuming Gd³⁺ is spin 7/2. The MRI susceptibility measurements were compared with SQUID magnetometry. The paramagnetic susceptibility sits on top of the much larger diamagnetic susceptibility of water (−9.04 × 10⁻⁶), which leads to errors in the SQUID measurements. To extract the paramagnetic contribution using standard magnetometry, measurements must be made down to low temperature (2 K). MRI-based susceptometry is shown to be as accurate as, or more accurate than, standard magnetometry and susceptometry techniques.
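The theoretical prediction for the paramagnetic GdCl3 solution presumably follows the Curie law for Gd³⁺ (J = 7/2, g ≈ 2). A sketch of that calculation (SI volume susceptibility of a dilute solution; the exact model used in the study is not given in the abstract):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
MU_B = 9.274009994e-24     # Bohr magneton, J/T
K_B = 1.380649e-23         # Boltzmann constant, J/K
N_A = 6.02214076e23        # Avogadro constant, 1/mol

def curie_susceptibility(conc_mol_per_m3, temperature_k, g=2.0, j=3.5):
    """SI volume susceptibility of a dilute paramagnetic solution (Curie law)."""
    mu_eff_sq = g ** 2 * j * (j + 1) * MU_B ** 2   # squared effective moment
    n = N_A * conc_mol_per_m3                      # ions per cubic meter
    return MU0 * n * mu_eff_sq / (3 * K_B * temperature_k)
```

Note that 1.0 mM equals 1 mol/m³, giving a paramagnetic contribution of a few times 10⁻⁷ at room temperature, i.e. far smaller than the −9.04 × 10⁻⁶ diamagnetism of water it sits on.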
Ground control requirements for precision processing of ERTS images
Burger, Thomas C.
1973-01-01
With the successful flight of the ERTS-1 satellite, orbital height images are available for precision processing into products such as 1:1,000,000-scale photomaps and enlargements up to 1:250,000 scale. In order to maintain positional error below 100 meters, control points for the precision processing must be carefully selected, clearly definable on photos in both X and Y. Coordinates of selected control points measured on existing 7½- and 15-minute standard maps provide sufficient accuracy for any space imaging system thus far defined. This procedure references the points to accepted horizontal and vertical datums. Maps as small as 1:250,000 scale can be used as source material for coordinates, but to maintain the desired accuracy, maps of 1:100,000 and larger scale should be used when available.
A ground truth based comparative study on clustering of gene expression data.
Zhu, Yitan; Wang, Zuyi; Miller, David J; Clarke, Robert; Xuan, Jianhua; Hoffman, Eric P; Wang, Yue
2008-05-01
Given the variety of available clustering methods for gene expression data analysis, it is important to develop an appropriate and rigorous validation scheme to assess the performance and limitations of the most widely used clustering algorithms. In this paper, we present a ground truth based comparative study on the functionality, accuracy, and stability of five data clustering methods, namely hierarchical clustering, K-means clustering, self-organizing maps, standard finite normal mixture fitting, and a caBIG toolkit (VIsual Statistical Data Analyzer--VISDA), tested on sample clustering of seven published microarray gene expression datasets and one synthetic dataset. We examined the performance of these algorithms in both data-sufficient and data-insufficient cases using quantitative performance measures, including cluster number detection accuracy and mean and standard deviation of partition accuracy. The experimental results showed that VISDA, an interactive coarse-to-fine maximum likelihood fitting algorithm, is a solid performer on most of the datasets, while K-means clustering and self-organizing maps optimized by the mean squared compactness criterion generally produce more stable solutions than the other methods.
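The partition accuracy used to score clusterings requires matching predicted cluster ids to true class ids before counting agreements. A brute-force sketch (the paper's exact matching procedure is not specified; an optimal one-to-one label matching is assumed, which is feasible for the small cluster counts typical of these datasets):

```python
from itertools import permutations
import numpy as np

def partition_accuracy(true_labels, pred_labels):
    """Best-match clustering accuracy: maximize agreement over all one-to-one
    mappings of predicted cluster ids onto true class ids."""
    true = np.asarray(true_labels)
    pred = np.asarray(pred_labels)
    classes = np.unique(true)
    clusters = np.unique(pred)
    best = 0
    for perm in permutations(classes, len(clusters)):
        mapping = dict(zip(clusters, perm))       # cluster id -> class id
        best = max(best, sum(mapping[p] == t for p, t in zip(pred, true)))
    return best / len(true)
```

The mean and standard deviation of this score over repeated runs give the stability measures the study reports.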
Feasibility of a GNSS-Probe for Creating Digital Maps of High Accuracy and Integrity
NASA Astrophysics Data System (ADS)
Vartziotis, Dimitris; Poulis, Alkis; Minogiannis, Alexandros; Siozos, Panayiotis; Goudas, Iraklis; Samson, Jaron; Tossaint, Michel
The “ROADSCANNER” project addresses the need for Digital Maps (DM) of increased accuracy and integrity, utilizing the latest developments in GNSS, in order to provide the required datasets for novel applications such as navigation-based safety applications, Advanced Driver Assistance Systems (ADAS) and digital automotive simulations. The activity covered in the current paper comprises the feasibility study, preliminary tests, initial product design and development plan for an EGNOS-enabled vehicle probe. The vehicle probe will be used for generating high-accuracy, high-integrity, ADAS-compatible digital maps of roads, employing a multiple-pass methodology supported by sophisticated refinement algorithms. Furthermore, the vehicle probe will be equipped with pavement-scanning and other data fusion equipment in order to produce 3D road surface models compatible with standards of road-tire simulation applications. The project was assigned to NIKI Ltd under the 1st Call for Ideas in the frame of the ESA - Greece Task Force.
The National Map seamless digital elevation model specifications
Archuleta, Christy-Ann M.; Constance, Eric W.; Arundel, Samantha T.; Lowe, Amanda J.; Mantey, Kimberly S.; Phillips, Lori A.
2017-08-02
This specification documents the requirements and standards used to produce the seamless elevation layers for The National Map of the United States. Seamless elevation data are available for the conterminous United States, Hawaii, Alaska, and the U.S. territories, in three different resolutions—1/3-arc-second, 1-arc-second, and 2-arc-second. These specifications include requirements and standards information about source data requirements, spatial reference system, distribution tiling schemes, horizontal resolution, vertical accuracy, digital elevation model surface treatment, georeferencing, data source and tile dates, distribution and supporting file formats, void areas, metadata, spatial metadata, and quality assurance and control.
Fat fraction bias correction using T1 estimates and flip angle mapping.
Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A
2014-01-01
To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar results in PDFF error between small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation from the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions. Copyright © 2013 Wiley Periodicals, Inc.
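The T1 bias arises because water and fat experience different T1 weighting in a spoiled gradient echo at high flip angle; dividing each species' signal by its Ernst-equation weighting (using a priori T1 values and the mapped flip angle) before forming the fat fraction removes the bias. A simplified sketch of that idea (single species magnitudes only; this is not the authors' full IDEAL reconstruction):

```python
import numpy as np

def spgr_weight(flip_deg, tr_ms, t1_ms):
    """T1 weighting of the steady-state spoiled gradient-echo signal (Ernst equation)."""
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr_ms / t1_ms)
    return np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

def corrected_pdff(sig_fat, sig_water, flip_deg, tr_ms, t1_fat_ms, t1_water_ms):
    """Divide out each species' T1 weighting before forming the fat fraction."""
    f = sig_fat / spgr_weight(flip_deg, tr_ms, t1_fat_ms)
    w = sig_water / spgr_weight(flip_deg, tr_ms, t1_water_ms)
    return f / (f + w)
```

With flip angle mapping, `flip_deg` becomes the locally measured angle rather than the nominal one, which is what restores accuracy at high flip angles.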
MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS
Thematic map accuracy is not spatially homogeneous but varies across a landscape. Properly analyzing and representing the spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...
Toward digital geologic map standards: a progress report
Ulrech, George E.; Reynolds, Mitchell W.; Taylor, Richard B.
1992-01-01
Establishing modern scientific and technical standards for geologic maps and their derivative map products is vital to both producers and users of such maps as we move into an age of digital cartography. Application of earth-science data in complex geographic information systems, acceleration of geologic map production, and reduction of publication costs require that national standards be developed for digital geologic cartography and computer analysis. Since December 1988, under commission of the Chief Geologist of the U.S. Geological Survey and the mandate of the National Geologic Mapping Program (with added representation from the Association of American State Geologists), a committee has been designing a comprehensive set of scientific map standards. Three primary issues were addressed: (1) selecting scientific symbology and its digital representation; (2) creating an appropriate digital coding system that characterizes geologic features with respect to their physical properties, stratigraphic and structural relations, spatial orientation, and interpreted mode of origin; and (3) developing mechanisms for reporting levels of certainty for descriptive as well as measured properties. Approximately 650 symbols for geoscience maps, reflecting present usage of the U.S. Geological Survey, state geological surveys, industry, and academia, have been identified and tentatively adopted. A proposed coding system comprises four-character groupings of major and minor codes that can identify all attributes of a geologic feature. Such a coding system allows unique identification of as many as 10⁵ geologic names and values on a given map. The new standard will track closely the latest developments of the Proposed Standard for Digital Cartographic Data soon to be submitted to the National Institute of Standards and Technology by the Federal Interagency Coordinating Committee on Digital Cartography.
This standard will adhere generally to the accepted definitions and specifications for spatial data transfer. It will require separate specifications of digital cartographic quality relating to positional accuracy and ranges of measured and interpreted values such as geologic age and rock composition. Provisional digital geologic map standards will be published for trial implementation. After approximately two years, when comments on the proposed standards have been solicited and modifications made, formal adoption of the standards will be recommended. Widespread acceptance of the new standards will depend on their applicability to the broadest range of earth-science map products and their adaptability to changing cartographic technology.
Fuller, Douglas O; Parenti, Michael S; Gad, Adel M; Beier, John C
2012-01-01
Irrigation along the Nile River has resulted in dramatic changes in the biophysical environment of Upper Egypt. In this study we used a combination of MODIS 250 m NDVI data and Landsat imagery to identify areas that changed from 2001-2008 as a result of irrigation and water-level fluctuations in the Nile River and nearby water bodies. We used two different methods of time series analysis -- principal components (PCA) and harmonic decomposition (HD), applied to the MODIS 250 m NDVI images to derive simple three-class land cover maps and then assessed their accuracy using a set of reference polygons derived from 30 m Landsat 5 and 7 imagery. We analyzed our MODIS 250 m maps against a new MODIS global land cover product (MOD12Q1 collection 5) to assess whether regionally specific mapping approaches are superior to a standard global product. Results showed that the accuracy of the PCA-based product was greater than the accuracy of either the HD or MOD12Q1 products for the years 2001, 2003, and 2008. However, the accuracy of the PCA product was only slightly better than the MOD12Q1 for 2001 and 2003. Overall, the results suggest that our PCA-based approach produces a high level of user and producer accuracies, although the MOD12Q1 product also showed consistently high accuracy. Overlay of 2001-2008 PCA-based maps showed a net increase of 12 129 ha of irrigated vegetation, with the largest increase found from 2006-2008 around the Districts of Edfu and Kom Ombo. This result was unexpected in light of ambitious government plans to develop 336 000 ha of irrigated agriculture around the Toshka Lakes.
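The PCA step applied to the MODIS 250 m NDVI stack reduces each pixel's time series to a few component scores that can then be grouped into the three land cover classes. A minimal SVD-based sketch (the array layout is assumed, not taken from the paper):

```python
import numpy as np

def ndvi_pca(stack, n_components=3):
    """Per-pixel principal-component scores of an NDVI time series.

    stack: (n_pixels, n_dates) array, one NDVI time series per pixel.
    Returns an (n_pixels, n_components) array of component scores.
    """
    x = np.asarray(stack, dtype=float)
    x = x - x.mean(axis=0)                        # center each date band
    u, s, _ = np.linalg.svd(x, full_matrices=False)
    return u[:, :n_components] * s[:n_components]  # scores = U * S
```

Pixels with similar seasonal NDVI trajectories (e.g. irrigated cropland versus desert) land close together in score space, which is what makes the simple three-class mapping workable.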
NASA Astrophysics Data System (ADS)
Leng, Shuai; Zhou, Wei; Yu, Zhicong; Halaweish, Ahmed; Krauss, Bernhard; Schmidt, Bernhard; Yu, Lifeng; Kappler, Steffen; McCollough, Cynthia
2017-09-01
Photon-counting computed tomography (PCCT) uses a photon counting detector to count individual photons and allocate them to specific energy bins by comparing photon energy to preset thresholds. This enables simultaneous multi-energy CT with a single source and detector. Phantom studies were performed to assess the spectral performance of a research PCCT scanner by assessing the accuracy of derived images sets. Specifically, we assessed the accuracy of iodine quantification in iodine map images and of CT number accuracy in virtual monoenergetic images (VMI). Vials containing iodine with five known concentrations were scanned on the PCCT scanner after being placed in phantoms representing the attenuation of different size patients. For comparison, the same vials and phantoms were also scanned on 2nd and 3rd generation dual-source, dual-energy scanners. After material decomposition, iodine maps were generated, from which iodine concentration was measured for each vial and phantom size and compared with the known concentration. Additionally, VMIs were generated and CT number accuracy was compared to the reference standard, which was calculated based on known iodine concentration and attenuation coefficients at each keV obtained from the U.S. National Institute of Standards and Technology (NIST). Results showed accurate iodine quantification (root mean square error of 0.5 mgI/cc) and accurate CT number of VMIs (percentage error of 8.9%) using the PCCT scanner. The overall performance of the PCCT scanner, in terms of iodine quantification and VMI CT number accuracy, was comparable to that of EID-based dual-source, dual-energy scanners.
FGDC Digital Cartographic Standard for Geologic Map Symbolization (PostScript Implementation)
2006-01-01
PLEASE NOTE: This now-approved 'FGDC Digital Cartographic Standard for Geologic Map Symbolization (PostScript Implementation)' officially supersedes its earlier (2000) Public Review Draft version (see 'Earlier Versions of the Standard' below). In August 2006, the Digital Cartographic Standard for Geologic Map Symbolization was officially endorsed by the Federal Geographic Data Committee (FGDC) as the national standard for the digital cartographic representation of geologic map features (FGDC Document Number FGDC-STD-013-2006). Presented herein is the PostScript Implementation of the standard, which will enable users to directly apply the symbols in the standard to geologic maps and illustrations prepared in desktop illustration and (or) publishing software. The FGDC Digital Cartographic Standard for Geologic Map Symbolization contains descriptions, examples, cartographic specifications, and notes on usage for a wide variety of symbols that may be used on typical, general-purpose geologic maps and related products such as cross sections. The standard also can be used for different kinds of special-purpose or derivative map products and databases that may be focused on a specific geoscience topic (for example, slope stability) or class of features (for example, a fault map). The standard is scale-independent, meaning that the symbols are appropriate for use with geologic mapping compiled or published at any scale. It will be useful to anyone who either produces or uses geologic map information, whether in analog or digital form. Please be aware that this standard is not intended to be used inflexibly or in a manner that will limit one's ability to communicate the observations and interpretations gained from geologic mapping. In certain situations, a symbol or its usage might need to be modified in order to better represent a particular feature on a geologic map or cross section.
This standard allows the use of any symbol that doesn't conflict with others in the standard, provided that it is clearly explained on the map and in the database. In addition, modifying the size, color, and (or) lineweight of an existing symbol to suit the needs of a particular map or output device also is permitted, provided that the modified symbol's appearance is not too similar to another symbol on the map. Be aware, however, that reducing lineweights below .125 mm (.005 inch) may cause symbols to plot incorrectly if output at higher resolutions (1800 dpi or higher). For guidelines on symbol usage, as well as on color design and map labeling, please refer to the standard's introductory text. Also found there are informational sections covering concepts of geologic mapping and some definitions of geologic map features, as well as sections on the newly defined concepts and terminology for the scientific confidence and locational accuracy of geologic map features. More information on both the past development and the future maintenance of the FGDC Digital Cartographic Standard for Geologic Map Symbolization can be found at the FGDC Geologic Data Subcommittee website (http://ngmdb.usgs.gov/fgdc_gds/).
Accuracy of magnetic resonance based susceptibility measurements
NASA Astrophysics Data System (ADS)
Erdevig, Hannah E.; Russek, Stephen E.; Carnicka, Slavka; Stupic, Karl F.; Keenan, Kathryn E.
2017-05-01
Magnetic Resonance Imaging (MRI) is increasingly used to map the magnetic susceptibility of tissue to identify cerebral microbleeds associated with traumatic brain injury and pathological iron deposits associated with neurodegenerative diseases such as Parkinson's and Alzheimer's disease. Accurate measurements of susceptibility are important for determining oxygen and iron content in blood vessels and brain tissue for use in noninvasive clinical diagnosis and treatment assessments. Induced magnetic fields with amplitude on the order of 100 nT can be detected using MRI phase images. The induced field distributions can then be inverted to obtain quantitative susceptibility maps. The focus of this research was to determine the accuracy of MRI-based susceptibility measurements using simple phantom geometries and to compare the susceptibility measurements with magnetometry measurements, where SI-traceable standards are available. The susceptibilities of paramagnetic salt solutions in cylindrical containers were measured as a function of orientation relative to the static MRI field. The observed induced fields as a function of orientation of the cylinder were in good agreement with simple models. The MRI susceptibility measurements were compared with SQUID magnetometry using NIST-traceable standards. MRI can accurately measure relative magnetic susceptibilities, while SQUID magnetometry measures absolute magnetic susceptibility. Given the accuracy of moment measurements of tissue-mimicking samples, and the need to look at small differences in tissue properties, the use of existing NIST standard reference materials to calibrate MRI reference structures is problematic, and better reference materials are required.
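For the cylindrical containers, the "simple model" of the induced internal field has a well-known closed form: inside an infinite cylinder at angle θ to B0, the fractional field shift is Δχ(3cos²θ − 1)/6, where Δχ is the susceptibility difference with the surround. A sketch of this model, which vanishes at the magic angle (≈54.7°):

```python
import math

def cylinder_shift(delta_chi, theta_rad):
    """Fractional internal field shift of an infinite cylinder at angle theta
    to B0, for susceptibility difference delta_chi with its surround (SI)."""
    return (delta_chi / 6.0) * (3.0 * math.cos(theta_rad) ** 2 - 1.0)
```

Fitting measured phase-map shifts at several orientations to this curve is one way to extract Δχ for a phantom.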
Hüttich, Christian; Herold, Martin; Strohbach, Ben J; Dech, Stefan
2011-05-01
Integrated ecosystem assessment initiatives are important steps towards a global biodiversity observing system. Reliable earth observation data are key information for tracking biodiversity change on various scales. Regarding the establishment of standardized environmental observation systems, a key question is: What can be observed on each scale and how can land cover information be transferred? In this study, a land cover map from a dry semi-arid savanna ecosystem in Namibia was obtained based on the UN LCCS, in-situ data, and MODIS and Landsat satellite imagery. In situ botanical relevé samples were used as baseline data for the definition of a standardized LCCS legend. A standard LCCS code for savanna vegetation types is introduced. An object-oriented segmentation of Landsat imagery was used as intermediate stage for downscaling in-situ training data on a coarse MODIS resolution. MODIS time series metrics of the growing season 2004/2005 were used to classify Kalahari vegetation types using a tree-based ensemble classifier (Random Forest). The prevailing Kalahari vegetation types based on LCCS was open broadleaved deciduous shrubland with an herbaceous layer which differs from the class assignments of the global and regional land-cover maps. The separability analysis based on Bhattacharya distance measurements applied on two LCCS levels indicated a relationship of spectral mapping dependencies of annual MODIS time series features due to the thematic detail of the classification scheme. The analysis of LCCS classifiers showed an increased significance of life-form composition and soil conditions to the mapping accuracy. An overall accuracy of 92.48% was achieved. Woody plant associations proved to be most stable due to small omission and commission errors. The case study comprised a first suitability assessment of the LCCS classifier approach for a southern African savanna ecosystem.
Establishment of Hydrographic Shore Control by Doppler Satellite Techniques.
1984-06-01
[Abstract garbled in scanning; the legible fragments are report-form residue. They reference the Defense Mapping Agency, Hydrographic-Topographic Center (DMA-HTC), which computes and distributes the ephemerides [Ref. 3], and a chapter (VIII) on current accuracy standards and specifications.]
Communications among elements of a space construction ensemble
NASA Technical Reports Server (NTRS)
Davis, Randal L.; Grasso, Christopher A.
1989-01-01
Space construction projects will require careful coordination between managers, designers, manufacturers, operators, astronauts, and robots with large volumes of information of varying resolution, timeliness, and accuracy flowing between the distributed participants over computer communications networks. Within the CSC Operations Branch, we are researching the requirements and options for such communications. Based on our work to date, we feel that communications standards being developed by the International Standards Organization, the CCITT, and other groups can be applied to space construction. We are currently studying in depth how such standards can be used to communicate with robots and automated construction equipment used in a space project. Specifically, we are looking at how the Manufacturing Automation Protocol (MAP) and the Manufacturing Message Specification (MMS), which tie together computers and machines in automated factories, might be applied to space construction projects. Together with our CSC industrial partner Computer Technology Associates, we are developing a MAP/MMS companion standard for space construction and we will produce software to allow the MAP/MMS protocol to be used in our CSC operations testbed.
Optimization of Brain T2 Mapping Using Standard CPMG Sequence In A Clinical Scanner
NASA Astrophysics Data System (ADS)
Hnilicová, P.; Bittšanský, M.; Dobrota, D.
2014-04-01
In magnetic resonance imaging, transverse relaxation time (T2) mapping is a useful quantitative tool enabling enhanced diagnostics of many brain pathologies. The aim of our study was to test the influence of different sequence parameters on calculated T2 values, including multi-slice measurements, slice position, interslice gap, echo spacing, and pulse duration. Measurements were performed using standard multi-slice multi-echo CPMG imaging sequence on a 1.5 Tesla routine whole body MR scanner. We used multiple phantoms with different agarose concentrations (0 % to 4 %) and verified the results on a healthy volunteer. It appeared that neither the pulse duration, the size of interslice gap nor the slice shift had any impact on the T2. The measurement accuracy was increased with shorter echo spacing. Standard multi-slice multi-echo CPMG protocol with the shortest echo spacing, also the smallest available interslice gap (100 % of slice thickness) and shorter pulse duration was found to be optimal and reliable for calculating T2 maps in the human brain.
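The per-voxel T2 calculation behind such maps is, in its simplest form, a mono-exponential fit S(TE) = S0·exp(−TE/T2) to the multi-echo signal. A minimal sketch using a log-linear least-squares fit on synthetic data (not the study's actual processing pipeline):

```python
import numpy as np

def fit_t2(te_ms, signal):
    """Estimate T2 by fitting ln S = ln S0 - TE/T2 with linear least
    squares; returns T2 in the same units as the echo times."""
    slope, _intercept = np.polyfit(np.asarray(te_ms, float),
                                   np.log(signal), 1)
    return -1.0 / slope

# Synthetic 16-echo CPMG train, echo spacing 10 ms, true T2 = 80 ms
te = [10.0 * (k + 1) for k in range(16)]
sig = [1000.0 * np.exp(-t / 80.0) for t in te]
t2 = fit_t2(te, sig)
```

On noisy in vivo data a nonlinear fit with a noise-floor term is usually preferred; the log-linear form is only the first-pass estimate.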
Mileto, Achille; Allen, Brian C; Pietryga, Jason A; Farjat, Alfredo E; Zarzour, Jessica G; Bellini, Davide; Ebner, Lukas; Morgan, Desiree E
2017-10-01
The purpose of this study was to assess the diagnostic accuracy of effective atomic number maps reconstructed from dual-energy contrast-enhanced data for discriminating between nonenhancing renal cysts and enhancing masses. Two hundred six patients (128 men, 78 women; mean age, 64 years) underwent a CT renal mass protocol (single-energy unenhanced and dual-energy contrast-enhanced nephrographic imaging) at two different hospitals. For each set of patients, two blinded, independent observers performed measurements on effective atomic number maps from contrast-enhanced dual-energy data. Renal mass assessment on unenhanced and nephrographic images, corroborated by imaging and medical records, was the reference standard. The diagnostic accuracy of effective atomic number maps was assessed with ROC analysis. Significant differences in mean effective atomic numbers (Zeff) were observed between nonenhancing and enhancing masses (set A, 8.19 vs 9.59 Zeff; set B, 8.05 vs 9.19 Zeff; sets combined, 8.13 vs 9.37 Zeff) (p < 0.0001). An effective atomic number value of 8.36 Zeff was the optimal threshold, rendering an AUC of 0.92 (95% CI, 0.89-0.94), sensitivity of 90.8% (158/174 [95% CI, 85.5-94.7%]), specificity of 85.2% (445/522 [95% CI, 81.9-88.2%]), and overall diagnostic accuracy of 86.6% (603/696 [95% CI, 83.9-89.1%]). Nonenhancing renal cysts, including hyperattenuating cysts, can be discriminated from enhancing masses on effective atomic number maps generated from dual-energy contrast-enhanced CT data. This technique may be of clinical usefulness when a CT protocol for comprehensive assessment of renal masses is not available.
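The reported threshold (a mass is called enhancing when its Zeff exceeds ~8.36) reduces to a simple decision rule whose sensitivity and specificity follow from the confusion counts. A toy sketch with invented values placed near the reported group means (8.13 vs 9.37 Zeff):

```python
def sens_spec(values, labels, threshold):
    """Sensitivity/specificity for the rule 'enhancing if Zeff > threshold'.
    labels: 1 = enhancing mass, 0 = nonenhancing cyst."""
    tp = sum(1 for v, y in zip(values, labels) if y == 1 and v > threshold)
    fn = sum(1 for v, y in zip(values, labels) if y == 1 and v <= threshold)
    tn = sum(1 for v, y in zip(values, labels) if y == 0 and v <= threshold)
    fp = sum(1 for v, y in zip(values, labels) if y == 0 and v > threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Invented toy data, not the study's measurements
vals = [8.0, 8.2, 8.3, 8.5, 9.1, 9.4, 9.6]
labs = [0, 0, 0, 1, 1, 1, 1]
se, sp = sens_spec(vals, labs, 8.36)
```

Sweeping the threshold and plotting (1 − sp, se) pairs is exactly the ROC analysis used to select the 8.36 operating point.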
Fast group matching for MR fingerprinting reconstruction.
Cauley, Stephen F; Setsompop, Kawin; Ma, Dan; Jiang, Yun; Ye, Huihui; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L
2015-08-01
MR fingerprinting (MRF) is a technique for quantitative tissue mapping using pseudorandom measurements. To estimate tissue properties such as T1 , T2 , proton density, and B0 , the rapidly acquired data are compared against a large dictionary of Bloch simulations. This matching process can be a very computationally demanding portion of MRF reconstruction. We introduce a fast group matching algorithm (GRM) that exploits inherent correlation within MRF dictionaries to create highly clustered groupings of the elements. During matching, a group specific signature is first used to remove poor matching possibilities. Group principal component analysis (PCA) is used to evaluate all remaining tissue types. In vivo 3 Tesla brain data were used to validate the accuracy of our approach. For a trueFISP sequence with over 196,000 dictionary elements, 1000 MRF samples, and image matrix of 128 × 128, GRM was able to map MR parameters within 2s using standard vendor computational resources. This is an order of magnitude faster than global PCA and nearly two orders of magnitude faster than direct matching, with comparable accuracy (1-2% relative error). The proposed GRM method is a highly efficient model reduction technique for MRF matching and should enable clinically relevant reconstruction accuracy and time on standard vendor computational resources. © 2014 Wiley Periodicals, Inc.
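The two-stage idea (prune groups by a per-group signature, then search exhaustively only within the survivors) can be sketched on a toy clustered dictionary. The real GRM grouping algorithm, group sizes, and group-PCA stage differ; everything below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy clustered "dictionary": 20 groups of 10 similar fingerprints each
# (real MRF dictionaries are grouped by correlation, not built this way).
centers = rng.standard_normal((20, 50))
D = np.repeat(centers, 10, axis=0) + 0.1 * rng.standard_normal((200, 50))
D /= np.linalg.norm(D, axis=1, keepdims=True)

# One signature per group: its renormalized mean fingerprint.
signatures = D.reshape(20, 10, 50).mean(axis=1)
signatures /= np.linalg.norm(signatures, axis=1, keepdims=True)

def group_match(x, keep=3):
    """Two-stage match: rank groups by signature correlation, keep the
    best few, then run the exhaustive inner-product search only there."""
    x = x / np.linalg.norm(x)
    best = np.argsort(signatures @ x)[-keep:]            # surviving groups
    cand = D.reshape(20, 10, 50)[best].reshape(-1, 50)   # their atoms
    j = int(np.argmax(cand @ x))
    return int(best[j // 10]) * 10 + j % 10              # index back into D

idx = group_match(D[123])
```

With keep=3 the exhaustive stage touches 30 of 200 atoms; the speedup grows with dictionary size, which is the effect the paper exploits at the ~196,000-entry scale.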
Automated high resolution mapping of coffee in Rwanda using an expert Bayesian network
NASA Astrophysics Data System (ADS)
Mukashema, A.; Veldkamp, A.; Vrieling, A.
2014-12-01
African highland agro-ecosystems are dominated by small-scale agricultural fields that often contain a mix of annual and perennial crops. This makes such systems difficult to map by remote sensing. We developed an expert Bayesian network model to extract the small-scale coffee fields of Rwanda from very high resolution data. The model was subsequently applied to aerial orthophotos covering more than 99% of Rwanda and on one QuickBird image for the remaining part. The method consists of a stepwise adjustment of pixel probabilities, which incorporates expert knowledge on size of coffee trees and fields, and on their location. The initial naive Bayesian network, which is a spectral-based classification, yielded a coffee map with an overall accuracy of around 50%. This confirms that standard spectral variables alone cannot accurately identify coffee fields from high resolution images. The combination of spectral and ancillary data (DEM and a forest map) allowed mapping of coffee fields and associated uncertainties with an overall accuracy of 87%. Aggregated to district units, the mapped coffee areas demonstrated a high correlation with the coffee areas reported in the detailed national coffee census of 2009 (R2 = 0.92). Unlike the census data our map provides high spatial resolution of coffee area patterns of Rwanda. The proposed method has potential for mapping other perennial small scale cropping systems in the East African Highlands and elsewhere.
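The stepwise adjustment of pixel probabilities can be read as repeated Bayes updates in odds form, one per evidence layer (spectral prior, then DEM, then forest mask). A toy sketch with invented likelihood ratios, not the model's actual values:

```python
def bayes_update(prior, likelihood_ratio):
    """One odds-form Bayes step: multiply the prior odds by the
    evidence's likelihood ratio and return the posterior probability."""
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

# Stepwise adjustment for one pixel (illustrative numbers only):
p = 0.5                    # spectral-only (naive) coffee probability
p = bayes_update(p, 3.0)   # elevation band favors coffee
p = bayes_update(p, 0.2)   # forest-mask evidence argues against coffee
```

Each ancillary layer sharpens or suppresses the naive spectral probability, which is how the overall accuracy rose from ~50% to 87% in the study.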
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were also tested for sensitivity to the size of the calibration data set.
As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
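The simplest of the four procedures, single-factor regression against the regional prediction, can be sketched as follows (calibration numbers are invented; the study's other MAPs add local variables or weighted combinations):

```python
import numpy as np

# Hypothetical local calibration set: regional-model predictions P and
# observed storm loads at monitored local sites.
P_regional = np.array([1.2, 2.0, 3.1, 4.5, 5.0])
observed   = np.array([1.0, 1.9, 2.6, 4.1, 4.4])

# Single-factor MAP: regress observed load on the regional prediction,
# then use the fitted line to adjust predictions at unmonitored sites.
slope, intercept = np.polyfit(P_regional, observed, 1)

def adjusted(p_new):
    """Locally adjusted prediction from a regional-model prediction."""
    return slope * p_new + intercept

p_hat = adjusted(3.0)
```

The split-sample test in the study amounts to fitting this line on a calibration subset and scoring `adjusted` against held-out verification storms.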
A computational linguistics motivated mapping of ICPC-2 PLUS to SNOMED CT.
Wang, Yefeng; Patrick, Jon; Miller, Graeme; O'Hallaran, Julie
2008-10-27
A great challenge in sharing data across information systems in general practice is the lack of interoperability between the different terminologies or coding schemas used in the information systems. Mapping of medical vocabularies to a standardised terminology is needed to solve data interoperability problems. We present a system to automatically map an interface terminology, ICPC-2 PLUS, to SNOMED CT. Three steps of mapping are proposed in this system. The UMLS metathesaurus mapping utilises explicit relationships between ICPC-2 PLUS and SNOMED CT terms in the UMLS library to perform the first stage of the mapping. Computational linguistic mapping uses natural language processing techniques and lexical similarities for the second stage of mapping between terminologies. Finally, the post-coordination mapping allows one ICPC-2 PLUS term to be mapped into an aggregation of two or more SNOMED CT terms. A total of 5,971 of the 7,410 ICPC-2 PLUS terms (80.58%) were mapped to SNOMED CT using the three stages, but with different levels of accuracy. UMLS mapping achieved the mapping of 53.0% of ICPC-2 PLUS terms to SNOMED CT with a precision rate of 96.46% and an overall recall rate of 44.89%. Lexical mapping increased the result to 60.31%, and post-coordination mapping gave an increase of 20.27% in mapped terms. A manual review of a part of the mapping shows that the precision of lexical mappings is around 90%. The accuracy of post-coordination has not been evaluated yet. Unmapped terms and mismatched terms are due to the differences in the structures of ICPC-2 PLUS and SNOMED CT. Terms contained in ICPC-2 PLUS but not in SNOMED CT caused a large proportion of the failures in the mappings. Mapping terminologies to a standard vocabulary is a way to facilitate consistent medical data exchange and achieve system interoperability and data standardisation.
Broad scale mapping cannot be achieved by any single method and methods based on computational linguistics can be very useful for the task. Automating as much as is possible of this process turns the searching and mapping task into a validation task, which can effectively reduce the cost and increase the efficiency and accuracy of this task over manual methods.
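The lexical-similarity stage can be sketched with a token-set Jaccard measure; the actual system uses richer computational-linguistics techniques, and the term lists below are invented, not real ICPC-2 PLUS or SNOMED CT content:

```python
# Illustrative candidate list standing in for SNOMED CT terms.
snomed = ["Acute upper respiratory infection", "Essential hypertension",
          "Type 2 diabetes mellitus", "Asthma"]

def tokens(s):
    """Lowercased word tokens, with simple punctuation stripped."""
    return set(s.lower().replace(";", " ").replace(",", " ").split())

def lexical_map(term, candidates, threshold=0.5):
    """Map a source term to the lexically closest candidate by token-set
    Jaccard similarity; return None when nothing clears the threshold."""
    best_score, best = max(
        (len(tokens(term) & tokens(c)) / len(tokens(term) | tokens(c)), c)
        for c in candidates)
    return best if best_score >= threshold else None

m1 = lexical_map("Hypertension; essential", snomed)
m2 = lexical_map("Fracture of femur", snomed)  # nothing close enough
```

Terms left unmapped by this stage are the candidates for the post-coordination step, where one source term is decomposed into several target terms.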
Anzalone, Nicoletta; Castellano, Antonella; Cadioli, Marcello; Conte, Gian Marco; Cuccarini, Valeria; Bizzi, Alberto; Grimaldi, Marco; Costa, Antonella; Grillea, Giovanni; Vitali, Paolo; Aquino, Domenico; Terreni, Maria Rosa; Torri, Valter; Erickson, Bradley J; Caulo, Massimo
2018-06-01
Purpose To evaluate the feasibility of a standardized protocol for acquisition and analysis of dynamic contrast material-enhanced (DCE) and dynamic susceptibility contrast (DSC) magnetic resonance (MR) imaging in a multicenter clinical setting and to verify its accuracy in predicting glioma grade according to the new World Health Organization 2016 classification. Materials and Methods The local research ethics committees of all centers approved the study, and informed consent was obtained from patients. One hundred patients with glioma were prospectively examined at 3.0 T in seven centers that performed the same preoperative MR imaging protocol, including DCE and DSC sequences. Two independent readers identified the perfusion hotspots on maps of volume transfer constant (Ktrans), plasma (vp) and extravascular-extracellular space (ve) volumes, initial area under the concentration curve, and relative cerebral blood volume (rCBV). Differences in parameters between grades and molecular subtypes were assessed by using Kruskal-Wallis and Mann-Whitney U tests. Diagnostic accuracy was evaluated by using receiver operating characteristic curve analysis. Results The whole protocol was tolerated in all patients. Perfusion maps were successfully obtained in 94 patients. An excellent interreader reproducibility of DSC- and DCE-derived measures was found. Among DCE-derived parameters, vp and ve had the highest accuracy (area under the receiver operating characteristic curve [Az] = 0.847 and 0.853) for glioma grading. DSC-derived rCBV had the highest accuracy (Az = 0.894), but the difference was not statistically significant (P > .05). Among lower-grade gliomas, a moderate increase in both vp and rCBV was evident in isocitrate dehydrogenase wild-type tumors, although this was not significant (P > .05). Conclusion A standardized multicenter acquisition and analysis protocol of DCE and DSC MR imaging is feasible and highly reproducible.
Both techniques showed a comparable, high diagnostic accuracy for grading gliomas. © RSNA, 2018 Online supplemental material is available for this article.
Field Guide to the Plant Community Types of Voyageurs National Park
Faber-Langendoen, Don; Aaseng, Norman; Hop, Kevin; Lew-Smith, Michael
2007-01-01
INTRODUCTION The objective of the U.S. Geological Survey-National Park Service Vegetation Mapping Program is to classify, describe, and map vegetation for most of the park units within the National Park Service (NPS). The program was created in response to the NPS Natural Resources Inventory and Monitoring Guidelines issued in 1992. Products for each park include digital files of the vegetation map and field data, keys and descriptions to the plant communities, reports, metadata, map accuracy verification summaries, and aerial photographs. Interagency teams work in each park and, following standardized mapping and field sampling protocols, develop products and vegetation classification standards that document the various vegetation types found in a given park. The use of a standard national vegetation classification system and mapping protocol facilitate effective resource stewardship by ensuring compatibility and widespread use of the information throughout the NPS as well as by other Federal and state agencies. These vegetation classifications and maps and associated information support a wide variety of resource assessment, park management, and planning needs, and provide a structure for framing and answering critical scientific questions about plant communities and their relation to environmental processes across the landscape. This field guide is intended to make the classification accessible to park visitors and researchers at Voyageurs National Park, allowing them to identify any stand of natural vegetation and showing how the classification can be used in conjunction with the vegetation map (Hop and others, 2001).
Ferrand, Guillaume; Luong, Michel; Cloos, Martijn A; Amadon, Alexis; Wackernagel, Hans
2014-08-01
Transmit arrays have been developed to mitigate the RF field inhomogeneity commonly observed in high field magnetic resonance imaging (MRI), typically above 3T. To this end, the knowledge of the RF complex-valued B1 transmit-sensitivities of each independent radiating element has become essential. This paper details a method to speed up a currently available B1-calibration method. The principle relies on slice undersampling, slice and channel interleaving and kriging, an interpolation method developed in geostatistics and applicable in many domains. It has been demonstrated that, under certain conditions, kriging gives the best estimator of a field in a region of interest. The resulting accelerated sequence allows mapping a complete set of eight volumetric field maps of the human head in about 1 min. For validation, the accuracy of kriging is first evaluated against a well-known interpolation technique based on Fourier transform as well as to a B1-maps interpolation method presented in the literature. This analysis is carried out on simulated and decimated experimental B1 maps. Finally, the accelerated sequence is compared to the standard sequence on a phantom and a volunteer. The new sequence provides B1 maps three times faster with a loss of accuracy limited potentially to about 5%.
Combining geostatistics with Moran's I analysis for mapping soil heavy metals in Beijing, China.
Huo, Xiao-Ni; Li, Hong; Sun, Dan-Feng; Zhou, Lian-Di; Li, Bao-Guo
2012-03-01
Production of high quality interpolation maps of heavy metals is important for risk assessment of environmental pollution. In this paper, the spatial correlation characteristics information obtained from Moran's I analysis was used to supplement the traditional geostatistics. According to Moran's I analysis, four characteristics distances were obtained and used as the active lag distance to calculate the semivariance. Validation of the optimality of semivariance demonstrated that using the two distances where the Moran's I and the standardized Moran's I, Z(I) reached a maximum as the active lag distance can improve the fitting accuracy of semivariance. Then, spatial interpolation was produced based on the two distances and their nested model. The comparative analysis of estimation accuracy and the measured and predicted pollution status showed that the method combining geostatistics with Moran's I analysis was better than traditional geostatistics. Thus, Moran's I analysis is a useful complement for geostatistics to improve the spatial interpolation accuracy of heavy metals.
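The Moran's I ingredient can be sketched for a binary distance-band weight matrix, the form used when scanning candidate lag distances for the semivariance (toy one-dimensional data; the study works on two-dimensional soil samples):

```python
import numpy as np

def morans_i(values, coords, max_dist):
    """Global Moran's I with binary distance-band weights:
    w_ij = 1 when sites i and j (i != j) lie within max_dist."""
    x = np.asarray(values, float)
    z = x - x.mean()
    n = len(x)
    c = np.asarray(coords, float)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
    w = ((d > 0) & (d <= max_dist)).astype(float)
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

# Clustered pattern along a transect: high next to high, low next to low
coords = [(i, 0) for i in range(6)]
vals = [10, 9, 8, 2, 1, 0]
I = morans_i(vals, coords, max_dist=1.0)
```

Evaluating I over a range of `max_dist` values and picking the distances where I (or its standardized form Z(I)) peaks is the selection step the authors feed into the semivariance as the active lag distance.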
Soil pH Mapping with an On-The-Go Sensor
Schirrmann, Michael; Gebbers, Robin; Kramer, Eckart; Seidel, Jan
2011-01-01
Soil pH is a key parameter for crop productivity; therefore, its spatial variation should be adequately addressed to improve precision management decisions. Recently, the Veris pH Manager™, a sensor for high-resolution mapping of soil pH at the field scale, has been made commercially available in the US. While driving over the field, soil pH is measured on-the-go directly within the soil by ion-selective antimony electrodes. The aim of this study was to evaluate the Veris pH Manager™ under farming conditions in Germany. Sensor readings were compared with data obtained by standard protocols of soil pH assessment. Experiments took place under different scenarios: (a) controlled tests in the lab, (b) semicontrolled tests on transects in a stop-and-go mode, and (c) tests under practical conditions in the field with the sensor working in its typical on-the-go mode. Accuracy issues, problems, options, and potential benefits of the Veris pH Manager™ were addressed. The tests demonstrated a high degree of linearity between standard laboratory values and sensor readings. Under practical conditions in the field (scenario c), the measure of fit (r²) for the regression between the on-the-go measurements and the reference data was 0.71, 0.63, and 0.84, respectively. Field-specific calibration was necessary to reduce systematic errors. Accuracy of the on-the-go maps was considerably higher compared with the pH maps obtained by following the standard protocols, and the error in calculating lime requirements was reduced by about one half. However, the system showed some weaknesses due to blockage by residual straw and weed roots. If these problems were solved, the on-the-go sensor investigated here could be an efficient alternative to standard sampling protocols as a basis for liming in Germany. PMID:22346591
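The field-specific calibration step amounts to a simple linear regression of lab pH on sensor pH, with the fitted line then applied across the on-the-go map. A sketch with invented paired readings (not the study's data):

```python
# Hypothetical paired readings: on-the-go sensor pH vs. standard lab pH
sensor = [5.2, 5.8, 6.1, 6.7, 7.0]
lab    = [5.5, 6.0, 6.4, 6.9, 7.3]

# Ordinary least squares by hand: slope = cov(sensor, lab) / var(sensor)
n = len(sensor)
mx = sum(sensor) / n
my = sum(lab) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(sensor, lab))
         / sum((x - mx) ** 2 for x in sensor))
intercept = my - slope * mx

def calibrate(ph_sensor):
    """Field-specific linear correction removing systematic sensor bias."""
    return slope * ph_sensor + intercept
```

A slope near 1 with a nonzero intercept, as here, is the signature of the constant offset the authors corrected per field before computing lime requirements.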
Evolutionary games with self-questioning adaptive mechanism and the Ising model
NASA Astrophysics Data System (ADS)
Liu, J.; Xu, C.; Hui, P. M.
2017-09-01
A class of evolutionary games using a self-questioning strategy switching mechanism played in a population of connected agents is shown to behave as an Ising model Hamiltonian of spins connected in the same way. The payoff parameters combine to give the coupling between spins and an external magnetic field. The mapping covers the prisoner's dilemma, snowdrift and stag hunt games in structured populations. A well-mixed system is used to illustrate the equivalence. In a chain of agents/spins, the mapping to Ising model leads to an exact solution to the games effortlessly. The accuracy of standard approximations on the games can then be quantified. The site approximation is found to show varied accuracies depending on the payoff parameters, and the link approximation is shown to give the exact result in a chain but not in a closed form. The mapping established here connects two research areas, with each having much to offer to the other.
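The change of variables behind such mappings is the exact bilinear decomposition of a 2×2 payoff in spin variables s, s' ∈ {+1, −1}; whether these coefficients enter the Hamiltonian with the paper's exact normalization is not claimed here:

```python
def bilinear_decomposition(R, S, T, P):
    """Exact bilinear form of a 2x2 game payoff:
    pi(s, s') = a0 + b*s + c*s' + J*s*s', spins +1 = C, -1 = D.
    In the Ising reading, b plays the role of an external field and
    J the spin-spin coupling."""
    a0 = (R + S + T + P) / 4.0
    b  = (R + S - T - P) / 4.0
    c  = (R - S + T - P) / 4.0
    J  = (R - S - T + P) / 4.0
    return a0, b, c, J

def payoff(a0, b, c, J, s, sp):
    """Reconstruct the payoff from the bilinear coefficients."""
    return a0 + b * s + c * sp + J * s * sp

# Prisoner's dilemma payoffs (T > R > P > S): R=3, S=0, T=5, P=1
a0, b, c, J = bilinear_decomposition(3, 0, 5, 1)
```

Because the decomposition is exact, varying (R, S, T, P) across the prisoner's dilemma, snowdrift, and stag hunt regions simply moves the system through different (J, b) regimes of the same Ising Hamiltonian.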
Applications of space technology to developing nations
NASA Technical Reports Server (NTRS)
Freden, S. C.
1976-01-01
The use of imagery from the Landsat spacecraft for the monitoring and management of natural resources in developing countries is discussed. The Landsat imagery can be used to make cartographic maps at scales of 1:250,000 which meet the US National Map Accuracy Standards, providing a means of map updating to correct for river meanders or changing shorelines. The Landsat data can also be used in defining and measuring agricultural areas, identifying pest breeding areas, and monitoring irrigation practices and crop performance. Total volume estimates can be obtained in many cases for surface bodies of water, and subsurface water supplies can be detected from changes in vegetation in some instances.
Watanabe, Shota; Sakaguchi, Kenta; Hosono, Makoto; Ishii, Kazunari; Murakami, Takamichi; Ichikawa, Katsuhiro
The purpose of this study was to evaluate the effect of a hybrid-type iterative reconstruction method on Z-score mapping of hyperacute stroke in unenhanced computed tomography (CT) images. We used a hybrid-type iterative reconstruction method [adaptive statistical iterative reconstruction (ASiR)] implemented in a CT system (Optima CT660 Pro advance, GE Healthcare). With 15 normal brain cases, we reconstructed CT images with filtered back projection (FBP) and with ASiR at a blending factor of 100% (ASiR100%). Two standardized normal-brain databases were created from the FBP images (FBP-NDB) and the ASiR100% images (ASiR-NDB), and standard deviation (SD) values in the basal ganglia were measured. Z-score mapping was performed for 12 hyperacute stroke cases by using FBP-NDB and ASiR-NDB, and Z-score values in hyperacute stroke areas and normal areas were compared between FBP-NDB and ASiR-NDB. With ASiR-NDB, the SD value of the standardized brain was decreased by 16%. The Z-score value of ASiR-NDB in hyperacute stroke areas was significantly higher than that of FBP-NDB (p<0.05). Therefore, using images reconstructed with ASiR100% for Z-score mapping has the potential to improve the accuracy of Z-score mapping.
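Z-score mapping itself is a voxelwise standardization against the normal database, which is why a lower SD map (as obtained with ASiR100%) yields a larger |Z| for the same attenuation drop. A toy sketch with simulated data (sizes and HU values are invented):

```python
import numpy as np

# Hypothetical normal database: 15 subjects x a tiny 4-voxel "image"
rng = np.random.default_rng(1)
normal_db = rng.normal(35.0, 2.0, size=(15, 4))  # CT numbers in HU

mean_map = normal_db.mean(axis=0)
sd_map = normal_db.std(axis=0, ddof=1)

def z_score_map(patient):
    """Voxelwise Z-score of a patient image against the normal database."""
    return (np.asarray(patient, float) - mean_map) / sd_map

patient = mean_map.copy()
patient[2] -= 6.0          # simulate a hyperacute ischemic HU decrease
z = z_score_map(patient)   # strongly negative only at the lesion voxel
```

Halving `sd_map` at a voxel doubles the Z-score there for the same HU change, which is the mechanism behind the reported sensitivity gain.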
Magnetic Resonance Imaging for Patellofemoral Chondromalacia: Is There a Role for T2 Mapping?
van Eck, Carola F; Kingston, R Scott; Crues, John V; Kharrazi, F Daniel
2017-11-01
Patellofemoral pain is common, and treatment is guided by the presence and grade of chondromalacia. To evaluate and compare the sensitivity and specificity in detecting and grading chondral abnormalities of the patella between proton density fat suppression (PDFS) and T2 mapping magnetic resonance imaging (MRI). Cohort study; Level of evidence, 2. A total of 25 patients who underwent MRI of the knee with both a PDFS sequence and T2 mapping and subsequently underwent arthroscopic knee surgery were included. The cartilage surface of the patella was graded on both MRI sequences by 2 independent, blinded radiologists. Cartilage was then graded during arthroscopic surgery by a sports medicine fellowship-trained orthopaedic surgeon. Reliability, sensitivity, specificity, and accuracy were determined for both MRI methods. The findings during arthroscopic surgery were considered the gold standard. Intraobserver and interobserver agreement for both PDFS (98.5% and 89.4%, respectively) and T2 mapping (99.4% and 91.3%, respectively) MRI were excellent. For T2 mapping, the sensitivity (61%) and specificity (64%) were comparable, whereas for PDFS there was a lower sensitivity (37%) but higher specificity (81%) in identifying cartilage abnormalities. This resulted in a similar accuracy for PDFS (59%) and T2 mapping (62%). Both PDFS and T2 mapping MRI were reliable but only moderately accurate in predicting patellar chondromalacia found during knee arthroscopic surgery.
The accuracy of thematic map products is not spatially homogenous, but instead variable across most landscapes. Properly analyzing and representing the spatial distribution (pattern) of thematic map accuracy would provide valuable user information for assessing appropriate applic...
NASA Technical Reports Server (NTRS)
Bryant, N. A.; Zobrist, A. L.; Walker, R. E.; Gokhman, B.
1985-01-01
Performance requirements regarding geometric accuracy have been defined in terms of end product goals, but until recently no precise details have been given concerning the conditions under which that accuracy is to be achieved. In order to achieve higher spatial and spectral resolutions, the Thematic Mapper (TM) sensor was designed to image in both forward and reverse mirror sweeps in two separate focal planes. Both hardware and software have been augmented and changed during the course of the Landsat TM developments to achieve improved geometric accuracy. An investigation has been conducted to determine if the TM meets the National Map Accuracy Standards for geometric accuracy at larger scales. It was found that TM imagery, in terms of geometry, has come close to, and in some cases exceeded, its stringent specifications.
NASA Technical Reports Server (NTRS)
Clegg, R. H.; Scherz, J. P.
1975-01-01
Successful aerial photography depends on aerial cameras providing acceptable photographs within cost restrictions of the job. For topographic mapping where ultimate accuracy is required only large format mapping cameras will suffice. For mapping environmental patterns of vegetation, soils, or water pollution, 9-inch cameras often exceed accuracy and cost requirements, and small formats may be better. In choosing the best camera for environmental mapping, relative capabilities and costs must be understood. This study compares resolution, photo interpretation potential, metric accuracy, and cost of 9-inch, 70mm, and 35mm cameras for obtaining simultaneous color and color infrared photography for environmental mapping purposes.
Error and Uncertainty in the Accuracy Assessment of Land Cover Maps
NASA Astrophysics Data System (ADS)
Sarmento, Pedro Alexandre Reis
Traditionally, the accuracy assessment of land cover maps is performed by comparing these maps with a reference database intended to represent the "real" land cover, and reporting the comparison as thematic accuracy measures derived from confusion matrixes. However, these reference databases are themselves representations of reality: they contain errors caused by human uncertainty in assigning the land cover class that best characterizes a given area, which biases the thematic accuracy measures reported to the end users of these maps. The main goal of this dissertation is to develop a methodology that allows the integration of the human uncertainty present in reference databases into the accuracy assessment of land cover maps, and to analyse the impacts this uncertainty may have on the thematic accuracy measures reported to end users. The utility of including human uncertainty in the accuracy assessment of land cover maps is investigated. Specifically, we studied the utility of fuzzy set theory, more precisely fuzzy arithmetic, for a better understanding of the human uncertainty associated with the elaboration of reference databases and of its impacts on the thematic accuracy measures derived from confusion matrixes. For this purpose, linguistic values transformed into fuzzy intervals that express the uncertainty in the elaboration of reference databases were used to compute fuzzy confusion matrixes. The proposed methodology is illustrated with a case study assessing the accuracy of a land cover map of Continental Portugal derived from Medium Resolution Imaging Spectrometer (MERIS) imagery. The results demonstrate that including human uncertainty in reference databases provides considerably more information about the quality of land cover maps than the traditional approach to accuracy assessment.
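One way the fuzzy-interval idea can be sketched: if each confusion-matrix cell carries a [lower, upper] count derived from the labellers' linguistic confidence, interval bounds on overall accuracy follow directly. This is a toy illustration under assumed counts, not the dissertation's actual formulation:

```python
import numpy as np

def interval_overall_accuracy(lo, hi):
    """Bounds on overall accuracy when each confusion-matrix cell is an
    interval [lo, hi] of counts (e.g., alpha-cuts of fuzzy numbers built
    from linguistic confidence values). The pessimistic bound pairs
    minimal diagonal with maximal off-diagonal counts; the optimistic
    bound does the reverse."""
    d_lo, d_hi = np.trace(lo), np.trace(hi)
    off_lo = lo.sum() - d_lo
    off_hi = hi.sum() - d_hi
    return d_lo / (d_lo + off_hi), d_hi / (d_hi + off_lo)

# Hypothetical 2-class interval-valued confusion matrix.
lo = np.array([[40., 3.], [5., 45.]])   # lower-bound counts
hi = np.array([[48., 7.], [9., 52.]])   # upper-bound counts
oa_low, oa_high = interval_overall_accuracy(lo, hi)
```

The width of the resulting interval is itself informative: it tells end users how much of the reported accuracy is attributable to reference-label uncertainty rather than map error.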
Accuracy Validation of Large-scale Block Adjustment without Control of ZY3 Images over China
NASA Astrophysics Data System (ADS)
Yang, Bo
2016-06-01
Mapping from optical satellite images without ground control is one of the goals of photogrammetry. Using 8802 three-linear-array stereo scenes (26406 images in total) of ZY3 over China, we propose a large-scale block adjustment method for optical satellite images without ground control, based on the RPC model, in which each single image is treated as an adjustment unit. To overcome the block distortion caused by an unstable adjustment without ground control and the excessive accumulation of errors, we use virtual control points created from the initial RPC models of the images as weighted observations and add them to the adjustment model to constrain the solution. We used 8000 uniformly distributed high-precision check points to evaluate the geometric accuracy of the DOM (Digital Ortho Model) and DSM (Digital Surface Model) products, for which the standard deviations in planimetry and elevation are 3.6 m and 4.2 m, respectively. The geometric accuracy is consistent across the whole block and the mosaic accuracy between neighboring DOMs is within a pixel, so seamless mosaicking is possible. This method achieves mapping accuracy better than 5 m over the whole of China from ZY3 satellite images without ground control.
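The stabilizing role of virtual control points as weighted observations can be illustrated with a toy least-squares adjustment (a schematic sketch, not the paper's RPC block adjustment; the design matrix, weight, and pseudo-observation values are assumptions):

```python
import numpy as np

def adjust_with_virtual_control(A, b, x0, w):
    """Weighted least squares with pseudo-observations: measurement
    equations A x = b are augmented by virtual control points x = x0
    (here standing in for values from the a priori RPC models) with
    weight w, which stabilizes an otherwise rank-deficient
    free-network adjustment."""
    n = A.shape[1]
    N = A.T @ A + w * np.eye(n)      # normal matrix incl. pseudo-obs
    rhs = A.T @ b + w * x0
    return np.linalg.solve(N, rhs)

# Toy datum defect: only the difference x1 - x2 is observable, so the
# plain normal matrix A.T @ A is singular.
A = np.array([[1.0, -1.0]])
b = np.array([1.0])
x = adjust_with_virtual_control(A, b, x0=np.zeros(2), w=1e-3)
# The weak pseudo-observations pin the solution near [0.5, -0.5]
# instead of letting the whole block drift.
```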
Accuracy of CNV Detection from GWAS Data.
Zhang, Dandan; Qian, Yudong; Akula, Nirmala; Alliey-Rodriguez, Ney; Tang, Jinsong; Gershon, Elliot S; Liu, Chunyu
2011-01-13
Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites (Birdsuite, Partek, HelixTree, and PennCNV-Affy) in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in the two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's calls were 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed unacceptably poor accuracy. We found relatively poor consistency between the two "gold standards," the sequence data of Kidd et al. and the aCGH data of Conrad et al. Algorithms for calling CNVs, especially common ones, need substantial improvement, and a "gold standard" for detection of CNVs remains to be established.
This paper presents a fuzzy set-based method of mapping spatial accuracy of thematic map and computing several ecological indicators while taking into account spatial variation of accuracy associated with different land cover types and other factors (e.g., slope, soil type, etc.)...
Habib, A.; Jarvis, A.; Al-Durgham, M. M.; Lay, J.; Quackenbush, P.; Stensaas, G.; Moe, D.
2007-01-01
The mapping community is witnessing significant advances in available sensors, such as medium format digital cameras (MFDC) and Light Detection and Ranging (LiDAR) systems. In this regard, the Digital Photogrammetry Research Group (DPRG) of the Department of Geomatics Engineering at the University of Calgary has been actively involved in the development of standards and specifications for regulating the use of these sensors in mapping activities. More specifically, the DPRG has been working on developing new techniques for the calibration and stability analysis of medium format digital cameras. This research is essential since these sensors have not been developed with mapping applications in mind. Therefore, prior to their use in Geomatics activities, new standards should be developed to ensure the quality of the developed products. On another front, the persistent improvement in direct geo-referencing technology has led to an expansion in the use of LiDAR systems for the acquisition of dense and accurate surface information. However, the processing of the raw LiDAR data (e.g., ranges, mirror angles, and navigation data) remains a non-transparent process that is proprietary to the manufacturers of LiDAR systems. Therefore, the DPRG has been focusing on the development of quality control procedures to quantify the accuracy of LiDAR output in the absence of initial system measurements. This paper presents a summary of the research conducted by the DPRG together with the British Columbia Base Mapping and Geomatic Services (BMGS) and the United States Geological Survey (USGS) for the development of quality assurance and quality control procedures for emerging mapping technologies. The outcome of this research will allow for the possibility of introducing North American Standards and Specifications to regulate the use of MFDC and LiDAR systems in the mapping industry.
Representation of Nursing Terminologies in UMLS
Kim, Tae Youn; Coenen, Amy; Hardiker, Nicholas; Bartz, Claudia C.
2011-01-01
There are seven nursing terminologies or classifications that are considered a standard to support nursing practice in the U.S. Harmonizing these terminologies will enhance the interoperability of clinical data documented across nursing practice. As a first step to harmonize the nursing terminologies, the purpose of this study was to examine how nursing problems or diagnostic concepts from select terminologies were cross-mapped in Unified Medical Language System (UMLS). A comparison analysis was conducted by examining whether cross-mappings available in UMLS through concept unique identifiers were consistent with cross-mappings conducted by human experts. Of 423 concepts from three terminologies, 411 (97%) were manually cross-mapped by experts to the International Classification for Nursing Practice. The UMLS semantic mapping among the 411 nursing concepts presented 33.6% accuracy (i.e., 138 of 411 concepts) when compared to expert cross-mappings. Further research and collaboration among experts in this field are needed for future enhancement of UMLS. PMID:22195127
Arya, Ravindra; Wilson, J Adam; Vannest, Jennifer; Byars, Anna W; Greiner, Hansel M; Buroker, Jason; Fujiwara, Hisako; Mangano, Francesco T; Holland, Katherine D; Horn, Paul S; Crone, Nathan E; Rose, Douglas F
2015-02-01
This study describes the development of a novel language mapping approach using high-γ modulation in the electrocorticograph (ECoG) during spontaneous conversation, and its comparison with electrical cortical stimulation (ECS) in childhood-onset drug-resistant epilepsy. Patients undergoing invasive pre-surgical monitoring and able to converse with the investigator were eligible. ECoG signals and synchronized audio were acquired during quiet baseline and during natural conversation between investigator and patient. Using the Signal Modeling for Real-time Identification and Event Detection (SIGFRIED) procedure, a statistical model for baseline high-γ (70-116 Hz) power was calculated, along with a single score for each channel representing the probability that the power features in the experimental signal window belonged to the baseline model. Electrodes with significant high-γ responses (HGS) were plotted on the 3D cortical model. Sensitivity, specificity, positive and negative predictive values (PPV, NPV), and classification accuracy were calculated relative to ECS. Seven patients were included (4 males; mean age 10.28 ± 4.07 years). Significant high-γ responses were observed in classic language areas in the left hemisphere as well as in some homologous right hemispheric areas. Compared with clinical standard ECS mapping, the sensitivity and specificity of HGS mapping were 88.89% and 63.64%, respectively, the PPV and NPV were 35.29% and 96.25%, and the overall accuracy was 68.24%. HGS mapping correctly determined all ECS+ sites in 6 of 7 patients, and all discordant sites (ECS+/HGS- for visual naming, n = 3) were attributable to a single patient. This study supports the feasibility of language mapping with ECoG HGS during spontaneous conversation, and its accuracy compared to traditional ECS.
Given long-standing concerns about ecological validity of ECS mapping of cued language tasks, and difficulties encountered with its use in children, ECoG mapping of spontaneous language may provide a valid alternative for clinical use. Copyright © 2014 Elsevier B.V. All rights reserved.
Benson, John; Payabvash, Seyedmehdi; Salazar, Pascal; Jagadeesan, Bharathi; Palmer, Christopher S; Truwit, Charles L; McKinney, Alexander M
2015-04-01
To assess the accuracy and reliability of one vendor's (Vital Images, Toshiba Medical, Minnetonka, MN) automated CT perfusion (CTP) summary maps in the identification and volume estimation of infarcted tissue in patients with acute middle cerebral artery (MCA) distribution infarcts. From 1085 CTP examinations over 5.5 years, 43 diffusion-weighted imaging (DWI)-positive patients were included who underwent both CTP and DWI <12 h after symptom onset, with another 43 age-matched patients as controls (DWI-negative). Automated delay-corrected postprocessing software (DC-SVD) generated both infarct "core only" and "core+penumbra" CTP summary maps. Three reviewers independently tabulated Alberta Stroke Program Early CT scores (ASPECTS) of both CTP summary maps and coregistered DWI. Of the 86 included patients, 36 had DWI infarct volumes ≤70 ml, 7 had volumes >70 ml, and 43 were negative; the automated CTP "core only" map correctly classified each as >70 ml or ≤70 ml, while the "core+penumbra" map misclassified 4 as >70 ml. There were strong correlations between DWI volume and both summary map-based volumes: "core only" (r=0.93) and "core+penumbra" (r=0.77) (both p<0.0001). Agreement between ASPECTS scores of infarct core on DWI and on the summary maps was 0.65-0.74 for the "core only" map and 0.61-0.65 for "core+penumbra" (both p<0.0001). Using DWI-based ASPECTS scores as the standard, the accuracy of the CTP-based maps was 79.1-86.0% for the "core only" map and 83.7-88.4% for "core+penumbra." Automated CTP summary maps appear to be relatively accurate in both the detection of acute MCA distribution infarcts and the discrimination of volumes using a 70 ml threshold. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping
NASA Astrophysics Data System (ADS)
Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta
2012-10-01
A mobile mapping system (MMS) is the answer of the geoinformation community to the exponentially growing demand for various geospatial data with increasingly higher accuracies, captured by multiple sensors. As mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure that is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras that is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual information based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests are done under various lighting conditions, demonstrating the methodology's robustness by showing high absolute stereo measurement accuracies of a few centimeters.
A real-time standard parts inspection based on deep learning
NASA Astrophysics Data System (ADS)
Xu, Kuan; Li, XuDong; Jiang, Hongzhi; Zhao, Huijie
2017-10-01
Standard parts are necessary components in mechanical structures such as bogies and connectors; these structures may shatter or loosen if standard parts are lost, so real-time standard parts inspection systems are essential to guarantee their safety. Researchers favor inspection systems based on deep learning because it works well in images with complex backgrounds, which are common in standard parts inspection. A typical detection system contains two basic components: a feature extractor and an object classifier. The Region Proposal Network (RPN) is one of the most essential architectures in most state-of-the-art object detection systems. However, in the basic RPN architecture, the Region of Interest (ROI) proposals have fixed sizes (9 anchors for each pixel); they are effective but waste considerable computing resources and time. In standard parts detection, the parts have known sizes, so anchor sizes can be chosen from the ground-truth boxes instead. The experiments show that 2 anchors achieve almost the same accuracy and recall rate. Our standard parts detection system reaches 15 fps on an NVIDIA GTX1080 GPU while achieving a detection accuracy of 90.01% mAP.
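The idea of deriving a small set of anchor sizes from ground-truth boxes can be sketched with a simple k-means over box widths and heights (an illustrative stand-in; the paper's actual selection procedure is not specified, and the box sizes below are hypothetical):

```python
def kmeans_anchor_sizes(boxes, k=2, iters=50):
    """Pick k anchor (w, h) sizes by k-means over ground-truth box sizes,
    replacing the 9 generic RPN anchors per location with part-specific
    ones. Deterministic init (first k boxes) for reproducibility."""
    centers = boxes[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            i = min(range(k),
                    key=lambda j: (w - centers[j][0])**2 + (h - centers[j][1])**2)
            clusters[i].append((w, h))
        centers = [(sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
                   if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hypothetical ground-truth boxes (pixels) for two standard-part classes.
boxes = [(18, 20), (22, 19), (20, 21), (41, 40), (39, 42), (40, 41)]
centers = kmeans_anchor_sizes(boxes)   # two cluster means -> two anchors
```

In production pipelines, intersection-over-union is often preferred to Euclidean distance as the clustering metric, but the principle is the same: anchors tailored to the known part sizes.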
Fitzpatrick, Katherine A.
1975-01-01
Accuracy analyses for the land use maps of the Central Atlantic Regional Ecological Test Site were performed for a 1-percent sample of the area. Researchers compared Level II land use maps produced at three scales, 1:24,000, 1:100,000, and 1:250,000, from high-altitude photography, with each other and with point data obtained in the field. They employed the same procedures to determine the accuracy of the Level I land use maps produced at 1:250,000 from high-altitude photography and color composite ERTS imagery. The accuracy of the Level II maps was 84.9 percent at 1:24,000, 77.4 percent at 1:100,000, and 73.0 percent at 1:250,000. The accuracy of the Level I 1:250,000 maps produced from high-altitude aircraft photography was 76.5 percent, and for those produced from ERTS imagery it was 69.5 percent. The cost of Level II land use mapping at 1:24,000 was found to be high ($11.93 per km²). Mapping at 1:100,000 ($1.75) was about 2 times as expensive as mapping at 1:250,000 ($0.88), and the accuracy increased by only 4.4 percent. Level I land use maps, when mapped from high-altitude photography, were about 4 times as expensive as the maps produced from ERTS imagery, although the accuracy is 7.0 percent greater. The Level I land use category that is least accurately mapped from ERTS imagery is urban and built-up land in the non-urban areas; in the urbanized areas, built-up land is more reliably mapped.
Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham
2015-01-01
Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and, in turn, allowing larger mapping datasets to be analyzed using global rather than local optimization criteria. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and verifying it using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high-quality, ultra-dense consensus maps with many thousands of markers per genome. This algorithm utilizes "potentially good orders" in the initial solution and in the new mutation procedures that generate trial solutions, making it possible to obtain a consensus order in reasonable time. The developed algorithm, tested on a wide range of simulated data and real-world data (Arabidopsis), outperformed two tested state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943
Accuracy Assessment of Professional Grade Unmanned Systems for High Precision Airborne Mapping
NASA Astrophysics Data System (ADS)
Mostafa, M. M. R.
2017-08-01
Recently, sophisticated multi-sensor systems have been implemented on board modern Unmanned Aerial Systems. This allows for producing a variety of mapping products for different mapping applications. The resulting accuracies match those of traditional, well-engineered manned systems. This paper presents the results of a geometric accuracy assessment project for unmanned systems equipped with multi-sensor systems for direct georeferencing purposes. There are a number of parameters that either individually or collectively affect the quality and accuracy of a final airborne mapping product. This paper focuses on identifying and explaining these parameters and their mutual interaction and correlation. Accuracy assessment of the final ground object positioning is presented through eight real-world flight missions flown in Quebec, Canada. The achievable precision of map production is addressed in some detail.
Rose, Kathryn V.; Nayegandhi, Amar; Moses, Christopher S.; Beavers, Rebecca; Lavoie, Dawn; Brock, John C.
2012-01-01
The National Park Service (NPS) Inventory and Monitoring (I&M) Program initiated a benthic habitat mapping program in ocean and coastal parks in 2008-2009 in alignment with the NPS Ocean Park Stewardship 2007-2008 Action Plan. With more than 80 ocean and Great Lakes parks encompassing approximately 2.5 million acres of submerged territory and approximately 12,000 miles of coastline (Curdts, 2011), this Servicewide Benthic Mapping Program (SBMP) is essential. This report presents an initial gap analysis of three pilot parks under the SBMP: Assateague Island National Seashore (ASIS), Channel Islands National Park (CHIS), and Sleeping Bear Dunes National Lakeshore (SLBE) (fig. 1). The recommended SBMP protocols include servicewide standards (for example, gap analysis, minimum accuracy, final products) as well as standards that can be adapted to fit network and park unit needs (for example, minimum mapping unit, mapping priorities). The SBMP requires the inventory and mapping of critical components of coastal and marine ecosystems: bathymetry, geoforms, surface geology, and biotic cover. In order for a park unit benthic inventory to be considered complete, maps of bathymetry and other key components must be combined into a final report (Moses and others, 2010). By this standard, none of the three pilot parks are mapped (inventoried) to completion with respect to submerged resources. After compiling the existing benthic datasets for these parks, this report has concluded that CHIS, with 49 percent of its submerged area mapped, has the most complete benthic inventory of the three. The ASIS submerged inventory is 41 percent complete, and SLBE is 17.5 percent complete.
Kabiri, Keivan; Rezai, Hamid; Moradi, Masoud
2018-04-01
High spatial resolution WorldView-2 (WV2) satellite imagery coupled with field observations has been utilized for mapping the coral reefs around Hendorabi Island in the northern Persian Gulf. In doing so, three standard multispectral bands (red, green, and blue) were selected to produce a classified map of benthic habitats. The in-situ observations included photo-transects taken by snorkeling at the water surface and the manta tow technique. The satellite image was classified using a support vector machine (SVM) classifier, with the information obtained from field measurements used as both training and control point data. The results obtained from manta tow demonstrated that the mean total live hard coral coverage was 29.04% ± 2.44% around the island. Massive corals poritiids (20.70%) and branching corals acroporiids (20.33%) showed higher live coral coverage compared to other corals. Moreover, the map produced from the satellite image illustrated the distribution of habitats with 78.1% overall accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
Airborne Laser/GPS Mapping of Assateague National Seashore Beach
NASA Technical Reports Server (NTRS)
Kradill, W. B.; Wright, C. W.; Brock, John C.; Swift, R. N.; Frederick, E. B.; Manizade, S. S.; Yungel, J. K.; Martin, C. F.; Sonntag, J. G.; Duffy, Mark;
1997-01-01
Results are presented from topographic surveys of the Assateague Island National Seashore using recently developed Airborne Topographic Mapper (ATM) and kinematic Global Positioning System (GPS) technology. In November 1995, and again in May 1996, the NASA Arctic Ice Mapping (AIM) group from the Goddard Space Flight Center's Wallops Flight Facility conducted the topographic surveys as part of technology enhancement activities prior to conducting missions to measure the elevation of extensive sections of the Greenland Ice Sheet as part of NASA's Global Climate Change program. Differences between overlapping portions of both surveys are compared for quality control. An independent assessment of the accuracy of the ATM survey is provided by comparison to surface surveys conducted using standard techniques. The goal of these projects is to make these measurements to an accuracy of +/- 10 cm. Differences between the fall 1995 and spring 1996 surveys provide an assessment of net changes in the beach morphology over an annual cycle.
Land cover mapping in Latvia using hyperspectral airborne and simulated Sentinel-2 data
NASA Astrophysics Data System (ADS)
Jakovels, Dainis; Filipovs, Jevgenijs; Brauns, Agris; Taskovs, Juris; Erins, Gatis
2016-08-01
Land cover mapping in Latvia is performed as part of the Corine Land Cover (CLC) initiative every six years. The advantage of CLC is the creation of a standardized nomenclature and mapping protocol comparable across all European countries, thereby making it a valuable information source at the European level. However, low spatial resolution and accuracy, infrequent updates and expensive manual production have limited its use at the national level. As of now, there are no remote sensing based high-resolution land cover and land use services designed specifically for Latvia which would account for the country's natural and land use specifics and end-user interests. The European Space Agency launched the Sentinel-2 satellite in 2015, aiming to provide continuity of free high-resolution multispectral satellite data, thereby presenting an opportunity to develop an adapted land cover and land use algorithm that accounts for national end-user needs. In this study, a land cover mapping scheme according to national end-user needs was developed and tested in two pilot territories (Cesis and Burtnieki). Hyperspectral airborne data covering the spectral range 400-2500 nm were acquired in summer 2015 using the Airborne Surveillance and Environmental Monitoring System (ARSENAL). The gathered data were tested for land cover classification of seven general classes (urban/artificial, bare, forest, shrubland, agricultural/grassland, wetlands, water) and sub-classes specific to Latvia, as well as for simulation of Sentinel-2 satellite data. The hyperspectral data sets consist of 122 spectral bands in the visible to near-infrared spectral range (356-950 nm) and 100 bands in the short wave infrared (950-2500 nm). Classification of land cover was tested separately for each sensor's data and for fused cross-sensor data.
The best overall classification accuracy of 84.2%, with satisfactory accuracy (more than 80%) for 9 of 13 classes, was obtained using a Support Vector Machine (SVM) classifier with 109-band hyperspectral data. Grassland and agricultural land showed the lowest classification accuracy in the pixel-based approach, but results improved significantly when agricultural polygons registered in the Rural Support Service data were treated as objects. The test of simulated Sentinel-2 bands for land cover mapping using the SVM classifier showed 82.8% overall accuracy and satisfactory separation of 7 classes. SVM provided the highest overall accuracy (84.2%), compared with 75.9% for the k-Nearest Neighbor and 79.2% for the Linear Discriminant Analysis classifiers.
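The overall and per-class accuracies used to compare such classifiers come from a standard confusion matrix; a minimal sketch with hypothetical labels (not the study's data):

```python
import numpy as np

def overall_and_producers_accuracy(ref, pred, n_classes):
    """Overall accuracy and per-class producer's accuracy from a
    confusion matrix of reference vs. predicted labels."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for r, p in zip(ref, pred):
        cm[r, p] += 1
    oa = np.trace(cm) / cm.sum()            # fraction correctly classified
    producers = np.diag(cm) / cm.sum(axis=1)  # per reference class
    return oa, producers

# Hypothetical labels: 0=forest, 1=water, 2=grassland.
ref  = [0, 0, 0, 1, 1, 2, 2, 2]
pred = [0, 0, 1, 1, 1, 2, 2, 0]
oa, producers = overall_and_producers_accuracy(ref, pred, 3)
```

Comparing classifiers on the same reference set then reduces to comparing these overall-accuracy values, as done here for SVM, k-NN and LDA.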
Raymond L. Czaplewski
2003-01-01
No thematic map is perfect. Some pixels or polygons are not accurately classified, no matter how well the map is crafted. Therefore, thematic maps need metadata that sufficiently characterize the nature and degree of these imperfections. To decision-makers, an accuracy assessment helps judge the risks of using imperfect geospatial data. To analysts, an accuracy...
An accuracy assessment of forest disturbance mapping in the western Great Lakes
P.L. Zimmerman; I.W. Housman; C.H. Perry; R.A. Chastain; J.B. Webb; M.V. Finco
2013-01-01
The increasing availability of satellite imagery has spurred the production of thematic land cover maps based on satellite data. These maps are more valuable to the scientific community and land managers when the accuracy of their classifications has been assessed. Here, we assessed the accuracy of a map of forest disturbance in the watersheds of Lake Superior and Lake...
Common Calibration Source for Monitoring Long-term Ozone Trends
NASA Technical Reports Server (NTRS)
Kowalewski, Matthew
2004-01-01
Accurate long-term satellite measurements are crucial for monitoring the recovery of the ozone layer. The slow pace of the recovery and the limited lifetimes of satellite monitoring instruments demand that datasets from multiple observation systems be combined to provide the long-term accuracy needed. A fundamental component of accurately monitoring long-term trends is the calibration of these various instruments. NASA's Radiometric Calibration and Development Facility at the Goddard Space Flight Center has provided resources to minimize calibration biases between multiple instruments through the use of a common calibration source and standardized procedures traceable to national standards. The Facility's 50 cm barium sulfate integrating sphere has been used as a common calibration source for both US and international satellite instruments, including the Total Ozone Mapping Spectrometer (TOMS), Solar Backscatter Ultraviolet 2 (SBUV/2) instruments, Shuttle SBUV (SSBUV), Ozone Mapping Instrument (OMI), Global Ozone Monitoring Experiment (GOME) (ESA), Scanning Imaging SpectroMeter for Atmospheric ChartographY (SCIAMACHY) (ESA), and others. We will discuss the advantages of using a common calibration source and its effects on long-term ozone data sets. In addition, sphere calibration results from various instruments will be presented to demonstrate the accuracy of the long-term characterization of the source itself.
High accuracy mapping with cartographic assessment for a fixed-wing remotely piloted aircraft system
NASA Astrophysics Data System (ADS)
Alves Júnior, Leomar Rufino; Ferreira, Manuel Eduardo; Côrtes, João Batista Ramos; de Castro Jorge, Lúcio André
2018-01-01
The lack of updated maps on large scale representations has encouraged the use of remotely piloted aircraft systems (RPAS) to generate maps for a wide range of professionals. However, some questions arise: do the orthomosaics generated by these systems have the cartographic precision required to use them? Which problems can be identified in stitching orthophotos to generate orthomosaics? To answer these questions, an aerophotogrammetric survey was conducted in an environmental conservation unit in the city of Goiânia. The flight plan was set up using the E-motion software, provided by Sensefly, the Swiss manufacturer of the RPAS Swinglet CAM used in this work. The camera installed in the RPAS was the Canon IXUS 220 HS, a 12.1-megapixel camera with a 1/2.3-in. complementary metal oxide semiconductor sensor (4000 × 3000 pixels) and horizontal and vertical pixel sizes of 1.54 μm. Using the orthophotos, four orthomosaics were generated in the Pix4D mapper software. The first orthomosaic was generated without using the control points. The other three mosaics were generated using 4, 8, and 16 premarked ground control points. To check the precision and accuracy of the orthomosaics, 46 premarked targets were uniformly distributed in the block. The three-dimensional (3-D) coordinates of the premarked targets were read on the orthomosaic and compared with the coordinates obtained by the geodetic survey real-time kinematic positioning method using the global navigation satellite system receiver signals. The cartographic accuracy standard was evaluated by discrepancies between these coordinates. The bias was analyzed by the Student's t test and the accuracy by the chi-square probability, considering the orthomosaic on a scale of 1 ∶ 250, in which 90% of the points tested must have a planimetric error of <0.13 m with a standard deviation of 0.08 m and altimetric errors of <0.30 m with a standard deviation of 0.20 m.
It was observed that some buildings in the orthomosaics were not properly orthorectified. The orthomosaics generated with 8 or more points reached the scale of 1 ∶ 250, whereas without control points the scale was 10-fold smaller (1 ∶ 3000).
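The acceptance test described above (90% of the check points within a planimetric tolerance, plus a chi-square check on the error dispersion) can be sketched as follows. The tolerance values match the 1∶250 figures quoted in the abstract, while the sample discrepancies are hypothetical:

```python
def passes_standard(errors, tol, frac=0.90):
    """True if at least `frac` of check-point errors fall below `tol`."""
    within = sum(1 for e in errors if abs(e) < tol)
    return within >= frac * len(errors)

def chi2_stat(errors, sigma0):
    """Chi-square statistic testing whether the sample standard
    deviation of the errors is consistent with the standard's sigma0."""
    n = len(errors)
    mean = sum(errors) / n
    s2 = sum((e - mean) ** 2 for e in errors) / (n - 1)
    return (n - 1) * s2 / sigma0 ** 2

# Hypothetical planimetric discrepancies (m) at 10 check points
errs = [0.05, -0.03, 0.08, 0.02, -0.06, 0.04, 0.01, -0.02, 0.07, 0.03]
ok = passes_standard(errs, tol=0.13)   # 90% rule for the 1:250 scale
chi2 = chi2_stat(errs, sigma0=0.08)    # compare against a chi-square table
```

The chi-square statistic would then be compared with the critical value for n − 1 degrees of freedom at the chosen confidence level.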
Assessing map accuracy in a remotely sensed, ecoregion-scale cover map
Edwards, T.C.; Moisen, Gretchen G.; Cutler, D.R.
1998-01-01
Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed 1000s of km2 in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE=1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man-modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach. In regard to precision, our mixed design was more precise than a simple random design, given fixed sample costs. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.
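For illustration, the basic error-matrix computations behind figures like the 83.2% overall accuracy can be sketched as below. Note that the mixed (clustered, stratified) design described in the abstract requires design-based variance estimators; the simple-random-sampling standard error shown here is only the textbook baseline, and the matrix values are hypothetical:

```python
def overall_accuracy(matrix):
    """Overall accuracy from a square error (confusion) matrix,
    rows = mapped class, columns = reference class."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

def se_simple_random(p, n):
    """Standard error of a proportion under simple random sampling;
    a mixed cluster design needs a design-based variance instead."""
    return (p * (1 - p) / n) ** 0.5

# Hypothetical 3-class error matrix
m = [[80, 5, 3],
     [4, 60, 6],
     [2, 5, 35]]
p = overall_accuracy(m)
se = se_simple_random(p, n=200)
```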
Evaluation of Masimo signal extraction technology pulse oximetry in anaesthetized pregnant sheep.
Quinn, Christopher T; Raisis, Anthea L; Musk, Gabrielle C
2013-03-01
Evaluation of the accuracy of Masimo signal extraction technology (SET) pulse oximetry in anaesthetized late gestational pregnant sheep. Prospective experimental study. Seventeen pregnant Merino ewes. Animals included in the study were late gestation ewes undergoing general anaesthesia for Caesarean delivery or foetal surgery in a medical research laboratory. Masimo Radical-7 pulse oximetry (SpO2) measurements were compared to co-oximetry (SaO2) measurements from arterial blood gas analyses. The failure rate of the pulse oximeter was calculated. Accuracy was assessed by Bland & Altman's (2007) limits of agreement method. The effect of mean arterial blood pressure (MAP), perfusion index (PI) and haemoglobin (Hb) concentration on accuracy was assessed by regression analysis. Forty arterial blood samples paired with SpO2 and blood pressure measurements were obtained. SpO2 ranged from 42 to 99% and SaO2 from 43.7 to 99.9%. MAP ranged from 24 to 82 mmHg, PI from 0.1 to 1.56 and Hb concentration from 71 to 114 g L-1. Masimo pulse oximetry measurements tended to underestimate oxyhaemoglobin saturation compared to co-oximetry with a bias (mean difference) of -2% and precision (standard deviation of the differences) of 6%. Accuracy appeared to decrease when SpO2 was <75%; however, numbers were too small for statistical comparisons. Hb concentration and PI had no significant effect on accuracy, whereas MAP was negatively correlated with SpO2 bias. Masimo SET pulse oximetry can provide reliable and continuous monitoring of arterial oxyhaemoglobin saturation in anaesthetized pregnant sheep during clinically relevant levels of cardiopulmonary dysfunction. Further work is needed to assess pulse oximeter function during extreme hypotension and hypoxaemia. © 2012 The Authors. Veterinary Anaesthesia and Analgesia. © 2012 Association of Veterinary Anaesthetists and the American College of Veterinary Anesthesiologists.
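Bland and Altman's limits-of-agreement method used in this study reduces to a bias (mean difference) and precision (standard deviation of differences); a minimal sketch with hypothetical paired readings:

```python
import statistics

def bland_altman(measured, reference):
    """Bias (mean difference), precision (SD of differences), and
    95% limits of agreement, per Bland & Altman's method."""
    diffs = [m - r for m, r in zip(measured, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired SpO2 (pulse oximeter) and SaO2 (co-oximeter) values
spo2 = [95, 88, 76, 92, 81]
sao2 = [97, 90, 80, 93, 84]
bias, sd, loa = bland_altman(spo2, sao2)
```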
NASA Astrophysics Data System (ADS)
Song, X. P.; Potapov, P.; Adusei, B.; King, L.; Khan, A.; Krylov, A.; Di Bella, C. M.; Pickens, A. H.; Stehman, S. V.; Hansen, M.
2016-12-01
Reliable and timely information on agricultural production is essential for ensuring world food security. Freely available medium-resolution satellite data (e.g. Landsat, Sentinel) offer the possibility of improved global agriculture monitoring. Here we develop and test a method for estimating in-season crop acreage using a probability sample of field visits and producing wall-to-wall crop type maps at national scales. The method is first illustrated for soybean cultivated area in the US for 2015. A stratified, two-stage cluster sampling design was used to collect field data to estimate national soybean area. The field-based estimate employed historical soybean extent maps from the U.S. Department of Agriculture (USDA) Cropland Data Layer to delineate and stratify U.S. soybean growing regions. The estimated 2015 U.S. soybean cultivated area based on the field sample was 341,000 km2 with a standard error of 23,000 km2. This result is 1.0% lower than USDA's 2015 June survey estimate and 1.9% higher than USDA's 2016 January estimate. Our area estimate was derived in early September, about 2 months ahead of harvest. To map soybean cover, the Landsat image archive for the year 2015 growing season was processed using an active learning approach. Overall accuracy of the soybean map was 84%. The field-based sample estimated area was then used to calibrate the map such that the soybean acreage of the map derived through pixel counting matched the sample-based area estimate. The strength of the sample-based area estimation lies in the stratified design that takes advantage of the spatially explicit cropland layers to construct the strata. The success of the mapping was built upon an automated system which transforms Landsat images into standardized time-series metrics. The developed method produces reliable and timely information on soybean area in a cost-effective way and could be implemented in an operational mode. 
The approach has also been applied for other crops in other regions, such as winter wheat in Pakistan, soybean in Argentina and soybean in the entire South America. Similar levels of accuracy and timeliness were achieved as in the US.
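The calibration step described above, scaling the pixel-counted map area so it matches the sample-based estimate, is a simple ratio adjustment; a sketch using hypothetical per-stratum areas and the 341,000 km2 sample estimate from the abstract:

```python
def calibrate_map_area(per_stratum_pixel_area, target_total):
    """Scale pixel-counted areas so the mapped total matches a
    sample-based (design-unbiased) area estimate."""
    total = sum(per_stratum_pixel_area.values())
    ratio = target_total / total
    return {k: v * ratio for k, v in per_stratum_pixel_area.items()}

# Hypothetical pixel-counted soybean area (km2) per stratum
mapped = {"core": 300_000, "marginal": 60_000}
calibrated = calibrate_map_area(mapped, target_total=341_000)
```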
Faber-Langendoen, D.; Aaseng, N.; Hop, K.; Lew-Smith, M.; Drake, J.
2007-01-01
Question: How can the U.S. National Vegetation Classification (USNVC) serve as an effective tool for classifying and mapping vegetation, and inform assessments and monitoring? Location: Voyageurs National Park, northern Minnesota, U.S.A., and environs. The park contains 54 243 ha of terrestrial habitat in the sub-boreal region of North America. Methods: We classified and mapped the natural vegetation using the USNVC, with 'alliance' and 'association' as base units. We compiled 259 classification plots and 1251 accuracy assessment test plots. Both plot and type ordinations were used to analyse vegetation and environmental patterns. Color infrared aerial photography (1:15840 scale) was used for mapping. Polygons were manually drawn, then transferred into digital form. Classification and mapping products are stored in publicly available databases. Past fire and logging events were used to assess distribution of forest types. Results and Discussion: Ordination and cluster analyses confirmed 49 associations and 42 alliances, with three associations ranked as globally vulnerable to extirpation. Ordination provided a useful summary of vegetation and ecological gradients. Overall map accuracy was 82.4%. Pinus banksiana - Picea mariana forests were less frequent in areas unburned since the 1930s. Conclusion: The USNVC provides a consistent ecological tool for summarizing and mapping vegetation. The products provide a baseline for assessing forests and wetlands, including fire management. The standardized classification and map units provide local to continental perspectives on park resources through linkages to state, provincial, and national classifications in the U.S. and Canada, and to NatureServe's Ecological Systems classification. © IAVS; Opulus Press.
Certified ion implantation fluence by high accuracy RBS.
Colaux, Julien L; Jeynes, Chris; Heasman, Keith C; Gwilliam, Russell M
2015-05-07
From measurements over the last two years we have demonstrated that the charge collection system based on Faraday cups can robustly give near-1% absolute implantation fluence accuracy for our electrostatically scanned 200 kV Danfysik ion implanter, using four-point-probe mapping with a demonstrated accuracy of 2%, and accurate Rutherford backscattering spectrometry (RBS) of test implants from our quality assurance programme. The RBS is traceable to the certified reference material IRMM-ERM-EG001/BAM-L001, and involves convenient calibrations both of the electronic gain of the spectrometry system (at about 0.1% accuracy) and of the RBS beam energy (at 0.06% accuracy). We demonstrate that accurate RBS is a definitive method to determine quantity of material. It is therefore useful for certifying high quality reference standards, and is also extensible to other kinds of samples such as thin self-supporting films of pure elements. The more powerful technique of Total-IBA may inherit the accuracy of RBS.
Harold S.J. Zald; Janet L. Ohmann; Heather M. Roberts; Matthew J. Gregory; Emilie B. Henderson; Robert J. McGaughey; Justin Braaten
2014-01-01
This study investigated how lidar-derived vegetation indices, disturbance history from Landsat time series (LTS) imagery, plot location accuracy, and plot size influenced accuracy of statistical spatial models (nearest-neighbor imputation maps) of forest vegetation composition and structure. Nearest-neighbor (NN) imputation maps were developed for 539,000 ha in the...
NASA Astrophysics Data System (ADS)
de Oliveira, Cleber Gonzales; Paradella, Waldir Renato; da Silva, Arnaldo de Queiroz
The Brazilian Amazon is a vast territory with an enormous need for mapping and monitoring of renewable and non-renewable resources. Due to the adverse environmental conditions (rain, cloud, dense vegetation) and difficult access, topographic information is still poor, and when available needs to be updated or re-mapped. In this paper, the feasibility of using Digital Surface Models (DSMs) extracted from TerraSAR-X Stripmap stereo-pair images for detailed topographic mapping was investigated for a mountainous area in the Carajás Mineral Province, located on the easternmost border of the Brazilian Amazon. The quality of the radargrammetric DSMs was evaluated against field altimetric measurements. Precise topographic field information acquired from a Global Positioning System (GPS) was used as Ground Control Points (GCPs) for the modeling of the stereoscopic DSMs and as Independent Check Points (ICPs) for the calculation of elevation accuracies. The analysis was performed in two ways: (1) the use of Root Mean Square Error (RMSE) and (2) calculations of systematic error (bias) and precision. The test for significant systematic error was based on the Student's t-distribution and the test of precision was based on the Chi-squared distribution. The investigation has shown that the accuracy of the TerraSAR-X Stripmap DSMs met the requirements for a 1:50,000 map (Class A) as requested by the Brazilian Standard for Cartographic Accuracy. Thus, the use of TerraSAR-X Stripmap images can be considered a promising alternative for detailed topographic mapping in similar environments of the Amazon region, where available topographic information is rare or presents low quality.
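The two evaluation paths described, RMSE on one hand and a bias test on the other, can be sketched as follows with hypothetical elevation errors at independent check points (the precision test would analogously compare the sample variance against a chi-squared critical value):

```python
import math

def rmse(errors):
    """Root mean square error of elevation discrepancies at ICPs."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def t_stat_bias(errors):
    """Student's t statistic for H0: mean error (bias) = 0."""
    n = len(errors)
    mean = sum(errors) / n
    s = math.sqrt(sum((e - mean) ** 2 for e in errors) / (n - 1))
    return mean / (s / math.sqrt(n))

# Hypothetical elevation errors (m) at 8 independent check points
dz = [1.2, -0.8, 0.5, -1.1, 0.9, 0.3, -0.4, 0.7]
r = rmse(dz)
t = t_stat_bias(dz)   # compare with t-table for n-1 degrees of freedom
```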
Li, Yan; Chamberlain, Winston; Tan, Ou; Brass, Robert; Weiss, Jack L.; Huang, David
2016-01-01
PURPOSE To screen for subclinical keratoconus by analyzing corneal, epithelial, and stromal thickness map patterns with Fourier-domain optical coherence tomography (OCT). SETTING Four centers in the United States. DESIGN Cross-sectional observational study. METHODS Eyes of normal subjects, subclinical keratoconus eyes, and the topographically normal eye of a unilateral keratoconus patient were studied. Corneas were scanned using a 26 000 Hz Fourier-domain OCT system (RTVue). Normal subjects were divided into training and evaluation groups. Corneal, epithelial, and stromal thickness maps and derived diagnostic indices, including pattern standard deviation (PSD) variables and pachymetric map–based keratoconus risk scores were calculated from the OCT data. Area under the receiver operating characteristic curve (AUC) analysis was used to evaluate the diagnostic accuracy of the indices. RESULTS The study comprised 150 eyes of 83 normal subjects, 50 subclinical keratoconus eyes of 32 patients, and 1 topographically normal eye of a unilateral keratoconus patient. Subclinical keratoconus was characterized by inferotemporal thinning of the cornea, epithelium, and stroma. The PSD values for corneal (P < .001), epithelial (P < .001), and stromal (P = .049) thickness maps were all significantly higher in subclinical keratoconic eyes than in the normal group. The diagnostic accuracy was significantly higher for PSD variables (pachymetric PSD, AUC = 0.941; epithelial PSD, AUC = 0.985; stromal PSD, AUC = 0.924) than for the pachymetric map–based keratoconus risk score (AUC = 0.735). CONCLUSIONS High-resolution Fourier-domain OCT could map corneal, epithelial, and stromal thicknesses. Corneal and sublayer thickness changes in subclinical keratoconus could be detected with high accuracy using PSD variables. These new diagnostic variables might be useful in the detection of early keratoconus. PMID:27026454
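The pattern standard deviation (PSD) variables are, in essence, the spread of an individual thickness map's deviations from a normal average map. A rough sketch of that idea follows; the exact RTVue normalization is not specified in the abstract, so the definition and the numbers here are assumptions for illustration only:

```python
import math

def pattern_std_dev(thickness_map, normal_mean_map):
    """Assumed PSD: standard deviation of the point-wise differences
    between an eye's thickness map and the normal average map."""
    diffs = [t - n for t, n in zip(thickness_map, normal_mean_map)]
    mean_d = sum(diffs) / len(diffs)
    return math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / len(diffs))

# Hypothetical corneal thicknesses (um) at 4 map sectors vs. normal means
eye = [500, 490, 480, 470]
normal = [495, 495, 495, 495]
psd = pattern_std_dev(eye, normal)
```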
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoegele, W.; Loeschel, R.; Dobler, B.
2011-02-15
Purpose: In this work, a novel stochastic framework for patient positioning based on linac-mounted CB projections is introduced. Based on this formulation, the most probable shifts and rotations of the patient are estimated, incorporating interfractional deformations of patient anatomy and other uncertainties associated with patient setup. Methods: The target position is assumed to be defined by and is stochastically determined from positions of various features such as anatomical landmarks or markers in CB projections, i.e., radiographs acquired with a CB-CT system. The patient positioning problem of finding the target location from CB projections is posed as an inverse problem with prior knowledge and is solved using a Bayesian maximum a posteriori (MAP) approach. The prior knowledge is three-fold and includes the accuracy of an initial patient setup (such as in-room laser and skin marks), the plasticity of the body (relative shifts between target and features), and the feature detection error in CB projections (which may vary depending on specific detection algorithm and feature type). For this purpose, MAP estimators are derived and a procedure of using them in clinical practice is outlined. Furthermore, a rule of thumb is theoretically derived, relating basic parameters of the prior knowledge (initial setup accuracy, plasticity of the body, and number of features) and the parameters of CB data acquisition (number of projections and accuracy of feature detection) to the expected estimation accuracy. Results: MAP estimation can be applied to arbitrary features and detection algorithms. However, to experimentally demonstrate its applicability and to perform the validation of the algorithm, a water-equivalent, deformable phantom with features represented by six 1 mm chrome balls was utilized. These features were detected in the cone beam projections (XVI, Elekta Synergy) by a local threshold method for demonstration purposes only.
The accuracy of estimation (strongly varying for different plasticity parameters of the body) agreed with the rule of thumb formula. Moreover, based on this rule of thumb formula, about 20 projections for 6 detectable features seem to be sufficient for a target estimation accuracy of 0.2 cm, even for relatively large feature detection errors with standard deviation of 0.5 cm and spatial displacements of the features with standard deviation of 0.5 cm. Conclusions: The authors have introduced a general MAP-based patient setup algorithm accounting for different sources of uncertainties, which are utilized as the prior knowledge in a transparent way. This new framework can be further utilized for different clinical sites, as well as theoretical developments in the field of patient positioning for radiotherapy.
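The Bayesian MAP idea underlying this framework, combining a Gaussian prior on the setup shift with noisy feature detections, reduces in one dimension to a precision-weighted mean; a minimal sketch with hypothetical variances (the full method estimates 3-D shifts and rotations with a plasticity model, which this toy version omits):

```python
def gaussian_map(prior_mu, prior_var, observations, obs_var):
    """1-D MAP estimate of a shift with a Gaussian prior (initial
    setup accuracy) and i.i.d. Gaussian detection noise: the
    precision-weighted mean of prior and observations."""
    precision = 1 / prior_var + len(observations) / obs_var
    weighted = prior_mu / prior_var + sum(observations) / obs_var
    return weighted / precision

# Prior: setup centred at 0 cm with 0.25 cm^2 variance;
# three hypothetical feature-derived shift observations (cm)
shift = gaussian_map(0.0, 0.25, [0.4, 0.5, 0.3], 0.25)
```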
Powell, E S; Pyburn, R E; Hill, E; Smith, K S; Ribbands, M S; Mickelborough, J; Pomeroy, V M
2002-09-01
Evaluation of the effectiveness of therapy to improve sitting balance has been hampered by the limited number of sensitive objective clinical measures. We developed the Manchester Active Position Seat (MAPS) to provide a portable system to track change in the position of centre of force over time. (1) To investigate whether there is correspondence between the measurement of position change by a forceplate and by MAPS. (2) To explore whether and how MAPS measures changes in position when seated healthy adults change posture. A feasibility study. (1) An adult subject sat on MAPS placed on top of a forceplate. The x and y coordinates of the centre of pressure recorded from the forceplate and centre of force from MAPS during movement were compared graphically. (2) Four adults sat on MAPS using a standardized starting position and moving into six sets of six standardized target postures in a predetermined randomized order. The absolute shift in centre of force from the starting position was calculated. (1) The pattern of change of position over time was similar for the forceplate and for MAPS although there was a measurement difference, which increased with distance from the centre. (2) The direction of change of position corresponded to the direction of movement to the target postures but the amount of change varied between subjects. MAPS shows promise as an objective clinical measure of sitting balance, but peripheral accuracy of measurement needs to be improved.
Development of large Area Covering Height Model
NASA Astrophysics Data System (ADS)
Jacobsen, K.
2014-04-01
Height information is a basic part of topographic mapping. Only in special areas is frequent update of height models required; usually the update cycle is much longer than for horizontal map information. Some height models are available free of charge on the internet; for commercial height models a fee has to be paid. Mostly, digital surface models (DSM) with the height of the visible surface are given, and not the bare-ground height required for standard mapping. Nevertheless, by filtering a DSM, a digital terrain model (DTM) with the height of the bare ground can be generated, with the exception of dense forest areas where no height of the bare ground is available. These height models may be better than the DTM of some survey administrations. In addition, several DTM from national survey administrations are classified, so the commercial or freely available information from the internet can be used as an alternative. The widely used SRTM DSM is available also as the ACE-2 GDEM, corrected by altimeter data for systematic height errors caused by vegetation and orientation errors. But the ACE-2 GDEM did not respect neighbourhood information. With the worldwide covering TanDEM-X height model, distributed starting 2014 by Airbus Defence and Space (former ASTRIUM) as WorldDEM, a higher level of detail and accuracy is reached than with other large area covering height models. At first the raw version of WorldDEM will be available, followed by an edited version and finally, as WorldDEM-DTM, a height model of the bare ground. With 12 m spacing and a relative standard deviation of 1.2 m within an area of 1° x 1°, an accuracy and resolution level is reached that is satisfactory also for larger map scales. For limited areas, a height model with 6 m spacing and a relative vertical accuracy of 0.5 m can also be generated on demand with the HDEM. By bathymetric LiDAR and stereo images the height of the sea floor can also be determined if the water is sufficiently transparent.
Another method of getting bathymetric height information is an analysis of the wave structure in optical and SAR images. An overview of the absolute and relative accuracy, the consistency, the error distribution, and other characteristics, such as the influence of terrain inclination and aspect, is given. In some cases the height models can, or must, be improved by post-processing.
Concept Mapping Improves Metacomprehension Accuracy among 7th Graders
ERIC Educational Resources Information Center
Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.
2012-01-01
Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…
X-ray absorption radiography for high pressure shock wave studies
NASA Astrophysics Data System (ADS)
Antonelli, L.; Atzeni, S.; Batani, D.; Baton, S. D.; Brambrink, E.; Forestier-Colleoni, P.; Koenig, M.; Le Bel, E.; Maheut, Y.; Nguyen-Bui, T.; Richetta, M.; Rousseaux, C.; Ribeyre, X.; Schiavi, A.; Trela, J.
2018-01-01
The study of laser compressed matter, both warm dense matter (WDM) and hot dense matter (HDM), is relevant to several research areas, including materials science, astrophysics, and inertial confinement fusion. X-ray absorption radiography is a unique tool to diagnose compressed WDM and HDM. The application of radiography to shock-wave studies is presented and discussed. In addition to the standard Abel inversion to recover a density map from a transmission map, a procedure has been developed to generate synthetic radiographs using density maps produced by the hydrodynamics code DUED. This procedure takes into account both source-target geometry and source size (which plays a non-negligible role in the interpretation of the data), and allows transmission data to be reproduced with a good degree of accuracy.
NASA Astrophysics Data System (ADS)
Mafanya, Madodomzi; Tsele, Philemon; Botai, Joel; Manyama, Phetole; Swart, Barend; Monate, Thabang
2017-07-01
Invasive alien plants (IAPs) not only pose a serious threat to biodiversity and water resources but also have impacts on human and animal wellbeing. To support decision making in IAPs monitoring, semi-automated image classifiers which are capable of extracting valuable information in remotely sensed data are vital. This study evaluated the mapping accuracies of supervised and unsupervised image classifiers for mapping Harrisia pomanensis (a cactus plant commonly known as the Midnight Lady) using two interlinked evaluation strategies, i.e., point-based and area-based accuracy assessment. Results of the point-based accuracy assessment show that, with reference to 219 ground control points, the supervised image classifiers (i.e., Maxver and Bhattacharya) mapped H. pomanensis better than the unsupervised image classifiers (i.e., K-mediuns, Euclidean Length and Isoseg). In this regard, user and producer accuracies were 82.4% and 84% respectively for the Maxver classifier. The user and producer accuracies for the Bhattacharya classifier were 90% and 95.7%, respectively. Though the Maxver produced a higher overall accuracy and Kappa estimate than the Bhattacharya classifier, the Maxver Kappa estimate of 0.8305 is not significantly (statistically) greater than the Bhattacharya Kappa estimate of 0.8088 at a 95% confidence interval. The area based accuracy assessment results show that the Bhattacharya classifier estimated the spatial extent of H. pomanensis with an average mapping accuracy of 86.1%, whereas the Maxver classifier only gave an average mapping accuracy of 65.2%. Based on these results, the Bhattacharya classifier is therefore recommended for mapping H. pomanensis. These findings will aid in choosing a classification algorithm for the development of a semi-automated image classification system for mapping IAPs.
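The significance comparison between the two Kappa estimates is typically a pairwise Z test on independent Kappa statistics. The Kappa values below are those from the abstract, while the variances are hypothetical, since the abstract does not report them:

```python
import math

def kappa_z(k1, var1, k2, var2):
    """Z statistic for testing whether two independent Kappa
    estimates differ significantly."""
    return abs(k1 - k2) / math.sqrt(var1 + var2)

# Kappa values from the abstract; variances are hypothetical
z = kappa_z(0.8305, 0.0012, 0.8088, 0.0014)
significant = z > 1.96   # two-sided test at the 95% confidence level
```

With variances of this magnitude the test fails to reject, consistent with the abstract's conclusion that the two Kappa estimates do not differ significantly.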
Thompson, Ryan F.
2014-01-01
Shoreline erosion rates along Lake Sharpe, a Missouri River reservoir, near the community of Lower Brule, South Dakota, were studied previously during 2011–12 by the U.S. Geological Survey, the Lower Brule Sioux Tribe, and Oglala Lakota College. The rapid shoreline retreat has caused many detrimental effects along the shoreline of Lake Sharpe, including losses of cultural sites, recreation access points, wildlife habitat, irrigated cropland, and landmass. The Lower Brule Sioux Tribe is considering options to reduce or stop erosion. One such option for consideration is the placement of discontinuous rock breakwater structures in shallow water to reduce wave action at shore. Information on the depth of water and stability characteristics of bottom material in nearshore areas of Lake Sharpe is needed by the Lower Brule Sioux Tribe to develop structural mitigation alternatives. To help address this need, a bathymetric survey of nearshore areas of Lake Sharpe near Lower Brule, South Dakota, was completed in 2013 by the U.S. Geological Survey in cooperation with the Lower Brule Sioux Tribe. HYPACK® hydrographic survey software was used to plan data collection transects for a 7-mile reach of Lake Sharpe shoreline near Lower Brule, South Dakota. Regular data collection transects and oblique transects were planned to allow for quality-assurance/quality-control comparisons. Two methods of data collection were used in the bathymetric survey: (1) measurement from a boat using bathymetric instrumentation where water was more than 2 feet deep, and (2) wading using Real-Time Kinematic Global Navigation Satellite System equipment on shore and where water was shallower than 2 feet deep. A dual frequency, 24- or 200-kilohertz narrow beam, depth transducer was used in conjunction with a Teledyne Odom CV100 dual frequency echosounder for boat-based data collection.
In water too shallow for boat navigation, the elevation and nature of the reservoir bottom were mapped using Real-Time Kinematic Global Navigation Satellite System equipment. Once the data collection effort was completed, data editing was performed in HYPACK® to remove erroneous data points and to apply water-surface elevations. Maps were developed separately for water depth and bottom elevation for the study area. Lines of equal water depth for 2, 3, 3.5, 4, and 5 feet from the water surface to the lake bottom were mapped in nearshore areas of Lake Sharpe. Overall, water depths stay shallow for quite a distance from shore. In the 288 transects that crossed a 2 foot depth line, this depth occurred an average of 88 feet from shore. Similarly, in the 317 transects that crossed a 3 foot depth line, this did not occur until an average of 343 feet from shore. Elevation contours of the lake bottom were mapped primarily for elevations ranging from 1,419 to 1,416 feet above North American Vertical Datum of 1988. Horizontal errors of the Real-Time Kinematic Global Navigation Satellite System equipment for the study area are essentially inconsequential because water depth and bottom elevation were determined to change relatively slowly. The estimated vertical error associated with the Real-Time Kinematic Global Navigation Satellite System equipment for the study area ranges from 0.6 to 0.9 inch. This vertical error is small relative to the accuracy of the bathymetric data. Accuracy assessments of the data collected for this study were computed according to the National Standard for Spatial Data Accuracy.
The maps showing the lines of equal water depth and elevation contours of the lake bottom are able to support a 1-foot contour interval at National Standards for Spatial Data Accuracy vertical accuracy standards, which require a vertical root mean squared error of 0.30 foot or better and a fundamental vertical accuracy calculated at the 95-percent confidence level of 0.60 foot or better.
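The NSSDA vertical test cited above compares 1.9600 × RMSE_z, the accuracy statistic at the 95-percent confidence level, with the threshold for the target contour interval; a sketch with hypothetical check-point errors:

```python
import math

def nssda_vertical_accuracy(errors):
    """NSSDA vertical accuracy: Accuracy_z = 1.9600 * RMSE_z,
    reported at the 95-percent confidence level."""
    rmse_z = math.sqrt(sum(e * e for e in errors) / len(errors))
    return 1.9600 * rmse_z

# Hypothetical check-point elevation errors (ft)
errs = [0.2, -0.1, 0.15, -0.25, 0.1, 0.05]
acc95 = nssda_vertical_accuracy(errs)
meets_1ft_contour = acc95 <= 0.60   # 1-ft interval needs <= 0.60 ft at 95%
```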
Analysis of spatial distribution of land cover maps accuracy
NASA Astrophysics Data System (ADS)
Khatami, R.; Mountrakis, G.; Stehman, S. V.
2017-12-01
Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. Incorporation of spectral domain as explanatory feature spaces of classification accuracy interpolation was done for the first time in this research. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. 
Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
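The core idea above, interpolating binary correct/incorrect outcomes from a test sample across the map to predict per-pixel accuracy, can be sketched as follows. This is a minimal illustration of Gaussian-kernel interpolation in the spatial domain; the kernel form, bandwidth, and coordinates are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def predict_accuracy(sample_xy, sample_correct, grid_xy, bandwidth=2000.0):
    """Predict per-pixel probability of correct classification by
    Gaussian-kernel interpolation of a binary test sample
    (1 = correctly classified, 0 = misclassified).
    sample_xy: (n, 2) sample coordinates in metres
    sample_correct: (n,) binary outcomes
    grid_xy: (m, 2) prediction locations
    """
    sample_xy = np.asarray(sample_xy, float)
    grid_xy = np.asarray(grid_xy, float)
    # squared distances between every grid point and every sample point
    d2 = ((grid_xy[:, None, :] - sample_xy[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    # kernel-weighted mean of the binary outcomes
    return (w * np.asarray(sample_correct, float)).sum(axis=1) / w.sum(axis=1)
```

A per-class variant (which the evaluation above favours) would simply apply this function separately to the test samples of each map class.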
Papageorgiou, Elpiniki I; Jayashree Subramanian; Karmegam, Akila; Papandrianos, Nikolaos
2015-11-01
Breast cancer is the most deadly disease affecting women, and thus it is natural for women aged 40-49 years who have a family history of breast cancer or other related cancers to assess their personal risk of developing familial breast cancer (FBC). Because each woman's level of risk depends on her family history, genetic predisposition and personal medical history, an individualized care-setting mechanism needs to be identified so that appropriate risk assessment, counseling, screening, and prevention options can be determined by health care professionals. The presented work aims at developing a soft-computing-based medical decision support system using a Fuzzy Cognitive Map (FCM) that assists health care professionals in deciding the individualized care-setting mechanism based on the FBC risk level of a given woman. The FCM-based FBC risk management system uses nonlinear Hebbian learning (NHL) to learn causal weights from 40 patient records and achieves a 95% diagnostic accuracy. The results obtained from the proposed model concur with the comprehensive risk evaluation tool based on the Tyrer-Cuzick model for 38/40 patient cases (95%). In addition, the proposed model identifies high-risk women with higher prediction accuracy than the standard Gail and NSABP models. The testing accuracy of the proposed model using 10-fold cross-validation outperforms other standard machine-learning-based inference engines as well as previous FCM-based risk prediction methods for BC.
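As a sketch of the FCM machinery underlying such a system: concept activations are iterated through a learned causal weight matrix and a squashing function until they stabilize. The update rule below is one common convention from the FCM literature, and the weights in the test are illustrative, not those learned from the patient records.

```python
import numpy as np

def fcm_infer(a0, w, n_iter=50):
    """Iterate FCM concept activations to a steady state.
    a0: (k,) initial concept activations in [0, 1]
    w:  (k, k) causal weight matrix, w[i, j] = influence of concept i on j
    Update rule (one common convention): A_{t+1} = sigmoid(A_t + A_t @ W).
    """
    a = np.asarray(a0, float)
    w = np.asarray(w, float)
    for _ in range(n_iter):
        a = 1.0 / (1.0 + np.exp(-(a + a @ w)))
    return a
```

In a decision-support setting, the stabilized activation of an output concept (e.g., an FBC risk level) is read off after convergence and thresholded into risk categories.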
Transfer of Technology for Cadastral Mapping in Tajikistan Using High Resolution Satellite Data
NASA Astrophysics Data System (ADS)
Kaczynski, R.
2012-07-01
The European Commission-funded project "Support to the mapping and certification capacity of the Agency of Land Management, Geodesy and Cartography" in Tajikistan was run by FINNMAP FM-International and Human Dynamics from November 2006 to June 2011. The Agency of Land Management, Geodesy and Cartography is the state agency responsible for development, implementation, monitoring and evaluation of state policies on land tenure and land management, including the on-going land reform and registration of land use rights. The specific objective was to support and strengthen the professional capacity of the "Fazo" Institute in the fields of satellite geodesy, digital photogrammetry, advanced digital processing of high-resolution satellite data and digital cartography. Lectures and on-the-job training for the personnel of "Fazo" and the Agency in satellite geodesy, digital photogrammetry, cartography and the use of high-resolution satellite data for cadastral mapping were organized. Standards and a quality control system for all data and products were elaborated and implemented in the production line. Technical expertise and training in geodesy, photogrammetry and satellite image processing were also provided to the World Bank project "Land Registration and Cadastre System for Sustainable Agriculture" in Tajikistan. A new map projection was chosen and a new unclassified geodetic network was established for the whole country, in which all agricultural parcel boundaries are being mapped. IKONOS, QuickBird and WorldView-1 panchromatic data have been used for orthophoto generation. Average space-triangulation accuracy for non-standard (up to 90 km long) QuickBird Pan and IKONOS Pan images of RMSEx = 0.5 m and RMSEy = 0.5 m on independent check points (ICPs) has been achieved. The accuracy of the digital orthophoto maps is RMSExy = 1.0 m.
More than 2,500 digital orthophoto map sheets at a scale of 1:5000 with a pixel size of 0.5 m have been produced so far by the "Fazo" Institute in Tajikistan on the basis of the technology elaborated in the framework of this project. Digital cadastral maps are produced in "Fazo" and the Cadastral Regional Centers in Tajikistan using ArcMap software. These digital orthophotomaps will also be used for digital mapping of water resources and other needs of the country.
Tran, Annelise; Trevennec, Carlène; Lutwama, Julius; Sserugga, Joseph; Gély, Marie; Pittiglio, Claudia; Pinto, Julio; Chevalier, Véronique
2016-01-01
Rift Valley fever (RVF), a mosquito-borne disease affecting ruminants and humans, is one of the most important viral zoonoses in Africa. The objective of the present study was to develop a geographic knowledge-based method to map the areas suitable for RVF amplification and RVF spread in four East African countries, namely Kenya, Tanzania, Uganda and Ethiopia, and to assess the predictive accuracy of the model using livestock outbreak data from Kenya and Tanzania. Risk factors and their relative importance regarding RVF amplification and spread were identified from a literature review. A numerical weight was calculated for each risk factor using an analytical hierarchy process. The corresponding geographic data were collected, standardized and combined based on a weighted linear combination to produce maps of the suitability for RVF transmission. The accuracy of the resulting maps was assessed using RVF outbreak locations in livestock reported in Kenya and Tanzania between 1998 and 2012 and ROC curve analysis. Our results confirmed the capacity of the geographic information system-based multi-criteria evaluation method to synthesize available scientific knowledge and to accurately map (AUC = 0.786; 95% CI [0.730–0.842]) the spatial heterogeneity of RVF suitability in East Africa. This approach provides users with a straightforward way to update the maps as new data become available or scientific knowledge develops further.
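The two quantitative steps described above, deriving factor weights by the analytical hierarchy process (AHP) and combining standardized layers by a weighted linear combination, can be sketched as follows. The pairwise comparison matrix in the test is purely illustrative; the study's actual factors and judgements are not reproduced here.

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive factor weights from an AHP pairwise-comparison matrix
    (Saaty scale) as the normalized principal eigenvector."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    # principal eigenvector = the one for the largest real eigenvalue
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def suitability(layers, weights):
    """Weighted linear combination of standardized risk-factor layers."""
    return sum(w * np.asarray(l, float) for w, l in zip(weights, layers))
```

For a consistent 2x2 matrix saying factor A is 3 times as important as factor B, the weights come out as 0.75 and 0.25.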
NASA Astrophysics Data System (ADS)
Murillo Feo, C. A.; Martínez Martínez, L. J.; Correa Muñoz, N. A.
2016-06-01
The accuracy of locating attributes on topographic surfaces when using GPS in mountainous areas is affected by obstacles to wave propagation. As part of this research on the semi-automatic detection of landslides, we evaluated the accuracy and spatial distribution of the horizontal error in GPS positioning in the tertiary road network of six municipalities located in mountainous areas of the department of Cauca, Colombia, using geo-referencing with GPS mapping equipment and static-fast and pseudo-kinematic methods. We obtained quality parameters for the GPS surveys with differential correction, using a post-processing method. The consolidated database underwent exploratory analyses to determine the statistical distribution, a multivariate analysis to establish relationships and associations between the variables, and an analysis of the spatial variability and calculation of accuracy, considering the effect of non-Gaussian error distributions. The evaluation of the internal validity of the data provided metrics at a 95% confidence level of between 1.24 and 2.45 m in the static-fast mode and between 0.86 and 4.2 m in the pseudo-kinematic mode. The external validity had an absolute error of 4.69 m, indicating that this descriptor is more critical than precision. Based on the ASPRS standard, the scale obtained with the evaluated equipment was on the order of 1:20,000, the level of detail expected in the landslide-mapping project. Modelling the spatial variability of the horizontal errors from the empirical semi-variogram analysis showed prediction errors close to the external validity of the devices.
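Horizontal accuracy statistics at the 95% confidence level, like those reported above, are conventionally computed from component errors at check points. This minimal sketch follows the NSSDA convention, which assumes normally distributed errors with RMSE_x approximately equal to RMSE_y; the abstract notes that the non-Gaussian case needs separate treatment, which this sketch does not cover.

```python
import numpy as np

def horizontal_accuracy_95(dx, dy):
    """NSSDA-style horizontal accuracy from check-point residuals.
    dx, dy: arrays of (surveyed - reference) coordinate differences.
    Returns (radial RMSE, 95%-confidence accuracy statistic).
    The 1.7308 factor (= 2.4477 / sqrt(2)) assumes normal errors
    with RMSE_x approximately equal to RMSE_y."""
    dx = np.asarray(dx, float)
    dy = np.asarray(dy, float)
    rmse_r = np.sqrt(np.mean(dx ** 2 + dy ** 2))
    return rmse_r, 1.7308 * rmse_r
```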
Holder, Jourdan T; Kessler, David M; Noble, Jack H; Gifford, René H; Labadie, Robert F
2018-06-01
To quantify and compare the number of cochlear implant (CI) electrodes found to be extracochlear on postoperative computerized tomography (CT) scans, the number of basal electrodes deactivated during standard CI mapping (without knowledge of the postoperative CT scan), and the extent of electrode insertion noted by the surgeon. Retrospective. Academic Medical Center. Two hundred sixty-two patients underwent standard cochlear implantation and postoperative temporal bone CT scanning. Scans were analyzed to determine the number of extracochlear electrodes. Standard CI programming had been completed without knowledge of the extracochlear electrodes identified on the CT. These standard CI maps were reviewed to record the number of deactivated basal electrodes. Lastly, each operative report was reviewed to record the extent of reported electrode insertion. 13.4% (n = 35) of CIs were found to have at least one electrode outside of the cochlea on the CT scan. Review of CI mapping indicated that audiologists had deactivated extracochlear electrodes in 60% (21) of these cases. Review of operative reports revealed that surgeons correctly indicated the number of extracochlear electrodes in 6% (2) of these cases. Extracochlear electrodes were correctly identified audiologically in 60% of cases and in surgical reports in 6% of cases; however, it is possible that at least a portion of these cases involved postoperative electrode migration. Given these findings, postoperative CT scans can provide information regarding basal electrode location, which could help improve programming accuracy, associated frequency allocation, and audibility with appropriate deactivation of extracochlear electrodes.
Vegetation inventory, mapping, and classification report, Fort Bowie National Historic Site
Studd, Sarah; Fallon, Elizabeth; Crumbacher, Laura; Drake, Sam; Villarreal, Miguel
2013-01-01
A vegetation mapping and characterization effort was conducted at Fort Bowie National Historic Site in 2008-10 by the Sonoran Desert Network office in collaboration with researchers from the Office of Arid Lands Studies, Remote Sensing Center, at the University of Arizona. This vegetation mapping effort was completed under the National Park Service Vegetation Inventory program, which aims to complete baseline mapping inventories at over 270 national park units. The vegetation map data were collected to provide park managers with a digital map product that met national standards of spatial and thematic accuracy, while also placing the vegetation into a regional and even national context. Work comprised three major phases: 1) concurrent field-based classification data collection and mapping (map unit delineation); 2) development of vegetation community types at the National Vegetation Classification alliance or association level; and 3) map accuracy assessment. Phase 1 was completed in late 2008 and early 2009. Community type descriptions were drafted to meet the then-current hierarchy (version 1) of the National Vegetation Classification System (NVCS), and these were applied to each of the mapped areas. This classification was developed from both plot-level data and censused polygon data (map units), as this project was conducted as a concurrent mapping and classification effort. The third phase, accuracy assessment, completed in the fall of 2010, consisted of a complete census of each map unit and was conducted almost entirely by park staff. Following accuracy assessment, the map was amended where needed and final products were developed, including this report, a digital map and full vegetation descriptions. Fort Bowie National Historic Site covers only 1,000 acres yet has a relatively complex landscape, topography and geology. A total of 16 distinct communities were described and mapped at Fort Bowie NHS.
These ranged from lush riparian woodlands lining the ephemeral washes dominated by Ash (Fraxinus), Walnut (Juglans) and Hackberry (Celtis) to drier upland sites typical of desert scrub and semi-desert grassland communities. These shrublands boast a diverse mixture of shrubs, succulents and perennial grasses. In many places the vegetation could be seen to echo the history of the fort site, with management of shrub encroachment apparent in the grasslands and the paucity of trees evidence of historic cutting for timber and fire wood. Seven of the 16 vegetation types were ‘accepted’ types within the NVC while the others have been described here as specific to FOBO and have proposed status within the NVC. The map was designed to facilitate ecologically-based natural resources management and research. The map is in digital format within a geodatabase structure that allows for complex relationships to be established between spatial and tabular data, and makes accessing the product easy and seamless. The GIS format allows user flexibility and will also enable updates to be made as new information becomes available (such as revised NVC codes or vegetation type names) or in the event of major disturbance events that could impact the vegetation.
[Who Hits the Mark? A Comparative Study of the Free Geocoding Services of Google and OpenStreetMap].
Lemke, D; Mattauch, V; Heidinger, O; Hense, H W
2015-09-01
Geocoding, the process of converting textual information (addresses) into geographic coordinates, is increasingly used in public health/epidemiological research and practice. To date, little attention has been paid to geocoding quality and its impact on different types of spatially-related health studies. The primary aim of this study was to compare two freely available geocoding services (Google and OpenStreetMap) with regard to matching rate (percentage of address records capable of being geocoded) and positional accuracy (distance between geocodes and the ground truth locations). Residential addresses were geocoded by the NRW state office for information and technology and were considered as reference data (gold standard). The gold standard included the coordinates, the quality of the addresses (4 categories), and a binary urbanity indicator based on the CORINE land cover data. From approximately 20,000 addresses, 2,500 were randomly sampled after stratification by address quality and urbanity indicator. These address samples were geocoded using the geocoding services of Google and OSM. In general, both geocoding services showed a decrease in matching rate with decreasing address quality and urbanity. Google consistently showed higher completeness than OSM (>93 vs. >82%). The cartographic confounding between urban and rural regions was also less distinct with Google's geocoding API. Regarding the positional accuracy of the geo-coordinates, Google likewise showed the smallest deviations from the reference coordinates, with a median of <9 m vs. <175.8 m. The cumulative density function derived from the positional accuracy showed that nearly 95% of addresses for Google, and 50% for OSM, were geocoded within 50 m of their reference coordinates. The geocoding API from Google is superior to OSM regarding completeness and positional accuracy of the geocoded addresses.
On the other hand, Google imposes several restrictions, such as limiting requests to 2,500 addresses per 24 h and requiring that results be presented on Google Maps, which may complicate use for scientific purposes.
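The positional-accuracy comparison rests on computing the distance between each geocode and its reference coordinate. For latitude/longitude pairs, a haversine great-circle distance is the usual choice; this is a sketch, since the study's actual projection and distance method are not specified in the abstract.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between a geocoded point and
    its reference coordinate, using a mean Earth radius of 6,371 km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2.0 * 6371000.0 * math.asin(math.sqrt(a))
```

Summaries such as the median deviation or the share of addresses within 50 m then follow directly from the per-address distances.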
NASA Technical Reports Server (NTRS)
Alexander, R. H. (Principal Investigator); Fitzpatrick, K. A.
1975-01-01
The author has identified the following significant results. Level 2 land use maps produced at three scales (1:24,000, 1:100,000, and 1:250,000) from high altitude photography were compared with each other and with point data obtained in the field. The same procedures were employed to determine the accuracy of the Level 1 land use maps produced at 1:250,000 from high altitude photography and color composite ERTS imagery. Accuracy of the Level 2 maps was 84.9 percent at 1:24,000, 77.4 percent at 1:100,000 and 73.0 percent at 1:250,000. Accuracy of the Level 1 1:250,000 maps was 76.5 percent for aerial photographs and 69.5 percent for ERTS imagery. The cost of Level 2 land use mapping at 1:24,000 was found to be high ($11.93 per sq km). Mapping at 1:100,000 ($1.75) was about twice as expensive as mapping at 1:250,000 ($0.88), and the accuracy increased by only 4.4 percentage points.
Evaluation of MRI sequences for quantitative T1 brain mapping
NASA Astrophysics Data System (ADS)
Tsialios, P.; Thrippleton, M.; Glatz, A.; Pernet, C.
2017-11-01
T1 mapping is a quantitative MRI technique with significant applications in brain imaging. It allows evaluation of contrast uptake, blood perfusion and volume, providing a more specific biomarker of disease progression than conventional T1-weighted images. While there are many techniques for T1 mapping, there is a wide range of reported T1 values in tissues, raising the issue of protocol reproducibility and standardization. The gold standard for obtaining T1 maps is the IR-SE sequence. Widely used alternative sequences are IR-SE-EPI, VFA (DESPOT), DESPOT-HIFI and MP2RAGE, which speed up scanning and fitting procedures. A custom MRI phantom was used to assess the reproducibility and accuracy of the different methods. All scans were performed using a 3T Siemens Prisma scanner, and the acquired data were processed using two different codes. The main difference was observed for VFA (DESPOT), which grossly overestimated T1 relaxation time by 214 ms [126, 270] compared to the IR-SE sequence. MP2RAGE and DESPOT-HIFI gave slightly shorter times than IR-SE (~20 to 30 ms) and can be considered as alternative, time-efficient methods for acquiring accurate T1 maps of the human brain, while IR-SE-EPI gave identical results at the cost of lower image quality.
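As a sketch of how T1 values are recovered from an IR-SE acquisition: signal samples at several inversion times (TI) are fitted to the ideal inversion-recovery model S(TI) = A·(1 − 2·e^(−TI/T1)). Real pipelines typically fit a three-parameter magnitude model; the simplified two-parameter grid-search version below is illustrative only, with illustrative TI values in the test.

```python
import numpy as np

def fit_t1_ir(ti, signal):
    """Fit the ideal IR-SE recovery model S(TI) = A * (1 - 2*exp(-TI/T1))
    by a grid search over T1 (in ms); for each candidate T1 the amplitude
    A is solved in closed form by linear least squares.
    A minimal sketch, not a clinical fitting routine."""
    ti = np.asarray(ti, float)
    signal = np.asarray(signal, float)
    best_t1, best_sse = None, np.inf
    for t1 in np.linspace(100.0, 4000.0, 3901):  # 1 ms grid
        x = 1.0 - 2.0 * np.exp(-ti / t1)
        a = (x @ signal) / (x @ x)        # closed-form amplitude
        sse = ((signal - a * x) ** 2).sum()
        if sse < best_sse:
            best_t1, best_sse = t1, sse
    return best_t1
```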
Simulation of seagrass bed mapping by satellite images based on the radiative transfer model
NASA Astrophysics Data System (ADS)
Sagawa, Tatsuyuki; Komatsu, Teruhisa
2015-06-01
Seagrass and seaweed beds play important roles in coastal marine ecosystems. They are food sources and habitats for many marine organisms, and influence the physical, chemical, and biological environment. They are sensitive to human impacts such as reclamation and pollution. Therefore, their management and preservation are necessary for a healthy coastal environment. Satellite remote sensing is a useful tool for mapping and monitoring seagrass beds. The efficiency of seagrass mapping, seagrass bed classification in particular, has been evaluated by mapping accuracy using an error matrix. However, mapping accuracies are influenced by coastal environments such as seawater transparency, bathymetry, and substrate type. Coastal management requires sufficient accuracy and an understanding of mapping limitations for monitoring coastal habitats including seagrass beds. Previous studies are mainly based on case studies in specific regions and seasons. Extensive data are required to generalise assessments of classification accuracy from case studies, which has proven difficult. This study aims to build a simulator based on a radiative transfer model to produce modelled satellite images and assess the visual detectability of seagrass beds under different transparencies and seagrass coverages, as well as to examine mapping limitations and classification accuracy. Our simulations led to the development of a model of water transparency and the mapping of depth limits and indicated the possibility for seagrass density mapping under certain ideal conditions. The results show that modelling satellite images is useful in evaluating the accuracy of classification and that establishing seagrass bed monitoring by remote sensing is a reliable tool.
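One widely used simplification of the shallow-water radiative transfer underlying such simulators (a Maritorena-type two-flow model, not necessarily the exact model used in this study) expresses the observed reflectance as a depth-attenuated mix of bottom and deep-water reflectance; the coefficient values in the test are illustrative.

```python
import numpy as np

def shallow_water_reflectance(r_bottom, r_deep, k, z):
    """Simplified two-flow radiative transfer model:
    observed reflectance = deep-water reflectance plus the bottom
    contribution attenuated over a two-way path through depth z.
    r_bottom: bottom (e.g., seagrass) reflectance
    r_deep:   optically deep water reflectance
    k:        diffuse attenuation coefficient (1/m), sets transparency
    z:        water depth (m)
    """
    return r_deep + (r_bottom - r_deep) * np.exp(-2.0 * k * z)
```

Sweeping k (transparency) and z (depth) with substrate reflectances for different seagrass coverages yields modelled pixel values whose separability can then be assessed, which is the spirit of the simulation described above.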
Uncertainty of OpenStreetMap data for the road network in Cyprus
NASA Astrophysics Data System (ADS)
Demetriou, Demetris
2016-08-01
Volunteered geographic information (VGI) refers to the geographic data compiled and created by individuals which are rendered on the Internet through specific web-based tools for diverse areas of interest. One of the most well-known VGI projects is OpenStreetMap (OSM), which provides worldwide free geospatial data representing a variety of features. A critical issue for all VGI initiatives is the quality of the information offered. Thus, this report looks into the uncertainty of the OSM dataset for the main road network in Cyprus. The evaluation is based on three basic quality standards, namely positional accuracy, completeness and attribute accuracy. The work has been carried out by employing the Model Builder of ArcGIS, which facilitated the comparison between the OSM data and the authoritative data provided by the Public Works Department (PWD). Findings showed that the positional accuracy increases with the hierarchical level of a road, that it varies per administrative district, and that around 70% of the roads have a positional accuracy within 6 m compared to the reference dataset. Completeness in terms of road length difference is around 25% for three out of four road categories examined, and road name completeness is 100% and around 40% for higher- and lower-level roads, respectively. Attribute accuracy, assessed on road names, is very high for all levels of roads. These outputs indicate that OSM data are good enough if they fit the intended use. Furthermore, the study revealed some weaknesses of the methods used for calculating the positional accuracy, suggesting the need for methodological improvements.
Research on oral test modeling based on multi-feature fusion
NASA Astrophysics Data System (ADS)
Shi, Yuliang; Tao, Yiyue; Lei, Jun
2018-04-01
In this paper, the spectrogram of the speech signal is taken as the input for feature extraction. The strength of the pulse-coupled neural network (PCNN) in image segmentation and related processing is exploited to process the speech spectrogram and extract features, exploring a new method that combines speech signal processing with image processing. In addition to the spectrogram features, MFCC-based spectral features are extracted and fused with them to further improve the accuracy of spoken-language recognition. Because the resulting input features are relatively complex and distinguishable, a Support Vector Machine (SVM) is used to construct the classifier, and the extracted features of test utterances are compared with those of standard utterances to assess how standard the spoken language is. Experiments show that extracting features from spectrograms using a PCNN is feasible, and that fusing image features with spectral features improves detection accuracy.
Mapping broom snakeweed through image analysis of color-infrared photography and digital imagery.
Everitt, J H; Yang, C
2007-11-01
A study was conducted on a south Texas rangeland area to evaluate aerial color-infrared (CIR) photography and CIR digital imagery combined with unsupervised image analysis techniques to map broom snakeweed [Gutierrezia sarothrae (Pursh.) Britt. and Rusby]. Accuracy assessments performed on computer-classified maps of photographic images from two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 88.3%, respectively; whereas, accuracy assessments performed on classified maps from digital images of the same two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 92.8%, respectively. These results indicate that CIR photography and CIR digital imagery combined with image analysis techniques can be used successfully to map broom snakeweed infestations on south Texas rangelands.
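Producer's and user's accuracies like those reported above come straight from the error (confusion) matrix of the classified map against reference data. A minimal sketch follows; the row/column orientation is a convention (here rows are map classes, columns are reference classes), and the matrix in the test is illustrative, not the study's data.

```python
import numpy as np

def producers_users_accuracy(cm):
    """Per-class accuracies from an error matrix.
    cm: square matrix, rows = classified (map) labels,
        columns = reference (ground truth) labels.
    Producer's accuracy (column-wise) reflects omission errors;
    user's accuracy (row-wise) reflects commission errors."""
    cm = np.asarray(cm, dtype=float)
    producers = np.diag(cm) / cm.sum(axis=0)
    users = np.diag(cm) / cm.sum(axis=1)
    return producers, users
```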
Fusion of pan-tropical biomass maps using weighted averaging and regional calibration data
NASA Astrophysics Data System (ADS)
Ge, Yong; Avitabile, Valerio; Heuvelink, Gerard B. M.; Wang, Jianghao; Herold, Martin
2014-09-01
Biomass is a key environmental variable that influences many biosphere-atmosphere interactions. Recently, a number of biomass maps at national, regional and global scales have been produced using different approaches with a variety of input data, such as from field observations, remotely sensed imagery and other spatial datasets. However, the accuracy of these maps varies regionally and is largely unknown. This research proposes a fusion method to increase the accuracy of regional biomass estimates by using higher-quality calibration data. In this fusion method, the biases in the source maps were first adjusted to correct for over- and underestimation by comparison with the calibration data. Next, the biomass maps were combined linearly using weights derived from the variance-covariance matrix associated with the accuracies of the source maps. Because each map may have different biases and accuracies for different land use types, the biases and fusion weights were computed for each of the main land cover types separately. The conceptual arguments are substantiated by a case study conducted in East Africa. Evaluation analysis shows that fusing multiple source biomass maps may produce a more accurate map than when only one biomass map or unweighted averaging is used.
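The fusion step described above can be sketched as inverse-variance weighting of bias-adjusted source maps. The paper derives its weights from the full variance-covariance matrix of map errors (per land cover type); the simplified version below assumes zero covariance between source maps, so the weights reduce to inverse variances.

```python
import numpy as np

def fuse_maps(maps, variances):
    """Linear fusion of bias-adjusted biomass maps.
    maps: (k, n_pixels) array of k co-registered source maps
    variances: (k,) error variances of the source maps
    Weights are proportional to 1/variance and sum to one;
    cross-map covariances are assumed zero in this sketch."""
    maps = np.asarray(maps, float)
    w = 1.0 / np.asarray(variances, float)
    w = w / w.sum()
    return w @ maps
```

With equal variances this reduces to unweighted averaging; a more accurate source map pulls the fused estimate toward itself, which is the mechanism the evaluation above credits for the improvement.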
Airborne laser mapping of Assateague National Seashore Beach
Krabill, W.B.; Wright, C.W.; Swift, R.N.; Frederick, E.B.; Manizade, S.S.; Yungel, J.K.; Martin, C.F.; Sonntag, J.G.; Duffy, Mark; Hulslander, William; Brock, John C.
2000-01-01
Results are presented from topographic surveys of the Assateague Island National Seashore using an airborne scanning laser altimeter and kinematic Global Positioning System (GPS) technology. The instrument used was the Airborne Topographic Mapper (ATM), developed by the NASA Arctic Ice Mapping (AIM) group from the Goddard Space Flight Center's Wallops Flight Facility. In November 1995, and again in May 1996, these topographic surveys were flown as a functionality check prior to conducting missions to measure the elevation of extensive sections of the Greenland Ice Sheet as part of NASA's Global Climate Change program. Differences between overlapping portions of both surveys are compared for quality control. An independent assessment of the accuracy of the ATM survey is provided by comparison to surface surveys conducted using standard techniques. The goal of these projects is to make these measurements to an accuracy of ±10 cm. Differences between the fall 1995 and spring 1996 surveys provide an assessment of net changes in the beach morphology over an annual cycle.
Combining accuracy assessment of land-cover maps with environmental monitoring programs
Stephen V. Stehman; Raymond L. Czaplewski; Sarah M. Nusser; Limin Yang; Zhiliang Zhu
2000-01-01
A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring...
Land use mapping and modelling for the Phoenix Quadrangle
NASA Technical Reports Server (NTRS)
Place, J. L. (Principal Investigator)
1974-01-01
The author has identified the following significant results. The mapping of generalized land use (Level 1) from ERTS 1 images was shown to be feasible with better than 95% accuracy in the Phoenix quadrangle. The accuracy of Level 2 mapping in urban areas is still a problem. Updating existing maps also proved to be feasible, especially for water categories and agricultural uses; however, expanding urban growth presented accuracy problems. ERTS 1 film images indicated where areas of change were occurring, thus aiding focusing-in for more detailed investigation. ERTS color composite transparencies provided a cost-effective source of information for land use mapping of very large regions at small map scales.
Streby, Henry M.; Loegering, John P.; Andersen, David E.
2012-01-01
Studies of songbird breeding habitat often compare habitat characteristics of used and unused areas. Although there is usually meticulous effort to precisely and consistently measure habitat characteristics, the accuracy of methods for estimating which areas are used versus unused by birds remains generally untested. To examine the accuracy of spot-mapping in identifying singing territories of golden-winged warblers (Vermivora chrysoptera), which are considered early-successional forest specialists, we used spot-mapping and radiotelemetry to record song perches and delineate song territories for breeding male golden-winged warblers in northwestern Minnesota, USA. We also used radiotelemetry to record locations (song and nonsong perches) of a subsample (n = 12) of males throughout the day to delineate home ranges. We found that telemetry-based estimates of song territories were 3 times larger and included more mature forest than those estimated from spot-mapping. In addition, home ranges estimated using radiotelemetry included more mature forest than spot-mapping- and telemetry-based song territories, with 75% of afternoon perches located in mature forest. Our results suggest that mature forest comprises a larger component of golden-winged warbler song territories and home ranges than is indicated by spot-mapping in Minnesota. Because standard observational methods can apparently underestimate territory size and misidentify cover-type associations for golden-winged warblers, we caution that management and conservation plans may be misinformed, and that similar studies are needed for golden-winged warblers across their range and for other songbird species.
Geometric Accuracy Analysis of WorldDEM in Relation to AW3D30, SRTM and ASTER GDEM2
NASA Astrophysics Data System (ADS)
Bayburt, S.; Kurtak, A. B.; Büyüksalih, G.; Jacobsen, K.
2017-05-01
In a project area close to Istanbul, the quality of WorldDEM, AW3D30, the SRTM DSM and ASTER GDEM2 has been analyzed in relation to a reference aerial LiDAR DEM and to each other. The random and systematic height errors have been separated. The absolute offset of all height models in X, Y and Z is within expectation. The shifts have been accounted for in advance to obtain a satisfactory estimate of the random error component. All height models are influenced by tilts of different sizes. In addition, systematic deformations can be seen that do not influence the standard deviation too much. The delivery of WorldDEM includes a height error map, based on the interferometric phase errors, and the number and location of coverages from different orbits. A dependency of the height accuracy on the height error map information and the number of coverages can be seen, but it is smaller than expected. WorldDEM is more accurate than the other investigated height models, and with 10 m point spacing it includes more morphologic detail, visible in contour lines. The morphologic detail is close to that of the LiDAR digital surface model (DSM). As usual, a dependency of the accuracy on the terrain slope can be seen. In forest areas, the canopy definition of InSAR X- and C-band height models, as well as of height models based on optical satellite images, is not the same as the height definition by LiDAR. In addition, the interferometric phase uncertainty over forest areas is larger. Both effects lead to lower height accuracy in forest areas, also visible in the height error map.
NASA Astrophysics Data System (ADS)
Zhao, L.; Fu, X.; Dou, X.; Liu, H.; Fang, Z.
2018-04-01
The ZY-3 is a civil high-resolution optical stereoscopic mapping satellite independently developed by China. The ZY-3 constellation of twin satellites operates in a sun-synchronous, near-polar, circular 505 km orbit, with a descending node local time of 10:30 AM and a 29-day revisit period. The panchromatic triplet sensors, pointing forward, nadir, and backward with an angle of 22°, have an excellent base-to-height ratio, which is beneficial for DEM extraction. In order to extract more detailed and high-precision DEMs, the ZY-3 (02) satellite was upgraded from the ZY-3 (01), and the GSD of the stereo cameras was improved from 3.5 to 2.5 meters. In this paper, case studies using ZY-3 (01) and (02) satellite data for block adjustment and DEM extraction were carried out in Liaoning Province, China. The results show that the planimetric and altimetric accuracy can reach 3 meters, which meets the mapping requirements of the 1:50,000 national topographic map and the design performance of the satellites. The normalized elevation accuracy index (NEAI) is adopted to evaluate the stereoscopic performance of the twin satellites; the NEAIs of both ZY-3 satellites are good, and the index of the ZY-3 (02) is slightly better. The overlapping DEMs from the twin ZY-3 satellites and SRTM are also compared; the bias and the standard deviation of all the DEMs are better than 5 meters. In addition, in the process of accuracy comparison, some gross errors in the DEMs can be identified, and some elevation changes can also be found. The differential DEM thus becomes a new tool and application.
2013-01-01
Background: Cardiovascular magnetic resonance (CMR) T1 mapping indices, such as T1 time and the partition coefficient (λ), have shown potential to assess diffuse myocardial fibrosis. The purpose of this study was to investigate how scanner and field-strength variation affect the accuracy and precision/reproducibility of T1 mapping indices. Methods: CMR studies were performed on two 1.5T and three 3T scanners. Eight phantoms were made to mimic the T1/T2 of pre- and post-contrast myocardium and blood at 1.5T and 3T. T1 mapping using MOLLI was performed with simulated heart rates of 40-100 bpm. Inversion-recovery spin echo (IR-SE) was the reference standard for T1 determination. Accuracy was defined as the percent error between MOLLI and IR-SE, and scan/re-scan reproducibility was defined as the relative percent mean difference between repeat MOLLI scans. The partition coefficient was estimated as ΔR1(myocardium phantom)/ΔR1(blood phantom). A generalized linear mixed model was used to compare the accuracy and precision/reproducibility of T1 and λ across field strengths, scanners, and protocols. Results: Field strength significantly affected MOLLI T1 accuracy (6.3% error for 1.5T vs. 10.8% error for 3T, p<0.001) but not λ accuracy (8.8% error for 1.5T vs. 8.0% error for 3T, p=0.11). Partition coefficients of MOLLI were not different between the two 1.5T scanners (47.2% vs. 47.9%, p=0.13) and showed only slight variation across the three 3T scanners (49.2% vs. 49.8% vs. 49.9%, p=0.016). The partition coefficient also had a significantly lower percent error for precision (better scan/re-scan reproducibility) than measurement of individual T1 values (approximately 3.6% for λ vs. 4.3%-4.8% for pre/post blood and myocardium T1 values). Conclusion: Based on phantom studies, T1 errors using MOLLI ranged from 6-14% across various MR scanners, while errors for the partition coefficient were smaller (6-10%). 
Compared with absolute T1 times, the partition coefficient showed less variability across platforms and field strengths as well as higher precision. PMID:23890156
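The two definitions above (accuracy as percent error against the IR-SE reference, and λ as the ratio of R1 changes) can be sketched in a few lines. The T1 values below, in milliseconds, are illustrative assumptions, not study data.

```python
def percent_error(measured, reference):
    """Accuracy as the percent error between a MOLLI T1 and the IR-SE reference."""
    return abs(measured - reference) / reference * 100.0

def partition_coefficient(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post):
    """lambda = delta-R1(myocardium) / delta-R1(blood), with R1 = 1/T1."""
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return d_r1_myo / d_r1_blood

# Illustrative pre/post-contrast T1 values (ms), chosen for the example only:
err = percent_error(measured=910.0, reference=970.0)      # ~6.2% error
lam = partition_coefficient(950.0, 470.0, 1450.0, 300.0)  # ~0.41
```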
Baryon Acoustic Oscillations reconstruction with pixels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obuljen, Andrej; Villaescusa-Navarro, Francisco; Castorina, Emanuele
2017-09-01
Gravitational non-linear evolution induces a shift in the position of the baryon acoustic oscillation (BAO) peak, together with a damping and broadening of its shape, that biases and degrades the accuracy with which the position of the peak can be determined. BAO reconstruction is a technique developed to undo part of the effect of non-linearities. We present and analyse a reconstruction method that consists of displacing pixels instead of galaxies and whose implementation is easier than the standard reconstruction method. We show that this method is equivalent to the standard reconstruction technique in the limit where the number of pixels becomes very large. This method is particularly useful in surveys where individual galaxies are not resolved, as in 21cm intensity mapping observations. We validate this method by reconstructing mock pixelated maps, which we build from the distribution of matter and halos in real and redshift space from a large set of numerical simulations. We find that this method is able to decrease the uncertainty in the BAO peak position by 30-50% over the typical angular resolution scales of 21 cm intensity mapping experiments.
Single-edition quadrangle maps
1998-01-01
In August 1993, the U.S. Geological Survey's (USGS) National Mapping Division and the U.S. Department of Agriculture's Forest Service signed an Interagency Agreement to begin a single-edition joint mapping program. This agreement established the coordination for producing and maintaining single-edition primary series topographic maps for quadrangles containing National Forest System lands. The joint mapping program saves money by eliminating duplication of effort by the agencies and results in a more frequent revision cycle for quadrangles containing national forests. Maps are revised on the basis of jointly developed standards and contain normal features mapped by the USGS, as well as additional features required for efficient management of National Forest System lands. Single-edition maps look slightly different but meet the content, accuracy, and quality criteria of other USGS products. The Forest Service is responsible for the land management of more than 191 million acres of land throughout the continental United States, Alaska, and Puerto Rico, including 155 national forests and 20 national grasslands. These areas make up the National Forest System lands and comprise more than 10,600 of the 56,000 primary series 7.5-minute quadrangle maps (15-minute in Alaska) covering the United States. The Forest Service has assumed responsibility for maintaining these maps, and the USGS remains responsible for printing and distributing them. Before the agreement, both agencies published similar maps of the same areas. The maps were used for different purposes, but had comparable types of features that were revised at different times. Now, the two products have been combined into one so that the revision cycle is stabilized and only one agency revises the maps, thus increasing the number of current maps available for National Forest System lands. 
This agreement has improved service to the public by requiring that the agencies share the same maps and that the maps meet a common standard, as well as by significantly reducing duplication of effort.
An automated approach to measuring child movement and location in the early childhood classroom.
Irvin, Dwight W; Crutchfield, Stephen A; Greenwood, Charles R; Kearns, William D; Buzhardt, Jay
2018-06-01
Children's movement is an important issue in child development and outcome in early childhood research, intervention, and practice. Digital sensor technologies offer improvements in naturalistic movement measurement and analysis. We conducted validity and feasibility testing of a real-time, indoor mapping and location system (Ubisense, Inc.) within a preschool classroom. Real-time indoor mapping has several implications with respect to efficiently and conveniently: (a) determining the activity areas where children are spending the most and least time per day (e.g., music); and (b) mapping a focal child's atypical real-time movements (e.g., lapping behavior). We calibrated the accuracy of Ubisense point-by-point location estimates (i.e., X and Y coordinates) against laser rangefinder measurements using several stationary points and atypical movement patterns as reference standards. Our results indicate that activity areas occupied and atypical movement patterns could be plotted with an accuracy of 30.48 cm (1 ft) using a Ubisense transponder tag attached to the participating child's shirt. The accuracy parallels findings of other researchers employing Ubisense to study atypical movement patterns in individuals at risk for dementia in an assisted living facility. The feasibility of Ubisense was tested in an approximately 90-min assessment of two children, one typically developing and one with Down syndrome, during natural classroom activities, and the results proved positive. Implications for employing Ubisense in early childhood classrooms as a data-based decision-making tool to support children's development and its potential integration with other wearable sensor technologies are discussed.
Doble, Brett; Lorgelly, Paula
2016-04-01
To determine the external validity of existing mapping algorithms for predicting EQ-5D-3L utility values from EORTC QLQ-C30 responses and to establish their generalizability in different types of cancer. A main analysis (pooled) sample of 3560 observations (1727 patients) and two disease-severity patient samples (496 and 93 patients) with repeated observations over time from Cancer 2015 were used to validate the existing algorithms. Errors were calculated between observed and predicted EQ-5D-3L utility values using a single pooled sample and ten pooled tumour-type-specific samples. Predictive accuracy was assessed using mean absolute error (MAE) and standardized root-mean-squared error (RMSE). The association between observed and predicted EQ-5D utility values and other covariates across the distribution was tested using quantile regression. Quality-adjusted life years (QALYs) were calculated using observed and predicted values to test responsiveness. Ten 'preferred' mapping algorithms were identified. Two algorithms, estimated via response mapping and ordinary least-squares regression using dummy variables, performed well on a number of validation criteria, including accurate prediction of the best and worst QLQ-C30 health states, predicted values within the EQ-5D tariff range, relatively small MAEs and RMSEs, and minimal differences between estimated QALYs. Comparison of predictive accuracy across the ten tumour-type-specific samples highlighted that the algorithms are relatively insensitive to grouping by tumour type and are affected more by differences in disease severity. Two of the 'preferred' mapping algorithms suggest more accurate predictions, but limitations exist. We recommend extensive scenario analyses if mapped utilities are used in cost-utility analyses.
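The two accuracy measures named above, MAE and RMSE between observed and predicted utilities, can be sketched minimally; the utility values below are made up for illustration, not trial data.

```python
import math

def mae(observed, predicted):
    """Mean absolute error between observed and predicted utility values."""
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

def rmse(observed, predicted):
    """Root-mean-squared error between observed and predicted utility values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed))

# Illustrative EQ-5D utilities (assumptions, not study data):
observed = [0.85, 0.62, 0.41, 0.73]
predicted = [0.80, 0.70, 0.52, 0.69]
print(round(mae(observed, predicted), 3), round(rmse(observed, predicted), 3))
```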
Arnold, David T; Rowen, Donna; Versteegh, Matthijs M; Morley, Anna; Hooper, Clare E; Maskell, Nicholas A
2015-01-23
In order to estimate utilities for cancer studies where the EQ-5D was not used, the EORTC QLQ-C30 can be used to estimate EQ-5D values via existing mapping algorithms. Several mapping algorithms exist for this transformation; however, algorithms tend to lose accuracy for patients in poor health states. The aim of this study was to test all existing mapping algorithms of the QLQ-C30 onto the EQ-5D in a dataset of patients with malignant pleural mesothelioma, an invariably fatal malignancy for which no previous mapping estimation has been published. Health-related quality of life (HRQoL) data in which both the EQ-5D and QLQ-C30 were used simultaneously were obtained from the UK-based prospective observational SWAMP (South West Area Mesothelioma and Pemetrexed) trial. In the original trial, 73 patients with pleural mesothelioma were offered palliative chemotherapy and their HRQoL was assessed across five time points. These data were used to test the nine available mapping algorithms found in the literature, comparing predicted against observed EQ-5D values. The ability of the algorithms to predict the mean, minimise error, and detect clinically significant differences was assessed. The dataset had a total of 250 observations across five time points. The linear regression mapping algorithms tested generally performed poorly, over-estimating the predicted compared with observed EQ-5D values, especially when the observed EQ-5D was below 0.5. The best-performing algorithm used a response mapping method and predicted the mean EQ-5D accurately, with an average root-mean-squared error of 0.17 (standard deviation 0.22). This algorithm reliably discriminated between clinically distinct subgroups seen in the primary dataset. This study tested mapping algorithms in a population with poor health states, where they have previously been shown to perform poorly. Further research into EQ-5D estimation should be directed at response mapping methods, given their superior performance in this study.
Developmental Changes in Cross-Situational Word Learning: The Inverse Effect of Initial Accuracy
ERIC Educational Resources Information Center
Fitneva, Stanka A.; Christiansen, Morten H.
2017-01-01
Intuitively, the accuracy of initial word-referent mappings should be positively correlated with the outcome of learning. Yet recent evidence suggests an inverse effect of initial accuracy in adults, whereby greater accuracy of initial mappings is associated with poorer outcomes in a cross-situational learning task. Here, we examine the impact of…
NASA Astrophysics Data System (ADS)
Herkül, Kristjan; Peterson, Anneliis; Paekivi, Sander
2017-06-01
Both basic science and marine spatial planning are in a need of high resolution spatially continuous data on seabed habitats and biota. As conventional point-wise sampling is unable to cover large spatial extents in high detail, it must be supplemented with remote sensing and modeling in order to fulfill the scientific and management needs. The combined use of in situ sampling, sonar scanning, and mathematical modeling is becoming the main method for mapping both abiotic and biotic seabed features. Further development and testing of the methods in varying locations and environmental settings is essential for moving towards unified and generally accepted methodology. To fill the relevant research gap in the Baltic Sea, we used multibeam sonar and mathematical modeling methods - generalized additive models (GAM) and random forest (RF) - together with underwater video to map seabed substrate and epibenthos of offshore shallows. In addition to testing the general applicability of the proposed complex of techniques, the predictive power of different sonar-based variables and modeling algorithms were tested. Mean depth, followed by mean backscatter, were the most influential variables in most of the models. Generally, mean values of sonar-based variables had higher predictive power than their standard deviations. The predictive accuracy of RF was higher than that of GAM. To conclude, we found the method to be feasible and with predictive accuracy similar to previous studies of sonar-based mapping.
DESIGN AND ANALYSIS FOR THEMATIC MAP ACCURACY ASSESSMENT: FUNDAMENTAL PRINCIPLES
Before being used in scientific investigations and policy decisions, thematic maps constructed from remotely sensed data should be subjected to a statistically rigorous accuracy assessment. The three basic components of an accuracy assessment are: 1) the sampling design used to s...
NASA Technical Reports Server (NTRS)
Wynn, L. K.
1985-01-01
The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS and then combined using IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The number of pixels comprising each CCM region was compared between the two map images, and the percent agreement between each pair of regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes; analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
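The map-to-map comparison described above reduces to counting matching codes between two registered categorical rasters. A hedged sketch, where the 3×3 grids are illustrative stand-ins, not the Fort Lewis data:

```python
# IBIS-generated vs. manually produced CCM map codes per pixel (illustrative):
ibis_map = [[1, 1, 2],
            [1, 2, 2],
            [3, 3, 2]]
dma_map = [[1, 1, 2],
           [1, 2, 3],
           [3, 3, 2]]

# Flatten the two registered rasters into aligned (ibis, dma) code pairs.
pairs = [(a, b) for row_a, row_b in zip(ibis_map, dma_map)
         for a, b in zip(row_a, row_b)]
percent_agreement = 100.0 * sum(a == b for a, b in pairs) / len(pairs)
print(round(percent_agreement, 2))  # 8 of 9 pixels match
```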
A Practical and Automated Approach to Large Area Forest Disturbance Mapping with Remote Sensing
Ozdogan, Mutlu
2014-01-01
In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using an n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of mis-classification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. 
Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for forest cover change analysis over very large regions. PMID:24717283
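Step (ii) of the procedure above, selecting training pixels whose SWIR-difference value lies beyond a standard-deviation threshold around a window mean, might look like the following sketch. The single flat window, the threshold k, and the class labels are assumptions for illustration, not the paper's code.

```python
import statistics

def label_training_pixels(swir_diff, k=2.0):
    """Label pixels beyond k standard deviations of the window mean."""
    mean = statistics.mean(swir_diff)
    sd = statistics.pstdev(swir_diff)
    labels = []
    for value in swir_diff:
        if value > mean + k * sd:
            labels.append("disturbed")   # strong SWIR increase in the difference image
        elif value < mean - k * sd:
            labels.append("other")       # assumption: opposite tail treated separately
        else:
            labels.append("unlabeled")   # left out of the training set
    return labels

window = [0.01] * 20 + [0.90]            # one outlier in an otherwise flat window
print(label_training_pixels(window).count("disturbed"))
```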
Spatial Patterns of NLCD Land Cover Change Thematic Accuracy (2001 - 2011)
Research on spatial non-stationarity of land cover classification accuracy has been ongoing for over two decades. We extend the understanding of thematic map accuracy spatial patterns by: 1) quantifying spatial patterns of map-reference agreement for class-specific land cover c...
de Klerk, Helen M; Gilbertson, Jason; Lück-Vogel, Melanie; Kemp, Jaco; Munch, Zahn
2016-11-01
Traditionally, to map environmental features using remote sensing, practitioners use training data to develop models on various satellite data sets with a number of classification approaches, and use test data to select a single 'best performer' from which the final map is made. We use a combination of an omission/commission plot to evaluate the various results and compile a probability map based on consistently strong-performing models across a range of standard accuracy measures. We suggest that this easy-to-use approach can be applied in any study using remote sensing to map natural features for management action. We demonstrate the approach using optical remote sensing products of different spatial and spectral resolution to map the endemic and threatened flora of quartz patches in the Knersvlakte, South Africa. Quartz patches can be mapped using either SPOT 5 imagery (used for its relatively fine spatial resolution) or Landsat 8 imagery (used because it is freely accessible and has higher spectral resolution). Of the variety of classification algorithms available, we tested maximum likelihood and support vector machine classifiers, and applied these to the raw spectral data, the first three principal components of the data, and the standard normalised difference vegetation index. We found that there is no 'one size fits all' solution to the choice of a 'best fit' model (i.e., combination of classification algorithm and data set), which is in agreement with the literature that classifier performance varies with data properties. We feel this lends support to our suggestion that, rather than identifying a 'single best' model and basing the map on this result alone, a probability map built from the range of consistently top-performing models provides a rigorous solution to environmental mapping.
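The probability-map idea described above can be sketched as the per-pixel fraction of consistently top-performing models that predict the target class. The binary model outputs below are illustrative, not the study's classifications.

```python
def probability_map(model_outputs):
    """model_outputs: binary pixel vectors (1 = quartz patch), one per model."""
    n_models = len(model_outputs)
    return [sum(votes) / n_models for votes in zip(*model_outputs)]

# Three hypothetical top-performing classifiers over four pixels:
maps = [
    [1, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]
print([round(p, 2) for p in probability_map(maps)])  # [1.0, 0.33, 0.67, 1.0]
```

Pixels where all strong models agree get probability 1.0; disagreement shows up as intermediate values rather than being hidden by a single 'best' map.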
David M. Bell; Matthew J. Gregory; Heather M. Roberts; Raymond J. Davis; Janet L. Ohmann
2015-01-01
Accuracy assessments of remote sensing products are necessary for identifying map strengths and weaknesses in scientific and management applications. However, not all accuracy assessments are created equal. Motivated by a recent study published in Forest Ecology and Management (Volume 342, pages 8–20), we explored the potential limitations of accuracy assessments...
Translational Imaging Spectroscopy for Proximal Sensing
Rogass, Christian; Koerting, Friederike M.; Mielke, Christian; Brell, Maximilian; Boesche, Nina K.; Bade, Maria; Hohmann, Christian
2017-01-01
Proximal sensing, as the near-field counterpart of remote sensing, offers a broad variety of applications. Imaging spectroscopy in general, and translational laboratory imaging spectroscopy in particular, can be utilized for a variety of different research topics. Geoscientific applications require precise pre-processing of hyperspectral data cubes to retrieve at-surface reflectance in order to conduct spectral-feature-based comparison of unknown sample spectra to known library spectra. A new pre-processing chain for at-surface reflectance retrieval, called GeoMAP-Trans, is proposed here as an analogue to other algorithms published by the team of authors. It consists of a radiometric, a geometric and a spectral module, each comprising several processing steps that are described in detail. The processing chain was adapted to the widely used HySPEX VNIR/SWIR imaging spectrometer system and tested using geological mineral samples. The performance was subjectively and objectively evaluated using standard artificial image quality metrics and comparative measurements of mineral and Lambertian diffuser standards with standard field and laboratory spectrometers. The proposed algorithm provides high-quality results, offers broad applicability through its generic design, and might be the first of its kind to be published. A high radiometric accuracy is achieved by incorporating the Reduction of Miscalibration Effects (ROME) framework. The geometric accuracy is better than 1 μpixel. The spectral accuracy was estimated relatively, by comparing spectra of standard field spectrometers to those from HySPEX for a Lambertian diffuser; the achieved spectral accuracy is better than 0.02% for the full spectrum and better than 98% for the absorption features. 
It was empirically shown that point and imaging spectrometers provide different results for non-Lambertian samples due to their different sensing principles, adjacency scattering impacts on the signal and anisotropic surface reflection properties. PMID:28800111
He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui
2015-08-13
In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping problem (SLAM), which is very crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. With the scalability advantage being kept, the consistency and accuracy of SEIF is improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF, as well.
Dilthey, Alexander T; Gourraud, Pierre-Antoine; Mentzer, Alexander J; Cereb, Nezih; Iqbal, Zamin; McVean, Gil
2016-10-01
Genetic variation at the Human Leucocyte Antigen (HLA) genes is associated with many autoimmune and infectious disease phenotypes, is an important element of the immunological distinction between self and non-self, and shapes immune epitope repertoires. Determining the allelic state of the HLA genes (HLA typing) as a by-product of standard whole-genome sequencing data would therefore be highly desirable and enable the immunogenetic characterization of samples in currently ongoing population sequencing projects. Extensive hyperpolymorphism and sequence similarity between the HLA genes, however, pose problems for accurate read mapping and make HLA type inference from whole-genome sequencing data a challenging problem. We describe how to address these challenges in a Population Reference Graph (PRG) framework. First, we construct a PRG for 46 (mostly HLA) genes and pseudogenes, their genomic context and their characterized sequence variants, integrating a database of over 10,000 known allele sequences. Second, we present a sequence-to-PRG paired-end read mapping algorithm that enables accurate read mapping for the HLA genes. Third, we infer the most likely pair of underlying alleles at G group resolution from the IMGT/HLA database at each locus, employing a simple likelihood framework. We show that HLA*PRG, our algorithm, outperforms existing methods by a wide margin. We evaluate HLA*PRG on six classical class I and class II HLA genes (HLA-A, -B, -C, -DQA1, -DQB1, -DRB1) and on a set of 14 samples (3 samples with 2 x 100bp, 11 samples with 2 x 250bp Illumina HiSeq data). Of 158 alleles tested, we correctly infer 157 alleles (99.4%). We also identify and re-type two erroneous alleles in the original validation data. 
We conclude that HLA*PRG for the first time achieves accuracies comparable to gold-standard reference methods from standard whole-genome sequencing data, though high computational demands (currently ~30-250 CPU hours per sample) remain a significant challenge to practical application.
Osman, Reham B; Alharbi, Nawal; Wismeijer, Daniel
The aim of this study was to evaluate the effect of the build orientation/build angle on the dimensional accuracy of full-coverage dental restorations manufactured using digital light-processing technology (DLP-AM). A full dental crown was digitally designed and 3D-printed using DLP-AM. Nine build angles were used: 90, 120, 135, 150, 180, 210, 225, 240, and 270 degrees. The specimens were digitally scanned using a high-resolution optical surface scanner (IScan D104i, Imetric). Dimensional accuracy was evaluated using the digital subtraction technique. The 3D digital files of the scanned printed crowns (test model) were exported in standard tessellation language (STL) format and superimposed on the STL file of the designed crown [reference model] using Geomagic Studio 2014 (3D Systems). The root mean square estimate (RMSE) values were evaluated, and the deviation patterns on the color maps were further assessed. The build angle influenced the dimensional accuracy of 3D-printed restorations. The lowest RMSE was recorded for the 135-degree and 210-degree build angles. However, the overall deviation pattern on the color map was more favorable with the 135-degree build angle in contrast with the 210-degree build angle where the deviation was observed around the critical marginal area. Within the limitations of this study, the recommended build angle using the current DLP system was 135 degrees. Among the selected build angles, it offers the highest dimensional accuracy and the most favorable deviation pattern. It also offers a self-supporting crown geometry throughout the building process.
Will it Blend? Visualization and Accuracy Evaluation of High-Resolution Fuzzy Vegetation Maps
NASA Astrophysics Data System (ADS)
Zlinszky, A.; Kania, A.
2016-06-01
Instead of assigning every map pixel to a single class, fuzzy classification records not only the class assigned to each pixel but also the certainty of that class and the alternative possible classes, based on fuzzy set theory. The advantages of fuzzy classification for vegetation mapping are well recognized, but the accuracy and uncertainty of fuzzy maps cannot be directly quantified with indices developed for hard-boundary categorizations. The rich information in such a map is impossible to convey with a single map product or accuracy figure. Here we introduce a suite of evaluation indices and visualization products for fuzzy maps generated with ensemble classifiers. We also propose a way of evaluating classwise prediction certainty with "dominance profiles", which visualize the number of pixels in bins according to the probability of the dominant class while also showing the probability of all the other classes. Together, these data products allow a quantitative understanding of the rich information in a fuzzy raster map, both for individual classes and in terms of variability in space, and also establish the connection between spatially explicit class certainty and traditional accuracy metrics. These map products are directly comparable to widely used hard-boundary evaluation procedures, support active-learning-based iterative classification, and can be applied for operational use.
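The dominance-profile idea can be sketched by binning pixels on the probability of their most likely class; the fuzzy-membership array below is a random stand-in, not the authors' data:

```python
import numpy as np

def dominance_profile(probs, n_bins=10):
    """Bin pixels by the probability of their dominant class.

    probs: (n_pixels, n_classes) fuzzy memberships, rows summing to 1.
    Returns (bin_edges, counts); the per-bin pixel counts form the profile.
    """
    p_dominant = probs.max(axis=1)
    counts, edges = np.histogram(p_dominant, bins=n_bins, range=(0.0, 1.0))
    return edges, counts

# Toy ensemble output for 1000 pixels and 4 vegetation classes
rng = np.random.default_rng(0)
raw = rng.random((1000, 4))
probs = raw / raw.sum(axis=1, keepdims=True)
edges, counts = dominance_profile(probs)
```

The full visualization proposed in the abstract additionally stacks the non-dominant class probabilities within each bin; this sketch only produces the binning backbone.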
Intra- and Interobserver Variability of Cochlear Length Measurements in Clinical CT.
Iyaniwura, John E; Elfarnawany, Mai; Riyahi-Alam, Sadegh; Sharma, Manas; Kassam, Zahra; Bureau, Yves; Parnes, Lorne S; Ladak, Hanif M; Agrawal, Sumit K
2017-07-01
The cochlear A-value measurement exhibits significant inter- and intraobserver variability, and its accuracy is dependent on the visualization method in clinical computed tomography (CT) images of the cochlea. An accurate estimate of the cochlear duct length (CDL) can be used to determine electrode choice and to frequency-map the cochlea based on the Greenwood equation. Studies have described estimating the CDL using a single A-value measurement; however, the observer variability has not been assessed. Clinical and micro-CT images of 20 cadaveric cochleae were acquired. Four specialists measured A-values on clinical CT images using both standard views and multiplanar reconstructed (MPR) views. Measurements were repeated to assess intraobserver variability. Observer variabilities were evaluated using intra-class correlation and absolute differences. Accuracy was evaluated by comparison to the gold-standard micro-CT images of the same specimens. Interobserver variability was good (average absolute difference: 0.77 ± 0.42 mm) using standard views and fair (average absolute difference: 0.90 ± 0.31 mm) using MPR views. Intraobserver variability had an average absolute difference of 0.31 ± 0.09 mm for the standard views and 0.38 ± 0.17 mm for the MPR views. MPR view measurements were more accurate than standard views, with average relative errors of 9.5% and 14.5%, respectively. There was significant observer variability in A-value measurements using both the standard and MPR views. Creating the MPR views increased variability between experts; however, MPR views yielded more accurate results. Automated A-value measurement algorithms may help to reduce variability and increase accuracy in the future.
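Frequency-mapping the cochlea from a length estimate rests on Greenwood's frequency-position function; a sketch using the commonly quoted human constants (assumed here, not values from this study):

```python
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at relative cochlear position x.

    x is the fraction of basilar-membrane length measured from the apex
    (0 = apex, 1 = base); F = A * (10**(a*x) - k). The constants are the
    standard human values from the literature.
    """
    return A * (10.0 ** (a * x) - k)

# The human map spans roughly 20 Hz at the apex to ~20 kHz at the base
f_apex = greenwood_frequency(0.0)
f_base = greenwood_frequency(1.0)
```

In practice, an electrode's insertion depth divided by the estimated CDL gives the relative position fed into this function.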
Active machine learning for rapid landslide inventory mapping with VHR satellite images (Invited)
NASA Astrophysics Data System (ADS)
Stumpf, A.; Lachiche, N.; Malet, J.; Kerle, N.; Puissant, A.
2013-12-01
VHR satellite images have become a primary source for landslide inventory mapping after major triggering events such as earthquakes and heavy rainfalls. Visual image interpretation is still the prevailing standard method for operational purposes but is time-consuming and not well suited to fully exploit the increasingly better supply of remote sensing data. Recent studies have addressed the development of more automated image analysis workflows for landslide inventory mapping. In particular, object-oriented approaches that account for spatial and textural image information have been demonstrated to be more adequate than pixel-based classification, but manually elaborated rule-based classifiers are difficult to adapt under changing scene characteristics. Machine learning algorithms can learn classification rules for complex image patterns from labelled examples and can be adapted straightforwardly when new training data become available. In order to reduce the amount of costly training data, active learning (AL) has evolved as a key concept to guide the sampling for many applications. The underlying idea of AL is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and data structure to iteratively select the most valuable samples that should be labelled by the user. With relatively few queries and labelled samples, an AL strategy can yield higher accuracies than an equivalent classifier trained with many randomly selected samples. This study addressed the development of an AL method for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. Our approach [1] is based on the Random Forest algorithm and considers the classifier uncertainty as well as the variance of potential sampling regions to guide the user towards the most valuable sampling areas.
The algorithm explicitly searches for compact regions and thereby avoids the spatially disperse sampling pattern inherent to most other AL methods. The accuracy, the sampling time and the computational runtime of the algorithm were evaluated on multiple satellite images capturing recent large-scale landslide events. Sampling between 1% and 4% of the study areas, accuracies between 74% and 80% were achieved, whereas standard sampling schemes yielded only accuracies between 28% and 50% at equal sampling costs. Compared to commonly used point-wise AL algorithms, the proposed approach significantly reduces the number of iterations and hence the computational runtime. Since the user can focus on relatively few compact areas (rather than on hundreds of distributed points), the overall labeling time is reduced by more than 50% compared to point-wise queries. An experimental evaluation of multiple expert mappings demonstrated strong relationships between the uncertainties of the experts and the machine learning model. It revealed that the achieved accuracies are within the range of the inter-expert disagreement and that it will be indispensable to consider ground truth uncertainties to truly achieve further enhancements in the future. The proposed method is generally applicable to a wide range of optical satellite images and landslide types. [1] A. Stumpf, N. Lachiche, J.-P. Malet, N. Kerle, and A. Puissant, Active learning in the spatial domain for remote sensing image classification, IEEE Transactions on Geoscience and Remote Sensing, 2013, DOI 10.1109/TGRS.2013.2262052.
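The uncertainty-driven query loop at the core of such methods can be illustrated with plain point-wise margin sampling on synthetic data. This is a simplification: the paper's contribution is region-based queries, which this sketch does not implement.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Two Gaussian blobs standing in for landslide / non-landslide samples
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(3.0, 1.0, (200, 2))])
y = np.repeat([0, 1], 200)

# Small initial training set containing both classes
labeled = np.array([0, 1, 2, 3, 4, 200, 201, 202, 203, 204])
pool = np.setdiff1d(np.arange(len(X)), labeled)

for _ in range(5):                                   # a few AL iterations
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    margin = np.abs(proba[:, 0] - proba[:, 1])       # small margin = uncertain
    query = pool[np.argsort(margin)[:5]]             # 'oracle' labels 5 most uncertain
    labeled = np.concatenate([labeled, query])
    pool = np.setdiff1d(pool, query)
```

Each iteration retrains the forest and queries the samples closest to the decision boundary, mimicking the user labelling step described in the abstract.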
Predicting Sargassum blooms in the Caribbean Sea from MODIS observations
NASA Astrophysics Data System (ADS)
Wang, Mengqiu; Hu, Chuanmin
2017-04-01
Recurrent and significant Sargassum beaching events in the Caribbean Sea (CS) have caused serious environmental and economic problems, calling for a long-term prediction capacity of Sargassum blooms. Here we present predictions based on a hindcast of 2000-2016 observations from Moderate Resolution Imaging Spectroradiometer (MODIS), which showed Sargassum abundance in the CS and the Central West Atlantic (CWA), as well as connectivity between the two regions with time lags. This information was used to derive bloom and nonbloom probability matrices for each 1° square in the CS for the months of May-August, predicted from bloom conditions in a hotspot region in the CWA in February. A suite of standard statistical measures were used to gauge the prediction accuracy, among which the user's accuracy and kappa statistics showed high fidelity of the probability maps in predicting both blooms and nonblooms in the eastern CS with several months of lead time, with overall accuracy often exceeding 80%. The bloom probability maps from this hindcast analysis will provide early warnings to better study Sargassum blooms and prepare for beaching events near the study region. This approach may also be extendable to many other regions around the world that face similar challenges and opportunities of macroalgal blooms and beaching events.
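The accuracy measures named above (overall accuracy, user's accuracy, kappa) all derive from a confusion matrix of predicted versus observed bloom states; a generic sketch with made-up counts:

```python
import numpy as np

def accuracy_metrics(cm):
    """Standard map-accuracy statistics from a confusion matrix.

    cm[i, j] = number of cells predicted as class i and observed as
    class j (rows = prediction, columns = reference).
    Returns (overall accuracy, per-class user's accuracy, Cohen's kappa).
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    overall = np.trace(cm) / n
    users = np.diag(cm) / cm.sum(axis=1)                       # per predicted class
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2      # chance agreement
    kappa = (overall - pe) / (1.0 - pe)
    return overall, users, kappa

# Hypothetical 2x2 matrix: bloom vs. non-bloom predictions against observations
cm = [[80, 10],
      [5, 105]]
overall, users, kappa = accuracy_metrics(cm)
```

With these counts the overall accuracy is 92.5%, in the same range as the figures quoted in the abstract.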
Gene Identification Algorithms Using Exploratory Statistical Analysis of Periodicity
NASA Astrophysics Data System (ADS)
Mukherjee, Shashi Bajaj; Sen, Pradip Kumar
2010-10-01
Studying periodic patterns is expected to be a standard line of attack for recognizing DNA sequences in gene identification and similar problems, but peculiarly little significant work has been done in this direction. This paper studies statistical properties of complete-genome DNA sequences using a new technique. A DNA sequence is converted to a numeric sequence using various types of mappings, and standard Fourier techniques are applied to study the periodicity. Distinct statistical behaviour of the periodicity parameters is found in coding and non-coding sequences, which can be used to distinguish between these parts. Here, DNA sequences of Drosophila melanogaster were analyzed with significant accuracy.
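The mapping-plus-Fourier pipeline can be sketched with binary (Voss-style) indicator sequences and the classic period-3 peak that distinguishes coding DNA; the toy sequence below is illustrative, not the Drosophila data:

```python
import numpy as np

def spectrum_peak_at_third(seq):
    """Relative spectral power at period 3 for a DNA string.

    Each base gives a binary (Voss) indicator sequence; the summed DFT
    power at frequency N/3 is the classic signature of coding DNA.
    """
    N = len(seq)
    total = np.zeros(N)
    for base in "ACGT":
        u = np.array([c == base for c in seq], dtype=float)
        total += np.abs(np.fft.fft(u)) ** 2
    return total[N // 3] / total[1:N // 2].mean()   # peak vs. mean power

# A perfectly 3-periodic toy 'exon' produces a pronounced peak
ratio = spectrum_peak_at_third("ATG" * 60)
```

Real analyses compute this statistic in sliding windows and compare its distribution between annotated coding and non-coding regions.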
Changing the Production Pipeline - Use of Oblique Aerial Cameras for Mapping Purposes
NASA Astrophysics Data System (ADS)
Moe, K.; Toschi, I.; Poli, D.; Lago, F.; Schreiner, C.; Legat, K.; Remondino, F.
2016-06-01
This paper discusses the potential of current photogrammetric multi-head oblique cameras, such as UltraCam Osprey, to improve the efficiency of standard photogrammetric methods for surveying applications like inventory surveys and topographic mapping for public administrations or private customers. In 2015, Terra Messflug (TM), a subsidiary of Vermessung AVT ZT GmbH (Imst, Austria), flew a number of urban areas in Austria, the Czech Republic and Hungary with an UltraCam Osprey Prime multi-head camera system from Vexcel Imaging. In collaboration with FBK Trento (Italy), the data acquired at Imst (a small town in Tyrol, Austria) were analysed and processed to extract precise 3D topographic information. The Imst block comprises 780 images and covers an area of approx. 4.5 km by 1.5 km. Ground truth data are provided in the form of 6 GCPs and several check points surveyed with RTK GNSS. In addition, 3D building data obtained by photogrammetric stereo plotting from a 5 cm nadir flight and a LiDAR point cloud with 10 to 20 measurements per m² are available as reference data or for comparison. The photogrammetric workflow, from flight planning to Dense Image Matching (DIM) and 3D building extraction, is described together with the achieved accuracy. For each step, the differences and innovations with respect to standard photogrammetric procedures based on nadir images are shown, including high overlaps, improved vertical accuracy, and visibility of areas masked in the standard vertical views. Finally, the advantages of using oblique images for inventory surveys are demonstrated.
Okur, Aylin; Kantarcı, Mecit; Kızrak, Yeşim; Yıldız, Sema; Pirimoğlu, Berhan; Karaca, Leyla; Oğul, Hayri; Sevimli, Serdar
2014-01-01
PURPOSE We aimed to use a noninvasive method for quantifying T1 values of chronic myocardial infarction scar by cardiac magnetic resonance imaging (MRI), and determine its diagnostic performance. MATERIALS AND METHODS We performed cardiac MRI on 29 consecutive patients with known coronary artery disease (CAD) on a 3.0 Tesla MRI scanner. An unenhanced T1 mapping technique was used to calculate the T1 relaxation time of myocardial scar tissue, and its diagnostic performance was evaluated. Chronic scar tissue was identified by delayed contrast-enhancement (DE) MRI and T2-weighted images. Sensitivity, specificity, and accuracy values were calculated for T1 mapping using DE images as the gold standard. RESULTS Four hundred and forty-two segments were analyzed in 26 patients. While myocardial chronic scar was demonstrated in 45 segments on DE images, T1 mapping MRI showed a chronic scar area in 54 segments. T1 relaxation time was higher in chronic scar tissue compared with remote areas (1314±98 ms vs. 1099±90 ms, P < 0.001). Thus, increased T1 values were shown in areas of myocardium colocalized with areas of DE and normal signal on T2-weighted images. There was a significant correlation between T1 mapping and DE images in evaluating the extent of myocardial wall injury (P < 0.05). We calculated sensitivity, specificity, and accuracy as 95.5%, 97%, and 96%, respectively. CONCLUSION The results of the present study reveal that T1 mapping MRI combined with T2-weighted images might be a feasible imaging modality for detecting chronic myocardial infarction scar tissue. PMID:25010366
Lidar on small UAV for 3D mapping
NASA Astrophysics Data System (ADS)
Tulldahl, H. Michael; Larsson, Håkan
2014-10-01
Small UAVs (unmanned aerial vehicles) are currently in an explosive technical development phase. The performance of UAV-system components such as inertial navigation sensors, propulsion, control processors and algorithms is gradually improving. Simultaneously, lidar technologies are continuously developing in terms of reliability, accuracy, and speed of data collection, storage and processing. The lidar development towards miniature systems with high data rates has, together with recent UAV development, a great potential for new three-dimensional (3D) mapping capabilities. Compared to lidar mapping from manned full-size aircraft, a small unmanned aircraft can be cost-efficient over small areas and more flexible for deployment. An advantage of high-resolution lidar compared to 3D mapping from passive (multi-angle) photogrammetry is the ability to penetrate through vegetation and detect partially obscured targets. Another advantage is the ability to obtain 3D data over the whole survey area, without the limited performance of passive photogrammetry in low-contrast areas. The purpose of our work is to demonstrate 3D lidar mapping capability from a small multirotor UAV. We present the first experimental results and the mechanical and electrical integration of the Velodyne HDL-32E lidar on a six-rotor aircraft with a total weight of 7 kg. The rotating lidar is mounted at an angle of 20 degrees from the horizontal plane, giving a vertical field-of-view of 10-50 degrees below the horizon in the aircraft's forward direction. For absolute positioning of the 3D data, accurate positioning and orientation of the lidar sensor is of high importance. We evaluate the lidar data position accuracy both based on inertial navigation system (INS) data, and on INS data combined with lidar data. The INS sensors consist of accelerometers, gyroscopes, GPS, magnetometers, and a pressure sensor for altimetry.
The lidar range resolution and accuracy is documented as well as the capability for target surface reflectivity estimation based on measurements on calibration standards. Initial results of the general mapping capability including the detection through partly obscured environments is demonstrated through field data collection and analysis.
NASA Technical Reports Server (NTRS)
Spann, G. W.; Faust, N. L.
1974-01-01
It is known from several previous investigations that many categories of land-use can be mapped via computer processing of Earth Resources Technology Satellite data. The results are presented of one such experiment using the USGS/NASA land-use classification system. Douglas County, Georgia, was chosen as the test site for this project. It was chosen primarily because of its recent rapid growth and future growth potential. Results of the investigation indicate an overall land-use mapping accuracy of 67% with higher accuracies in rural areas and lower accuracies in urban areas. It is estimated, however, that 95% of the State of Georgia could be mapped by these techniques with an accuracy of 80% to 90%.
Mapping Resource Selection Functions in Wildlife Studies: Concerns and Recommendations
Morris, Lillian R.; Proffitt, Kelly M.; Blackburn, Jason K.
2018-01-01
Predicting the spatial distribution of animals is an important and widely used tool with applications in wildlife management, conservation, and population health. Wildlife telemetry technology coupled with the availability of spatial data and GIS software have facilitated advancements in species distribution modeling. There are also challenges related to these advancements, including the accurate and appropriate implementation of species distribution modeling methodology. Resource Selection Function (RSF) modeling is a commonly used approach for understanding species distributions and habitat usage, and mapping the RSF results can enhance study findings and make them more accessible to researchers and wildlife managers. Currently, there is no consensus in the literature on the most appropriate method for mapping RSF results, methods are frequently not described, and mapping approaches are not always related to accuracy metrics. We conducted a systematic review of the RSF literature to summarize the methods used to map RSF outputs, discuss the relationship between mapping approaches and accuracy metrics, perform a case study on the implications of employing different mapping methods, and provide recommendations on appropriate mapping techniques for RSF studies. We found extensive variability in methodology for mapping RSF results. Our case study revealed that the most commonly used approaches for mapping RSF results led to notable differences in the visual interpretation of RSF results, and there is a concerning disconnect between accuracy metrics and mapping methods. We make 5 recommendations for researchers mapping the results of RSF studies, which are focused on carefully selecting and describing the method used to map RSF studies, and relating mapping approaches to accuracy metrics. PMID:29887652
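One of the mapping schemes such reviews compare is equal-area (quantile) binning of the continuous RSF predictions into ordinal classes; a minimal sketch (the values are random stand-ins for model output):

```python
import numpy as np

def quantile_bin_rsf(rsf_values, n_classes=10):
    """Classify continuous RSF predictions into equal-area (quantile) bins.

    Returns integer classes 1..n_classes (1 = lowest relative selection).
    The binning scheme used should always be reported alongside the map
    and related to an accuracy metric.
    """
    edges = np.quantile(rsf_values, np.linspace(0.0, 1.0, n_classes + 1))
    return np.clip(np.searchsorted(edges, rsf_values, side="right"), 1, n_classes)

# Hypothetical RSF predictions for 1000 raster cells
vals = np.random.default_rng(1).random(1000)
classes = quantile_bin_rsf(vals)
```

Because each bin holds an equal share of cells, quantile maps emphasize relative ranking; equal-interval or natural-breaks binning of the same values can look quite different, which is the case-study point made above.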
NASA Astrophysics Data System (ADS)
Diesing, Markus; Green, Sophie L.; Stephens, David; Lark, R. Murray; Stewart, Heather A.; Dove, Dayton
2014-08-01
Marine spatial planning and conservation need underpinning with sufficiently detailed and accurate seabed substrate and habitat maps. Although multibeam echosounders enable us to map the seabed with high resolution and spatial accuracy, there is still a lack of fit-for-purpose seabed maps. This is due to the high costs involved in carrying out systematic seabed mapping programmes and the fact that the development of validated, repeatable, quantitative and objective methods of swath acoustic data interpretation is still in its infancy. We compared a wide spectrum of approaches including manual interpretation, geostatistics, object-based image analysis and machine learning to gain further insights into the accuracy and comparability of acoustic data interpretation approaches based on multibeam echosounder data (bathymetry, backscatter and derivatives) and seabed samples, with the aim of deriving seabed substrate maps. Sample data were split into a training and validation data set to allow us to carry out an accuracy assessment. Overall thematic classification accuracy ranged from 67% to 76% and Cohen's kappa varied between 0.34 and 0.52. However, these differences were not statistically significant at the 5% level. Misclassifications were mainly associated with uncommon classes, which were rarely sampled. Map outputs were between 68% and 87% identical. To improve classification accuracy in seabed mapping, we suggest that more studies on the factors affecting classification performance, as well as comparative studies testing the performance of different approaches, need to be carried out with a view to developing guidelines for selecting an appropriate method for a given dataset. In the meantime, classification accuracy might be improved by combining different techniques into hybrid approaches and multi-method ensembles.
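The "percent identical" figures quoted for the map outputs correspond to simple cell-by-cell agreement between two categorical rasters; a minimal sketch with toy maps:

```python
import numpy as np

def map_agreement(map_a, map_b):
    """Fraction of cells assigned the same substrate class by two maps."""
    a, b = np.asarray(map_a), np.asarray(map_b)
    return float((a == b).mean())

# Two toy 4-class rasters differing in one cell out of eight
m1 = np.array([[1, 2], [3, 4], [1, 1], [2, 3]])
m2 = np.array([[1, 2], [3, 4], [1, 1], [2, 4]])
agreement = map_agreement(m1, m2)  # 7 of 8 cells agree
```

Unlike kappa, this raw agreement is not chance-corrected, which is why both figures are usually reported together.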
Airborne Navigation Remote Map Reader Evaluation.
1986-03-01
James C. Byrd, Integrated Controls/Displays Branch, Avionics Systems Division, Directorate of Avionics Engineering, March 1986, Final Report. Report sections cover resolution, accuracy, symbology, the video standard, the simulator control box, software, display performance, and reliability. ...can be selected depending on the detail required and will automatically be presented at his present position. The French RMR uses a Flying Spot Scanner.
Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping
NASA Astrophysics Data System (ADS)
Rehak, M.; Skaloud, J.
2015-08-01
In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components, to which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases; while precise aerial position control is sufficient for the block configuration, precise position and attitude control is required for corridor mapping.
Wang, Miaomiao; Li, Bofeng
2016-01-01
An empirical tropospheric delay model, together with a mapping function, is commonly used to correct the tropospheric errors in global navigation satellite system (GNSS) processing. As is well known, the accuracy of tropospheric delay models relies mainly on the correction efficiency for tropospheric wet delays. In this paper, we evaluate the accuracy of three tropospheric delay models together with five mapping functions in wet delay calculation. The evaluations are conducted by comparing their slant wet delays with those measured by a water vapor radiometer based on its satellite-tracking function (data collected with a large liquid water path were removed). For all 15 combinations of three tropospheric models and five mapping functions, their accuracies as a function of elevation are statistically analyzed using nine days of data in two scenarios, with and without meteorological data. The results show that (1) with or without meteorological data, there is no practical difference among the mapping functions, i.e., Chao, Ifadis, Vienna Mapping Function 1 (VMF1), Niell Mapping Function (NMF), and MTT Mapping Function (MTT); (2) without meteorological data, the UNB3 model is much better than the Saastamoinen and Hopfield models, while the Saastamoinen model performs slightly better than the Hopfield model; (3) with meteorological data, the accuracies of all three tropospheric delay models improve to comparable levels, especially at lower elevations. In addition, kinematic precise point positioning, in which no parameter is set up for tropospheric delay modification, is conducted to further evaluate the performance of the tropospheric delay models in positioning accuracy. It is shown that the UNB3 model is best and can achieve about 10 cm accuracy for the N and E coordinate components and about 20 cm accuracy for the U coordinate component, whether or not meteorological data are available.
This accuracy can be obtained by the Saastamoinen model only when meteorological data are available, and degrades to 46 cm for the U component if meteorological data are not available. PMID:26848662
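To illustrate the model class being compared, the Saastamoinen zenith hydrostatic delay and the crudest possible mapping function can be sketched as follows; the mapping functions evaluated above (VMF1, NMF, etc.) use continued-fraction forms rather than the simple 1/sin(e) shown here:

```python
import numpy as np

def saastamoinen_zhd(p_hpa, lat_rad, h_m):
    """Zenith hydrostatic delay (m), Saastamoinen model.

    p_hpa: surface pressure (hPa); lat_rad: latitude (rad);
    h_m: height above the geoid (m).
    """
    return 0.0022768 * p_hpa / (1.0 - 0.00266 * np.cos(2.0 * lat_rad)
                                - 0.28e-6 * h_m)

def slant_delay(zenith_delay, elev_rad):
    """Map a zenith delay to a slant delay with the simplest 1/sin(e)
    mapping function (for illustration only)."""
    return zenith_delay / np.sin(elev_rad)

# Roughly 2.3 m of hydrostatic delay at sea level, standard pressure
zhd = saastamoinen_zhd(1013.25, np.deg2rad(45.0), 100.0)
```

At 30 degrees elevation this crude mapping doubles the zenith delay; the refined mapping functions differ from 1/sin(e) mainly at low elevations, which is where the abstract reports the largest model differences.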
Lugauer, Felix; Wetzl, Jens; Forman, Christoph; Schneider, Manuel; Kiefer, Berthold; Hornegger, Joachim; Nickel, Dominik; Maier, Andreas
2018-06-01
Our aim was to develop and validate a 3D Cartesian Look-Locker T1 mapping technique that achieves high accuracy and whole-liver coverage within a single breath-hold. The proposed method combines sparse Cartesian sampling based on a spatiotemporally incoherent Poisson pattern and k-space segmentation, dedicated to high-temporal-resolution imaging. This combination allows capturing tissue with short relaxation times with volumetric coverage. A joint reconstruction of the 3D + inversion time (TI) data via compressed sensing exploits the spatiotemporal sparsity and ensures consistent quality for the subsequent multistep T1 mapping. Data from the National Institute of Standards and Technology (NIST) phantom and 11 volunteers, along with reference 2D Look-Locker acquisitions, are used for validation. 2D and 3D methods are compared based on T1 values in different abdominal tissues at 1.5 and 3 T. T1 maps obtained from the proposed 3D method compare favorably with those from the 2D reference and additionally allow for reformatting or volumetric analysis. Excellent agreement is shown in phantom data (bias < 2% and < 5% for T1 values of 120 and 2000 ms, respectively) and volunteer data (3D and 2D deviation < 4% for liver, muscle, and spleen) for clinically acceptable scan (20 s) and reconstruction times (< 4 min). Whole-liver T1 mapping with high accuracy and precision is feasible in one breath-hold using spatiotemporally incoherent, sparse 3D Cartesian sampling.
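A single-voxel version of Look-Locker T1 estimation illustrates the multistep mapping referred to above: fit the three-parameter recovery model, then apply the standard apparent-T1 correction. This sketches the general method on synthetic data, not the authors' reconstruction (SciPy is assumed for the fit):

```python
import numpy as np
from scipy.optimize import curve_fit

def ll_model(ti, a, b, t1_star):
    """Look-Locker signal recovery: S(TI) = A - B * exp(-TI / T1*)."""
    return a - b * np.exp(-ti / t1_star)

def fit_t1(ti, signal):
    """Fit the 3-parameter model, then apply the standard Look-Locker
    correction T1 = T1* * (B/A - 1) to recover the true T1."""
    (a, b, t1_star), _ = curve_fit(ll_model, ti, signal,
                                   p0=[signal.max(), 2.0 * signal.max(), 1000.0])
    return t1_star * (b / a - 1.0)

# Synthetic voxel with T1 = 800 ms and ideal inversion (B/A = 2),
# in which case the apparent T1* equals the true T1
ti = np.linspace(50.0, 3000.0, 20)
sig = ll_model(ti, 100.0, 200.0, 800.0)
t1 = fit_t1(ti, sig)
```

In a volumetric acquisition this fit runs per voxel over the TI dimension of the reconstructed 3D + TI data.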
Guitet, Stéphane; Hérault, Bruno; Molto, Quentin; Brunaux, Olivier; Couteron, Pierre
2015-01-01
Precise mapping of above-ground biomass (AGB) is a major challenge for the success of REDD+ processes in tropical rainforest. The usual mapping methods are based on two hypotheses: a large and long-ranged spatial autocorrelation and a strong environmental influence at the regional scale. However, there are no studies of the spatial structure of AGB at the landscape scale to support these assumptions. We studied spatial variation in AGB at various scales using two large forest inventories conducted in French Guiana. The dataset comprised 2507 plots (0.4 to 0.5 ha) of undisturbed rainforest distributed over the whole region. After checking the uncertainties of estimates obtained from these data, we used half of the dataset to develop explicit predictive models including spatial and environmental effects, and tested the accuracy of the resulting maps according to their resolution using the rest of the data. Forest inventories provided accurate AGB estimates at the plot scale, with a mean of 325 Mg.ha-1. They revealed high local variability combined with a weak autocorrelation up to distances of no more than 10 km. Environmental variables accounted for a minor part of spatial variation. The error of the best model including spatial effects was 90 Mg.ha-1 at the plot scale, but coarse graining up to 2-km resolution allowed mapping AGB with errors below 50 Mg.ha-1. Whatever the resolution, no agreement was found with available pan-tropical reference maps. We concluded that the combination of weak autocorrelation and weak environmental effects limits the accuracy of AGB maps in rainforest, and that a trade-off has to be found between spatial resolution and effective accuracy until adequate "wall-to-wall" remote sensing signals provide reliable AGB predictions. In the meantime, using large forest inventories with a low sampling rate (<0.5%) may be an efficient way to increase the global coverage of AGB maps with acceptable accuracy at kilometric resolution.
Combining accuracy assessment of land-cover maps with environmental monitoring programs
Stehman, S.V.; Czaplewski, R.L.; Nusser, S.M.; Yang, L.; Zhu, Z.
2000-01-01
A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring protocols are described. These strategies range from a fully integrated accuracy assessment and environmental monitoring protocol, to one in which the protocols operate nearly independently. For all three strategies, features critical to using monitoring data for accuracy assessment include compatibility of the land-cover classification schemes, precisely co-registered sample data, and spatial and temporal compatibility of the map and reference data. Two monitoring programs, the National Resources Inventory (NRI) and the Forest Inventory and Monitoring (FIM), are used to illustrate important features for implementing a combined protocol.
Song, Sunbin; Luby, Marie; Edwardson, Matthew A.; Brown, Tyler; Shah, Shreyansh; Cox, Robert W.; Saad, Ziad S.; Reynolds, Richard C.; Glen, Daniel R.; Cohen, Leonardo G.; Latour, Lawrence L.
2017-01-01
Introduction: Interpretation of the extent of perfusion deficits in stroke MRI is highly dependent on the method used for analyzing the perfusion-weighted signal intensity time-series after gadolinium injection. In this study, we introduce a new model-free standardized method of temporal similarity perfusion (TSP) mapping for perfusion deficit detection and test its ability and reliability in acute ischemia. Materials and methods: Forty patients with an ischemic stroke or transient ischemic attack were included. Two blinded readers compared real-time generated interactive maps and automatically generated TSP maps to traditional TTP/MTT maps for the presence of perfusion deficits. Lesion volumes were compared for volumetric inter-rater reliability, spatial concordance between perfusion deficits and healthy tissue, and contrast-to-noise ratio (CNR). Results: Perfusion deficits were correctly detected in all patients with acute ischemia. Inter-rater reliability was higher for TSP than for TTP/MTT maps. The correlation between lesion volumes calculated on TSP and traditional maps was high (r(18) = 0.73, p<0.0003), and the effective CNR was greater for TSP compared to TTP (352.3 vs 283.5, t(19) = 2.6, p<0.03) and MTT (228.3, t(19) = 2.8, p<0.03). Discussion: TSP maps provide a reliable and robust model-free method for accurate perfusion deficit detection and improve lesion delineation compared to traditional methods. This simple method is also computationally faster and more easily automated than model-based methods. It can potentially improve the speed and accuracy of perfusion deficit detection for acute stroke treatment and clinical trial inclusion decision-making. PMID:28973000
Accuracy Performance Evaluation of Beidou Navigation Satellite System
NASA Astrophysics Data System (ADS)
Wang, W.; Hu, Y. N.
2017-03-01
Accuracy is one of the key elements of the regional Beidou Navigation Satellite System (BDS) performance standard. In this paper, we review the definition, specification and evaluation standard of BDS accuracy. Current accuracy of the regional BDS is analyzed through ground measurements and compared with GPS in terms of dilution of precision (DOP), signal-in-space user range error (SIS URE), and positioning accuracy. The Positioning DOP (PDOP) map of BDS around the Chinese mainland is compared with that of GPS. The GPS PDOP is between 1.0-2.0 and does not vary with the user latitude and longitude, while the BDS PDOP varies between 1.5-5.0, increasing as the user latitude increases and as the user longitude moves away from 118°E. The accuracies of the BDS broadcast orbits are assessed by taking the precise orbits from the International GNSS Service (IGS) as the reference, and by computing satellite laser ranging (SLR) residuals. The radial errors of the broadcast orbits of the BDS inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites are at the 0.5 m level, larger than those of GPS satellites at the 0.2 m level. The SLR residuals of the geosynchronous orbit (GEO) satellites are 65.0 cm, larger than those of the IGSO and MEO satellites, which are at the 50.0 cm level. The accuracy of the BDS broadcast clock offset parameters is computed by taking the clock measurements of Two-Way Satellite Radio Time Frequency Transfer as the reference. Affected by the age of the broadcast clock parameters, the error of the broadcast clock offset parameters of the MEO satellites is the largest, at the 0.80 m level. Finally, measurements of multi-GNSS experiment (MGEX) receivers are used for positioning accuracy assessment of BDS and GPS. It is concluded that the positioning accuracy of the regional BDS is better than 10 m in both the horizontal and vertical components. The combined positioning accuracy of both systems is better than that of either system alone.
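PDOP, the geometry term analyzed above, comes from the receiver-satellite line-of-sight design matrix; a self-contained sketch with made-up satellite positions (km, local frame centered on the receiver):

```python
import numpy as np

def pdop(sat_positions, receiver):
    """Position dilution of precision from satellite geometry.

    Builds the unit line-of-sight design matrix (plus a clock column),
    inverts the normal matrix, and sums the position variances.
    """
    receiver = np.asarray(receiver, dtype=float)
    rows = []
    for s in np.asarray(sat_positions, dtype=float):
        los = s - receiver
        rows.append(np.append(los / np.linalg.norm(los), 1.0))
    G = np.array(rows)
    Q = np.linalg.inv(G.T @ G)
    return float(np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2]))

# Five hypothetical satellites: four tilted toward the horizon, one at zenith
sats = [[20000, 0, 20000], [-20000, 0, 20000],
        [0, 20000, 20000], [0, -20000, 20000], [0, 0, 26000]]
p = pdop(sats, np.zeros(3))
```

The latitude dependence of the BDS PDOP described above follows directly from this formula: as satellites cluster toward one part of the sky, the normal matrix becomes ill-conditioned and the variance sum grows.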
Development of predictive mapping techniques for soil survey and salinity mapping
NASA Astrophysics Data System (ADS)
Elnaggar, Abdelhamid A.
Conventional soil maps represent a valuable source of information about soil characteristics; however, they are subjective, very expensive, and time-consuming to prepare. They also include no explicit information about the conceptual mental model used in developing them, nor about their accuracy or the error associated with them. Decision tree analysis (DTA) was successfully used in retrieving the expert knowledge embedded in old soil survey data. This knowledge was efficiently used in developing predictive soil maps for the study areas in Benton and Malheur Counties, Oregon, and in assessing their consistency. A soil-landscape model retrieved from a reference area in Harney County was extrapolated to develop a preliminary soil map for the neighboring unmapped part of Malheur County. The developed map had low prediction accuracy, and only a few soil map units (SMUs) were predicted with significant accuracy, mostly shallow SMUs that either have a lithic contact with the bedrock or developed on a duripan. On the other hand, the soil map developed from field data was predicted with very high accuracy (overall about 97%). Salt-affected areas of the Malheur County study area are indicated by their high spectral reflectance and are easily discriminated in the remote sensing data. However, remote sensing data fail to distinguish between the different classes of soil salinity. Using the DTA method, five classes of soil salinity were successfully predicted with an overall accuracy of about 99%. Moreover, the area of salt-affected soil was overestimated when mapped from remote sensing data compared with that predicted by DTA. Hence, DTA could be a very helpful approach for developing soil survey and soil salinity maps in a more objective, effective, less expensive, and quicker way based on field data.
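Decision tree analysis of the kind used here grows trees by choosing splits that maximize information gain; a sketch of that criterion with made-up salinity labels (the study's actual predictors and software are not specified in the abstract):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, subsets):
    """Entropy reduction achieved by splitting `parent` into `subsets`."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in subsets)

# toy salinity classes split on some reflectance threshold (hypothetical data)
parent = ["saline"] * 4 + ["non-saline"] * 4
split = [["saline"] * 4, ["non-saline"] * 4]  # a perfect split
print(information_gain(parent, split))  # → 1.0
```

A split separating the classes perfectly recovers the full one bit of entropy; real splits on noisy field data yield smaller gains, and the tree recursively picks the best available one at each node.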
Wang, Hubiao; Wu, Lin; Chai, Hua; Xiao, Yaofei; Hsu, Houtse; Wang, Yong
2017-08-10
The variation of a marine gravity anomaly reference map is one of the important factors that affect the location accuracy of INS/Gravity integrated navigation systems in underwater navigation. In this study, based on marine gravity anomaly reference maps, new characteristic parameters of the gravity anomaly were constructed. Those characteristic values were calculated for 13 zones (105°-145° E, 0°-40° N) in the Western Pacific area, and simulation experiments of gravity matching-aided navigation were run. The influence of gravity variations on the accuracy of gravity matching-aided navigation was analyzed, and location accuracy of gravity matching in different zones was determined. Studies indicate that the new parameters may better characterize the marine gravity anomaly. Given the precision of current gravimeters and the resolution and accuracy of reference maps, the location accuracy of gravity matching in China's Western Pacific area is ~1.0-4.0 nautical miles (n miles). In particular, accuracy in regions around the South China Sea and Sulu Sea was the highest, better than 1.5 n miles. The gravity characteristic parameters identified herein and characteristic values calculated in various zones provide a reference for the selection of navigation area and planning of sailing routes under conditions requiring certain navigational accuracy.
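The abstract does not define the new characteristic parameters, but two conventional characteristic values for a gravity anomaly reference map, the standard deviation and the roughness, can be sketched as follows (toy grid values in mGal, not the Western Pacific data):

```python
import math

def gravity_std(grid):
    """Standard deviation of gravity anomalies over a rectangular grid (mGal)."""
    vals = [v for row in grid for v in row]
    mean = sum(vals) / len(vals)
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))

def gravity_roughness(grid):
    """Mean absolute difference between each cell and its east/south neighbors."""
    diffs = []
    for i, row in enumerate(grid):
        for j, v in enumerate(row):
            if j + 1 < len(row):
                diffs.append(abs(row[j + 1] - v))
            if i + 1 < len(grid):
                diffs.append(abs(grid[i + 1][j] - v))
    return sum(diffs) / len(diffs)

# illustrative 3x3 anomaly grid in mGal
grid = [[10.0, 12.0, 11.0],
        [9.0, 15.0, 13.0],
        [8.0, 11.0, 12.0]]
print(round(gravity_std(grid), 2), round(gravity_roughness(grid), 2))  # → 1.99 2.25
```

Zones with larger standard deviation and roughness give the matching algorithm more distinctive terrain to lock onto, which is consistent with the higher matching accuracy reported around the South China Sea and Sulu Sea.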
Simultaneous Quantitative MRI Mapping of T1, T2* and Magnetic Susceptibility with Multi-Echo MP2RAGE
Kober, Tobias; Möller, Harald E.; Schäfer, Andreas
2017-01-01
The knowledge of relaxation times is essential for understanding the biophysical mechanisms underlying contrast in magnetic resonance imaging. Quantitative experiments, while offering major advantages in terms of reproducibility, may benefit from simultaneous acquisitions. In this work, we demonstrate the possibility of simultaneously recording relaxation-time and susceptibility maps with a prototype Multi-Echo (ME) Magnetization-Prepared 2 RApid Gradient Echoes (MP2RAGE) sequence. T1 maps can be obtained using the MP2RAGE sequence, which is relatively insensitive to inhomogeneities of the radio-frequency transmit field, B1+. As an extension, multiple gradient echoes can be acquired in each of the MP2RAGE readout blocks, which permits the calculation of T2* and susceptibility maps. We used computer simulations to explore the effects of the parameters on the precision and accuracy of the mapping. In vivo parameter maps up to 0.6 mm nominal resolution were acquired at 7 T in 19 healthy volunteers. Voxel-by-voxel correlations and the test-retest reproducibility were used to assess the reliability of the results. When using optimized parameters, T1 maps obtained with ME-MP2RAGE and standard MP2RAGE showed excellent agreement over the whole range of values found in brain tissues. Simultaneously obtained T2* and susceptibility maps were of comparable quality to Fast Low-Angle SHot (FLASH) results. The acquisition time was more favorable for the ME-MP2RAGE (≈ 19 min) sequence than for the sum of MP2RAGE (≈ 12 min) and FLASH (≈ 10 min) acquisitions. Without relevant sacrifice in accuracy, precision or flexibility, the multi-echo version may yield advantages in terms of reduced acquisition time and intrinsic co-registration, provided that an appropriate optimization of the acquisition parameters is performed. PMID:28081157
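The T2* map from the multiple gradient echoes is typically obtained by fitting a mono-exponential decay S(TE) = S0 * exp(-TE/T2*) per voxel; a log-linear least-squares sketch with synthetic signals (not the sequence's actual reconstruction code):

```python
import math

def fit_t2star(echo_times_ms, signals):
    """Log-linear least-squares fit of S = S0 * exp(-TE / T2*); returns T2* in ms."""
    x = echo_times_ms
    y = [math.log(s) for s in signals]  # linearize: ln S = ln S0 - TE / T2*
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return -sxx / sxy  # T2* = -1 / slope

# synthetic voxel: S0 = 1000, true T2* = 30 ms, echoes at 5/15/25/35 ms
tes = [5.0, 15.0, 25.0, 35.0]
sig = [1000.0 * math.exp(-te / 30.0) for te in tes]
print(round(fit_t2star(tes, sig), 1))  # → 30.0
```

In practice the fit is weighted or done nonlinearly to handle the noise floor at long echo times; the log-linear version shown here is the simplest illustration of why several echoes per readout block suffice for a T2* map.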
Improving BeiDou real-time precise point positioning with numerical weather models
NASA Astrophysics Data System (ADS)
Lu, Cuixian; Li, Xingxing; Zus, Florian; Heinkelmann, Robert; Dick, Galina; Ge, Maorong; Wickert, Jens; Schuh, Harald
2017-09-01
Precise positioning with the current Chinese BeiDou Navigation Satellite System is proven to be of accuracy comparable to the Global Positioning System: centimeter level for the horizontal components and sub-decimeter level for the vertical component. However, BeiDou precise point positioning (PPP) is limited by a relatively long convergence time. In this study, we develop a numerical weather model (NWM) augmented PPP processing algorithm to improve BeiDou precise positioning. Tropospheric delay parameters, i.e., zenith delays, mapping functions, and horizontal delay gradients, derived from short-range forecasts of the Global Forecast System of the National Centers for Environmental Prediction (NCEP), are applied to BeiDou real-time PPP. Observational data from stations of the International GNSS Service (IGS) Multi-GNSS Experiment network that are capable of tracking the BeiDou constellation are processed with the introduced NWM-augmented PPP and with standard PPP processing. The accuracy of the NCEP-derived tropospheric delays is assessed against the IGS final tropospheric delay products. The positioning results show that an improvement in convergence time of up to 60.0 and 66.7% for the east and vertical components, respectively, can be achieved with the NWM-augmented PPP solution compared to the standard PPP solutions, while only slight improvement in solution convergence is found for the north component. A positioning accuracy of 5.7 and 5.9 cm for the east component is achieved with the standard PPP that estimates gradients and the one that does not, respectively, compared with 3.5 cm for the NWM-augmented PPP, an improvement of 38.6 and 40.1%. Compared to the accuracies of 3.7 and 4.1 cm for the north component derived from the two standard PPP solutions, that of the NWM-augmented PPP solution is improved to 2.0 cm, by about 45.9 and 51.2%.
The positioning accuracy for the up component improves from 11.4 and 13.2 cm with the two standard PPP solutions to 8.0 cm with the NWM-augmented PPP solution, an improvement of 29.8 and 39.4%, respectively.
NASA Technical Reports Server (NTRS)
Hall, D. K.; Foster, J. L.; Salomonson, V. V.; Klein, A. G.; Chien, J. Y. L.
1998-01-01
Following the launch of the Earth Observing System first morning (EOS-AM1) satellite, daily, global snow-cover mapping will be performed automatically at a spatial resolution of 500 m, cloud-cover permitting, using Moderate Resolution Imaging Spectroradiometer (MODIS) data. A technique to calculate theoretical accuracy of the MODIS-derived snow maps is presented. Field studies demonstrate that under cloud-free conditions when snow cover is complete, snow-mapping errors are small (less than 1%) in all land covers studied except forests where errors are greater and more variable. The theoretical accuracy of MODIS snow-cover maps is largely determined by percent forest cover north of the snowline. Using the 17-class International Geosphere-Biosphere Program (IGBP) land-cover maps of North America and Eurasia, the Northern Hemisphere is classified into seven land-cover classes and water. Snow-mapping errors estimated for each of the seven land-cover classes are extrapolated to the entire Northern Hemisphere for areas north of the average continental snowline for each month. Average monthly errors for the Northern Hemisphere are expected to range from 5 - 10%, and the theoretical accuracy of the future global snow-cover maps is 92% or higher. Error estimates will be refined after the first full year that MODIS data are available.
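The hemispheric extrapolation described above is, in essence, an area-weighted average of per-class snow-mapping errors; a sketch with illustrative fractions and error values (not the IGBP-derived numbers):

```python
def weighted_error(class_fractions, class_errors):
    """Area-weighted snow-mapping error; class fractions must sum to 1."""
    assert abs(sum(class_fractions.values()) - 1.0) < 1e-9
    return sum(class_fractions[c] * class_errors[c] for c in class_fractions)

# hypothetical land-cover fractions north of the snowline and per-class errors (%)
fractions = {"forest": 0.4, "tundra": 0.3, "grassland": 0.2, "barren": 0.1}
errors = {"forest": 15.0, "tundra": 2.0, "grassland": 1.0, "barren": 0.5}
print(round(weighted_error(fractions, errors), 2))  # → 6.85
```

Because forest error dominates the weighted sum, the theoretical map accuracy is, as the abstract notes, largely determined by the percent forest cover north of the snowline.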
Coniferous forest classification and inventory using Landsat and digital terrain data
NASA Technical Reports Server (NTRS)
Franklin, J.; Logan, T. L.; Woodcock, C. E.; Strahler, A. H.
1986-01-01
Machine-processing techniques were used in a Forest Classification and Inventory System (FOCIS) procedure to extract and process tonal, textural, and terrain information from registered Landsat multispectral and digital terrain data. Using FOCIS as a basis for stratified sampling, the softwood timber volumes of the Klamath National Forest and Eldorado National Forest were estimated within standard errors of 4.8 and 4.0 percent, respectively. The accuracy of these large-area inventories is comparable to the accuracy yielded by use of conventional timber inventory methods, but, because of automation, the FOCIS inventories are more rapid (9-12 months compared to 2-3 years for conventional manual photointerpretation, map compilation and drafting, field sampling, and data processing) and are less costly.
Landenburger, L.; Lawrence, R.L.; Podruzny, S.; Schwartz, C.C.
2008-01-01
Moderate resolution satellite imagery traditionally has been thought to be inadequate for mapping vegetation at the species level. This has made comprehensive mapping of regional distributions of sensitive species, such as whitebark pine, either impractical or extremely time consuming. We sought to determine whether using a combination of moderate resolution satellite imagery (Landsat Enhanced Thematic Mapper Plus), extensive stand data collected by land management agencies for other purposes, and modern statistical classification techniques (boosted classification trees) could result in successful mapping of whitebark pine. Overall classification accuracies exceeded 90%, with similar individual class accuracies. Accuracies on a localized basis varied based on elevation. Accuracies also varied among administrative units, although we were not able to determine whether these differences related to inherent spatial variations or differences in the quality of available reference data.
Suitability of the echo-time-shift method as laboratory standard for thermal ultrasound dosimetry
NASA Astrophysics Data System (ADS)
Fuhrmann, Tina; Georg, Olga; Haller, Julian; Jenderka, Klaus-Vitold
2017-03-01
Ultrasound therapy is a promising, non-invasive application with the potential to significantly improve cancer therapies such as surgery, viro- or immunotherapy. The therapy currently lacks fast, inexpensive, and easy-to-handle quality assurance tools for therapy devices, as well as means to verify treatment plans and to perform dosimetry; this limits the comparability and safety of treatments. Accurate spatial and temporal temperature maps could be used to overcome these shortcomings. In this contribution, first results of suitability and accuracy investigations of the echo-time-shift method for two-dimensional temperature mapping during and after sonication are presented. The analysis methods used to calculate time shifts were a discrete frame-to-frame and a discrete frame-to-base-frame algorithm, together with a sigmoid fit for temperature calculation. In the future, accuracy could be significantly enhanced by using continuous methods for time-shift calculation. Further improvements can be achieved by refining the filtering algorithms and the interpolation of sampled diagnostic ultrasound data. The echo-time-shift method might thus be a comparatively accurate, fast, and affordable method for laboratory and clinical quality control.
NASA Astrophysics Data System (ADS)
B. Mondal, Suman; Gao, Shengkui; Zhu, Nan; Sudlow, Gail P.; Liang, Kexian; Som, Avik; Akers, Walter J.; Fields, Ryan C.; Margenthaler, Julie; Liang, Rongguang; Gruev, Viktor; Achilefu, Samuel
2015-07-01
The inability to identify microscopic tumors and assess surgical margins in real-time during oncologic surgery leads to incomplete tumor removal, increases the chances of tumor recurrence, and necessitates costly repeat surgery. To overcome these challenges, we have developed a wearable goggle augmented imaging and navigation system (GAINS) that can provide accurate intraoperative visualization of tumors and sentinel lymph nodes in real-time without disrupting normal surgical workflow. GAINS projects both near-infrared fluorescence from tumors and the natural color images of tissue onto a head-mounted display without latency. Aided by tumor-targeted contrast agents, the system detected tumors in subcutaneous and metastatic mouse models with high accuracy (sensitivity = 100%, specificity = 98% ± 5% standard deviation). Human pilot studies in breast cancer and melanoma patients using a near-infrared dye show that the GAINS detected sentinel lymph nodes with 100% sensitivity. Clinical use of the GAINS to guide tumor resection and sentinel lymph node mapping promises to improve surgical outcomes, reduce rates of repeat surgery, and improve the accuracy of cancer staging.
in Mapping of Gastric Cancer Incidence in Iran
Asmarian, Naeimehossadat; Jafari-Koshki, Tohid; Soleimani, Ali; Taghi Ayatollahi, Seyyed Mohammad
2016-10-01
Background: In many countries gastric cancer has the highest incidence among the gastrointestinal cancers, and it is the second most common cancer in Iran. The aim of this study was to identify and map high-risk gastric cancer regions at the county level in Iran. Methods: In this study we analyzed gastric cancer data for Iran for the years 2003-2010. Area-to-area Poisson kriging and Besag, York and Mollie (BYM) spatial models were applied to smooth the standardized incidence ratios of gastric cancer for the 373 counties surveyed in this study. The two methods were compared in terms of accuracy and precision in identifying high-risk regions. Results: The highest smoothed standardized incidence ratio (SIR) according to area-to-area Poisson kriging was in Meshkinshahr county in Ardabil province in north-western Iran (2.4, SD = 0.05), while the highest smoothed SIR according to the BYM model was in Ardabil, the capital of that province (2.9, SD = 0.09). Conclusion: Both mapping methods, area-to-area Poisson kriging and BYM, showed the gastric cancer incidence rate to be highest in northern and north-western Iran. However, area-to-area Poisson kriging was more precise than the BYM model and required less smoothing. According to these results, preventive measures and treatment programs should be focused on particular counties of Iran.
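The standardized incidence ratio (SIR) that both models smooth is simply observed cases divided by expected cases; a sketch with made-up county figures:

```python
def sir(observed, population, overall_rate):
    """Standardized incidence ratio: observed cases over expected cases."""
    expected = population * overall_rate
    return observed / expected

# hypothetical county: 30 observed cases, 100,000 people,
# national incidence rate of 2 cases per 10,000
print(round(sir(30, 100_000, 2 / 10_000), 4))  # → 1.5
```

Raw SIRs are unstable in sparsely populated counties, which is exactly why smoothing models such as area-to-area Poisson kriging or BYM are applied before mapping high-risk regions.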
Mapping stand-age distribution of Russian forests from satellite data
NASA Astrophysics Data System (ADS)
Chen, D.; Loboda, T. V.; Hall, A.; Channan, S.; Weber, C. Y.
2013-12-01
Russian boreal forest is a critical component of the global boreal biome, as approximately two thirds of the boreal forest is located in Russia. Numerous studies have shown that wildfire and logging have led to extensive modifications of forest cover in the region since 2000. Forest disturbance and subsequent regrowth influence carbon and energy budgets and, in turn, affect climate. Several global and regional satellite-based data products have been developed from coarse (>100 m) and moderate (10-100 m) resolution imagery to monitor forest cover change over the past decade, but the record of forest cover change pre-dating 2000 is very fragmented. Although some information regarding past disturbances can be obtained from stacks of Landsat images, the quantity and locations of stacks with a sufficient number of images are extremely limited, especially in Eastern Siberia. This paper describes a modified method, built upon previous work, to hindcast the disturbance history and map the stand-age distribution of the Russian boreal forest. Utilizing data from both Landsat and the Moderate Resolution Imaging Spectroradiometer (MODIS), a wall-to-wall map of estimated forest age in the Russian boreal forest is created. Our previous work has shown that disturbances can be mapped successfully up to 30 years into the past, as the spectral signature of regrowing forests is statistically significantly different from that of mature forests. The presented algorithm ingests 55 multi-temporal stacks of Landsat imagery available over Russian forest before 2001 and processes them through a standardized, semi-automated approach to extract training and validation data samples. Landsat data, dating back to 1984, are used to generate maps of forest disturbance from temporal shifts in the Disturbance Index through the multi-temporal stack of imagery in selected locations.
These maps are then used as reference data to train a decision tree classifier on 50 MODIS-based indices. The resultant map provides an estimate of forest age based on the regrowth curves observed from Landsat imagery. The accuracy of the resultant map is assessed against three datasets: 1) a subset of the disturbance maps developed within the algorithm, 2) independent disturbance maps created by the Northern Eurasia Land Dynamics Analysis (NELDA) project, and 3) field-based stand-age distributions from forestry inventory units. The current version of the product presents a considerable improvement on the previous version, which used Landsat data samples at a set of randomly selected locations, resulting in a strong bias of the training samples towards Landsat-rich regions (e.g. European Russia) while regions such as Siberia were under-sampled. Aiming to improve accuracy, the current method significantly increases the number of Landsat training samples compared to the previous work. Aside from the previously used data, the current method uses all available Landsat data for the under-sampled regions in order to increase the representativeness of the total sample. The final accuracy assessment is still ongoing; however, the initial results suggest an overall accuracy expressed in Kappa > 0.8. We plan to release both the training data and the final disturbance map of the Russian boreal forest to the public after the validation is completed.
He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui
2015-01-01
In this paper, a novel iterative sparse extended information filter (ISEIF) is proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with adaptive iterative methods to reduce linearization errors. While keeping the scalability advantage, the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF as well. PMID:26287194
Application of AIS Technology to Forest Mapping
NASA Technical Reports Server (NTRS)
Yool, S. R.; Star, J. L.
1985-01-01
Concerns about the environmental effects of large-scale deforestation have prompted efforts to map forests over large areas using various remote sensing data and image processing techniques. Basic research on the spectral characteristics of forest vegetation is required to form a basis for the development of new techniques and for image interpretation. Examination of LANDSAT data and image processing algorithms over a portion of boreal forest has demonstrated the complexity of the relations between the various expressions of forest canopies, environmental variability, and the relative capacities of different image processing algorithms to achieve high classification accuracies under these conditions. Airborne Imaging Spectrometer (AIS) data may in part provide the means to interpret the responses of standard data and techniques to the vegetation, based on its relatively high spectral resolution.
Design and analysis for thematic map accuracy assessment: Fundamental principles
Stephen V. Stehman; Raymond L. Czaplewski
1998-01-01
Land-cover maps are used in numerous natural resource applications to describe the spatial distribution and pattern of land-cover, to estimate areal extent of various cover classes, or as input into habitat suitability models, land-cover change analyses, hydrological models, and risk analyses. Accuracy assessment quantifies data quality so that map users may evaluate...
Efficient method for computing the electronic transport properties of a multiterminal system
NASA Astrophysics Data System (ADS)
Lima, Leandro R. F.; Dusko, Amintor; Lewenkopf, Caio
2018-04-01
We present a multiprobe recursive Green's function method to compute the transport properties of mesoscopic systems using the Landauer-Büttiker approach. By introducing an adaptive partition scheme, we map the multiprobe problem into the standard two-probe recursive Green's function method. We apply the method to compute the longitudinal and Hall resistances of a disordered graphene sample, a system of current interest. We show that the performance and accuracy of our method compares very well with other state-of-the-art schemes.
Jones, Joseph L.; Haluska, Tana L.; Kresch, David L.
2001-01-01
A method of updating flood inundation maps at a fraction of the expense of using traditional methods was piloted in Washington State as part of the U.S. Geological Survey Urban Geologic and Hydrologic Hazards Initiative. Large savings in expense may be achieved by building upon previous Flood Insurance Studies and automating the process of flood delineation with a Geographic Information System (GIS); increases in accuracy and detail result from the use of very-high-accuracy elevation data and automated delineation; and the resulting digital data sets contain valuable ancillary information such as flood depth, as well as greatly facilitating map storage and utility. The method consists of creating stage-discharge relations from the archived output of the existing hydraulic model, using these relations to create updated flood stages for recalculated flood discharges, and using a GIS to automate the map generation process. Many of the effective flood maps were created in the late 1970s and early 1980s, and suffer from a number of well recognized deficiencies such as out-of-date or inaccurate estimates of discharges for selected recurrence intervals, changes in basin characteristics, and relatively low quality elevation data used for flood delineation. FEMA estimates that 45 percent of effective maps are over 10 years old (FEMA, 1997). Consequently, Congress has mandated the updating and periodic review of existing maps, which have cost the Nation almost 3 billion (1997) dollars. The need to update maps and the cost of doing so were the primary motivations for piloting a more cost-effective and efficient updating method. New technologies such as Geographic Information Systems and LIDAR (Light Detection and Ranging) elevation mapping are key to improving the efficiency of flood map updating, but they also improve the accuracy, detail, and usefulness of the resulting digital flood maps.
GISs produce digital maps without manual estimation of inundated areas between cross sections, and can generate working maps across a broad range of scales, for any selected area, and overlayed with easily updated cultural features. Local governments are aggressively collecting very-high-accuracy elevation data for numerous reasons; this not only lowers the cost and increases accuracy of flood maps, but also inherently boosts the level of community involvement in the mapping process. These elevation data are also ideal for hydraulic modeling, should an existing model be judged inadequate.
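The core of the updating method, reading updated flood stages off stage-discharge relations built from archived hydraulic-model output, can be sketched as simple interpolation (the rating pairs below are hypothetical):

```python
def stage_for_discharge(rating, q):
    """Linearly interpolate flood stage for discharge q from (discharge, stage) pairs."""
    pts = sorted(rating)
    if not pts[0][0] <= q <= pts[-1][0]:
        raise ValueError("discharge outside rating curve; extrapolation not supported")
    for (q0, s0), (q1, s1) in zip(pts, pts[1:]):
        if q0 <= q <= q1:
            return s0 + (s1 - s0) * (q - q0) / (q1 - q0)

# archived hydraulic-model output: discharge (m^3/s) -> stage (m), illustrative values
rating = [(100.0, 2.0), (200.0, 2.8), (400.0, 3.9), (800.0, 5.2)]
print(round(stage_for_discharge(rating, 300.0), 2))  # → 3.35
```

The interpolated stage for a recalculated flood discharge is then intersected with the high-accuracy elevation surface in the GIS to delineate the updated inundation extent automatically.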
NASA Astrophysics Data System (ADS)
Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul
2018-07-01
Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office-interpreted remotely sensed data were the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology, such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase in OBIA articles using per-polygon rather than per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches on sampling, response design and accuracy analysis. Our review defines the technical and methodological needs of the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
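The accuracy-analysis stage reviewed here usually reduces to summary statistics of a confusion matrix; overall accuracy and Cohen's kappa can be sketched as follows (a toy two-class matrix, not data from the review):

```python
def overall_accuracy(cm):
    """Fraction of correctly classified samples: diagonal over total."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def kappa(cm):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = sum(sum(row) for row in cm)
    po = overall_accuracy(cm)
    # chance agreement from row (mapped) and column (reference) marginals
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(len(cm))) / n ** 2
    return (po - pe) / (1 - pe)

# toy 2-class confusion matrix: rows = mapped class, columns = reference class
cm = [[45, 5],
      [10, 40]]
print(round(overall_accuracy(cm), 2), round(kappa(cm), 2))  # → 0.85 0.7
```

In per-polygon assessment the cell counts are often weighted by object area rather than tallied as equal units, which is one of the methodological needs the review identifies.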
Intelligent Mortality Reporting with FHIR
Hoffman, Ryan A.; Wu, Hang; Venugopalan, Janani; Braun, Paula; Wang, May D.
2017-01-01
One pressing need in the area of public health is timely, accurate, and complete reporting of deaths and the conditions leading up to them. Fast Healthcare Interoperability Resources (FHIR) is a new HL7 interoperability standard for electronic health records (EHRs), while Sustainable Medical Applications and Reusable Technologies (SMART)-on-FHIR enables third-party app development that works “out of the box”. This research demonstrates the feasibility of developing SMART-on-FHIR applications that enable medical professionals to perform timely and accurate death reporting within multiple jurisdictions of the US. We explored how the information on a standard certificate of death can be mapped to resources defined in the FHIR standard (DSTU2). We also demonstrated analytics for potentially improving the accuracy and completeness of mortality reporting data. PMID:28804791
Quantitative Electron Probe Microanalysis: State of the Art
NASA Technical Reports Server (NTRS)
Carpenter, P. K.
2005-01-01
Quantitative electron-probe microanalysis (EPMA) has improved due to better instrument design and X-ray correction methods. Design improvement of the electron column and X-ray spectrometer has resulted in measurement precision that exceeds analytical accuracy. Wavelength-dispersive spectrometers (WDS) have layered-dispersive diffraction crystals with improved light-element sensitivity. Newer energy-dispersive spectrometers (EDS) have Si-drift detector elements, thin-window designs, and digital processing electronics with X-ray throughput approaching that of WDS systems. Using these systems, digital X-ray mapping coupled with spectrum imaging is a powerful compositional mapping tool. Improvements in analytical accuracy are due to better X-ray correction algorithms, mass absorption coefficient data sets, and analysis methods for complex geometries. ZAF algorithms have been superseded by Phi(pz) algorithms that better model the depth distribution of primary X-ray production. Complex thin-film and particle geometries are treated using Phi(pz) algorithms, and results agree well with Monte Carlo simulations. For geological materials, X-ray absorption dominates the corrections and depends on the accuracy of mass absorption coefficient (MAC) data sets. However, few MACs have been experimentally measured, and the use of fitted coefficients continues due to the general success of the analytical technique. A polynomial formulation of the Bence-Albee alpha-factor technique, calibrated using Phi(pz) algorithms, is used to critically evaluate accuracy issues; accuracy approaches 2% relative and is limited by measurement precision for ideal cases, but for many elements the analytical accuracy is unproven. The EPMA technique has improved to the point where it is frequently used instead of the petrographic microscope for reconnaissance work.
Examples of stagnant research areas are WDS detector design, characterization of calibration standards, and the need for more complete treatment of the continuum X-ray fluorescence correction.
Mapping land use changes in the carboniferous region of Santa Catarina, report 2
NASA Technical Reports Server (NTRS)
Valeriano, D. D. (Principal Investigator); Bitencourtpereira, M. D.
1983-01-01
The techniques applied to MSS-LANDSAT data in land-use mapping of the Criciuma region (Santa Catarina state, Brazil) are presented, along with the results of a classification accuracy estimate tested on the resulting map. The MSS-LANDSAT digital processing involves noise suppression, feature selection, and a hybrid classifier. The accuracy test is made through comparisons with aerial photographs of sampled points. Digital processing is recommended for mapping the classes agricultural lands, forest lands, and urban areas, while coal refuse areas should be mapped visually.
Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation
NASA Astrophysics Data System (ADS)
Zuo, C.; Xiao, X.; Hou, Q.; Li, B.
2018-05-01
WorldView-3, a high-resolution commercial Earth-observation satellite launched by DigitalGlobe, provides panchromatic imagery at 0.31 m resolution. Its positioning accuracy is better than 3.5 m CE90 without ground control, which is suitable for large-scale topographic mapping. This paper presents block adjustment for WorldView-3 based on the RPC model and achieves the accuracy required for 1:2000-scale topographic mapping with few control points. On the basis of the stereo orientation result, two image matching algorithms were applied for DSM extraction: LQM and SGM. Finally, the accuracy of the point clouds generated by the two image matching methods was compared with reference data acquired by an airborne laser scanner. The results showed that the RPC adjustment model of WorldView-3 imagery with a small number of GCPs can satisfy the requirements of the Chinese surveying and mapping regulations for 1:2000-scale topographic maps. The point cloud obtained through WorldView-3 stereo image matching had high elevation accuracy: the RMS elevation error for bare ground was 0.45 m, while for buildings the accuracy approached 1 m.
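The RMS elevation error the abstract reports (0.45 m for bare ground) is a root-mean-square difference between matched DSM heights and reference heights. A small illustration with made-up elevations, not the study's data:

```python
import math

# Invented matched heights: DSM elevations from stereo matching vs.
# reference elevations from an airborne laser scanner (metres).
dsm_heights = [102.3, 98.7, 110.1, 95.4]
ref_heights = [102.0, 99.1, 109.8, 95.9]

def rmse(a, b):
    """Root-mean-square error between two equal-length height lists."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

elev_rmse = rmse(dsm_heights, ref_heights)  # about 0.38 m for this toy data
```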
Weingärtner, Sebastian; Meßner, Nadja M; Zöllner, Frank G; Akçakaya, Mehmet; Schad, Lothar R
2017-08-01
To study the feasibility of black-blood contrast in native T1 mapping for reduction of partial voluming at the blood-myocardium interface. A saturation pulse prepared heart-rate-independent inversion recovery (SAPPHIRE) T1 mapping sequence was combined with motion-sensitized driven-equilibrium (MSDE) blood suppression for black-blood T1 mapping at 3 Tesla. Phantom scans were performed to assess T1 time accuracy. In vivo black-blood and conventional SAPPHIRE T1 mapping was performed in eight healthy subjects and analyzed for T1 times, precision, and inter- and intraobserver variability. Furthermore, manually drawn regions of interest (ROIs) in all T1 maps were dilated and eroded to analyze the dependence of septal T1 times on ROI thickness. Phantom results and in vivo myocardial T1 times show comparable accuracy of black-blood and conventional SAPPHIRE (in vivo: black-blood: 1562 ± 56 ms vs. conventional: 1583 ± 58 ms, P = 0.20). With black-blood SAPPHIRE, precision was significantly lower (standard deviation: 133.9 ± 24.6 ms vs. 63.1 ± 6.4 ms, P < 0.0001), and blood T1 time measurement was not possible. A significantly increased interobserver intraclass correlation coefficient (ICC) (0.996 vs. 0.967, P = 0.011) and a similar intraobserver ICC (0.979 vs. 0.939, P = 0.11) were obtained with the black-blood sequence. Conventional SAPPHIRE showed strong dependence on ROI thickness (R2 = 0.99); no such trend was observed using the black-blood approach (R2 = 0.29). Black-blood SAPPHIRE successfully eliminates partial voluming at the blood pool in native myocardial T1 mapping while providing accurate T1 times, albeit at reduced precision. Magn Reson Med 78:484-493, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Tang, Wei; Liao, Mingsheng; Zhang, Lu; Li, Wei; Yu, Weimin
2016-09-01
A high spatial and temporal resolution of the precipitable water vapour (PWV) in the atmosphere is a key requirement for short-scale weather forecasting and climate research. The aim of this work is to derive temporally differenced maps of the spatial distribution of PWV by analysing the tropospheric delay "noise" in interferometric synthetic aperture radar (InSAR). Time series maps of differential PWV were obtained by processing a set of ENVISAT ASAR (Advanced Synthetic Aperture Radar) images covering the area of southern California, USA from 6 October 2007 to 29 November 2008. To get a more accurate PWV, the hydrostatic delay component was calculated and subtracted using ERA-Interim reanalysis products. In addition, ERA-Interim was used to compute the conversion factors required to convert the zenith wet delay to water vapour. The InSAR-derived differential PWV maps were calibrated by means of the GPS PWV measurements over the study area. We validated our results against PWV measurements derived from the Medium Resolution Imaging Spectrometer (MERIS), located together with the ASAR sensor on board the ENVISAT satellite. Our comparative results show strong spatial correlations between the two data sets. The difference maps have Gaussian distributions with mean values close to zero and standard deviations below 2 mm. The advantage of the InSAR technique is that it provides water vapour distribution with a spatial resolution as fine as 20 m and an accuracy of ˜ 2 mm. Such high-spatial-resolution maps of PWV could lead to much greater accuracy in meteorological understanding and quantitative precipitation forecasts. With the launch of the Sentinel-1A and Sentinel-1B satellites, new SAR images can be acquired every few days (6 days) with a wide swath of up to 250 km, enabling a unique operational service for InSAR-based water vapour maps with unprecedented spatial and temporal resolution.
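The zenith-wet-delay-to-water-vapour conversion the abstract mentions is commonly done with a dimensionless factor computed from the water-vapour-weighted mean atmospheric temperature Tm. The sketch below uses the widely cited Bevis-style refractivity constants; the constants, SI unit choices, and the 275 K temperature are assumptions for illustration, not values taken from this study:

```python
def conversion_factor(tm):
    """Dimensionless Pi relating PWV to ZWD: Pi = 1e6 / (rho_w * Rv * (k2' + k3/Tm)).

    tm is the water-vapour-weighted mean temperature of the atmosphere (K).
    """
    rho_w = 1000.0    # density of liquid water, kg/m^3
    rv = 461.5        # specific gas constant of water vapour, J/(kg*K)
    k2p = 0.2213      # refractivity constant k2', K/Pa (22.13 K/hPa)
    k3 = 3.739e3      # refractivity constant k3, K^2/Pa (3.739e5 K^2/hPa)
    return 1.0e6 / (rho_w * rv * (k2p + k3 / tm))

# A 20 mm differential zenith wet delay maps to roughly 3 mm of PWV
# (Pi is typically ~0.15 for Tm near 260-280 K).
pwv_mm = conversion_factor(275.0) * 20.0
```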
Kobler, Aaron; Kübel, Christian
2018-01-01
To relate the internal structure of a volume (crystallite and phase boundaries) to properties (electrical, magnetic, mechanical, thermal), a full 3D reconstruction in combination with in situ testing is desirable. In situ testing allows the crystallographic changes in a material to be followed by tracking and comparing the individual crystals and phases. Standard transmission electron microscopy (TEM) delivers a projection image through the 3D volume of an electron-transparent TEM sample lamella. Only with the help of a dedicated TEM tomography sample holder is an accurate 3D reconstruction of the TEM lamella currently possible. 2D crystal orientation mapping has become a standard method for crystal orientation and phase determination, while 3D crystal orientation mapping has been reported only a few times. The combination of in situ testing with 3D crystal orientation mapping remains a challenge in terms of stability and accuracy. Here, we outline a method to 3D reconstruct the crystal orientation from a superimposed diffraction pattern of overlapping crystals without sample tilt. Avoiding the typically required tilt series for 3D reconstruction not only enables faster in situ tests but also opens the possibility for more stable and more accurate in situ mechanical testing. The approach laid out here should serve as an inspiration for further research and does not make a claim to be complete.
Ultra-high sensitivity moment magnetometry of geological samples using magnetic microscopy
NASA Astrophysics Data System (ADS)
Lima, Eduardo A.; Weiss, Benjamin P.
2016-09-01
Useful paleomagnetic information is expected to be recorded by samples with moments up to three orders of magnitude below the detection limit of standard superconducting rock magnetometers. Such samples are now detectable using recently developed magnetic microscopes, which map the magnetic fields above room-temperature samples with unprecedented spatial resolutions and field sensitivities. However, realizing this potential requires the development of techniques for retrieving sample moments from magnetic microscopy data. With this goal, we developed a technique for uniquely obtaining the net magnetic moment of geological samples from magnetic microscopy maps of unresolved or nearly unresolved magnetization. This technique is particularly powerful for analyzing small, weakly magnetized samples such as meteoritic chondrules and terrestrial silicate crystals like zircons. We validated this technique by applying it to field maps generated from synthetic sources and also to field maps measured using a superconducting quantum interference device (SQUID) microscope above geological samples with moments down to 10-15 Am2. For the most magnetic rock samples, the net moments estimated from the SQUID microscope data are within error of independent moment measurements acquired using lower sensitivity standard rock magnetometers. In addition to its superior moment sensitivity, SQUID microscope net moment magnetometry also enables the identification and isolation of magnetic contamination and background sources, which is critical for improving accuracy in paleomagnetic studies of weakly magnetic samples.
NASA Astrophysics Data System (ADS)
See, Linda; Perger, Christoph; Dresel, Christopher; Hofer, Martin; Weichselbaum, Juergen; Mondel, Thomas; Steffen, Fritz
2016-04-01
The validation of land cover products is an important step in the workflow of generating a land cover map from remotely-sensed imagery. Many students of remote sensing will be given exercises on classifying a land cover map followed by the validation process. Many algorithms exist for classification, embedded within proprietary image processing software or increasingly as open source tools. However, there is little standardization for land cover validation, nor a set of open tools available for implementing this process. The LACO-Wiki tool was developed as a way of filling this gap, bringing together standardized land cover validation methods and workflows into a single portal. This includes the storage and management of land cover maps and validation data; step-by-step instructions to guide users through the validation process; sound sampling designs; an easy-to-use environment for validation sample interpretation; and the generation of accuracy reports based on the validation process. The tool was developed for a range of users including producers of land cover maps, researchers, teachers and students. The use of such a tool could be embedded within the curriculum of remote sensing courses at a university level but is simple enough for use by students aged 13-18. A beta version of the tool is available for testing at: http://www.laco-wiki.net.
Guitet, Stéphane; Hérault, Bruno; Molto, Quentin; Brunaux, Olivier; Couteron, Pierre
2015-01-01
Precise mapping of above-ground biomass (AGB) is a major challenge for the success of REDD+ processes in tropical rainforest. The usual mapping methods are based on two hypotheses: a large and long-ranged spatial autocorrelation and a strong environmental influence at the regional scale. However, there are no studies of the spatial structure of AGB at the landscape scale to support these assumptions. We studied spatial variation in AGB at various scales using two large forest inventories conducted in French Guiana. The dataset comprised 2507 plots (0.4 to 0.5 ha) of undisturbed rainforest distributed over the whole region. After checking the uncertainties of estimates obtained from these data, we used half of the dataset to develop explicit predictive models including spatial and environmental effects and tested the accuracy of the resulting maps according to their resolution using the rest of the data. Forest inventories provided accurate AGB estimates at the plot scale, for a mean of 325 Mg.ha-1. They revealed high local variability combined with a weak autocorrelation up to distances of no more than 10 km. Environmental variables accounted for a minor part of spatial variation. The accuracy of the best model including spatial effects was 90 Mg.ha-1 at plot scale, but coarse graining up to 2-km resolution allowed mapping AGB with errors below 50 Mg.ha-1. No agreement was found with available pan-tropical reference maps at any resolution. We concluded that the combination of weak autocorrelation and weak environmental effects limits the accuracy of AGB maps in rainforest, and that a trade-off has to be found between spatial resolution and effective accuracy until adequate "wall-to-wall" remote sensing signals provide reliable AGB predictions. In the meantime, using large forest inventories with a low sampling rate (<0.5%) may be an efficient way to increase the global coverage of AGB maps with acceptable accuracy at kilometric resolution. PMID:26402522
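The coarse-graining step described above amounts to averaging fine-resolution AGB cells into larger blocks, trading resolution for lower error. A toy sketch with an invented 4x4 grid of Mg/ha values:

```python
# Illustrative fine-resolution AGB grid (Mg/ha); values are made up.
agb = [
    [310.0, 340.0, 290.0, 330.0],
    [360.0, 300.0, 310.0, 350.0],
    [280.0, 320.0, 400.0, 380.0],
    [330.0, 290.0, 360.0, 340.0],
]

def coarse_grain(grid, factor):
    """Average non-overlapping factor x factor blocks of a square grid."""
    n = len(grid) // factor
    out = []
    for bi in range(n):
        row = []
        for bj in range(n):
            cells = [grid[bi * factor + i][bj * factor + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(cells) / len(cells))
        out.append(row)
    return out

coarse = coarse_grain(agb, 2)   # 2x2 map of block means
```

Block averaging suppresses the weakly autocorrelated local variability the inventories revealed, which is why the mapped error shrinks as resolution coarsens.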
Evaluating the ASTER sensor for mapping and characterizing forest fire fuels in northern Idaho
Michael J. Falkowski; Paul Gessler; Penelope Morgan; Alistair M. S. Smith; Andrew T. Hudak
2004-01-01
Land managers need cost-effective methods for mapping and characterizing fire fuels quickly and accurately. The advent of sensors with increased spatial resolution may improve the accuracy and reduce the cost of fuels mapping. The objective of this research is to evaluate the accuracy and utility of imagery from the Advanced Spaceborne Thermal Emission and Reflection...
Characterizing and mapping forest fire fuels using ASTER imagery and gradient modeling
Michael J. Falkowski; Paul E. Gessler; Penelope Morgan; Andrew T. Hudak; Alistair M. S. Smith
2005-01-01
Land managers need cost-effective methods for mapping and characterizing forest fuels quickly and accurately. The launch of satellite sensors with increased spatial resolution may improve the accuracy and reduce the cost of fuels mapping. The objective of this research is to evaluate the accuracy and utility of imagery from the advanced spaceborne thermal emission and...
Automatic photointerpretation for land use management in Minnesota
NASA Technical Reports Server (NTRS)
Swanlund, G. D. (Principal Investigator); Kirvida, L.; Cheung, M.; Pile, D.; Zirkle, R.
1974-01-01
The author has identified the following significant results. Automatic photointerpretation techniques were utilized to evaluate the feasibility of data for land use management. It was shown that ERTS-1 MSS data can produce thematic maps of adequate resolution and accuracy to update land use maps. In particular, five typical land use areas were mapped with classification accuracies ranging from 77% to over 90%.
Preciat Gonzalez, German A.; El Assal, Lemmer R. P.; Noronha, Alberto; ...
2017-06-14
The mechanism of each chemical reaction in a metabolic network can be represented as a set of atom mappings, each of which relates an atom in a substrate metabolite to an atom of the same element in a product metabolite. Genome-scale metabolic network reconstructions typically represent biochemistry at the level of reaction stoichiometry. However, a more detailed representation at the underlying level of atom mappings opens the possibility for a broader range of biological, biomedical and biotechnological applications than with stoichiometry alone. Complete manual acquisition of atom mapping data for a genome-scale metabolic network is a laborious process. However, many algorithms exist to predict atom mappings. How do their predictions compare to each other and to manually curated atom mappings? For more than four thousand metabolic reactions in the latest human metabolic reconstruction, Recon 3D, we compared the atom mappings predicted by six atom mapping algorithms. We also compared these predictions to those obtained by manual curation of atom mappings for over five hundred reactions distributed among all top level Enzyme Commission number classes. Five of the evaluated algorithms had similarly high prediction accuracy of over 91% when compared to manually curated atom mapped reactions. On average, the accuracy of the prediction was highest for reactions catalysed by oxidoreductases and lowest for reactions catalysed by ligases. In addition to prediction accuracy, the algorithms were evaluated on their accessibility, their advanced features, such as the ability to identify equivalent atoms, and their ability to map hydrogen atoms. In addition to prediction accuracy, we found that software accessibility and advanced features were fundamental to the selection of an atom mapping algorithm in practice.
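The benchmark above scores each algorithm by how often its predicted mapping agrees with the curated one. A toy version of that comparison, with invented reaction IDs and (substrate atom, product atom) pairs rather than real Recon 3D data:

```python
# Curated reference mappings: reaction ID -> set of (substrate atom,
# product atom) pairs. All identifiers below are invented.
curated = {
    "R1": {("C1", "C1"), ("O1", "O2")},
    "R2": {("C1", "C2"), ("N1", "N1")},
    "R3": {("C1", "C1")},
}
# One algorithm's predictions; R2 disagrees with curation.
predicted = {
    "R1": {("C1", "C1"), ("O1", "O2")},
    "R2": {("C1", "C1"), ("N1", "N1")},
    "R3": {("C1", "C1")},
}

def mapping_accuracy(predicted, curated):
    """Fraction of curated reactions whose prediction matches exactly."""
    hits = sum(predicted.get(rid) == ref for rid, ref in curated.items())
    return hits / len(curated)

acc = mapping_accuracy(predicted, curated)   # 2 of 3 reactions agree
```

Running one such comparison per algorithm, grouped by Enzyme Commission class, yields the per-class accuracies the abstract summarizes.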
Inventory and analysis of rangeland resources of the state land block on Parker Mountain, Utah
NASA Technical Reports Server (NTRS)
Jaynes, R. A. (Principal Investigator)
1983-01-01
High altitude color infrared (CIR) photography was interpreted to provide a 1:24,000 overlay to U.S.G.S. topographic maps. The inventory and analysis of rangeland resources was augmented by the digital analysis of LANDSAT MSS data. Available geology, soils, and precipitation maps were used to sort out areas of confusion on the CIR photography. The map overlay from photo interpretation was also prepared with reference to print maps developed from LANDSAT MSS data. The resulting map overlay has a high degree of interpretive and spatial accuracy. An unacceptable level of confusion between the several sagebrush types in the MSS mapping was largely corrected by introducing ancillary data. Boundaries from geology, soils, and precipitation maps, as well as field observations, were digitized, and pixel classes were adjusted according to the location of pixels with particular spectral signatures with respect to such boundaries. The resulting map, with six major cover classes, has an overall accuracy of 89%. Overall accuracy was 74% when these six classes were expanded to 20 classes.
Clinical data integration of distributed data sources using Health Level Seven (HL7) v3-RIM mapping
2011-01-01
Background Health information exchange and health information integration have become top priorities for healthcare systems across institutions and hospitals. Most organizations implement health information exchange and integration in order to support meaningful information retrieval among their disparate healthcare systems. The challenges that prevent efficient health information integration for heterogeneous data sources are the lack of a common standard to support mapping across distributed data sources and the numerous and diverse healthcare domains. Health Level Seven (HL7) is a standards development organization; its technical committees develop the Reference Information Model (RIM), a standardized abstract representation of HL7 data across all domains of health care. In this article, we present a design and a prototype implementation of HL7 v3-RIM mapping for information integration of distributed clinical data sources. The implementation enables the user to retrieve and search information that has been integrated using HL7 v3-RIM technology from disparate health care systems. Method and results We designed and developed a prototype implementation of an HL7 v3-RIM mapping function to integrate distributed clinical data sources, using R-MIM classes from HL7 v3-RIM as a global view along with a collaborative centralized web-based mapping tool to tackle the evolution of both global and local schemas. Our prototype was implemented and integrated with a clinical data management system (CDMS) as a plug-in module. We tested the prototype system with use case scenarios for distributed clinical data sources across several legacy CDMSs.
The results have been effective in improving information delivery, completing tasks that would have been otherwise difficult to accomplish, and reducing the time required to finish tasks used in collaborative information retrieval and sharing with other systems. Conclusions We created a prototype implementation of HL7 v3-RIM mapping for information integration between distributed clinical data sources to promote collaborative healthcare and translational research. The prototype has effectively and efficiently ensured the accuracy of the information and knowledge extraction for the systems that have been integrated. PMID:22104558
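The core of such a mapping function is a translation table from each local schema's field names to a shared RIM-based view. A loose illustration follows; the local field names and the simplified dictionary "Observation" shape are assumptions for demonstration, not an actual HL7 v3 RIM serialization (though `classCode`/`moodCode` are genuine RIM act attributes):

```python
# Hypothetical mapping table: local lab-result column -> RIM-view field.
field_map = {"test_code": "code", "result": "value", "units": "unit"}

def to_rim_observation(local_row):
    """Translate one local record into a simplified RIM-flavoured Observation."""
    obs = {"classCode": "OBS", "moodCode": "EVN"}   # RIM act attributes
    for src, dst in field_map.items():
        obs[dst] = local_row[src]
    return obs

obs = to_rim_observation({"test_code": "GLU", "result": 5.4, "units": "mmol/L"})
```

Keeping `field_map` as data (rather than code) is what lets a centralized web tool update the mapping as local schemas evolve.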
National Park Service Vegetation Inventory Program, Cuyahoga Valley National Park, Ohio
Hop, Kevin D.; Drake, J.; Strassman, Andrew C.; Hoy, Erin E.; Menard, Shannon; Jakusz, J.W.; Dieck, J.J.
2013-01-01
The National Park Service (NPS) Vegetation Inventory Program (VIP) is an effort to classify, describe, and map existing vegetation of national park units for the NPS Natural Resource Inventory and Monitoring (I&M) Program. The NPS VIP is managed by the NPS Biological Resources Management Division and provides baseline vegetation information to the NPS Natural Resource I&M Program. The U.S. Geological Survey (USGS) Vegetation Characterization Program lends a cooperative role in the NPS VIP. The USGS Upper Midwest Environmental Sciences Center, NatureServe, and NPS Cuyahoga Valley National Park (CUVA) have completed vegetation classification and mapping of CUVA. Mappers, ecologists, and botanists collaborated to identify and describe vegetation types within the National Vegetation Classification Standard (NVCS) and to determine how best to map them by using aerial imagery. The team collected data from 221 vegetation plots within CUVA to develop detailed descriptions of vegetation types. Data from 50 verification sites were also collected to test both the key to vegetation types and the application of vegetation types to a sample set of map polygons. Furthermore, data from 647 accuracy assessment (AA) sites were collected (of which 643 were used to test accuracy of the vegetation map layer). These data sets led to the identification of 45 vegetation types at the association level in the NVCS at CUVA. A total of 44 map classes were developed to map the vegetation and general land cover of CUVA, including the following: 29 map classes represent natural/semi-natural vegetation types in the NVCS, 12 map classes represent cultural vegetation (agricultural and developed) in the NVCS, and 3 map classes represent non-vegetation features (open-water bodies).
Features were interpreted from viewing color-infrared digital aerial imagery dated October 2010 (during peak leaf-phenology change of trees) via digital onscreen three-dimensional stereoscopic workflow systems in geographic information systems (GIS). The interpreted data were digitally and spatially referenced, thus making the spatial database layers usable in GIS. Polygon units were mapped to either a 0.5 ha or 0.25 ha minimum mapping unit, depending on vegetation type. A geodatabase containing various feature-class layers and tables shows the locations of vegetation types and general land cover (vegetation map), vegetation plot samples, verification sites, AA sites, project boundary extent, and aerial photographic centers. The feature-class layer and related tables for the CUVA vegetation map provide 4,640 polygons of detailed attribute data covering 13,288.4 ha, with an average polygon size of 2.9 ha. Summary reports generated from the vegetation map layer show map classes representing natural/semi-natural types in the NVCS apply to 4,151 polygons (89.4% of polygons) and cover 11,225.0 ha (84.5%) of the map extent. Of these polygons, the map layer shows CUVA to be 74.4% forest (9,888.8 ha), 2.5% shrubland (329.7 ha), and 7.6% herbaceous vegetation cover (1,006.5 ha). Map classes representing cultural types in the NVCS apply to 435 polygons (9.4% of polygons) and cover 1,825.7 ha (13.7%) of the map extent. Map classes representing non-NVCS units (open water) apply to 54 polygons (1.2% of polygons) and cover 237.7 ha (1.8%) of the map extent. A thematic AA study was conducted of map classes representing natural/semi-natural types in the NVCS. Results present an overall accuracy of 80.7% (kappa index of 79.5%) based on data from 643 of the 647 AA sites.
Most individual map-class themes exceed the NPS VIP standard of 80% with a 90% confidence interval. The CUVA vegetation mapping project delivers many geospatial and vegetation data products in hardcopy and/or digital formats. These products consist of an in-depth project report discussing methods and results, which include descriptions and a dichotomous key to vegetation types, map classification and map-class descriptions, and a contingency table showing AA results. The suite of products also includes a database of vegetation plots, verification sites, and AA sites; digital pictures of field sites; field data sheets; aerial photographic imagery; hardcopy and digital maps; and a geodatabase of vegetation types and land cover (map layer), fieldwork locations (vegetation plots, verification sites, and AA sites), aerial photographic index, project boundary, and metadata. All geospatial products are projected in Universal Transverse Mercator, Zone 17, by using the North American Datum of 1983. Information on the NPS VIP and completed park mapping projects are located on the Internet at
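The overall accuracy and kappa index reported above both come from a thematic error (confusion) matrix: kappa discounts the agreement expected by chance. A hedged sketch of that arithmetic on an invented 3-class matrix (rows = mapped class, columns = reference class), not the CUVA data:

```python
# Invented error matrix; entry [i][j] counts AA sites mapped as class i
# whose reference (field-checked) class is j.
matrix = [
    [40, 3, 2],
    [4, 30, 1],
    [2, 2, 16],
]

def overall_and_kappa(m):
    """Return (overall accuracy, Cohen's kappa) for a square error matrix."""
    n = sum(sum(row) for row in m)
    diag = sum(m[i][i] for i in range(len(m)))
    po = diag / n                                    # observed agreement
    pe = sum(sum(m[i]) * sum(r[i] for r in m)        # chance agreement from
             for i in range(len(m))) / n ** 2        # row and column totals
    return po, (po - pe) / (1 - pe)

po, kappa = overall_and_kappa(matrix)   # 0.86 overall, kappa about 0.78
```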
Higher resolution satellite remote sensing and the impact on image mapping
Watkins, Allen H.; Thormodsgard, June M.
1987-01-01
Recent advances in spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. The availability of high quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, presents new cartographic and image processing challenges. The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging different sensor types. These processing techniques are discussed and examples are presented.
MODIS Snow Cover Mapping Decision Tree Technique: Snow and Cloud Discrimination
NASA Technical Reports Server (NTRS)
Riggs, George A.; Hall, Dorothy K.
2010-01-01
Accurate mapping of snow cover continues to challenge cryospheric scientists and modelers. The Moderate-Resolution Imaging Spectroradiometer (MODIS) snow data products have been used since 2000 by many investigators to map and monitor snow cover extent for various applications. Users have reported on the utility of the products and also on problems encountered. Three problems or hindrances in the use of the MODIS snow data products that have been reported in the literature are: cloud obscuration, snow/cloud confusion, and snow omission errors in thin or sparse snow cover conditions. Implementation of the MODIS snow algorithm in a decision tree technique using surface reflectance input to mitigate those problems is being investigated. The objective of this work is to use a decision tree structure for the snow algorithm. This should alleviate snow/cloud confusion and omission errors and provide a snow map with classes that convey information on how snow was detected, e.g. snow under clear sky or snow under cloud, to give users flexibility in interpreting and deriving a snow map. Results of a snow cover decision tree algorithm are compared to the standard MODIS snow map and found to exhibit improved ability to alleviate snow/cloud confusion in some situations, allowing up to about a 5% increase in mapped snow cover extent, and thus accuracy, in some scenes.
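A decision tree of this kind chains per-pixel reflectance tests and emits a labelled class rather than a bare snow/no-snow flag. The sketch below is a simplified, assumption-laden illustration: the real MODIS algorithm does use the Normalized Difference Snow Index (NDSI) from green and shortwave-infrared reflectance, but the threshold, the cloud flag, and the class labels here are invented for demonstration:

```python
def ndsi(green, swir):
    """Normalized Difference Snow Index from two reflectance bands."""
    return (green - swir) / (green + swir)

def classify(green, swir, cloudy):
    """Return a labelled class so users can see *how* snow was detected."""
    if ndsi(green, swir) <= 0.4:        # illustrative NDSI screening threshold
        return "not_snow"
    return "snow_under_cloud" if cloudy else "snow_clear_sky"

label = classify(green=0.7, swir=0.1, cloudy=False)   # "snow_clear_sky"
```

Carrying the detection path in the label is what lets users decide per application whether to trust, say, snow detected under cloud.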
Role of interoceptive accuracy in topographical changes in emotion-induced bodily sensations
Jung, Won-Mo; Ryu, Yeonhee; Lee, Ye-Seul; Wallraven, Christian; Chae, Younbyoung
2017-01-01
The emotion-associated bodily sensation map is composed of a specific topographical distribution of bodily sensations to categorical emotions. The present study investigated whether or not interoceptive accuracy was associated with topographical changes in this map following emotion-induced bodily sensations. This study included 31 participants who observed short video clips containing emotional stimuli and then reported their sensations on the body map. Interoceptive accuracy was evaluated with a heartbeat detection task and the spatial patterns of bodily sensations to specific emotions, including anger, fear, disgust, happiness, sadness, and neutral, were visualized using Statistical Parametric Mapping (SPM) analyses. Distinct patterns of bodily sensations were identified for different emotional states. In addition, positive correlations were found between the magnitude of sensation in emotion-specific regions and interoceptive accuracy across individuals. A greater degree of interoceptive accuracy was associated with more specific topographical changes after emotional stimuli. These results suggest that the awareness of one’s internal bodily states might play a crucial role as a required messenger of sensory information during the affective process. PMID:28877218
Performance Evaluation of Dsm Extraction from ZY-3 Three-Line Arrays Imagery
NASA Astrophysics Data System (ADS)
Xue, Y.; Xie, W.; Du, Q.; Sang, H.
2015-08-01
ZiYuan-3 (ZY-3), launched on 9 January 2012, is China's first civilian high-resolution stereo mapping satellite. ZY-3 is equipped with three-line scanners (nadir, backward and forward) for stereo mapping; the resolutions of the panchromatic (PAN) stereo mapping images are 2.1 m at nadir and 3.6 m at tilt angles of ±22° forward and backward, respectively. The stereo base-height ratio is 0.85-0.95. Compared with stereo mapping from two-view images, the three-line array images of ZY-3 can be used for DSM generation taking advantage of one more view than conventional photogrammetric methods, which enriches the information available for image matching and should enhance the accuracy of the generated DSM. Preliminary positioning-accuracy results for ZY-3 imagery have been reported, but before ZY-3 imagery is used for DSM generation in massive mapping applications, evaluating the performance of DSM extraction from its three-line array imagery is essential for routine mapping. The goal of this research is to clarify the mapping performance of the ZY-3 three-line array scanners through an accuracy evaluation of DSM generation. DSM products generated in different topographic areas from the three-view images are compared with those generated from the different two-view combinations. Beyond this comparison across topographic study areas, the accuracy deviation of DSM products at grid sizes of 25 m, 10 m and 5 m is delineated in order to clarify the impact of grid size on accuracy evaluation.
Standardized Digital Colposcopy with Dynamic Spectral Imaging for Conservative Patient Management.
Kaufmann, Angelika; Founta, Christina; Papagiannakis, Emmanouil; Naik, Raj; Fisher, Ann
2017-01-01
Colposcopy is subjective, and management of young patients with high-grade disease is challenging, as treatments may impair subsequent pregnancies and adversely affect obstetric outcomes. Conservative management of selected patients is becoming more popular amongst clinicians; however, it requires accurate assessment and documentation. Novel adjunctive technologies for colposcopy could improve patient care and help individualize management decisions by introducing standardization, increasing sensitivity, and improving documentation. A nulliparous 27-year-old woman planning pregnancy underwent colposcopy following high-grade cytology. The colposcopic impression was of low-grade changes, whilst the Dynamic Spectral Imaging (DSI) map of the cervix suggested potential high-grade disease. A DSI-directed biopsy confirmed CIN2. At follow-up, both colposcopy and DSI were suggestive of low-grade disease only, and image comparison confirmed the absence of previously present acetowhite epithelium areas. Histology of the transformation zone following excisional treatment, as per the patient's choice, showed no high-grade changes. Digital colposcopy with DSI mapping helps standardize colposcopic examinations, increase diagnostic accuracy, and monitor cervical changes over time, improving patient care. When used for longitudinal tracking of disease and when it confirms a negative colposcopy, it can help avoid overtreatment and hence decrease morbidity related to cervical excision.
Möller, Christiane; Pijnenburg, Yolande A L; van der Flier, Wiesje M; Versteeg, Adriaan; Tijms, Betty; de Munck, Jan C; Hafkemeijer, Anne; Rombouts, Serge A R B; van der Grond, Jeroen; van Swieten, John; Dopper, Elise; Scheltens, Philip; Barkhof, Frederik; Vrenken, Hugo; Wink, Alle Meije
2016-06-01
Purpose To investigate the diagnostic accuracy of an image-based classifier to distinguish between Alzheimer disease (AD) and behavioral variant frontotemporal dementia (bvFTD) in individual patients by using gray matter (GM) density maps computed from standard T1-weighted structural images obtained with multiple imagers and with independent training and prediction data. Materials and Methods The local institutional review board approved the study. Eighty-four patients with AD, 51 patients with bvFTD, and 94 control subjects were divided into independent training (n = 115) and prediction (n = 114) sets with identical diagnosis and imager type distributions. Training of a support vector machine (SVM) classifier used diagnostic status and GM density maps and produced voxelwise discrimination maps. Discriminant function analysis was used to estimate suitability of the extracted weights for single-subject classification in the prediction set. Receiver operating characteristic (ROC) curves and area under the ROC curve (AUC) were calculated for image-based classifiers and neuropsychological z scores. Results Training accuracy of the SVM was 85% for patients with AD versus control subjects, 72% for patients with bvFTD versus control subjects, and 79% for patients with AD versus patients with bvFTD (P ≤ .029). Single-subject diagnosis in the prediction set when using the discrimination maps yielded accuracies of 88% for patients with AD versus control subjects, 85% for patients with bvFTD versus control subjects, and 82% for patients with AD versus patients with bvFTD, with a good to excellent AUC (range, 0.81-0.95; P ≤ .001). Machine learning-based categorization of AD versus bvFTD based on GM density maps outperforms classification based on neuropsychological test results. Conclusion The SVM can be used in single-subject discrimination and can help the clinician arrive at a diagnosis. 
The SVM can be used to distinguish disease-specific GM patterns in patients with AD and those with bvFTD as compared with normal aging by using common T1-weighted structural MR imaging. (©) RSNA, 2015.
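The workflow the abstract describes, training a linear SVM on density maps and reading its weights as a voxelwise discrimination map, can be sketched on synthetic data. Everything here is an illustrative assumption (toy dimensions, a simplified hinge-loss subgradient trainer), not the study's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "GM density maps": 20 voxels; group B has lower density
# in voxels 5-9 (a toy stand-in for disease-specific atrophy).
n, d = 100, 20
X_a = rng.normal(0.6, 0.05, size=(n, d))
X_b = rng.normal(0.6, 0.05, size=(n, d))
X_b[:, 5:10] -= 0.1
X = np.vstack([X_a, X_b])
y = np.array([1] * n + [-1] * n)

# Linear SVM trained by subgradient descent on the hinge loss
# (a simplified stand-in for the paper's SVM training).
w, b, lam, lr = np.zeros(d), 0.0, 0.01, 0.01
for epoch in range(200):
    for i in rng.permutation(2 * n):
        if y[i] * (X[i] @ w + b) < 1:
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:
            w -= lr * lam * w

# The weight vector is a voxelwise "discrimination map": the
# informative voxels 5-9 should carry the largest magnitudes.
acc = np.mean(np.sign(X @ w + b) == y)
top_voxels = set(np.argsort(-np.abs(w))[:5])
print(acc, top_voxels)
```

The point of inspecting `w` rather than only the accuracy is that the weight map localizes which voxels drive the discrimination, mirroring the voxelwise discrimination maps used for single-subject classification in the study.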
Proposed U.S. Geological Survey standard for digital orthophotos
Hooper, David; Caruso, Vincent
1991-01-01
The U.S. Geological Survey has added the new category of digital orthophotos to the National Digital Cartographic Data Base. This differentially rectified digital image product enables users to take advantage of the properties of current photoimagery as a source of geographic information. The product and accompanying standard were implemented in spring 1991. The digital orthophotos will be quadrangle based and cast on the Universal Transverse Mercator projection and will extend beyond the 3.75-minute or 7.5-minute quadrangle area at least 300 meters to form a rectangle. The overedge may be used for mosaicking with adjacent digital orthophotos. To provide maximum information content and utility to the user, metadata (header) records exist at the beginning of the digital orthophoto file. Header information includes the photographic source type, date, instrumentation used to create the digital orthophoto, and information relating to the DEM that was used in the rectification process. Additional header information is included on transformation constants from the 1927 and 1983 North American Datums to the orthophoto internal file coordinates to enable the user to register overlays on either datum. The quadrangle corners in both datums are also imprinted on the image. Flexibility has been built into the digital orthophoto format for future enhancements, such as the provision to include the corresponding digital elevation model elevations used to rectify the orthophoto. The digital orthophoto conforms to National Map Accuracy Standards and provides valuable mapping data that can be used as a tool for timely revision of standard map products, for land use and land cover studies, and as a digital layer in a geographic information system.
NASA Astrophysics Data System (ADS)
Denner, Michele; Raubenheimer, Jacobus H.
2018-05-01
Historical aerial photography has become a valuable commodity in any country, as it provides a precise record of historical land management over time. In a developing country such as South Africa, which has undergone enormous political and social change in recent decades, such photography is invaluable: it provides a clear indication of past injustices and serves as an aid to addressing post-apartheid issues such as land reform and land redistribution. National mapping organisations throughout the world have vast repositories of such historical aerial photography. Effectively using these datasets in today's digital environment requires that they be georeferenced to an accuracy suitable for the intended purpose. Using image-to-image georeferencing techniques, this research sought to determine the accuracies achievable when ortho-rectifying large volumes of historical aerial imagery, against the national standard for ortho-rectification in South Africa, using two different types of scanning equipment. The research conducted four tests using aerial photography from different time epochs over a period of sixty years, matching each test image to an already ortho-rectified mosaic of a developed area of mixed land use. The results of each test were assessed in terms of visual accuracy, spatial accuracy and conformance to the national standard for ortho-rectification in South Africa. The results showed a decrease in the overall accuracy of the image as the epoch range between the historical image and the reference image increased. Recommendations on the applications possible given the different epoch ranges and scanning equipment used are provided.
NASA Technical Reports Server (NTRS)
Kahn, W. D.
1984-01-01
The spaceborne gravity gradiometer is a potential sensor for mapping the fine structure of the Earth's gravity field. Error analyses were performed to investigate the accuracy of the determination of the Earth's gravity field from a gravity field satellite mission. The orbital height of the spacecraft is the dominating parameter as far as gravity field resolution and accuracies are concerned.
A Bayesian approach to the creation of a study-customized neonatal brain atlas
Zhang, Yajing; Chang, Linda; Ceritoglu, Can; Skranes, Jon; Ernst, Thomas; Mori, Susumu; Miller, Michael I.; Oishi, Kenichi
2014-01-01
Atlas-based image analysis (ABA), in which an anatomical “parcellation map” is used for parcel-by-parcel image quantification, is widely used to analyze anatomical and functional changes related to brain development, aging, and various diseases. The parcellation maps are often created based on common MRI templates, which allow users to transform the template to target images, or vice versa, to perform parcel-by-parcel statistics, and report the scientific findings based on common anatomical parcels. The use of a study-specific template, which represents the anatomical features of the study population better than common templates, is preferable for accurate anatomical labeling; however, the creation of a parcellation map for a study-specific template is extremely labor intensive, and the definitions of anatomical boundaries are not necessarily compatible with those of the common template. In this study, we employed a Volume-based Template Estimation (VTE) method to create a neonatal brain template customized to a study population, while keeping the anatomical parcellation identical to that of a common MRI atlas. The VTE was used to morph the standardized parcellation map of the JHU-neonate-SS atlas to capture the anatomical features of a study population. The resultant “study-customized” T1-weighted and diffusion tensor imaging (DTI) template, with three-dimensional anatomical parcellation that defined 122 brain regions, was compared with the JHU-neonate-SS atlas, in terms of the registration accuracy. A pronounced increase in the accuracy of cortical parcellation and superior tensor alignment were observed when the customized template was used. With the customized atlas-based analysis, the fractional anisotropy (FA) detected closely approximated the manual measurements. This tool provides a solution for achieving normalization-based measurements with increased accuracy, while reporting scientific findings in a consistent framework. PMID:25026155
Evaluation of Techniques Used to Estimate Cortical Feature Maps
Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.
2011-01-01
Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
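The comparison between tessellation-style (nearest-sample) estimates and linearly interpolated estimates can be illustrated on a toy one-dimensional feature map. The smooth sinusoidal map, 25-sample budget, and noise-free responses are illustrative assumptions, not the paper's simulation protocol:

```python
import numpy as np

def true_map(x):
    """Toy low-complexity feature map: best feature value varies
    smoothly with cortical position (e.g. a tonotopic gradient)."""
    return np.sin(2 * np.pi * x)

grid = np.linspace(0, 1, 201)

# Sparse, slightly nonuniform sampling, as with serial electrode
# penetrations (25 total samples, per the low-complexity case above).
rng = np.random.default_rng(1)
samples = np.sort(rng.uniform(0, 1, 25))
values = true_map(samples)

# Tessellation-style estimate: each location takes the value of the
# nearest sample (piecewise constant).
nearest = values[np.abs(grid[:, None] - samples[None, :]).argmin(axis=1)]
# Linearly interpolated estimate over the same samples.
interp = np.interp(grid, samples, values)

err_nearest = np.mean((nearest - true_map(grid)) ** 2)
err_interp = np.mean((interp - true_map(grid)) ** 2)
print(err_nearest, err_interp)
```

On this smooth map the interpolated estimate has a lower mean squared error than the tessellation, consistent with the abstract's finding that interpolated estimates dominate tessellation plots, with the gap shrinking as sampling density grows.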
A spatial haplotype copying model with applications to genotype imputation.
Yang, Wen-Yun; Hormozdiari, Farhad; Eskin, Eleazar; Pasaniuc, Bogdan
2015-05-01
Ever since its introduction, the haplotype copy model has proven to be one of the most successful approaches for modeling genetic variation in human populations, with applications ranging from ancestry inference to genotype phasing and imputation. Motivated by coalescent theory, this approach assumes that any chromosome (haplotype) can be modeled as a mosaic of segments copied from a set of chromosomes sampled from the same population. At the core of the model is the assumption that any chromosome from the sample is equally likely to contribute a priori to the copying process. Motivated by recent works that model genetic variation in a geographic continuum, we propose a new spatial-aware haplotype copy model that jointly models geography and the haplotype copying process. We extend hidden Markov models of haplotype diversity such that at any given location, haplotypes that are closest in the genetic-geographic continuum map are a priori more likely to contribute to the copying process than distant ones. Through simulations starting from the 1000 Genomes data, we show that our model achieves superior accuracy in genotype imputation over the standard spatial-unaware haplotype copy model. In addition, we show the utility of our model in selecting a small personalized reference panel for imputation that leads to both improved accuracy as well as to a lower computational runtime than the standard approach. Finally, we show our proposed model can be used to localize individuals on the genetic-geographical map on the basis of their genotype data.
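The copying process above can be sketched as a small forward algorithm over a hidden Markov model, where the prior over reference haplotypes is the knob that distinguishes the spatial-aware model from the standard one. This is an illustrative toy (tiny panel, simplified switch/error parameters), not the authors' implementation:

```python
import numpy as np

def copying_loglik(target, panel, prior, switch=0.1, err=0.01):
    """Forward algorithm for a toy haplotype copying model.

    target : 0/1 alleles at L sites; panel : K x L reference haplotypes;
    prior  : length-K a-priori copying weights. A spatial-aware model
    up-weights geographically close haplotypes; a uniform prior
    recovers the standard spatial-unaware model.
    """
    K, L = panel.shape
    emit = np.where(panel[:, 0] == target[0], 1 - err, err)
    f = prior * emit
    for j in range(1, L):
        emit = np.where(panel[:, j] == target[j], 1 - err, err)
        # Either stay on the current haplotype, or switch to a new one
        # drawn from the (possibly spatial) prior.
        f = ((1 - switch) * f + switch * f.sum() * prior) * emit
    return np.log(f.sum())

# Haplotype 0 matches the target exactly; a prior concentrated on it
# (as if it were geographically close) raises the likelihood.
panel = np.array([[0, 1, 0, 1], [1, 1, 1, 1], [0, 0, 0, 0]])
target = np.array([0, 1, 0, 1])
uniform = np.ones(3) / 3
spatial = np.array([0.8, 0.1, 0.1])
print(copying_loglik(target, panel, spatial),
      copying_loglik(target, panel, uniform))
```

In an imputation setting the same forward-backward machinery would produce posterior allele probabilities at untyped sites; the sketch only shows how the prior reweights the copying paths.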
Revisiting measuring colour gamut of the color-reproducing system: interpretation aspects
NASA Astrophysics Data System (ADS)
Sysuev, I. A.; Varepo, L. G.; Trapeznikova, O. V.
2018-04-01
According to the ISO standard, the volume of the color gamut body is used to evaluate color reproduction quality. This volume describes the number of colors contained in a certain region of the color space. Existing methods evaluate the reproduction quality of a multi-color image using numerical integration, but this approach does not provide high accuracy of analysis, so the task of increasing the accuracy of color reproduction evaluation remains relevant. To determine the color mass of a color-space region, we suggest selecting the necessary color density values from a precomputed map corresponding to a given degree of sampling, avoiding repeated mathematical calculation; this constitutes the practical significance and novelty of the solution.
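One widely used way to compute the gamut body volume that this evaluation relies on is the convex hull of measured color coordinates (e.g. CIELAB values of printed patches). This generic hull-based sketch is offered for context; it is not the map-based method the abstract proposes, and the cube test data are purely illustrative:

```python
import numpy as np
from scipy.spatial import ConvexHull

def gamut_volume(points):
    """Volume of the convex hull of measured 3-D color coordinates,
    a common stand-in for the gamut body volume."""
    return ConvexHull(points).volume

# Sanity check with known geometry: the 8 corners of a unit cube
# span a hull of volume 1.
cube = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)],
                dtype=float)
vol = gamut_volume(cube)
print(vol)
```

Real gamut bodies are often non-convex, which is one reason sampling-based approaches like the density-map lookup described above can be preferable to a plain hull.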
The Southwest Regional Gap Analysis Project (SW ReGAP) improves upon previous GAP projects conducted in Arizona, Colorado, Nevada, New Mexico, and Utah to provide a
consistent, seamless vegetation map for this large and ecologically diverse geographic region. Nevada's compone...
THE USE OF NTM DATA FOR THE ACCURACY ASSESSMENT OF LANDSAT DERIVED LAND USE/LAND COVER MAPS
National Technical Means (NTM) data were utilized to validate the accuracy of a series of LANDSAT derived Land Use / Land Cover (LU/LC) maps for the time frames mid-1970s, early-1990s and mid-1990s. The area-of-interest for these maps is a 2000 square mile portion of the De...
ASSESSING ACCURACY OF NET CHANGE DERIVED FROM LAND COVER MAPS
Net change derived from land-cover maps provides important descriptive information for environmental monitoring and is often used as an input or explanatory variable in environmental models. The sampling design and analysis for assessing net change accuracy differ from traditio...
NASA Astrophysics Data System (ADS)
Iino, Shota; Ito, Riho; Doi, Kento; Imaizumi, Tomoyuki; Hikosaka, Shuhei
2017-10-01
In developing countries, urban areas are expanding rapidly. With such rapid development, short-term monitoring of urban change is important, and constant observation and the creation of high-accuracy, low-noise urban distribution maps are the key issues for such monitoring. SAR satellites, which can observe day or night regardless of weather conditions, are highly suitable for this type of study. The current study presents a methodology for generating high-accuracy urban distribution maps from SAR satellite imagery based on a Convolutional Neural Network (CNN), an approach that has shown outstanding results in image classification. Several improvements to the SAR polarization combinations and the dataset construction were made to increase the accuracy. As additional data, a Digital Surface Model (DSM), which is useful for classifying land cover, was added to further improve the accuracy. From the obtained results, a high-accuracy urban distribution map satisfying the quality requirements for short-term monitoring was generated. For the evaluation, urban changes were extracted by differencing urban distribution maps. The change analysis with a time series of imagery revealed the locations of short-term urban change areas. Comparisons with optical satellites were performed to validate the results. Finally, analysis of urban changes combining X-band, L-band and C-band SAR satellites was attempted to increase the opportunities for acquiring satellite imagery. Further analysis will be conducted in future work.
NASA Astrophysics Data System (ADS)
Jende, Phillipp; Nex, Francesco; Gerke, Markus; Vosselman, George
2018-07-01
Mobile Mapping (MM) solutions have become a significant extension to traditional data acquisition methods in recent years. Independently of the sensor carried by a platform, be it laser scanners or cameras, high-resolution data postings are offset by poor absolute localisation accuracy in urban areas due to GNSS occlusions and multipath effects. Potentially inaccurate position estimates are propagated by IMUs, which are furthermore prone to drift. Thus, reliable and accurate absolute positioning on a par with MM's high-quality data remains an open issue. Multiple and diverse approaches have shown promising potential to mitigate GNSS errors in urban areas, but they cannot achieve decimetre accuracy, require manual effort, or have limitations with respect to cost and availability. This paper presents a fully automatic approach to support the correction of MM imaging data based on correspondences with airborne nadir images. These correspondences can be employed to correct the MM platform's orientation through an adjustment solution. Unlike MM, aerial images do not suffer from GNSS occlusions, and their accuracy is usually verified with well-established methods using ground control points. However, registration between MM and aerial images is a non-standard matching scenario and requires several strategies to yield reliable and accurate correspondences. Scale, perspective and content vary strongly between the two image sources, so traditional feature matching methods may fail. To this end, the registration process is designed to focus on common and clearly distinguishable elements, such as road markings, manholes, or kerbstones. With a registration accuracy of about 98%, reliable tie information between MM and aerial data can be derived. Although the adjustment strategy is not covered in its entirety in this paper, accuracy results after adjustment will be presented. It will be shown that decimetre accuracy is achievable in a real-data test scenario.
Reaction Decoder Tool (RDT): extracting features from chemical reactions.
Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W; Holliday, Gemma L; Steinbeck, Christoph; Thornton, Janet M
2016-07-01
Extracting chemical features like Atom-Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool, which performs this task with high accuracy. It supports standard chemical input/output exchange formats i.e. RXN/SMILES, computes AAM, highlights BCs and creates images of the mapped reaction. This aids in the analysis of metabolic pathways and the ability to perform comparative studies of chemical reactions based on these features. This software is implemented in Java, supported on Windows, Linux and Mac OSX, and freely available at https://github.com/asad/ReactionDecoder : asad@ebi.ac.uk or s9asad@gmail.com. © The Author 2016. Published by Oxford University Press.
Mapping forest tree species over large areas with partially cloudy Landsat imagery
NASA Astrophysics Data System (ADS)
Turlej, K.; Radeloff, V.
2017-12-01
Forests provide numerous services to natural systems and humankind, but which services a forest provides depends greatly on its tree species composition. That makes it important to track not only changes in forest extent, something remote sensing excels at, but also tree species composition. The main goal of our work was to map tree species with Landsat imagery, and to identify how to maximize mapping accuracy by including partially cloudy imagery. Our study area covered one Landsat footprint (26/28) in Northern Wisconsin, USA, with temperate and boreal forests. We selected this area because it contains numerous tree species and variable forest composition, providing an ideal study area to test the limits of Landsat data. We quantified how species-level classification accuracy was affected by a) the number of acquisitions, b) the seasonal distribution of observations, and c) the amount of cloud contamination. We classified a single-year stack of Landsat-7 and -8 images with a decision tree algorithm to generate a map of dominant tree species at the pixel and stand level. We obtained three important results. First, we achieved producer's accuracies in the range 70-80% and user's accuracies in the range 80-90% for the most abundant tree species in our study area. Second, classification accuracy improved with more acquisitions, when observations were available from all seasons, and was best when images with up to 40% cloud cover were included. Finally, classifications for pure stands were 10 to 30 percentage points better than those for mixed stands. We conclude that including partially cloudy Landsat imagery makes it possible to map forest tree species with accuracies that were previously only possible for rare years with many cloud-free observations. Our approach thus provides important information for both forest management and science.
Accurate Mobile Urban Mapping via Digital Map-Based SLAM †
Roh, Hyunchul; Jeong, Jinyong; Cho, Younggun; Kim, Ayoung
2016-01-01
This paper presents accurate urban map generation using digital map-based Simultaneous Localization and Mapping (SLAM). Throughout this work, our main objective is to generate a 3D and lane map aiming for sub-meter accuracy. In conventional mapping approaches, extremely high accuracy was achieved either by (i) exploiting costly airborne sensors or (ii) surveying with a static mapping system on a stationary platform. Mobile scanning systems have recently gained popularity but are mostly limited by the availability of the Global Positioning System (GPS). We focus on the fact that the availability of GPS and of urban structures are both sporadic but complementary. By modeling both GPS and digital map data as measurements and integrating them with other sensor measurements, we leverage SLAM for an accurate mobile mapping system. Our proposed algorithm generates an efficient graph SLAM and achieves a framework running in real time and targeting sub-meter accuracy on a mobile platform. Integrated with the SLAM framework, we implement a motion-adaptive model for Inverse Perspective Mapping (IPM). Using motion estimation derived from SLAM, the experimental results show that the proposed approaches provide stable bird's-eye view images, even with significant motion during the drive. Our real-time map generation framework is validated via a long-distance urban test and evaluated at randomly sampled points using Real-Time Kinematic (RTK)-GPS. PMID:27548175
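At the core of the Inverse Perspective Mapping mentioned above is a ground-plane homography that warps road pixels into a bird's-eye view. A minimal Direct Linear Transform (DLT) sketch follows; the four calibration points are hypothetical, and the paper's motion-adaptive model additionally updates this mapping from SLAM pose estimates, which is not shown here:

```python
import numpy as np

def homography(src, dst):
    """Direct Linear Transform: 3x3 homography mapping src -> dst
    (>= 4 point correspondences, no three collinear)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last row of V^T).
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def warp(H, pt):
    """Apply a homography to a single 2-D point."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Hypothetical calibration: four image points on the road plane
# (a trapezoid, because of perspective) mapped to a metric
# bird's-eye grid.
src = [(100, 200), (220, 200), (300, 300), (20, 300)]
dst = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]
H = homography(src, dst)
```

With `H` in hand, warping every road pixel through it yields the stable bird's-eye view image used for lane mapping; the motion-adaptive part would amount to recomputing `src` as the camera pitches and rolls.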
Behavior Analysis of Novel Wearable Indoor Mapping System Based on 3D-SLAM.
Lagüela, Susana; Dorado, Iago; Gesto, Manuel; Arias, Pedro; González-Aguilera, Diego; Lorenzo, Henrique
2018-03-02
This paper presents a Wearable Prototype for indoor mapping developed by the University of Vigo. The system is based on a Velodyne LiDAR that acquires points with 16 rays for a simple, low-density 3D representation of reality. On this basis, a Simultaneous Localization and Mapping (3D-SLAM) method is developed for the mapping and generation of 3D point clouds of scenarios deprived of GNSS signal. The quality of the presented system is validated through comparison with a commercial indoor mapping system, Zeb-Revo, from the company GeoSLAM, and with a terrestrial LiDAR, the Faro Focus 3D X330. The first is taken as a relative reference among mobile systems and was chosen because it uses the same mapping principle, SLAM techniques based on the Robot Operating System (ROS), while the second is taken as ground truth for determining the final accuracy of the system with respect to reality. Results show that the accuracy of the system is mainly determined by the accuracy of the sensor, with little additional error introduced by the mapping algorithm.
Murphy, Matthew C; Poplawsky, Alexander J; Vazquez, Alberto L; Chan, Kevin C; Kim, Seong-Gi; Fukuda, Mitsuhiro
2016-08-15
Functional MRI (fMRI) is a popular and important tool for noninvasive mapping of neural activity. As fMRI measures the hemodynamic response, the resulting activation maps do not perfectly reflect the underlying neural activity. The purpose of this work was to design a data-driven model to improve the spatial accuracy of fMRI maps in the rat olfactory bulb. This system is an ideal choice for this investigation since the bulb circuit is well characterized, allowing for an accurate definition of activity patterns in order to train the model. We generated models for both cerebral blood volume weighted (CBVw) and blood oxygen level dependent (BOLD) fMRI data. The results indicate that the spatial accuracy of the activation maps is either significantly improved or at worst not significantly different when using the learned models compared to a conventional general linear model approach, particularly for BOLD images and activity patterns involving deep layers of the bulb. Furthermore, the activation maps computed by CBVw and BOLD data show increased agreement when using the learned models, lending more confidence to their accuracy. The models presented here could have an immediate impact on studies of the olfactory bulb, but perhaps more importantly, demonstrate the potential for similar flexible, data-driven models to improve the quality of activation maps calculated using fMRI data. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Fagan, Matthew E.; Defries, Ruth S.; Sesnie, Steven E.; Arroyo-Mora, J. Pablo; Soto, Carlomagno; Singh, Aditya; Townsend, Philip A.; Chazdon, Robin L.
2015-01-01
An efficient means to map tree plantations is needed to detect tropical land use change and evaluate reforestation projects. To analyze recent tree plantation expansion in northeastern Costa Rica, we examined the potential of combining moderate-resolution hyperspectral imagery (2005 HyMap mosaic) with multitemporal, multispectral data (Landsat) to accurately classify (1) general forest types and (2) tree plantations by species composition. Following a linear discriminant analysis to reduce data dimensionality, we compared four Random Forest classification models: hyperspectral data (HD) alone; HD plus interannual spectral metrics; HD plus a multitemporal forest regrowth classification; and all three models combined. The fourth, combined model achieved overall accuracy of 88.5%. Adding multitemporal data significantly improved classification accuracy (p less than 0.0001) of all forest types, although the effect on tree plantation accuracy was modest. The hyperspectral data alone classified six species of tree plantations with 75% to 93% producer's accuracy; adding multitemporal spectral data increased accuracy only for two species with dense canopies. Non-native tree species had higher classification accuracy overall and made up the majority of tree plantations in this landscape. Our results indicate that combining occasionally acquired hyperspectral data with widely available multitemporal satellite imagery enhances mapping and monitoring of reforestation in tropical landscapes.
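The classification workflow described above, dimensionality reduction by linear discriminant analysis followed by a Random Forest, can be sketched with scikit-learn on synthetic data. The fabricated "band" data, class structure, and parameter choices are illustrative assumptions, not the HyMap/Landsat processing chain:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for pixel spectra: 60 "bands", 3 classes whose
# means differ in a few bands each (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 60))
y = np.repeat([0, 1, 2], 100)
for c in range(3):
    X[y == c, c * 5:c * 5 + 5] += 1.5

# LDA reduces the 60-band data to at most (n_classes - 1) discriminant
# axes before the Random Forest classifies the reduced features.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                    RandomForestClassifier(n_estimators=100, random_state=0))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(acc)
```

Stacking additional multitemporal metrics would simply mean appending more columns to `X` before the pipeline, which is the spirit of the combined model the abstract evaluates.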
Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong
2016-02-11
We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including size, number and convolutional stride of local receptive fields, dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods.
Moore, Michael; Zhang, Chaolin; Gantman, Emily Conn; Mele, Aldo; Darnell, Jennifer C.; Darnell, Robert B.
2014-01-01
Identifying sites where RNA binding proteins (RNABPs) interact with target RNAs opens the door to understanding the vast complexity of RNA regulation. UV-crosslinking and immunoprecipitation (CLIP) is a transformative technology in which RNAs purified from in vivo cross-linked RNA-protein complexes are sequenced to reveal footprints of RNABP:RNA contacts. CLIP combined with high throughput sequencing (HITS-CLIP) is a generalizable strategy to produce transcriptome-wide RNA binding maps with higher accuracy and resolution than standard RNA immunoprecipitation (RIP) profiling or purely computational approaches. Applying CLIP to Argonaute proteins has expanded the utility of this approach to mapping binding sites for microRNAs and other small regulatory RNAs. Finally, recent advances in data analysis take advantage of crosslinked-induced mutation sites (CIMS) to refine RNA-binding maps to single-nucleotide resolution. Once IP conditions are established, HITS-CLIP takes approximately eight days to prepare RNA for sequencing. Established pipelines for data analysis, including for CIMS, take 3-4 days.
Noise pollution mapping approach and accuracy on landscape scales.
Iglesias Merchan, Carlos; Diaz-Balteiro, Luis
2013-04-01
Noise mapping allows the characterization of environmental variables, such as noise pollution or soundscape, depending on the task. Strategic noise mapping (as per Directive 2002/49/EC, 2002) is a tool intended for the assessment of noise pollution at the European level every five years. These maps are based on common methods and procedures intended for human exposure assessment in the European Union that could also be adapted for assessing environmental noise pollution in natural parks. However, given the size of such areas, there could be an alternative approach to soundscape characterization rather than using human noise exposure procedures. It is possible to optimize the size of the mapping grid used for such work by taking into account the attributes of the area to be studied and the desired outcome. This would then optimize the mapping time and the cost. This type of optimization is important in noise assessment as well as in the study of other environmental variables. This study compares 15 models, using different grid sizes, to assess the accuracy of the noise mapping of the road traffic noise at a landscape scale, with respect to noise and landscape indicators. In a study area located in the Manzanares High River Basin Regional Park in Spain, different accuracy levels (Kappa index values from 0.725 to 0.987) were obtained depending on the terrain and noise source properties. The time taken for the calculations and the noise mapping accuracy results reveal the potential for setting the map resolution in line with decision-makers' criteria and budget considerations.
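The Kappa index reported above measures agreement between two classified grids beyond what chance would produce. A minimal NumPy sketch, assuming two small hypothetical class grids (the arrays and class count here are invented for illustration):

```python
import numpy as np

def cohens_kappa(map_a, map_b, n_classes):
    """Cohen's kappa agreement between two classified rasters."""
    cm = np.zeros((n_classes, n_classes))
    for a, b in zip(map_a.ravel(), map_b.ravel()):
        cm[a, b] += 1                               # confusion matrix
    n = cm.sum()
    po = np.trace(cm) / n                           # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2       # chance agreement
    return (po - pe) / (1 - pe)

# hypothetical reference and test noise-class grids
ref = np.array([[0, 0, 1], [1, 1, 2], [2, 2, 2]])
test = np.array([[0, 0, 1], [1, 1, 2], [2, 2, 1]])
print(round(cohens_kappa(ref, test, 3), 3))  # → 0.83
```

A kappa of 1 indicates perfect agreement; values near 0 indicate agreement no better than chance.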
Bridging scales from satellite to grains: Structural mapping aided by tablet and photogrammetry
NASA Astrophysics Data System (ADS)
Hawemann, Friedrich; Mancktelow, Neil; Pennacchioni, Giorgio; Wex, Sebastian; Camacho, Alfredo
2016-04-01
A fundamental problem in small-scale mapping is linking outcrop observations to the large scale deformation pattern. The evolution of handheld devices such as tablets with integrated GPS and the availability of airborne imagery allows a precise localization of outcrops. Detailed structural geometries can be analyzed through ortho-rectified photo mosaics generated by photogrammetry software. In this study, we use a cheap standard Samsung tablet (< 300 Euro) to map individual, up to 60 m long shear zones with the tracking option offered by the program Locus Map. Even though GPS accuracy is about 3 m, the relative error from one point to another during tracking is on the order of only about 1 dm. Parts of the shear zone with excellent outcrop are photographed with a standard camera with a relatively wide angle in a mosaic array. An area of about 30 m² needs about 50 photographs with enough overlap to be used for photogrammetry. The software PhotoScan from Agisoft matches the photographs in a fully automated manner, calculates a 3D model of the outcrop, and has the option to project this as an orthophoto onto a flat surface. This allows original orientations of grain-scale structures to be recorded over areas on a scale of tens to hundreds of metres. The photo mosaics can then be georeferenced with the aid of the GPS tracks of the shear zones and included in a GIS. This provides a cheap recording of the structures in high detail. The great advantages over mapping with UAVs (drones) are the resolution (<1 mm to >1 cm), the independence from weather and energy source, and the low cost.
THEMATIC ACCURACY OF MRLC LAND COVER FOR THE EASTERN UNITED STATES
One objective of the MultiResolution Land Characteristics (MRLC) consortium is to map general land-cover categories for the conterminous United States using Landsat Thematic Mapper (TM) data. Land-cover mapping and classification accuracy assessment are complete for the e...
Thematic accuracy of the 1992 National Land-Cover Data for the western United States
Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Yang, L.
2004-01-01
The MultiResolution Land Characteristics (MRLC) consortium sponsored production of the National Land Cover Data (NLCD) for the conterminous United States, using Landsat imagery collected for a target year of 1992 (1992 NLCD). Here we report the thematic accuracy of the 1992 NLCD for the six western mapping regions. Reference data were collected in each region for a probability sample of pixels stratified by map land-cover class. Results are reported for each of the six mapping regions with agreement defined as a match between the primary or alternate reference land-cover label and a mode class of the mapped 3×3 block of pixels centered on the sample pixel. Overall accuracy at Anderson Level II was low and variable across the regions, ranging from 38% for the Midwest to 70% for the Southwest. Overall accuracy at Anderson Level I was higher and more consistent across the regions, ranging from 82% to 85% for five of the six regions, but only 74% for the South-central region.
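The agreement rule described above (the reference label matching a mode class of the mapped 3×3 block around the sample pixel) can be sketched directly; the array and class codes below are invented for illustration:

```python
import numpy as np

def block_mode_agrees(land_cover, row, col, primary, alternate=None):
    """NLCD-style agreement: does the primary or alternate reference label
    match a mode (most frequent) class of the 3x3 block centred on the pixel?"""
    block = land_cover[row - 1:row + 2, col - 1:col + 2].ravel()
    values, counts = np.unique(block, return_counts=True)
    modes = set(values[counts == counts.max()])
    return primary in modes or alternate in modes

# hypothetical map with numeric land-cover class codes
lc = np.array([[1, 1, 2],
               [1, 3, 2],
               [1, 2, 2]])
block_mode_agrees(lc, 1, 1, primary=3, alternate=2)  # class 2 is a mode -> True
```

Note that with this rule the mapped pixel's own class (here 3) need not match: agreement is scored against the dominant classes of the surrounding block, which absorbs small positional errors.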
NASA Astrophysics Data System (ADS)
Young, S. L.; Kress, B. T.; Rodriguez, J. V.; McCollough, J. P.
2013-12-01
Operational specifications of space environmental hazards can be an important input used by decision makers. Ideally the specification would come from on-board sensors, but for satellites where that capability is not available another option is to map data from remote observations to the location of the satellite. This requires a model of the physical environment and an understanding of its accuracy for mapping applications. We present a statistical comparison between magnetic field model mappings of solar energetic particle observations made by NOAA's Geostationary Operational Environmental Satellites (GOES) to the location of the Combined Release and Radiation Effects Satellite (CRRES). Because CRRES followed a geosynchronous transfer orbit that precessed in local time, we can examine the model accuracy between LEO and GEO orbits across a range of local times. We examine the accuracy of multiple magnetic field models using a variety of statistics and examine their utility for operational purposes.
NASA Technical Reports Server (NTRS)
Mulligan, P. J.; Gervin, J. C.; Lu, Y. C.
1985-01-01
An area bordering the Eastern Shore of the Chesapeake Bay was selected for study and classified using unsupervised techniques applied to LANDSAT-2 MSS data and several band combinations of LANDSAT-4 TM data. The accuracies of these Level I land cover classifications were verified using the Taylor's Island USGS 7.5 minute topographic map, which was photointerpreted, digitized, and rasterized. For the Taylor's Island map, comparing the MSS and TM three-band (2 3 4) classifications, the increased resolution of TM produced a small improvement in overall accuracy of 1%, due primarily to small improvements of 1% and 3% in categories such as water and woodland. This was expected, as the MSS data typically produce high accuracies for categories which cover large contiguous areas. However, in the categories covering smaller areas within the map there was generally an improvement of at least 10%. Classification of the important residential category improved 12%, and wetlands were mapped with 11% greater accuracy.
Lopez Labrousse, Maite I; Frumovitz, Michael; Guadalupe Patrono, M; Ramirez, Pedro T
2017-09-01
Sentinel lymph node mapping, alone or in combination with pelvic lymphadenectomy, is considered a standard approach in staging of patients with cervical or endometrial cancer [1-3]. The goal of this video is to demonstrate the use of indocyanine green (ICG) and color-segmented fluorescence when performing lymphatic mapping in patients with gynecologic malignancies. Injection of ICG is performed in two cervical sites using 1 mL (0.5 mL superficial and deep, respectively) at the 3 and 9 o'clock positions. Sentinel lymph nodes are identified intraoperatively using the Pinpoint near-infrared imaging system (Novadaq, Ontario, CA). Color-segmented fluorescence is used to image different levels of ICG uptake demonstrating higher levels of perfusion. A color key on the side of the monitor shows the colors that coordinate with different levels of ICG uptake. Color-segmented fluorescence may help surgeons identify true sentinel nodes from fatty tissue that, although absorbing fluorescent dye, does not contain true nodal tissue. It is not intended to differentiate the primary sentinel node from secondary sentinel nodes. The key ranges from low levels of ICG uptake (gray) to the highest rate of ICG uptake (red). Bilateral sentinel lymph nodes are identified along the external iliac vessels using both standard and color-segmented fluorescence. No evidence of disease was noted after ultra-staging was performed in each of the sentinel nodes. Use of ICG in sentinel lymph node mapping allows for high bilateral detection rates. Color-segmented fluorescence may increase accuracy of sentinel lymph node identification over standard fluorescent imaging.
NASA Technical Reports Server (NTRS)
Borella, H. M.; Estes, J. E.; Ezra, C. E.; Scepan, J.; Tinney, L. R.
1982-01-01
For two test sites in Pennsylvania the interpretability of commercially acquired low-altitude and existing high-altitude aerial photography is documented in terms of time, costs, and accuracy for Anderson Level II land use/land cover mapping. Information extracted from the imagery is to be used in the evaluation process for siting energy facilities. Land use/land cover maps were drawn at 1:24,000 scale using commercially flown color infrared photography obtained from the United States Geological Survey's EROS Data Center. Detailed accuracy assessment of the maps generated by manual image analysis was accomplished employing a stratified unaligned sampling design to ensure adequate class representation. Both 'area-weighted' and 'by-class' accuracies were documented and field-verified. A discrepancy map was also drawn to illustrate differences in classifications between the two map scales. Results show that the 1:24,000 scale map set was more accurate (99% to 94% area-weighted) than the 1:62,500 scale set, especially when sampled by class (96% to 66%). The 1:24,000 scale maps were also more time-consuming and costly to produce, due mainly to higher image acquisition costs.
NASA Astrophysics Data System (ADS)
Robinson, T. P.; Wardell-Johnson, G. W.; Pracilio, G.; Brown, C.; Corner, R.; van Klinken, R. D.
2016-02-01
Invasive plants pose significant threats to biodiversity and ecosystem function globally, leading to costly monitoring and management effort. While remote sensing promises cost-effective, robust and repeatable monitoring tools to support intervention, it has been largely restricted to airborne platforms that have higher spatial and spectral resolutions, but which lack the coverage and versatility of satellite-based platforms. This study tests the ability of the WorldView-2 (WV2) eight-band satellite sensor for detecting the invasive shrub mesquite (Prosopis spp.) in the north-west Pilbara region of Australia. Detectability was challenged by the target taxa being largely defoliated by a leaf-tying biological control agent (Gelechiidae: Evippe sp. #1) and the presence of other shrubs and trees. Variable importance in the projection (VIP) scores identified the bands offering the greatest capacity for discrimination as those covering the near-infrared, red, and red-edge wavelengths. Wavelengths between 400 nm and 630 nm (coastal blue, blue, green, yellow) were not useful for species level discrimination in this case. Classification accuracy was tested on three band sets (simulated standard multispectral, all bands, and bands with VIP scores ≥1). Overall accuracies were comparable amongst all band-sets (Kappa = 0.71-0.77). However, mesquite omission rates were unacceptably high (21.3%) when using all eight bands relative to the simulated standard multispectral band-set (9.5%) and the band-set informed by VIP scores (11.9%). An incremental cover evaluation on the latter identified most omissions to be for objects <16 m². Mesquite omissions reduced to 2.6% and overall accuracy significantly improved (Kappa = 0.88) when these objects were left out of the confusion matrix calculations. Very high mapping accuracy of objects >16 m² allows application for mapping mesquite shrubs and coalesced stands, the former not previously possible, even with 3 m resolution hyperspectral imagery.
WV2 imagery offers excellent portability potential for detecting other species where spectral/spatial resolution or coverage has been an impediment. New generation satellite sensors are removing barriers previously preventing widespread adoption of remote sensing technologies in natural resource management.
NASA Astrophysics Data System (ADS)
Kamal, Muhammad; Johansen, Kasper
2017-10-01
Effective mangrove management requires a spatially explicit mangrove tree crown map as a basis for ecosystem diversity study and health assessment. Accuracy assessment is an integral part of any mapping activity to measure the effectiveness of the classification approach. In geographic object-based image analysis (GEOBIA) the assessment of the geometric accuracy (shape, symmetry and location) of the image objects created by image segmentation is required. In this study we used an explicit area-based accuracy assessment to measure the degree of similarity between the classification results and reference data from different aspects, including overall quality (OQ), user's accuracy (UA), producer's accuracy (PA) and overall accuracy (OA). We developed a rule set to delineate the mangrove tree crowns using a WorldView-2 pan-sharpened image. The reference map was obtained by visual delineation of the mangrove tree crown boundaries from a very high-spatial resolution aerial photograph (7.5 cm pixel size). Ten random points with a 10 m radius circular buffer were created to calculate the area-based accuracy assessment. The resulting circular polygons were used to clip both the classified image objects and reference map for area comparisons. In this case, the area-based accuracy assessment resulted in 64% and 68% for the OQ and OA, respectively. The overall quality shows the class-related area accuracy: the area correctly classified as tree crowns was 64% of the total tree crown area. On the other hand, the overall accuracy of 68% was calculated as the percentage of all correctly classified classes (tree crowns and canopy gaps) in comparison to the total class area (the entire image). Overall, the area-based accuracy assessment was simple to implement and easy to interpret. It also shows explicitly the omission and commission error variations of object boundary delineation with colour coded polygons.
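Assuming the classification and the reference delineation have both been rasterised to boolean crown masks, the two area-based measures can be sketched as below; OQ is written in the common GEOBIA intersection-over-union form, which matches the abstract's description of correctly classified crown area relative to total crown area:

```python
import numpy as np

def area_based_accuracy(classified, reference):
    """classified, reference: boolean rasters (True = tree crown).
    OQ: correctly classified crown area over the union of mapped and
    reference crown area. OA: fraction of all pixels labelled correctly
    (both crown and canopy-gap classes)."""
    inter = np.logical_and(classified, reference).sum()
    union = np.logical_or(classified, reference).sum()
    oq = inter / union
    oa = (classified == reference).mean()
    return oq, oa

# tiny illustrative masks: one crown pixel agreed, one over-mapped
classified = np.array([[True, True, False], [False, False, False]])
reference = np.array([[True, False, False], [False, False, False]])
oq, oa = area_based_accuracy(classified, reference)
```

In practice both masks would first be clipped to the circular buffer polygons described above before computing the areas.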
Structures data collection for The National Map using volunteered geographic information
Poore, Barbara S.; Wolf, Eric B.; Korris, Erin M.; Walter, Jennifer L.; Matthews, Greg D.
2012-01-01
The U.S. Geological Survey (USGS) has historically sponsored volunteered data collection projects to enhance its topographic paper and digital map products. This report describes one phase of an ongoing project to encourage volunteers to contribute data to The National Map using online editing tools. The USGS recruited students studying geographic information systems (GIS) at the University of Colorado Denver and the University of Denver in the spring of 2011 to add data on structures - manmade features such as schools, hospitals, and libraries - to four quadrangles covering metropolitan Denver. The USGS customized a version of the online Potlatch editor created by the OpenStreetMap project and populated it with 30 structure types drawn from the Geographic Names Information System (GNIS), a USGS database of geographic features. The students corrected the location and attributes of these points and added information on structures that were missing. There were two rounds of quality control. Student volunteers reviewed each point, and an in-house review of each point by the USGS followed. Nine-hundred and thirty-eight structure points were initially downloaded from the USGS database. Editing and quality control resulted in 1,214 structure points that were subsequently added to The National Map. A post-project analysis of the data shows that after student edit and peer review, 92 percent of the points contributed by volunteers met National Map Accuracy Standards for horizontal accuracy. Lessons from this project will be applied to later phases. These include: simplifying editing tasks and the user interfaces, stressing to volunteers the importance of adding structures that are missing, and emphasizing the importance of conforming to editorial guidelines for formatting names and addresses of structures. The next phase of the project will encompass the entire State of Colorado and will allow any citizen to contribute structures data. 
Volunteers will benefit from this project by engaging with their local geography and contributing to a national resource of topographic information that remains in the public domain for anyone to download.
On the influence of zero-padding on the nonlinear operations in Quantitative Susceptibility Mapping
Eskreis-Winkler, Sarah; Zhou, Dong; Liu, Tian; Gupta, Ajay; Gauthier, Susan A.; Wang, Yi; Spincemaille, Pascal
2016-01-01
Purpose: Zero padding is a well-studied interpolation technique that improves image visualization without increasing image resolution. This interpolation is often performed as a last step before images are displayed on clinical workstations. Here, we seek to demonstrate the importance of zero padding before rather than after performing non-linear post-processing algorithms, such as Quantitative Susceptibility Mapping (QSM). To do so, we evaluate apparent spatial resolution, relative error and depiction of multiple sclerosis (MS) lesions on images that were zero padded prior to, in the middle of, and after the application of the QSM algorithm. Materials and Methods: High resolution gradient echo (GRE) data were acquired on twenty MS patients, from which low resolution data were derived using k-space cropping. Pre-, mid-, and post-zero padded QSM images were reconstructed from these low resolution data by zero padding prior to field mapping, after field mapping, and after susceptibility mapping, respectively. Using high resolution QSM as the gold standard, apparent spatial resolution, relative error, and image quality of the pre-, mid-, and post-zero padded QSM images were measured and compared. Results: Both the accuracy and apparent spatial resolution of the pre-zero padded QSM was higher than that of mid-zero padded QSM (p < 0.001; p < 0.001), which was higher than that of post-zero padded QSM (p < 0.001; p < 0.001). The image quality of pre-zero padded reconstructions was higher than that of mid- and post-zero padded reconstructions (p = 0.004; p < 0.001). Conclusion: Zero padding of the complex GRE data prior to nonlinear susceptibility mapping improves image accuracy and apparent resolution compared to zero padding afterwards. It also provides better delineation of MS lesion geometry, which may improve lesion subclassification and disease monitoring in MS patients.
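Zero padding in k-space amounts to sinc interpolation of the image onto a finer display grid. A minimal NumPy sketch (the upsampling factor and test image are illustrative; even matrix sizes are assumed):

```python
import numpy as np

def zero_pad_kspace(image, factor=2):
    """Upsample a 2-D image by zero padding its k-space representation.
    This interpolates the displayed grid without adding true resolution."""
    ny, nx = image.shape
    k = np.fft.fftshift(np.fft.fft2(image))        # move DC term to the centre
    pady, padx = ny * (factor - 1) // 2, nx * (factor - 1) // 2
    k = np.pad(k, ((pady, pady), (padx, padx)))    # zeros at high frequencies
    out = np.fft.ifft2(np.fft.ifftshift(k)) * factor**2
    return out.real                                # rescale preserves intensity
```

The point of the study above is where this step sits in the pipeline: applying it to the complex GRE data before the nonlinear QSM steps is not equivalent to applying it to the final susceptibility map.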
Orbit Determination Issues for Libration Point Orbits
NASA Technical Reports Server (NTRS)
Beckman, Mark; Bauer, Frank (Technical Monitor)
2002-01-01
Libration point mission designers require knowledge of orbital accuracy for a variety of analyses including station keeping control strategies, transfer trajectory design, and formation and constellation control. Past publications have detailed orbit determination (OD) results from individual libration point missions. This paper collects both published and unpublished results from four previous libration point missions (ISEE-3 (International Sun-Earth Explorer-3), SOHO (Solar and Heliospheric Observatory), ACE (Advanced Composition Explorer) and MAP (Microwave Anisotropy Probe)) supported by Goddard Space Flight Center's Guidance, Navigation & Control Center. The results of those missions are presented along with OD issues specific to each mission. All past missions have been limited to ground based tracking through NASA ground sites using standard range and Doppler measurement types. Advanced technology is enabling other OD options including onboard navigation using onboard attitude sensors and the use of the Very Long Baseline Interferometry (VLBI) measurement Delta Differenced One-Way Range (DDOR). Both options potentially enable missions to reduce coherent dedicated tracking passes while maintaining orbital accuracy. With the increased projected loading of the DSN (Deep Space Network), missions must find alternatives to the standard OD scenario.
Efficient nonparametric n -body force fields from machine learning
NASA Astrophysics Data System (ADS)
Glielmo, Aldo; Zeni, Claudio; De Vita, Alessandro
2018-05-01
We provide a definition and explicit expressions for n-body Gaussian process (GP) kernels, which can learn any interatomic interaction occurring in a physical system, up to n-body contributions, for any value of n. The series is complete, as it can be shown that the "universal approximator" squared exponential kernel can be written as a sum of n-body kernels. These recipes enable the choice of optimally efficient force models for each target system, as confirmed by extensive testing on various materials. We furthermore describe how the n-body kernels can be "mapped" on equivalent representations that provide database-size-independent predictions and are thus crucially more efficient. We explicitly carry out this mapping procedure for the first nontrivial (three-body) kernel of the series, and we show that this reproduces the GP-predicted forces with meV/Å accuracy while being orders of magnitude faster. These results pave the way to using novel force models (here named "M-FFs") that are computationally as fast as their corresponding standard parametrized n-body force fields, while retaining the nonparametric character, the ease of training and validation, and the accuracy of the best recently proposed machine-learning potentials.
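The database-size dependence that motivates the "mapping" step can be seen in a plain GP regressor: every prediction involves a kernel evaluation against the whole training set. A generic sketch with the squared exponential kernel mentioned above (the data are synthetic, not atomic configurations):

```python
import numpy as np

def se_kernel(X1, X2, ell=1.0):
    """Squared exponential kernel, the 'universal approximator' of the text."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * ell ** 2))

def gp_predict(X_train, y_train, X_test, ell=1.0, noise=1e-10):
    """GP posterior mean: prediction cost scales with the training-database
    size, which is exactly what mapping onto a fixed representation avoids."""
    K = se_kernel(X_train, X_train, ell) + noise * np.eye(len(X_train))
    return se_kernel(X_test, X_train, ell) @ np.linalg.solve(K, y_train)

# synthetic 1-D example: the GP interpolates its training data
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 2.0])
pred = gp_predict(X, y, np.array([[1.0]]))
```

A "mapped" force field tabulates the learned function once, so evaluation no longer touches the training set at all.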
Baradez, Marc-Olivier; Marshall, Damian
2011-01-01
The transition from traditional culture methods towards bioreactor based bioprocessing to produce cells in commercially viable quantities for cell therapy applications requires the development of robust methods to ensure the quality of the cells produced. Standard methods for measuring cell quality parameters such as viability provide only limited information making process monitoring and optimisation difficult. Here we describe a 3D image-based approach to develop cell distribution maps which can be used to simultaneously measure the number, confluency and morphology of cells attached to microcarriers in a stirred tank bioreactor. The accuracy of the cell distribution measurements is validated using in silico modelling of synthetic image datasets and is shown to have an accuracy >90%. Using the cell distribution mapping process and principal component analysis we show how cell growth can be quantitatively monitored over a 13 day bioreactor culture period and how changes to manufacture processes such as initial cell seeding density can significantly influence cell morphology and the rate at which cells are produced. Taken together, these results demonstrate how image-based analysis can be incorporated in cell quality control processes facilitating the transition towards bioreactor based manufacture for clinical grade cells.
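The principal component step used above to monitor growth can be sketched with a plain SVD; the per-timepoint feature matrix below is synthetic and its column meanings are only assumed for illustration:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project rows of X (e.g. per-day cell-distribution metrics such as
    cell count, confluency, and a morphology measure) onto principal axes."""
    Xc = X - X.mean(axis=0)                        # centre each metric
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                # one score row per timepoint

# synthetic 13-day culture: a dominant growth trend plus small noise
rng = np.random.default_rng(0)
days = np.arange(13.0)
X = np.column_stack([days, 2 * days, 0.1 * rng.standard_normal(13)])
scores = pca_scores(X)
```

Plotting the first-component scores against culture day then gives a single trajectory summarising growth, which is how conditions such as seeding density can be compared quantitatively.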
Mondal, Suman B.; Gao, Shengkui; Zhu, Nan; Sudlow, Gail P.; Liang, Kexian; Som, Avik; Akers, Walter J.; Fields, Ryan C.; Margenthaler, Julie; Liang, Rongguang; Gruev, Viktor; Achilefu, Samuel
2015-01-01
The inability to identify microscopic tumors and assess surgical margins in real-time during oncologic surgery leads to incomplete tumor removal, increases the chances of tumor recurrence, and necessitates costly repeat surgery. To overcome these challenges, we have developed a wearable goggle augmented imaging and navigation system (GAINS) that can provide accurate intraoperative visualization of tumors and sentinel lymph nodes in real-time without disrupting normal surgical workflow. GAINS projects both near-infrared fluorescence from tumors and the natural color images of tissue onto a head-mounted display without latency. Aided by tumor-targeted contrast agents, the system detected tumors in subcutaneous and metastatic mouse models with high accuracy (sensitivity = 100%, specificity = 98% ± 5% standard deviation). Human pilot studies in breast cancer and melanoma patients using a near-infrared dye show that the GAINS detected sentinel lymph nodes with 100% sensitivity. Clinical use of the GAINS to guide tumor resection and sentinel lymph node mapping promises to improve surgical outcomes, reduce rates of repeat surgery, and improve the accuracy of cancer staging.
Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.
Xue, Wufeng; Zhang, Lei; Mou, Xuanqin; Bovik, Alan C
2014-02-01
Faithfully evaluating the perceptual quality of output images is an important task in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). Image gradients are sensitive to image distortions, and different local structures in a distorted image suffer different degrees of degradation. This motivates us to explore the use of the global variation of a gradient-based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images, combined with a novel pooling strategy (the standard deviation of the GMS map), can accurately predict perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm.
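The GMS map and its standard-deviation pooling are compact enough to sketch. The version below uses np.gradient as a stand-in for the Prewitt filters of the published method, so scores will differ slightly from the reference implementation; the stability constant follows the published setting for images scaled to [0, 1]:

```python
import numpy as np

def gmsd(ref, dist, c=0.0026):
    """Gradient Magnitude Similarity Deviation (sketch).

    The paper uses Prewitt filters; np.gradient is a simple stand-in
    gradient operator here. c is the stability constant.
    """
    ref = np.asarray(ref, dtype=float)
    dist = np.asarray(dist, dtype=float)
    gy_r, gx_r = np.gradient(ref)
    gy_d, gx_d = np.gradient(dist)
    gm_r = np.hypot(gx_r, gy_r)         # gradient magnitude, reference
    gm_d = np.hypot(gx_d, gy_d)         # gradient magnitude, distorted
    gms = (2 * gm_r * gm_d + c) / (gm_r**2 + gm_d**2 + c)
    return gms.std()                    # std pooling -> GMSD score

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(gmsd(img, img))                   # identical images -> 0.0
```

A score of 0 means the two images have identical gradient structure; larger values indicate stronger, more spatially varied distortion.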
Methodology, and the Statistician’s Responsibility for BOTH Accuracy and Relevance
1975-12-01
John W. Tukey is also Associate Executive Director-Research, Bell Telephone Laboratories. Controlling office: Office of Naval Research, Arlington, VA 22217. [The remainder of the scanned report documentation page is OCR-garbled and unrecoverable.] Keywords: level or change, least squares, fit PLUS residuals, Phillips curve, patch maps, standardization.
Analyzing thematic maps and mapping for accuracy
Rosenfield, G.H.
1982-01-01
Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification.
The values in the cells of the tables might be the counts of correct classifications or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for individual categories, the entire map, or both. More rigorous analyses have used data transformations and/or two-way classification analysis of variance. A more sophisticated step in data analysis would be to use the entire classification error matrices with the methods of discrete multivariate analysis or of multivariate analysis of variance.
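The standard summary statistics derived from such an error matrix (overall accuracy, plus the per-category complements of commission and omission error) can be computed directly. A small numpy sketch with a made-up 3-class matrix, rows as interpretation and columns as verification per the convention above:

```python
import numpy as np

def accuracy_report(matrix):
    """Overall, user's and producer's accuracy from an error matrix.

    Rows = interpretation (map), columns = verification (reference).
    """
    m = np.asarray(matrix, dtype=float)
    diag = np.diag(m)
    overall = diag.sum() / m.sum()
    users = diag / m.sum(axis=1)       # 1 - commission error (per row)
    producers = diag / m.sum(axis=0)   # 1 - omission error (per column)
    return overall, users, producers

# Hypothetical 3-category error matrix (not from the paper)
m = [[50,  4,  2],
     [ 6, 40,  5],
     [ 3,  2, 38]]
overall, users, producers = accuracy_report(m)
print(round(overall, 3))  # 0.853
```

The off-diagonal row and column sums behind `users` and `producers` are exactly the commission and omission errors described in the abstract.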
Importance of Calibration Method in Central Blood Pressure for Cardiac Structural Abnormalities.
Negishi, Kazuaki; Yang, Hong; Wang, Ying; Nolan, Mark T; Negishi, Tomoko; Pathan, Faraz; Marwick, Thomas H; Sharman, James E
2016-09-01
Central blood pressure (CBP) independently predicts cardiovascular risk, but calibration methods may affect accuracy of central systolic blood pressure (CSBP). Standard central systolic blood pressure (Stan-CSBP) from peripheral waveforms is usually derived with calibration using brachial SBP and diastolic BP (DBP). However, calibration using oscillometric mean arterial pressure (MAP) and DBP (MAP-CSBP) is purported to provide more accurate representation of true invasive CSBP. This study sought to determine which derived CSBP could more accurately discriminate cardiac structural abnormalities. A total of 349 community-based patients with risk factors (71±5 years, 161 males) had CSBP measured by brachial oscillometry (Mobil-O-Graph, IEM GmbH, Stolberg, Germany) using 2 calibration methods: MAP-CSBP and Stan-CSBP. Left ventricular hypertrophy (LVH) and left atrial dilatation (LAD) were measured based on standard guidelines. MAP-CSBP was higher than Stan-CSBP (149±20 vs. 128±15 mm Hg, P < 0.0001). Although they were modestly correlated (rho = 0.74, P < 0.001), the Bland-Altman plot demonstrated a large bias (21 mm Hg) and limits of agreement (24 mm Hg). In receiver operating characteristic (ROC) curve analyses, MAP-CSBP significantly better discriminated LVH compared with Stan-CSBP (area under the curve (AUC) 0.66 vs. 0.59, P = 0.0063) and brachial SBP (0.62, P = 0.027). Continuous net reclassification improvement (NRI) (P < 0.001) and integrated discrimination improvement (IDI) (P < 0.001) corroborated superior discrimination of LVH by MAP-CSBP. Similarly, MAP-CSBP better distinguished LAD than Stan-CSBP (AUC 0.63 vs. 0.56, P = 0.005) and conventional brachial SBP (0.58, P = 0.006), whereas Stan-CSBP provided no better discrimination than conventional brachial BP (P = 0.09). CSBP is calibration dependent and when oscillometric MAP and DBP are used, the derived CSBP is a better discriminator for cardiac structural abnormalities.
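The ROC comparisons above rest on the area under the curve, which can be computed nonparametrically as a Mann-Whitney statistic: the probability that a randomly chosen case with the abnormality scores higher than one without. A minimal sketch with hypothetical pressure readings, not the study's data:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """ROC area via the Mann-Whitney U statistic: the probability that
    a randomly chosen positive case (e.g. with LVH) has a higher
    reading than a randomly chosen negative case, ties counted half."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

lvh = [150, 160, 145, 170]       # hypothetical CSBP values, with LVH
no_lvh = [130, 140, 148, 125]    # hypothetical CSBP values, without LVH
print(auc(lvh, no_lvh))  # 0.9375
```

An AUC of 0.5 means no discrimination, which is why the small AUC differences reported above (0.66 vs. 0.59) still required formal significance testing.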
© American Journal of Hypertension, Ltd 2016. All rights reserved.
Empirical forecast of quiet time ionospheric Total Electron Content maps over Europe
NASA Astrophysics Data System (ADS)
Badeke, Ronny; Borries, Claudia; Hoque, Mainul M.; Minkwitz, David
2018-06-01
An accurate forecast of the ionospheric Total Electron Content (TEC) is helpful for investigating space weather influences on the ionosphere and on technical applications such as satellite-receiver radio links. The purpose of this work is to compare four empirical methods for a 24-h forecast of vertical TEC maps over Europe under geomagnetically quiet conditions. TEC map data are obtained from the Space Weather Application Center Ionosphere (SWACI) and the Universitat Politècnica de Catalunya (UPC). The time-series methods, namely the Standard Persistence Model (SPM), a 27-day median model (MediMod), and a Fourier series expansion, are compared against maps for the entire year of 2015. As a representative of the climatological coefficient models, the forecast performance of the Global Neustrelitz TEC model (NTCM-GL) is also investigated. Time periods of magnetic storms, which are identified with the Dst index, are excluded from the validation. When the forecasts are computed from the most recent maps, the time-series methods perform slightly better than the coefficient model NTCM-GL. The benefit of NTCM-GL is its independence from observational TEC data. Amongst the time-series methods mentioned, MediMod delivers the best overall performance regarding accuracy and data-gap handling. Quiet-time SWACI maps can be forecasted accurately and in real time by the MediMod time-series approach.
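A 27-day median model of the MediMod type can be sketched as a pixel-wise median over the preceding days' maps at the same epoch. This is only an illustration of the idea; the operational model's gap handling and other details are simplified away:

```python
import numpy as np

def median_forecast(tec_history):
    """Forecast the next day's TEC maps as the pixel-wise median of
    the previous 27 days (a sketch of the MediMod idea).

    tec_history: (27, n_epochs, ny, nx) array, one map per epoch per day.
    Returns an (n_epochs, ny, nx) forecast for the next day.
    """
    h = np.asarray(tec_history, dtype=float)
    assert h.shape[0] == 27, "expects exactly 27 days of history"
    # np.nanmedian tolerates missing (NaN) map pixels, a crude form
    # of the data-gap handling mentioned above
    return np.nanmedian(h, axis=0)

history = np.ones((27, 24, 4, 4))   # toy constant ionosphere, 24 epochs/day
history[0] = 5.0                    # one disturbed day barely shifts the median
forecast = median_forecast(history)
print(forecast.shape)  # (24, 4, 4)
```

The 27-day window matches one solar rotation, and the median makes the forecast robust against isolated disturbed days, consistent with the storm-exclusion strategy described above.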
A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.
Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei
2018-01-01
Appropriate site-specific weed management (SSWM) is crucial to ensure crop yields. Within SSWM of large-scale areas, remote sensing is a key technology for providing accurate weed distribution information. Compared with satellite and piloted aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing high spatial resolution imagery, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field located in South China. A fully convolutional network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and skip architecture was applied to increase the prediction accuracy. After that, the performance of the FCN architecture was compared with a patch-based CNN algorithm and a pixel-based CNN method. Experimental results showed that our FCN method outperformed the others, both in terms of accuracy and efficiency. The overall accuracy of the FCN approach was up to 0.935 and the accuracy for weed recognition was 0.883, which means that this algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
Accuracy and precision of stream reach water surface slopes estimated in the field and from maps
Isaak, D.J.; Hubert, W.A.; Krueger, K.L.
1999-01-01
The accuracy and precision of five tools used to measure stream water surface slope (WSS) were evaluated. Water surface slopes estimated in the field with a clinometer or from topographic maps used in conjunction with a map wheel or geographic information system (GIS) were significantly higher than WSS estimated in the field with a surveying level (biases of 34, 41, and 53%, respectively). Accuracy of WSS estimates obtained with an Abney level did not differ from surveying level estimates, but conclusions regarding the accuracy of Abney levels and clinometers were weakened by intratool variability. The surveying level estimated WSS most precisely (coefficient of variation [CV] = 0.26%), followed by the GIS (CV = 1.87%), map wheel (CV = 6.18%), Abney level (CV = 13.68%), and clinometer (CV = 21.57%). Estimates of WSS measured in the field with an Abney level and estimated for the same reaches with a GIS used in conjunction with 1:24,000-scale topographic maps were significantly correlated (r = 0.86), but there was a tendency for the GIS to overestimate WSS. Detailed accounts of the methods used to measure WSS and recommendations regarding the measurement of WSS are provided.
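The precision figures quoted are coefficients of variation from repeated measurements. A minimal sketch, with hypothetical repeated slope readings rather than the study's data:

```python
import numpy as np

def percent_cv(values):
    """Coefficient of variation (%) = 100 * sample std / mean, the
    precision statistic quoted above for repeated slope measurements."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Hypothetical repeated slope readings (percent grade) for one reach
surveying = [1.52, 1.53, 1.52, 1.53]   # tight cluster -> low CV
clinometer = [1.9, 2.6, 1.4, 2.2]      # wide scatter -> high CV
print(percent_cv(surveying) < percent_cv(clinometer))  # True
```

A lower CV means tighter repeatability, which is how the surveying level (0.26%) outranks the clinometer (21.57%) in the ordering above.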
Multistrategy Self-Organizing Map Learning for Classification Problems
Hasan, S.; Shamsuddin, S. M.
2011-01-01
Multistrategy learning of Self-Organizing Map (SOM) and Particle Swarm Optimization (PSO) is commonly implemented in the clustering domain due to its capabilities in handling complex data characteristics. However, some of these multistrategy learning architectures have weaknesses, such as slow convergence and a tendency to become trapped in local minima. This paper proposes multistrategy learning of the SOM lattice structure with Particle Swarm Optimisation, called ESOMPSO, for solving various classification problems. The enhancement of the SOM lattice structure is implemented by introducing a new hexagon formulation for better mapping quality in data classification and labeling. The weights of the enhanced SOM are optimised using PSO to obtain better output quality. The proposed method has been tested on various standard datasets with substantial comparisons against existing SOM networks and various distance measurements. The results show that our proposed method yields promising results, with better average accuracy and quantisation errors than the other methods, as well as convincing significance tests. PMID:21876686
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator)
1984-01-01
The geometric quality of TM film and digital products is evaluated by making selective photomeasurements and by measuring the coordinates of known features on both the TM products and map products. These paired observations are related using a standard linear least squares regression approach. Using regression equations and coefficients developed from 225 (TM film product) and 20 (TM digital product) control points, map coordinates of test points are predicted. Residual error vectors were computed, and analysis of variance (ANOVA) was performed on the east and north residuals using nine image segments (blocks) as treatments. Based on the root mean square error of the 223 (TM film product) and 22 (TM digital product) test points, users of TM data can expect the planimetric accuracy of mapped points to be within 91 and 117 meters for the film products, and within 12 and 14 meters for the digital products.
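The regression step can be illustrated with a least-squares affine fit from image to map coordinates, summarized by an RMSE over test points. This generic formulation is a stand-in for the exact regression model used in the report:

```python
import numpy as np

def fit_affine(image_xy, map_xy):
    """Least-squares affine transform from image to map coordinates
    (a generic stand-in for the regression described above)."""
    A = np.column_stack([image_xy, np.ones(len(image_xy))])
    coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)
    return coeffs  # (3, 2): applied as [x, y, 1] @ coeffs

def rmse(pred, truth):
    """Root mean square error over residual vectors."""
    d = np.asarray(pred) - np.asarray(truth)
    return float(np.sqrt((d ** 2).mean()))

# Toy control points: map = image scaled by 30 m/pixel plus an offset
img = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
true_map = img * 30.0 + [500000.0, 4000000.0]
coeffs = fit_affine(img, true_map)
pred = np.column_stack([img, np.ones(len(img))]) @ coeffs
print(rmse(pred, true_map) < 1e-6)  # exact affine relation recovered -> True
```

In practice the fit comes from control points and the RMSE is evaluated on separate test points, as in the 225/223 and 20/22 split described above.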
Assessment of a visually guided autonomous exploration robot
NASA Astrophysics Data System (ADS)
Harris, C.; Evans, R.; Tidey, E.
2008-10-01
A system has been developed to enable a robot vehicle to autonomously explore and map an indoor environment using only visual sensors. The vehicle is equipped with a single camera, whose output is wirelessly transmitted to an off-board standard PC for processing. Visual features within the camera imagery are extracted and tracked, and their 3D positions are calculated using a Structure from Motion algorithm. As the vehicle travels, obstacles in its surroundings are identified and a map of the explored region is generated. This paper discusses suitable criteria for assessing the performance of the system by computer-based simulation and practical experiments with a real vehicle. Performance measures identified include the positional accuracy of the 3D map and the vehicle's location, the efficiency and completeness of the exploration and the system reliability. Selected results are presented and the effect of key system parameters and algorithms on performance is assessed. This work was funded by the Systems Engineering for Autonomous Systems (SEAS) Defence Technology Centre established by the UK Ministry of Defence.
APPLICATION OF A "VIRTUAL FIELD REFERENCE DATABASE" TO ASSESS LAND-COVER MAP ACCURACIES
An accuracy assessment was performed for the Neuse River Basin, NC land-cover/use
(LCLU) mapping results using a "Virtual Field Reference Database (VFRDB)". The VFRDB was developed using field measurement and digital imagery (camera) data collected at 1,409 sites over a perio...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreibmann, E; Shu, H; Cordova, J
Purpose: We report on an automated segmentation algorithm for defining radiation therapy target volumes using spectroscopic MR images (sMRI) acquired at a nominal voxel resolution of 100 microliters. Methods: Whole-brain sMRI combining 3D echo-planar spectroscopic imaging, generalized auto-calibrating partially-parallel acquisitions, and elliptical k-space encoding was conducted on a 3T MRI scanner with a 32-channel head coil array. Metabolite maps generated include choline (Cho), creatine (Cr), and N-acetylaspartate (NAA), as well as Cho/NAA, Cho/Cr, and NAA/Cr ratio maps. Automated segmentation was achieved by concomitantly considering sMRI metabolite maps with standard contrast-enhanced (CE) imaging in a pipeline that first uses the water signal for skull stripping. Subsequently, an initial blob of the tumor region is identified by searching for regions of FLAIR abnormality that also display reduced NAA activity, using a mean ratio correlation and morphological filters. These regions are used as the starting point for a geodesic level-set refinement that adapts the initial blob to the fine details specific to each metabolite. Results: Accuracy of the segmentation model was tested on a cohort of 12 patients with sMRI datasets acquired pre-, mid-, and post-treatment, providing a broad range of enhancement patterns. Compared to classical imaging, where heterogeneity in tumor appearance and shape posed a greater challenge to the algorithm, regions of abnormal activity were easily detected in the sMRI metabolite maps when combining the detail available in the standard imaging with the local enhancement produced by the metabolites. Results can be imported into treatment planning, leading in general to an increase in the target volumes (GTV60) when using sMRI+CE MRI compared to the standard CE MRI alone.
Conclusion: Integration of automated segmentation of sMRI metabolite maps into planning is feasible and will likely streamline acceptance of this new acquisition modality in clinical practice.
Evolution of tsunami warning systems and products.
Bernard, Eddie; Titov, Vasily
2015-10-28
Each year, about 60 000 people and $4 billion (US$) in assets are exposed to the global tsunami hazard. Accurate and reliable tsunami warning systems have been shown to provide a significant defence for this flooding hazard. However, the evolution of warning systems has been influenced by two processes: deadly tsunamis and available technology. In this paper, we explore the evolution of science and technology used in tsunami warning systems, the evolution of their products using warning technologies, and offer suggestions for a new generation of warning products, aimed at the flooding nature of the hazard, to reduce future tsunami impacts on society. We conclude that coastal communities would be well served by receiving three standardized, accurate, real-time tsunami warning products, namely (i) tsunami energy estimate, (ii) flooding maps and (iii) tsunami-induced harbour current maps to minimize the impact of tsunamis. Such information would arm communities with vital flooding guidance for evacuations and port operations. The advantage of global standardized flooding products delivered in a common format is efficiency and accuracy, which leads to effectiveness in promoting tsunami resilience at the community level. © 2015 The Authors.
Sentinel node mapping for gastric cancer: a prospective multicenter trial in Japan.
Kitagawa, Yuko; Takeuchi, Hiroya; Takagi, Yu; Natsugoe, Shoji; Terashima, Masanori; Murakami, Nozomu; Fujimura, Takashi; Tsujimoto, Hironori; Hayashi, Hideki; Yoshimizu, Nobunari; Takagane, Akinori; Mohri, Yasuhiko; Nabeshima, Kazuhito; Uenosono, Yoshikazu; Kinami, Shinichi; Sakamoto, Junichi; Morita, Satoshi; Aikou, Takashi; Miwa, Koichi; Kitajima, Masaki
2013-10-10
Complicated gastric lymphatic drainage potentially undermines the utility of sentinel node (SN) biopsy in patients with gastric cancer. Encouraged by several favorable single-institution reports, we conducted a multicenter, single-arm, phase II study of SN mapping that used a standardized dual tracer endoscopic injection technique. Patients with previously untreated cT1 or cT2 gastric adenocarcinomas < 4 cm in gross diameter were eligible for inclusion in this study. SN mapping was performed by using a standardized dual tracer endoscopic injection technique. Following biopsy of the identified SNs, mandatory comprehensive D2 or modified D2 gastrectomy was performed according to current Japanese Gastric Cancer Association guidelines. Among 433 patients who gave preoperative consent, 397 were deemed eligible on the basis of surgical findings. SN biopsy was performed in all patients, and the SN detection rate was 97.5% (387 of 397). Of 57 patients with lymph node metastasis by conventional hematoxylin and eosin staining, 93% (53 of 57) had positive SNs, and the accuracy of nodal evaluation for metastasis was 99% (383 of 387). Only four false-negative SN biopsies were observed, and pathologic analysis revealed that three of those biopsies were pT2 or tumors > 4 cm. We observed no serious adverse effects related to endoscopic tracer injection or the SN mapping procedure. The endoscopic dual tracer method for SN biopsy was confirmed as safe and effective when applied to the superficial, relatively small gastric adenocarcinomas included in this study.
Laba, M.; Downs, R.; Smith, S.; Welsh, S.; Neider, C.; White, S.; Richmond, M.; Philpot, W.; Baveye, P.
2008-01-01
The National Estuarine Research Reserve (NERR) program is a nationally coordinated research and monitoring program that identifies and tracks changes in ecological resources of representative estuarine ecosystems and coastal watersheds. In recent years, attention has focused on using high spatial and spectral resolution satellite imagery to map and monitor wetland plant communities in the NERRs, particularly invasive plant species. The utility of this technology for that purpose has yet to be assessed in detail. To that end, a specific high spatial resolution satellite imagery, QuickBird, was used to map plant communities and monitor invasive plants within the Hudson River NERR (HRNERR). The HRNERR contains four diverse tidal wetlands (Stockport Flats, Tivoli Bays, Iona Island, and Piermont), each with unique water chemistry (i.e., brackish, oligotrophic, and fresh) and, consequently, unique assemblages of plant communities, including three invasive plants (Trapa natans, Phragmites australis, and Lythrum salicaria). A maximum-likelihood classification was used to produce 20-class land cover maps for each of the four marshes within the HRNERR. Conventional contingency tables and a fuzzy set analysis served as a basis for an accuracy assessment of these maps. The overall accuracies, as assessed by the contingency tables, were 73.6%, 68.4%, 67.9%, and 64.9% for Tivoli Bays, Stockport Flats, Piermont, and Iona Island, respectively. Fuzzy assessment tables lead to higher estimates of map accuracies of 83%, 75%, 76%, and 76%, respectively. In general, the open water/tidal channel class was the most accurately mapped class and Scirpus sp. was the least accurately mapped. These encouraging accuracies suggest that high-resolution satellite imagery offers significant potential for the mapping of invasive plant species in estuarine environments. © 2007 Elsevier Inc. All rights reserved.
Machine learning-based dual-energy CT parametric mapping
NASA Astrophysics Data System (ADS)
Su, Kuan-Hao; Kuo, Jung-Wen; Jordan, David W.; Van Hedent, Steven; Klahr, Paul; Wei, Zhouping; Helo, Rose Al; Liang, Fan; Qian, Pengjiang; Pereira, Gisele C.; Rassouli, Negin; Gilkeson, Robert C.; Traughber, Bryan J.; Cheng, Chee-Wai; Muzic, Raymond F., Jr.
2018-06-01
The aim is to develop and evaluate machine learning methods for generating quantitative parametric maps of effective atomic number (Zeff), relative electron density (ρe), mean excitation energy (Ix), and relative stopping power (RSP) from clinical dual-energy CT data. The maps could be used for material identification and radiation dose calculation. Machine learning methods of historical centroid (HC), random forest (RF), and artificial neural networks (ANN) were used to learn the relationship between dual-energy CT input data and ideal output parametric maps calculated for phantoms from the known compositions of 13 tissue substitutes. After training and model selection steps, the machine learning predictors were used to generate parametric maps from independent phantom and patient input data. Precision and accuracy were evaluated using the ideal maps. This process was repeated for a range of exposure doses, and performance was compared to that of the clinically-used dual-energy, physics-based method which served as the reference. The machine learning methods generated more accurate and precise parametric maps than those obtained using the reference method. Their performance advantage was particularly evident when using data from the lowest exposure, one-fifth of a typical clinical abdomen CT acquisition. The RF method achieved the greatest accuracy. In comparison, the ANN method was only 1% less accurate but had much better computational efficiency than RF, being able to produce parametric maps in 15 s. Machine learning methods outperformed the reference method in terms of accuracy and noise tolerance when generating parametric maps, encouraging further exploration of the techniques. Among the methods we evaluated, ANN is the most suitable for clinical use due to its combination of accuracy, excellent low-noise performance, and computational efficiency.
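Of the three learners, the historical centroid (HC) approach is simple enough to sketch: each voxel's dual-energy HU pair inherits the parameters of its nearest known tissue centroid. The centroid and parameter values below are illustrative placeholders, not the paper's calibration data:

```python
import numpy as np

# Toy tissue table: (HU_low, HU_high) centroids with associated
# (Zeff, relative electron density) values -- illustrative only.
centroids = np.array([[-100.0, -90.0],   # adipose-like
                      [  40.0,  45.0],   # soft-tissue-like
                      [ 700.0, 500.0]])  # bone-like
params = np.array([[ 6.4, 0.95],
                   [ 7.5, 1.05],
                   [12.5, 1.45]])        # columns: Zeff, rel. e-density

def historical_centroid(hu_pairs):
    """Nearest-centroid parametric mapping: each voxel's dual-energy
    HU pair gets the parameters of the closest known tissue centroid."""
    x = np.asarray(hu_pairs, dtype=float)
    # Euclidean distance from every voxel to every centroid
    d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
    return params[d.argmin(axis=1)]

voxels = [[-95.0, -88.0], [35.0, 50.0]]
out = historical_centroid(voxels)
print(out)  # adipose-like row, then soft-tissue-like row
```

The RF and ANN predictors replace this hard nearest-centroid assignment with learned regressions, which is what yields their smoother, more accurate maps at low dose.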
Accuracy of three-dimensional multislice view Doppler in diagnosis of morbid adherent placenta
Abdel Moniem, Alaa M.; Ibrahim, Ahmed; Akl, Sherif A.; Aboul-Enen, Loay; Abdelazim, Ibrahim A.
2015-01-01
Objective To detect the accuracy of the three-dimensional multislice view (3D MSV) Doppler in the diagnosis of morbid adherent placenta (MAP). Material and Methods Fifty pregnant women at ≥28 weeks gestation with suspected MAP were included in this prospective study. Two dimensional (2D) trans-abdominal gray-scale ultrasound scan was performed for the subjects to confirm the gestational age, placental location, and findings suggestive of MAP, followed by the 3D power Doppler and then the 3D MSV Doppler to confirm the diagnosis of MAP. Intraoperative findings and histopathology results of removed uteri in cases managed by emergency hysterectomy were compared with preoperative sonographic findings to detect the accuracy of the 3D MSV Doppler in the diagnosis of MAP. Results The 3D MSV Doppler increased the accuracy and predictive values of the diagnostic criteria of MAP compared with the 3D power Doppler. The sensitivity and negative predictive value (NPV) (79.6% and 82.2%, respectively) of crowded vessels over the peripheral sub-placental zone to detect difficult placental separation and considerable intraoperative blood loss in cases of MAP using the 3D power Doppler was increased to 82.6% and 84%, respectively, using the 3D MSV Doppler. In addition, the sensitivity, specificity, and positive predictive value (PPV) (90.9%, 68.8%, and 47%, respectively) of the disruption of the uterine serosa-bladder interface for the detection of emergency hysterectomy in cases of MAP using the 3D power Doppler was increased to 100%, 71.8%, and 50%, respectively, using the 3D MSV Doppler. Conclusion The 3D MSV Doppler is a useful adjunctive tool to the 3D power Doppler or color Doppler to refine the diagnosis of MAP. PMID:26401104
Haufe, William M; Wolfson, Tanya; Hooker, Catherine A; Hooker, Jonathan C; Covarrubias, Yesenia; Schlein, Alex N; Hamilton, Gavin; Middleton, Michael S; Angeles, Jorge E; Hernando, Diego; Reeder, Scott B; Schwimmer, Jeffrey B; Sirlin, Claude B
2017-12-01
To assess and compare the accuracy of magnitude-based magnetic resonance imaging (MRI-M) and complex-based MRI (MRI-C) for estimating hepatic proton density fat fraction (PDFF) in children, using MR spectroscopy (MRS) as the reference standard. A secondary aim was to assess the agreement between MRI-M and MRI-C. This was a HIPAA-compliant, retrospective analysis of data collected in children enrolled in prospective, Institutional Review Board (IRB)-approved studies between 2012 and 2014. Informed consent was obtained from 200 children (ages 8-19 years) who subsequently underwent 3T MR exams that included MRI-M, MRI-C, and T1-independent, T2-corrected, single-voxel stimulated echo acquisition mode (STEAM) MRS. Both MRI methods acquired six echoes at low flip angles. T2*-corrected PDFF parametric maps were generated. PDFF values were recorded from regions of interest (ROIs) drawn on the maps in each of the nine Couinaud segments and three ROIs colocalized to the MRS voxel location. Regression analyses assessing agreement with MRS were performed to evaluate the accuracy of each MRI method, and Bland-Altman and intraclass correlation coefficient (ICC) analyses were performed to assess agreement between the MRI methods. MRI-M and MRI-C PDFF were accurate relative to the colocalized MRS reference standard, with regression intercepts of 0.63% and -0.07%, slopes of 0.998 and 0.975, and proportion-of-explained-variance values (R²) of 0.982 and 0.979, respectively. For individual Couinaud segments and for the whole-liver averages, Bland-Altman biases between MRI-M and MRI-C were small (ranging from 0.04 to 1.11%) and ICCs were high (≥0.978). Both MRI-M and MRI-C accurately estimated hepatic PDFF in children, and high intermethod agreement was observed. Level of Evidence: 1. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2017;46:1641-1647. © 2017 International Society for Magnetic Resonance in Medicine.
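The Bland-Altman agreement analysis mentioned above reduces to the mean difference (bias) and 1.96-standard-deviation limits of agreement. A minimal sketch with hypothetical PDFF pairs, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two
    measurement methods (here e.g. MRI-M vs. MRI-C PDFF values)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()                  # systematic offset between methods
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

mri_m = [5.1, 10.4, 22.0, 3.3, 15.2]   # hypothetical PDFF values (%)
mri_c = [5.0, 10.1, 21.5, 3.1, 14.9]
bias, (lo, hi) = bland_altman(mri_m, mri_c)
print(round(bias, 2))  # 0.28
```

A small bias with narrow limits, as in the 0.04 to 1.11% range reported above, indicates the two methods can be used interchangeably.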
Javidi, Bahram; Markman, Adam; Rawat, Siddharth; O'Connor, Timothy; Anand, Arun; Andemariam, Biree
2018-05-14
We present a spatio-temporal analysis of cell membrane fluctuations to distinguish healthy patients from patients with sickle cell disease. A video hologram containing either healthy red blood cells (h-RBCs) or sickle cell disease red blood cells (SCD-RBCs) was recorded using a low-cost, compact, 3D printed shearing interferometer. Reconstructions were created for each hologram frame (time steps), forming a spatio-temporal data cube. Features were extracted by computing the standard deviations and the mean of the height fluctuations over time and for every location on the cell membrane, resulting in two-dimensional standard deviation and mean maps, followed by taking the standard deviations of these maps. The optical flow algorithm was used to estimate the apparent motion fields between subsequent frames (reconstructions). The standard deviation of the magnitude of the optical flow vectors across all frames was then computed. In addition, seven morphological cell (spatial) features based on optical path length were extracted from the cells to further improve the classification accuracy. A random forest classifier was trained to perform cell identification to distinguish between SCD-RBCs and h-RBCs. To the best of our knowledge, this is the first report of machine learning assisted cell identification and diagnosis of sickle cell disease based on cell membrane fluctuations and morphology using both spatio-temporal and spatial analysis.
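The spatio-temporal feature extraction described above (per-pixel temporal standard deviation and mean maps, then the standard deviation of each map as a scalar feature) can be sketched as follows; the data cube here is random and purely illustrative, not reconstructed hologram data:

```python
# Hedged sketch of the std-map / mean-map feature idea, with an invented
# (time, y, x) height-fluctuation cube in place of hologram reconstructions.
import numpy as np

rng = np.random.default_rng(0)
cube = rng.normal(size=(30, 16, 16))        # hypothetical (time, y, x) heights

std_map = cube.std(axis=0)                  # per-pixel temporal std of height
mean_map = cube.mean(axis=0)                # per-pixel temporal mean of height
features = [std_map.std(), mean_map.std()]  # scalar features: std of each map
```

In the paper these scalars, together with optical-flow and morphological features, feed a random forest classifier.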
NASA Astrophysics Data System (ADS)
Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans
The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map from the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved, with correlation values > 0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, resulting in correlation values < 0.5. Although top-down disaggregation of traffic emissions generally exhibits low accuracy, the accuracy is significantly higher in compact cities and might be further improved by applying a correction factor for the city center. Therefore, the method can be used by local environmental authorities in cities with limited resources and little knowledge of the pollution situation to obtain an overview of the spatial distribution of the emissions generated by traffic activities.
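The numerical comparison of a top-down map against a bottom-up reference reduces to a cell-wise Pearson correlation over co-registered grids; a minimal sketch with invented emission values (not the study's indicators):

```python
# Hedged sketch: cell-wise correlation between two co-registered emission
# grids; the 3x3 emission values are invented for illustration.
import numpy as np

def map_correlation(top_down, bottom_up):
    """Pearson r between two co-registered emission grids (cell-wise)."""
    a = np.asarray(top_down, float).ravel()
    b = np.asarray(bottom_up, float).ravel()
    return np.corrcoef(a, b)[0, 1]

td = [[1, 2, 3], [2, 4, 6], [3, 6, 9]]                      # top-down grid
bu = [[1.1, 2.2, 2.9], [2.1, 3.8, 6.3], [2.8, 6.2, 8.7]]    # bottom-up grid
r = map_correlation(td, bu)
```

Values of r above 0.8 would correspond to the "compact city" case described above; values below 0.5 to the multi-nuclei case.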
NASA Astrophysics Data System (ADS)
Klaas, U.; Balog, Z.; Nielbock, M.; Müller, T. G.; Linz, H.; Kiss, Cs.
2018-05-01
Aims: Our aims are to determine flux densities and their photometric accuracy for a set of seventeen stars that range in flux from intermediately bright (≲2.5 Jy) to faint (≳5 mJy) in the far-infrared (FIR). We also aim to derive signal-to-noise dependence with flux and time, and compare the results with predictions from the Herschel exposure-time calculation tool. Methods: We obtain aperture photometry from Herschel-PACS high-pass-filtered scan maps and chop/nod observations of the faint stars. The issues of detection limits and sky confusion noise are addressed by comparison of the field-of-view at different wavelengths, by multi-aperture photometry, by special processing of the maps to preserve extended emission, and with the help of large-scale absolute sky brightness maps from AKARI. This photometry is compared with flux-density predictions based on photospheric models for these stars. We obtain a robust noise estimate by fitting the flux distribution per map pixel histogram for the area around the stars, scaling it for the applied aperture size and correcting for noise correlation. Results: For 15 stars we obtain reliable photometry in at least one PACS filter, and for 11 stars we achieve this in all three PACS filters (70, 100, 160 μm). Faintest fluxes, for which the photometry still has good quality, are about 10-20 mJy with scan map photometry. The photometry of seven stars is consistent with models or flux predictions for pure photospheric emission, making them good primary standard candidates. Two stars exhibit source-intrinsic far-infrared excess: β Gem (Pollux), being the host star of a confirmed Jupiter-size exoplanet, due to emission of an associated dust disk, and η Dra due to dust emission in a binary system with a K1 dwarf. The investigation of the 160 μm sky background and environment of four sources reveals significant sky confusion prohibiting the determination of an accurate stellar flux at this wavelength. 
As a good model approximation, for nine stars we obtain scaling factors of the continuum flux models of four PACS fiducial standards with the same or quite similar spectral type. We can verify a linear dependence of signal-to-noise ratio (S/N) with flux and with square root of time over significant ranges. At 160 μm the latter relation is, however, affected by confusion noise. Conclusions: The PACS faint star sample has allowed a comprehensive sensitivity assessment of the PACS photometer. Accurate photometry allows us to establish a set of five FIR primary standard candidates, namely α Ari, ɛ Lep, ω Cap, HD 41047 and 42 Dra, which are 2-20 times fainter than the faintest PACS fiducial standard (γ Dra) with absolute accuracy of <6%. For three of these primary standard candidates, essential stellar parameters are known, meaning that a dedicated flux model code may be run. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.Tables A.3 to A.5 and B.1 to B.3 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/613/A40
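The S/N-versus-time relation mentioned above can be checked by fitting a line through the origin in √t: minimizing Σ(S/Nᵢ − a√tᵢ)² gives a = Σ S/Nᵢ·√tᵢ / Σ tᵢ. A sketch with hypothetical values (all numbers invented):

```python
# Hedged sketch: least-squares fit of S/N = a*sqrt(t) through the origin.
# Times and S/N values are invented, not PACS measurements.
import math

def fit_snr_sqrt_time(times, snrs):
    """Closed-form slope a for S/N = a*sqrt(t) (line through the origin)."""
    num = sum(s * math.sqrt(t) for t, s in zip(times, snrs))
    den = sum(times)
    return num / den

t = [60, 120, 240, 480]            # hypothetical on-source times (s)
snr = [7.7, 11.0, 15.4, 22.0]      # hypothetical S/N, roughly ~ sqrt(t)
a = fit_snr_sqrt_time(t, snr)
```

Deviations from the fitted √t law at 160 μm would flag the confusion-noise floor discussed above.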
Assessment of Version 4 of the SMAP Passive Soil Moisture Standard Product
NASA Technical Reports Server (NTRS)
O'neill, P. O.; Chan, S.; Bindlish, R.; Jackson, T.; Colliander, A.; Dunbar, R.; Chen, F.; Piepmeier, Jeffrey R.; Yueh, S.; Entekhabi, D.;
2017-01-01
NASA's Soil Moisture Active Passive (SMAP) mission launched on January 31, 2015 into a sun-synchronous 6 am/6 pm orbit with an objective to produce global mapping of high-resolution soil moisture and freeze-thaw state every 2-3 days. The SMAP radiometer began acquiring routine science data on March 31, 2015 and continues to operate nominally. SMAP's radiometer-derived standard soil moisture product (L2SMP) provides soil moisture estimates posted on a 36-km fixed Earth grid using brightness temperature observations and ancillary data. A beta-quality version of L2SMP was released to the public in October 2015, Version 3 validated L2SMP soil moisture data were released in May 2016, and Version 4 L2SMP data were released in December 2016. Version 4 data are processed using the same soil moisture retrieval algorithms as previous versions, but now include retrieved soil moisture from both the 6 am descending orbits and the 6 pm ascending orbits. Validation of 19 months of the standard L2SMP product was done for both AM and PM retrievals using in situ measurements from global core cal/val sites. Accuracy of the soil moisture retrievals averaged over the core sites showed that SMAP accuracy requirements are being met.
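Cal/val comparisons of this kind are typically summarized by bias, RMSE, and unbiased RMSE (ubRMSE, where ubRMSE² = RMSE² − bias²), the metric behind SMAP's stated 0.04 m³/m³ requirement. A minimal sketch with invented soil moisture values:

```python
# Hedged sketch of bias / RMSE / ubRMSE as used in soil moisture cal/val.
# Retrieved and in situ values below are invented.
import math

def ubrmse(retrieved, in_situ):
    """Bias, RMSE, and unbiased RMSE (all in m^3/m^3)."""
    d = [r - s for r, s in zip(retrieved, in_situ)]
    bias = sum(d) / len(d)
    rmse = math.sqrt(sum(x * x for x in d) / len(d))
    return bias, rmse, math.sqrt(rmse**2 - bias**2)

sm_ret = [0.22, 0.31, 0.18, 0.27]   # hypothetical SMAP retrievals
sm_obs = [0.20, 0.30, 0.16, 0.28]   # hypothetical in situ measurements
bias, rmse, ub = ubrmse(sm_ret, sm_obs)
```

Removing the bias term is what lets ubRMSE isolate random retrieval error from any systematic offset against the in situ network.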
Accuracy of remotely sensed data: Sampling and analysis procedures
NASA Technical Reports Server (NTRS)
Congalton, R. G.; Oderwald, R. G.; Mead, R. A.
1982-01-01
A review and update of the discrete multivariate analysis techniques used for accuracy assessment is provided, along with a listing of the computer program written to implement them. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is presented, together with the error matrices resulting from the mapping effort of the San Juan National Forest. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is described, as is a proposed method for determining the reliability of change detection between two maps of the same area produced at different times.
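The discrete multivariate techniques referred to above center on the error (confusion) matrix. A minimal sketch of overall accuracy and Cohen's kappa (the KHAT statistic), with an invented 3-class matrix rather than the San Juan data:

```python
# Hedged sketch: overall accuracy and Cohen's kappa from an error matrix
# (rows = map classes, columns = reference classes). Counts are invented.
def accuracy_and_kappa(matrix):
    n = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(len(matrix)))
    rows = [sum(row) for row in matrix]
    cols = [sum(col) for col in zip(*matrix)]
    chance = sum(r * c for r, c in zip(rows, cols)) / (n * n)
    oa = diag / n
    return oa, (oa - chance) / (1 - chance)

m = [[50, 5, 2],
     [4, 40, 3],
     [1, 2, 43]]
oa, kappa = accuracy_and_kappa(m)
```

Kappa discounts the agreement expected by chance, which is why it is preferred over raw overall accuracy when comparing matrices from different sampling schemes.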
Behavior Analysis of Novel Wearable Indoor Mapping System Based on 3D-SLAM
Dorado, Iago; Gesto, Manuel; Arias, Pedro; Lorenzo, Henrique
2018-01-01
This paper presents a Wearable Prototype for indoor mapping developed by the University of Vigo. The system is based on a Velodyne LiDAR, acquiring points with 16 rays for a simplistic, low-density 3D representation of reality. With this, a Simultaneous Localization and Mapping (3D-SLAM) method is developed for the mapping and generation of 3D point clouds of scenarios deprived of GNSS signal. The quality of the system presented is validated through comparison with a commercial indoor mapping system, Zeb-Revo, from the company GeoSLAM, and with a terrestrial LiDAR, Faro Focus3D X330. The first serves as a relative reference among mobile systems and was chosen because it uses the same mapping principle: SLAM techniques based on the Robot Operating System (ROS). The second is taken as ground truth for determining the final accuracy of the system with respect to reality. Results show that the accuracy of the system is mainly determined by the accuracy of the sensor, with little additional error introduced by the mapping algorithm. PMID:29498715
NASA Technical Reports Server (NTRS)
Karteris, M. A. (Principal Investigator)
1980-01-01
A winter black-and-white band 5 image, a winter color image, a fall color image, and a diazo color composite of the fall scene were used to assess the use and potential of LANDSAT images for mapping and estimating acreage of small scattered forest tracts in Barry County, Michigan. Forests as small as 2.5 acres were mapped from each LANDSAT data source. The maps for each image were compared with an available forest-type map. Mapping errors detected were categorized as boundary and identification errors. The most frequently misclassified areas were agricultural lands, treed bogs, brushlands, and lowland and mixed hardwood stands. Stocking level affected interpretation more than stand size. The overall level of interpretation performance was expressed through the estimation of classification, interpretation, and mapping accuracies. These accuracies ranged between 74% and 98%. Considering errors, accuracy, and cost, winter color imagery is the best LANDSAT alternative for mapping small forest tracts. However, since the availability of cloud-free winter images of the study area is significantly lower than for other seasons, a diazo-enhanced image of a fall scene is recommended as the next best alternative.
NASA Astrophysics Data System (ADS)
Massey, Richard
Cropland characteristics and accurate maps of their spatial distribution are required to develop strategies for global food security through continental-scale assessments and agricultural land use policies. North America is the major producer and exporter of coarse grains, wheat, and other crops. While cropland characteristics such as crop types are available at country scale in North America, continental-scale cropland products at sufficiently fine resolution, such as 30 m, are lacking. Additionally, automated, open, and rapid methods to map cropland characteristics over large areas without the need for ground samples are needed on efficient high-performance computing platforms for timely and long-term cropland monitoring. In this study, I developed novel, automated, and open methods to map cropland extent, crop intensity, and crop types in the North American continent using large remote sensing datasets on high-performance computing platforms. First, a novel method was developed to fuse pixel-based classification of continental-scale Landsat data, using the Random Forest algorithm available on the Google Earth Engine cloud computing platform, with an object-based classification approach, recursive hierarchical segmentation (RHSeg), to map cropland extent at continental scale. Using the fusion method, a continental-scale cropland extent map for North America at 30 m spatial resolution for the nominal year 2010 was produced. In this map, the total cropland area for North America was estimated at 275.2 million hectares (Mha). This map was assessed for accuracy using randomly distributed samples derived from the United States Department of Agriculture (USDA) cropland data layer (CDL), the Agriculture and Agri-Food Canada (AAFC) annual crop inventory (ACI), the Servicio de Informacion Agroalimentaria y Pesquera (SIAP) agricultural boundaries for Mexico, and photo-interpretation of high-resolution imagery.
The overall accuracy of the map is 93.4%, with a producer's accuracy of 85.4% and a user's accuracy of 74.5% for the crop class across the continent. The sub-country statistics, including state-wise and county-wise cropland statistics derived from this map, compared well in regression models, resulting in R² > 0.84. Second, an automated phenological pattern matching (PPM) method to efficiently map cropping intensity was developed, producing a continental-scale cropping intensity map for the North American continent at 250 m spatial resolution for 2010. In this map, the total areas for single crop, double crop, continuous crop, and fallow were estimated to be 123.5 Mha, 11.1 Mha, 64.0 Mha, and 83.4 Mha, respectively. This map was assessed using limited country-level reference datasets derived from the United States Department of Agriculture cropland data layer and the Agriculture and Agri-Food Canada annual crop inventory, with overall accuracies of 79.8% and 80.2%, respectively. Third, two novel and automated decision tree classification approaches were developed to map crop types across the conterminous United States (U.S.) using MODIS 250 m resolution data: 1) a generalized and 2) a year-specific classification. Both approaches use similarities and dissimilarities in crop type phenology derived from NDVI time-series data. Annual crop type maps were produced for 8 major crop types in the United States using the generalized classification approach for 2001-2014 and the year-specific approach for 2008, 2010, 2011, and 2012. The year-specific classification had overall accuracies greater than 78%, while the generalized classifier had accuracies greater than 75% for the conterminous U.S. for 2008, 2010, 2011, and 2012. The generalized classifier enables automated and routine crop type mapping without repeated and expensive ground sample collection year after year, with overall accuracies > 70% across all independent years.
Taken together, these cropland products of extent, cropping intensity, and crop types, are significantly beneficial in agricultural and water use planning and monitoring to formulate policies towards global and North American food security issues.
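Producer's and user's accuracies like those quoted above come straight from the error matrix: producer's accuracy divides the class diagonal by the reference (column) total, user's accuracy by the map (row) total. A sketch with an invented 2-class matrix whose counts are chosen to echo the quoted crop-class figures:

```python
# Hedged sketch: producer's and user's accuracy for one class of an error
# matrix (rows = mapped classes, columns = reference classes). Counts are
# invented, not the study's validation samples.
def class_accuracies(matrix, i):
    col = sum(row[i] for row in matrix)   # reference total for class i
    row = sum(matrix[i])                  # map total for class i
    return matrix[i][i] / col, matrix[i][i] / row  # producer's, user's

m = [[854, 292],    # mapped crop: 854 correct, 292 commission errors
     [146, 8708]]   # mapped non-crop: 146 omission errors for crop
prod, user = class_accuracies(m, 0)
```

With these toy counts the crop class gets producer's accuracy 85.4% and user's accuracy ≈74.5%, mirroring the continental figures above.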
NASA Astrophysics Data System (ADS)
Tane, Z.; Ramirez, C.; Roberts, D. A.; Koltunov, A.; Sweeney, S.
2016-12-01
There is considerable scientific and public interest in the ongoing drought- and bark-beetle-driven conifer mortality in the Central and Southern Sierra Nevada, the scale of which has not been seen previously in California's recorded history. Just before and during this mortality event (2013-2016), Airborne Visible / Infrared Imaging Spectrometer (AVIRIS) data were acquired seasonally over part of the affected area as part of the HyspIRI Preparatory Mission. In this study, we used 11 AVIRIS flight lines from 8 seasonal flights (from spring 2013 to summer 2015) to detect conifer mortality. In addition to the standard pre-processing completed by NASA's Jet Propulsion Lab, AVIRIS images were co-registered and georeferenced between time steps, and images were resampled to the spatial resolution and signal-to-noise ratio expected from the proposed HyspIRI satellite. We used summer 2015 high-spatial-resolution WorldView-2 and WorldView-3 images from across the study area to collect training data from five scenes, and independent validation data from five additional scenes. A cover class map developed with a machine-learning algorithm separated pixels into green conifer, red-attack conifer, and non-conifer dominant cover, yielding high accuracy (above 85% on the independent validation data) in the final tree mortality map. Discussion will include the effects of temporal information and input dimensionality on classification accuracy, comparison with multi-spectral classification accuracy, the ecological and forest management implications of this work, incorporating 2016 AVIRIS images to detect 2016 mortality, and future work in understanding the spatial patterns underlying the mortality.
NASA Astrophysics Data System (ADS)
Zafari, A.; Zurita-Milla, R.; Izquierdo-Verdiguier, E.
2017-10-01
Crop maps are essential inputs for the agricultural planning done at various governmental agencies and agribusinesses. Remote sensing offers timely and cost-efficient technologies to identify and map crop types over large areas. Among the plethora of classification methods, Support Vector Machine (SVM) and Random Forest (RF) are widely used because of their proven performance. In this work, we study the synergic use of both methods by introducing a random forest kernel (RFK) in an SVM classifier. A time series of multispectral WorldView-2 images acquired over Mali (West Africa) in 2014 was used to develop our case study. Ground truth containing five common crop classes (cotton, maize, millet, peanut, and sorghum) was collected at 45 farms and used to train and test the classifiers. An SVM with the standard Radial Basis Function (RBF) kernel, an RF, and an SVM-RFK were trained and tested over 10 random training and test subsets generated from the ground data. Results show that the newly proposed SVM-RFK classifier can compete with both RF and SVM-RBF. The overall accuracies based on the spectral bands only are 83%, 82%, and 83%, respectively. Adding vegetation indices to the analysis results in classification accuracies of 82%, 81%, and 84% for SVM-RFK, RF, and SVM-RBF, respectively. Overall, it can be observed that the newly tested RFK can compete with the SVM-RBF and RF classifiers in terms of classification accuracy.
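The random forest kernel idea can be sketched independently of any library: K(i, j) is the fraction of trees in which samples i and j fall in the same leaf, and the resulting (symmetric, positive semi-definite) matrix can then be fed to an SVM as a precomputed kernel. The leaf assignments below are invented; in practice they would come from a trained forest (e.g. scikit-learn's `RandomForestClassifier.apply`):

```python
# Hedged sketch of a random-forest proximity kernel. Leaf IDs are invented;
# a real forest would supply one leaf index per (sample, tree).
import numpy as np

def rf_kernel(leaf_ids):
    """K[i, j] = fraction of trees where samples i and j share a leaf.
    leaf_ids: (n_samples, n_trees) array of per-tree leaf indices."""
    leaves = np.asarray(leaf_ids)
    same = leaves[:, None, :] == leaves[None, :, :]   # (n, n, n_trees)
    return same.mean(axis=2)

ids = [[1, 2, 1, 3],    # hypothetical 4-tree leaf assignments, 3 samples
       [1, 2, 2, 3],
       [4, 5, 2, 6]]
K = rf_kernel(ids)
```

Samples that the forest routes to the same leaves get kernel values near 1, so the SVM inherits the forest's notion of similarity.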
Chosen Aspects of the Production of the Basic Map Using Uav Imagery
NASA Astrophysics Data System (ADS)
Kedzierski, M.; Fryskowska, A.; Wierzbicki, D.; Nerc, P.
2016-06-01
For several years there has been an increasing interest in the use of unmanned aerial vehicles in acquiring image data from a low altitude. Considering the cost-effectiveness of the flight time of UAVs vs. conventional airplanes, the use of the former is advantageous when generating large-scale accurate orthophotos. Through the development of UAV imagery, we can update large-scale basic maps. These maps are cartographic products which are used for registration, economic, and strategic planning. On the basis of these maps other cartographic maps are produced, for example maps used for building planning. The article presents an assessment of the usefulness of orthophotos based on UAV imagery to upgrade the basic map. In the research a compact, non-metric camera, mounted on a fixed-wing platform powered by an electric motor, was used. The tested area covered flat, agricultural and woodland terrains. The processing and analysis of orthorectification were carried out with the INPHO UASMaster programme. Due to the effect of UAV instability on low-altitude imagery, the use of non-metric digital cameras, and the low-accuracy GPS-INS sensors, the geometry of the images is visibly poorer compared to conventional digital aerial photos (large values of phi and kappa angles). Therefore, typically, low-altitude images require large along- and across-track overlap - usually above 70%. As a result of the research, orthoimages were obtained with a resolution of 0.06 m and a horizontal accuracy of 0.10 m. Digitized basic maps were used as the reference data. The accuracy of the orthoimages vs. the basic maps was estimated based on the study and on the available reference sources. As a result, it was found that the geometric accuracy and interpretative advantages of the final orthoimages allow the updating of basic maps. It is estimated that such an update of basic maps based on UAV imagery reduces processing time by approx. 40%.
Walter, Brittany S; Schultz, John J
2013-05-10
Scene mapping is an integral aspect of processing a scene with scattered human remains. By utilizing the appropriate mapping technique, investigators can accurately document the location of human remains and maintain a precise geospatial record of evidence. One option that has not received much attention for mapping forensic evidence is the differential global positioning system (DGPS) unit, as this technology now provides decreased positional error suitable for mapping scenes. Because of the lack of knowledge concerning this utility in mapping a scene, controlled research is necessary to determine the practicality of using newer and enhanced DGPS units in mapping scattered human remains. The purpose of this research was to quantify the accuracy of a DGPS unit for mapping skeletal dispersals and to determine the applicability of this utility in mapping a scene with dispersed remains. First, the accuracy of the DGPS unit in open environments was determined using known survey markers in open areas. Secondly, three simulated scenes exhibiting different types of dispersals were constructed and mapped in an open environment using the DGPS. Variables considered during data collection included the extent of the dispersal, data collection time, data collected on different days, and different postprocessing techniques. Data were differentially postprocessed and compared in a geographic information system (GIS) to evaluate the most efficient recordation methods. Results of this study demonstrate that the DGPS is a viable option for mapping dispersed human remains in open areas. The accuracy of collected point data was 11.52 and 9.55 cm for 50- and 100-s collection times, respectively, and the orientation and maximum length of long bones was maintained. Also, the use of error buffers for point data of bones in maps demonstrated the error of the DGPS unit, while showing that the context of the dispersed skeleton was accurately maintained.
Furthermore, the application of a DGPS for accurate scene mapping is discussed and guidelines concerning the implementation of this technology for mapping human scattered skeletal remains in open environments are provided. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
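Positional-accuracy figures like those quoted above amount to mean 2D distances between DGPS fixes and surveyed reference coordinates on a projected grid; a minimal sketch with invented coordinates:

```python
# Hedged sketch: mean horizontal error (cm) between DGPS points and survey
# marks on a projected grid (metres). All coordinates are invented.
import math

def horizontal_error_cm(recorded, surveyed):
    d = [math.hypot(rx - sx, ry - sy)
         for (rx, ry), (sx, sy) in zip(recorded, surveyed)]
    return 100.0 * sum(d) / len(d)

rec = [(1000.05, 2000.10), (1010.00, 2020.12)]  # hypothetical DGPS fixes
ref = [(1000.00, 2000.00), (1010.08, 2020.04)]  # hypothetical survey marks
err = horizontal_error_cm(rec, ref)
```

In a GIS, drawing a buffer of this radius around each mapped bone visualizes the unit's positional error, as the study describes.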
NASA Technical Reports Server (NTRS)
Butera, M. K.
1979-01-01
The success of remotely mapping wetland vegetation of the southwestern coast of Florida is examined. A computerized technique to process aircraft and LANDSAT multispectral scanner data into vegetation classification maps was used. The cost-effectiveness of this mapping technique was evaluated in terms of user requirements, accuracy, and cost. Results indicate that mangrove communities are classified most cost-effectively by the LANDSAT technique, with an accuracy of approximately 87 percent and a cost of approximately 3 cents per hectare, compared to $46.50 per hectare for conventional ground survey methods.
NASA Astrophysics Data System (ADS)
Grigsby, S.; Hulley, G. C.; Roberts, D. A.; Scheele, C. J.; Ustin, S.; Alsina, M. M.
2014-12-01
Land surface temperature (LST) is an important parameter in many ecological studies, where processes such as evapotranspiration have impacts at temperature gradients less than 1 K. Current errors in standard MODIS and ASTER LST products are greater than 1 K, and for ASTER can be greater than 2 K in humid conditions due to incomplete atmospheric correction of atmospheric water vapor. Estimates of water vapor, either derived from visible-to-shortwave-infrared (VSWIR) remote sensing data or taken from weather simulation data such as NCEP, can be combined with coincident Thermal-Infrared (TIR) remote sensing data to yield improved accuracy in LST measurements. This study compares LST retrieval accuracies derived using the standard JPL MASTER Temperature Emissivity Separation (TES) algorithm, and the Water Vapor Scaling (WVS) atmospheric correction method proposed for the Hyperspectral Infrared Imager, or HyspIRI, mission with ground observations. The 2011 ER-2 Delano/Lost Hills flights acquired TIR data from the MODIS/ASTER Simulator (MASTER) and VSWIR data from Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) instruments flown concurrently. The TES and WVS retrieval methods are run with and without high spatial resolution AVIRIS-derived water vapor maps to assess the improvement using VSWIR water vapor estimates. We find improvement using VSWIR derived water vapor maps in both cases, with the WVS method being most accurate overall. For closed canopy agricultural vegetation we observed canopy temperature retrieval RMSEs of 0.49 K and 0.70 K using the WVS method on MASTER data with and without AVIRIS derived water vapor, respectively.
Middle-School Students' Map Construction: Understanding Complex Spatial Displays.
ERIC Educational Resources Information Center
Bausmith, Jennifer Merriman; Leinhardt, Gaea
1998-01-01
Examines the map-making process of middle-school students to determine which actions influence their accuracy, how prior knowledge helps their map construction, and what lessons can be learned from map making. Indicates that instruction that focuses on recognition of interconnections between map elements can promote map reasoning skills. (DSK)
NASA Astrophysics Data System (ADS)
Onojeghuo, Alex Okiemute; Onojeghuo, Ajoke Ruth
2017-07-01
This study investigated the combined use of multispectral/hyperspectral imagery and LiDAR data for habitat mapping across parts of south Cumbria, North West England. The methodology adopted in this study integrated spectral information contained in pansharp QuickBird multispectral/AISA Eagle hyperspectral imagery and LiDAR-derived measures with object-based machine learning classifiers and ensemble analysis techniques. Using the LiDAR point cloud data, elevation models (such as the Digital Surface Model and Digital Terrain Model rasters) and intensity features were extracted directly. The LiDAR-derived measures exploited in this study included the Canopy Height Model, intensity and topographic information (i.e. mean, maximum and standard deviation). These three LiDAR measures were combined with spectral information contained in the pansharp QuickBird and Eagle MNF transformed imagery for image classification experiments. A fusion of pansharp QuickBird multispectral and Eagle MNF hyperspectral imagery with all LiDAR-derived measures generated the best classification accuracies, 89.8% and 92.6%, respectively. These results were generated with the Support Vector Machine and Random Forest machine learning algorithms, respectively. The ensemble analysis of all three machine learning classifiers for the pansharp QuickBird and Eagle MNF fused data outputs did not significantly increase the overall classification accuracy. Results of the study demonstrate the potential of combining either very high spatial resolution multispectral or hyperspectral imagery with LiDAR data for habitat mapping.
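Of the LiDAR-derived measures above, the Canopy Height Model is simply the co-registered difference between the surface and terrain models; a minimal raster sketch with invented elevations:

```python
# Hedged sketch: Canopy Height Model = DSM - DTM on co-registered rasters.
# Elevation values (metres) are invented.
import numpy as np

dsm = np.array([[12.0, 15.5], [9.0, 21.0]])   # Digital Surface Model
dtm = np.array([[10.0, 10.5], [9.0, 11.0]])   # Digital Terrain Model

chm = np.clip(dsm - dtm, 0, None)  # canopy height, negatives clipped to 0
```

Clipping guards against small negative differences that arise where the two models disagree over bare ground.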
Intelligent retrieval of medical images from the Internet
NASA Astrophysics Data System (ADS)
Tang, Yau-Kuo; Chiang, Ted T.
1996-05-01
The object of this study is to use Internet resources to provide a cost-effective, user-friendly method to access the medical image archive system and an easy method for the user to identify the images required. This paper describes the prototype system architecture, the implementation, and results. In the study, we prototype the Intelligent Medical Image Retrieval (IMIR) system as a Hypertext Transfer Protocol (HTTP) server and provide Hypertext Markup Language forms through which the user, as an Internet client, enters image retrieval criteria in a browser for review. We are developing the intelligent retrieval engine, with the capability to map free-text search criteria to the standard terminology used for medical image identification. We evaluate retrieved records based on the number of free-text entries matched and their relevance level to the standard terminology. We are in the integration and testing phase. We have collected only a few different types of images for testing and have trained a few phrases to map the free text to the standard medical terminology. Nevertheless, we are able to demonstrate the IMIR's ability to search, retrieve, and review medical images from the archives using a general Internet browser. The prototype also uncovered potential problems in performance, security, and accuracy. Additional studies and enhancements will make the system clinically operational.
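The free-text-to-terminology mapping described above can be sketched as a trained synonym table plus a match-count relevance score. All terms and phrases below are invented for illustration, not the IMIR system's actual vocabulary or scoring rules:

```python
# Hedged sketch: map free-text query phrases to standard terms, then rank
# records by how many query terms they match. Vocabulary is invented.
SYNONYMS = {"cxr": "chest radiograph", "head ct": "computed tomography head"}

def normalize(query):
    """Rewrite trained free-text phrases as standard terminology."""
    q = query.lower()
    for phrase, term in SYNONYMS.items():
        q = q.replace(phrase, term)
    return q

def score(record_terms, query):
    """Relevance = number of a record's standard terms found in the query."""
    terms = set(normalize(query).split())
    return sum(1 for t in record_terms if t in terms)
```

A real engine would weight terms by relevance level rather than counting matches equally, as the abstract's evaluation implies.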
Regional snow-avalanche detection using object-based image analysis of near-infrared aerial imagery
NASA Astrophysics Data System (ADS)
Korzeniowska, Karolina; Bühler, Yves; Marty, Mauro; Korup, Oliver
2017-10-01
Snow avalanches are destructive mass movements in mountain regions that continue to claim lives and cause infrastructural damage and traffic detours. Given that avalanches often occur in remote and poorly accessible steep terrain, their detection and mapping is laborious and time consuming. Nonetheless, systematic avalanche detection over large areas could help to generate more complete and up-to-date inventories (cadastres) necessary for validating avalanche forecasting and hazard mapping. In this study, we focused on automatically detecting avalanches and classifying them into release zones, tracks, and run-out zones based on 0.25 m near-infrared (NIR) ADS80-SH92 aerial imagery using an object-based image analysis (OBIA) approach. Our algorithm takes into account the brightness, the normalised difference vegetation index (NDVI), the normalised difference water index (NDWI), and its standard deviation (SDNDWI) to distinguish avalanches from other land-surface elements. Using normalised parameters allows applying this method across large areas. We trained the method by analysing the properties of snow avalanches at three 4 km² areas near Davos, Switzerland. We compared the results with manually mapped avalanche polygons and obtained a user's accuracy of > 0.9 and a Cohen's kappa of 0.79-0.85. Testing the method for a larger area of 226.3 km², we estimated producer's and user's accuracies of 0.61 and 0.78, respectively, with a Cohen's kappa of 0.67. Detected avalanches that overlapped with reference data by > 80 % occurred randomly throughout the testing area, showing that our method avoids overfitting. Our method has potential for large-scale avalanche mapping, although further investigations into other regions are desirable to verify the robustness of our selected thresholds and the transferability of the method.
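The spectral indices used by the classifier are simple normalised band ratios; a sketch with invented reflectances (NDVI from NIR and red, NDWI in the McFeeters green/NIR form, and SDNDWI as the standard deviation of NDWI over an object or window — band choices here are an assumption, not taken from the paper):

```python
# Hedged sketch of the NDVI / NDWI / SDNDWI inputs; reflectances invented.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    return (green - nir) / (green + nir)   # McFeeters formulation (assumed)

nir = np.array([[0.6, 0.5], [0.4, 0.7]])
red = np.array([[0.2, 0.3], [0.1, 0.2]])
green = np.array([[0.3, 0.4], [0.2, 0.3]])

v = ndvi(nir, red)
w = ndwi(green, nir)
sdndwi = w.std()   # per-object texture measure used alongside brightness
```

Because the indices are ratios, they are insensitive to overall illumination, which is what lets the thresholds transfer across large areas.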
Joseph, Arun A; Kalentev, Oleksandr; Merboldt, Klaus-Dietmar; Voit, Dirk; Roeloffs, Volkert B; van Zalk, Maaike; Frahm, Jens
2016-01-01
Objective: To develop a novel method for rapid myocardial T1 mapping at high spatial resolution. Methods: The proposed strategy represents a single-shot inversion recovery experiment triggered to early diastole during a brief breath-hold. The measurement combines an adiabatic inversion pulse with a real-time readout by highly undersampled radial FLASH, iterative image reconstruction and T1 fitting with automatic deletion of systolic frames. The method was implemented on a 3-T MRI system using a graphics processing unit-equipped bypass computer for online application. Validations employed a T1 reference phantom including analyses at simulated heart rates from 40 to 100 beats per minute. In vivo applications involved myocardial T1 mapping in short-axis views of healthy young volunteers. Results: At 1-mm in-plane resolution and 6-mm section thickness, the inversion recovery measurement could be shortened to 3 s without compromising T1 quantitation. Phantom studies demonstrated T1 accuracy and high precision for values ranging from 300 to 1500 ms and up to a heart rate of 100 beats per minute. Similar results were obtained in vivo yielding septal T1 values of 1246 ± 24 ms (base), 1256 ± 33 ms (mid-ventricular) and 1288 ± 30 ms (apex), respectively (mean ± standard deviation, n = 6). Conclusion: Diastolic myocardial T1 mapping with use of single-shot inversion recovery FLASH offers high spatial resolution, T1 accuracy and precision, and practical robustness and speed. Advances in knowledge: The proposed method will be beneficial for clinical applications relying on native and post-contrast T1 quantitation. PMID:27759423
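The T1 fitting step above can be sketched as a three-parameter inversion-recovery fit S(TI) = A − B·exp(−TI/T1*), followed by the usual Look-Locker-style correction T1 = T1*·(B/A − 1). This grid-search fit is a simplified stand-in for the paper's model-based reconstruction, and the signal values are synthetic:

```python
# Hedged sketch: three-parameter IR fit with grid search over T1*, linear
# least squares for A and B, and Look-Locker correction. Signals synthetic.
import numpy as np

def fit_ir_t1(t, s):
    t = np.asarray(t, float); s = np.asarray(s, float)
    best = (np.inf, None)
    for t1s in np.arange(100.0, 3000.0, 1.0):       # candidate T1* (ms)
        X = np.column_stack([np.ones_like(t), -np.exp(-t / t1s)])
        coef, *_ = np.linalg.lstsq(X, s, rcond=None)  # solve for A, B
        resid = s - X @ coef
        err = float(resid @ resid)
        if err < best[0]:
            best = (err, (coef[0], coef[1], t1s))
    A, B, t1s = best[1]
    return t1s * (B / A - 1.0)                       # Look-Locker correction

ti = np.array([100, 200, 400, 800, 1600, 3200], float)  # inversion times
sig = 1.0 - 1.9 * np.exp(-ti / 700.0)                   # A=1, B=1.9, T1*=700
t1 = fit_ir_t1(ti, sig)
```

Discarding systolic frames, as the paper does, simply removes time points from `ti`/`sig` before this fit.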
NASA Astrophysics Data System (ADS)
Clark, M. L.; Kilham, N. E.
2015-12-01
Land-cover maps are important science products needed for natural resource and ecosystem service management, biodiversity conservation planning, and assessing human-induced and natural drivers of land change. Most land-cover maps at regional to global scales are produced with remote sensing techniques applied to multispectral satellite imagery with 30-500 m pixel sizes (e.g., Landsat, MODIS). Hyperspectral, or imaging spectrometer, imagery measuring the visible to shortwave infrared regions (VSWIR) of the spectrum has shown impressive capacity to map plant species and coarser land-cover associations, yet techniques have not been widely tested at regional and greater spatial scales. The Hyperspectral Infrared Imager (HyspIRI) mission is a VSWIR hyperspectral and thermal satellite being considered for development by NASA. The goal of this study was to assess multi-temporal, HyspIRI-like satellite imagery for improved land cover mapping relative to multispectral satellites. We mapped FAO Land Cover Classification System (LCCS) classes over 22,500 km² in the San Francisco Bay Area, California using 30-m HyspIRI, Landsat 8 and Sentinel-2 imagery simulated from data acquired by NASA's AVIRIS airborne sensor. Random Forests (RF) and Multiple-Endmember Spectral Mixture Analysis (MESMA) classifiers were applied to the simulated images and accuracies were compared to those from real Landsat 8 images. The RF classifier was superior to MESMA, and multi-temporal data yielded higher accuracy than summer-only data. With RF, hyperspectral data had overall accuracies of 72.2% and 85.1% with the full 20-class and reduced 12-class schemes, respectively. Multispectral imagery had lower accuracy. For example, simulated and real Landsat data had 7.5% and 4.6% lower accuracy than HyspIRI data with 12 classes, respectively.
In summary, our results indicate increased mapping accuracy using HyspIRI multi-temporal imagery, particularly in discriminating different natural vegetation types, such as spectrally-mixed woodlands and forests.
NASA Astrophysics Data System (ADS)
Dementev, A. O.; Dmitriev, E. V.; Kozoderov, V. V.; Egorov, V. D.
2017-10-01
Hyperspectral imaging is a promising, state-of-the-art technology widely applied for accurate thematic mapping. The presence of a large number of narrow survey channels allows us to exploit subtle differences in the spectral characteristics of objects and to make a more detailed classification than is possible with standard multispectral data. The difficulties encountered in the processing of hyperspectral images are usually associated with the redundancy of spectral information, which leads to the curse of dimensionality. Methods currently used for recognizing objects in multispectral and hyperspectral images are usually based on standard supervised classification algorithms of various complexity. The accuracy of these algorithms can differ significantly depending on the classification task considered. In this paper we study the performance of ensemble classification methods for the problem of classifying forest vegetation. Error-correcting output codes and boosting are tested on artificial data and real hyperspectral images. It is demonstrated that boosting gives a more significant improvement when used with simple base classifiers; the accuracy in this case is comparable to that of an error-correcting output code (ECOC) classifier with a Gaussian-kernel SVM base algorithm. The necessity of boosting an ECOC classifier with a Gaussian-kernel SVM is therefore questionable. It is demonstrated that the selected ensemble classifiers allow us to recognize forest species with an accuracy high enough to be compared with ground-based forest inventory data.
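Error-correcting output codes, as tested above, reduce a multi-class problem to several binary ones and decode the vector of binary predictions to the nearest codeword. A minimal Hamming-decoding sketch follows; the class names, code matrix, and the binary classifiers that would produce the bit vector are all illustrative, not taken from the paper.

```python
# Illustrative 4-class code matrix: each class is a codeword over 5
# binary classifiers. A well-chosen matrix keeps codewords far apart
# so that single-bit classifier errors are corrected at decoding time.
CODEBOOK = {
    "spruce": (1, 1, 0, 0, 1),
    "pine":   (0, 1, 1, 0, 0),
    "birch":  (1, 0, 1, 1, 0),
    "aspen":  (0, 0, 0, 1, 1),
}

def ecoc_decode(binary_outputs):
    """Return the class whose codeword is nearest to the binary
    classifier outputs in Hamming distance."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(CODEBOOK, key=lambda c: hamming(CODEBOOK[c], binary_outputs))
```

Because the minimum pairwise Hamming distance of this codebook exceeds 2, flipping any single bit of a codeword still decodes to the correct class.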
NASA Astrophysics Data System (ADS)
Mücher, C. A.; Roupioz, L.; Kramer, H.; Bogers, M. M. B.; Jongman, R. H. G.; Lucas, R. M.; Kosmidou, V. E.; Petrou, Z.; Manakos, I.; Padoa-Schioppa, E.; Adamo, M.; Blonda, P.
2015-05-01
A major challenge is to develop a biodiversity observation system that is cost effective and applicable in any geographic region. Measuring and reliably reporting trends and changes in biodiversity requires, amongst other things, detailed and accurate land cover and habitat maps produced in a standard and comparable way. The objective of this paper is to assess the EODHaM (EO Data for Habitat Mapping) classification results for a Dutch case study. The EODHaM system was developed within the BIO_SOS (The BIOdiversity multi-SOurce monitoring System: from Space TO Species) project and contains the decision rules for each land cover and habitat class based on spectral and height information. One of the main findings is that canopy height models, as derived from LiDAR, in combination with very high resolution satellite imagery provide a powerful input to the EODHaM system for generic land cover and habitat mapping at any location across the globe. The assessment of the EODHaM classification results based on field data showed an overall accuracy of 74% for the land cover classes as described according to the Food and Agriculture Organization (FAO) Land Cover Classification System (LCCS) taxonomy at level 3, while the overall accuracy was lower (69.0%) for the habitat map based on the General Habitat Category (GHC) system for habitat surveillance and monitoring. A GHC habitat class is determined for each mapping unit on the basis of the composition of the individual life forms and height measurements. The classification showed very good results for forest phanerophytes (FPH) when individual life forms were analyzed in terms of their percentage coverage estimates per mapping unit from the LCCS classification and validated with field surveys. Analysis for shrubby chamaephytes (SCH) showed less accurate results, which might, however, also be due to less accurate field estimates of percentage coverage.
Overall, the EODHaM classification results encouraged us to derive the heights of all vegetated objects in the Netherlands from LiDAR data, in preparation for new habitat classifications.
Exploring Capabilities of SENTINEL-2 for Vegetation Mapping Using Random Forest
NASA Astrophysics Data System (ADS)
Saini, R.; Ghosh, S. K.
2018-04-01
Accurate vegetation mapping is essential for monitoring crops and sustainable agricultural practice. This study aims to explore the capabilities of Sentinel-2 data over Landsat-8 Operational Land Imager (OLI) data for vegetation mapping. Two combinations of the Sentinel-2 dataset have been considered: the first is a 4-band dataset at 10 m resolution consisting of the NIR, R, G and B bands, while the second is generated by stacking the four 10 m bands along with six other bands sharpened using the Gram-Schmidt algorithm. For the Landsat-8 OLI dataset, six multispectral bands have been pan-sharpened to a spatial resolution of 15 m using the Gram-Schmidt algorithm. Random Forest (RF) and the Maximum Likelihood classifier (MLC) were selected for classification of the images. It is found that the overall accuracies achieved by RF for the 4-band and 10-band Sentinel-2 datasets and for Landsat-8 OLI are 88.38%, 90.05% and 86.68%, respectively, while MLC gives overall accuracies of 85.12%, 87.14% and 83.56%, respectively. The results show that the 10-band Sentinel-2 dataset gives the highest accuracy, a rise of 3.37% for RF and 3.58% for MLC compared to Landsat-8 OLI. All classes show improvement in accuracy, but the largest gains are observed for sugarcane, wheat and fodder with the 10-band Sentinel-2 imagery. This study substantiates the fact that Sentinel-2 data can be utilized for vegetation mapping with a good degree of accuracy compared to Landsat-8 OLI, specifically when the objective is to map a sub-class of vegetation.
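Overall accuracies like those compared above, and the Cohen's kappa values quoted elsewhere in these abstracts, are derived from an error (confusion) matrix. A minimal sketch of both statistics; the 2x2 matrix in the test is invented for illustration.

```python
def overall_accuracy(cm):
    """Overall accuracy from a square confusion matrix,
    where cm[i][j] counts reference class i mapped as class j."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def cohens_kappa(cm):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from the row and column marginals."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    po = overall_accuracy(cm)
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(n)) / total ** 2
    return (po - pe) / (1 - pe)
```

Producer's and user's accuracies are the per-class analogues: the diagonal cell divided by its row total and column total, respectively.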
Devaney, John; Barrett, Brian; Barrett, Frank; Redmond, John; O Halloran, John
2015-01-01
Quantification of spatial and temporal changes in forest cover is an essential component of forest monitoring programs. Due to its cloud free capability, Synthetic Aperture Radar (SAR) is an ideal source of information on forest dynamics in countries with near-constant cloud-cover. However, few studies have investigated the use of SAR for forest cover estimation in landscapes with highly sparse and fragmented forest cover. In this study, the potential use of L-band SAR for forest cover estimation in two regions (Longford and Sligo) in Ireland is investigated and compared to forest cover estimates derived from three national (Forestry2010, Prime2, National Forest Inventory), one pan-European (Forest Map 2006) and one global forest cover (Global Forest Change) product. Two machine-learning approaches (Random Forests and Extremely Randomised Trees) are evaluated. Both Random Forests and Extremely Randomised Trees classification accuracies were high (98.1-98.5%), with differences between the two classifiers being minimal (<0.5%). Increasing levels of post classification filtering led to a decrease in estimated forest area and an increase in overall accuracy of SAR-derived forest cover maps. All forest cover products were evaluated using an independent validation dataset. For the Longford region, the highest overall accuracy was recorded with the Forestry2010 dataset (97.42%) whereas in Sligo, highest overall accuracy was obtained for the Prime2 dataset (97.43%), although accuracies of SAR-derived forest maps were comparable. Our findings indicate that spaceborne radar could aid inventories in regions with low levels of forest cover in fragmented landscapes. The reduced accuracies observed for the global and pan-continental forest cover maps in comparison to national and SAR-derived forest maps indicate that caution should be exercised when applying these datasets for national reporting.
NASA Astrophysics Data System (ADS)
Manuputty, Agnestesya; Lumban Gaol, Jonson; Bahri Agus, Syamsul; Wayan Nurjaya, I.
2017-01-01
Seagrasses perform a variety of functions within ecosystems and have both economic and ecological values; they therefore have to be kept sustainable. One stage in preserving seagrass ecosystems is monitoring using accurate spatial data. The purpose of this study was to assess and compare the accuracy of the DII and PCA transformations for mapping seagrass ecosystems. Field studies were carried out in the Karang Bongkok and Kotok Island waters in August 2014 and March 2015. A WorldView-2 image acquired on 5 October 2013 was used in the study. The transformations applied in image processing were the Depth Invariant Index (DII) and Principal Component Analysis (PCA), combined with Support Vector Machine (SVM) classification. The results show that benthic habitat mapping of Karang Bongkok using the DII and PCA transformations achieved overall accuracies of 72% and 81% respectively, whereas for Kotok Island the overall accuracies were 83% and 84% respectively. Seven benthic habitat types were found in the Karang Bongkok waters and at Kotok Island, namely seagrass, sand, rubble, coral, lagoon, sand mixed with seagrass, and sand mixed with rubble. The PCA transformation was effective in improving the accuracy of seagrass mapping at Kotok Island and Karang Bongkok.
Accuracy assessment of seven global land cover datasets over China
NASA Astrophysics Data System (ADS)
Yang, Yongke; Xiao, Pengfeng; Feng, Xuezhi; Li, Haixing
2017-03-01
Land cover (LC) is a vital foundation of Earth science. Up to now, several global LC datasets have arisen through the efforts of many scientific communities. To provide guidelines for data usage over China, nine LC maps from seven global LC datasets (IGBP DISCover, UMD, GLC, MCD12Q1, GLCNMO, CCI-LC, and GlobeLand30) were evaluated in this study. First, we compared their similarities and discrepancies in both area and spatial patterns, and analysed their inherent relations to data sources and classification schemes and methods. Next, five sets of validation sample units (VSUs) were collected to calculate their accuracy quantitatively. Further, we built a spatial analysis model and depicted their spatial variation in accuracy based on the five sets of VSUs. The results show that there are evident discrepancies among these LC maps in both area and spatial patterns. For LC maps produced by different institutes, GLC 2000 and CCI-LC 2000 have the highest overall spatial agreement (53.8%). For LC maps produced by the same institutes, the overall spatial agreement of CCI-LC 2000 and 2010, and of MCD12Q1 2001 and 2010, reaches 99.8% and 73.2%, respectively; however, more effort is still needed if these LC maps are to be used as time-series model input, since both CCI-LC and MCD12Q1 fail to represent the rapid changes of several key LC classes in the early 21st century, in particular urban and built-up, snow and ice, water bodies, and permanent wetlands. With the highest spatial resolution, the overall accuracy of GlobeLand30 2010 is 82.39%. Among the other six LC datasets with coarse resolution, CCI-LC 2010/2000 has the highest overall accuracy, followed by MCD12Q1 2010/2001, GLC 2000, GLCNMO 2008, IGBP DISCover, and UMD in turn. Although all maps exhibit high accuracy in homogeneous regions, local accuracies in other regions differ considerably, particularly in the Farming-Pastoral Zone of North China, the mountains of Northeast China, and the Southeast Hills.
Special attention should be paid by data users who are interested in these regions.
NASA Astrophysics Data System (ADS)
Snavely, Rachel A.
Focusing on the semi-arid and highly disturbed landscape of San Clemente Island, California, this research tests the effectiveness of incorporating a hierarchical object-based image analysis (OBIA) approach with high-spatial-resolution imagery and light detection and ranging (LiDAR) derived canopy height surfaces for mapping vegetation communities. The study is part of a large-scale research effort conducted by researchers at San Diego State University's (SDSU) Center for Earth Systems Analysis Research (CESAR) and Soil Ecology and Restoration Group (SERG) to develop an updated vegetation community map that will support both conservation and management decisions on Naval Auxiliary Landing Field (NALF) San Clemente Island. Trimble's eCognition Developer software was used to develop and generate vegetation community maps for two study sites, with and without vegetation height data as input. Overall and class-specific accuracies were calculated and compared across the two classifications. The highest overall accuracy (approximately 80%) was observed for the classification integrating airborne visible and near-infrared imagery of very high spatial resolution with a LiDAR-derived canopy height model. Accuracies for individual vegetation classes differed between the two classification methods, but were highest when incorporating the LiDAR digital surface data. The addition of a canopy height model, however, yielded little difference in classification accuracy for areas of very dense shrub cover. Overall, the results show the utility of the OBIA approach for mapping vegetation with high spatial resolution imagery, and emphasize the advantage of both multi-scale analysis and digital surface data for accurately characterizing highly disturbed landscapes. The integrated imagery and digital canopy height model approach presented both advantages and limitations, which have to be considered prior to its operational use in mapping vegetation communities.
A Multitemporal, Multisensor Approach to Mapping the Canadian Boreal Forest
NASA Astrophysics Data System (ADS)
Reith, Ernest
The main anthropogenic source of CO2 emissions is the combustion of fossil fuels, while the clearing and burning of forests contribute significant amounts as well. Vegetation represents a major reservoir for terrestrial carbon stocks, and improving our ability to inventory vegetation will enhance our understanding of the impacts of land cover and climate change on carbon stocks and fluxes. These relationships may be an indication of a series of troubling biosphere-atmosphere feedback mechanisms that need to be better understood and modeled. Valuable land cover information can be provided to the global climate change modeling community using advanced remote sensing capabilities such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR). Individually and synergistically, these data were successfully used to characterize the complex nature of Canadian boreal forest land cover types. The multiple endmember spectral mixture analysis process was applied to seasonal AVIRIS data to produce species-level vegetated land cover maps of two study sites in the Canadian boreal forest: Old Black Spruce (OBS) and Old Jack Pine (OJP). The best map was assessed to be at least 66% accurate against the available reference map, providing evidence that high-quality, species-level land cover mapping of the Canadian boreal forest is achievable at accuracy levels greater than those of previous research efforts in the region. A binary decision-tree classification of backscatter information from multichannel polarimetric AIRSAR data was moderately successful in producing maps of the boreal land cover types at both sites, with overall accuracies of at least 59%.
A process, centered around noise whitening and principal component analysis features of the minimum noise fraction transform, was implemented to leverage synergies contained within spatially coregistered multitemporal and multisensor AVIRIS and AIRSAR data sets to successfully produce high-accuracy boreal forest land cover maps. Overall land cover map accuracies of 78% and 72% were assessed for OJP and OBS sites, respectively, for either seasonal or multitemporal data sets. High individual land cover accuracies appeared to be independent of site, season, or multisensor combination in the minimum-noise fraction-based approach.
Mapping of land cover in northern California with simulated hyperspectral satellite imagery
NASA Astrophysics Data System (ADS)
Clark, Matthew L.; Kilham, Nina E.
2016-09-01
Land-cover maps are important science products needed for natural resource and ecosystem service management, biodiversity conservation planning, and assessing human-induced and natural drivers of land change. Analysis of hyperspectral, or imaging spectrometer, imagery has shown an impressive capacity to map a wide range of natural and anthropogenic land cover. Applications have been mostly with single-date imagery from relatively small spatial extents. Future hyperspectral satellites will provide imagery at greater spatial and temporal scales, and there is a need to assess techniques for mapping land cover with these data. Here we used simulated multi-temporal HyspIRI satellite imagery over a 30,000 km² area in the San Francisco Bay Area, California to assess its capabilities for mapping classes defined by the international Land Cover Classification System (LCCS). We employed a mapping methodology and analysis framework that is applicable to regional and global scales. We used the Random Forests classifier with three sets of predictor variables (reflectance, MNF, hyperspectral metrics), two temporal resolutions (summer, spring-summer-fall), two sample scales (pixel, polygon) and two levels of classification complexity (12, 20 classes). Hyperspectral metrics provided a 16.4-21.8% and 3.1-6.7% increase in overall accuracy relative to MNF and reflectance bands, respectively, depending on pixel or polygon scales of analysis. Multi-temporal metrics improved overall accuracy by 0.9-3.1% over summer metrics, yet increases were only significant at the pixel scale of analysis. Overall accuracy at pixel scales was 72.2% (Kappa 0.70) with three seasons of metrics. Anthropogenic and homogenous natural vegetation classes had relatively high confidence and producer and user accuracies were over 70%; in comparison, woodland and forest classes had considerable confusion.
We next focused on plant functional types with relatively pure spectra by removing open-canopy shrublands, woodlands and mixed forests from the classification. This 12-class map had significantly improved accuracy of 85.1% (Kappa 0.83) and most classes had over 70% producer and user accuracies. Finally, we summarized important metrics from the multi-temporal Random Forests to infer the underlying chemical and structural properties that best discriminated our land-cover classes across seasons.
NASA Astrophysics Data System (ADS)
Hugenholtz, Chris H.; Whitehead, Ken; Brown, Owen W.; Barchyn, Thomas E.; Moorman, Brian J.; LeClair, Adam; Riddell, Kevin; Hamilton, Tayler
2013-07-01
Small unmanned aircraft systems (sUAS) are a relatively new type of aerial platform for acquiring high-resolution remote sensing measurements of Earth surface processes and landforms. However, despite growing application there has been little quantitative assessment of sUAS performance. Here we present results from a field experiment designed to evaluate the accuracy of a photogrammetrically-derived digital terrain model (DTM) developed from imagery acquired with a low-cost digital camera onboard an sUAS. We also show the utility of the high-resolution (0.1 m) sUAS imagery for resolving small-scale biogeomorphic features. The experiment was conducted in an area with active and stabilized aeolian landforms in the southern Canadian Prairies. Images were acquired with a Hawkeye RQ-84Z Areohawk fixed-wing sUAS. A total of 280 images were acquired along 14 flight lines, covering an area of 1.95 km². The survey was completed in 4.5 h, including GPS surveying, sUAS setup and flight time. Standard image processing and photogrammetric techniques were used to produce a 1 m resolution DTM and a 0.1 m resolution orthorectified image mosaic. The latter revealed previously unmapped bioturbation features. The vertical accuracy of the DTM was evaluated with 99 Real-Time Kinematic GPS points, while 20 of these points were used to quantify horizontal accuracy. The horizontal root mean squared error (RMSE) of the orthoimage was 0.18 m, while the vertical RMSE of the DTM was 0.29 m, which is equivalent to the RMSE of a bare earth LiDAR DTM for the same site. The combined error from both datasets was used to define a threshold of the minimum elevation difference that could be reliably attributed to erosion or deposition in the seven years separating the sUAS and LiDAR datasets. Overall, our results suggest that sUAS-acquired imagery may provide a low-cost, rapid, and flexible alternative to airborne LiDAR for geomorphological mapping.
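The RMSE figures quoted above follow the usual definition: the square root of the mean squared residual between mapped and surveyed check-point values. A minimal sketch (the sample elevations in the test are invented):

```python
import math

def rmse(errors):
    """Root mean squared error over a list of check-point residuals."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def vertical_rmse(dtm_z, gps_z):
    """Vertical RMSE of a DTM against RTK-GPS check-point elevations."""
    return rmse([d - g for d, g in zip(dtm_z, gps_z)])
```

Horizontal RMSE is computed the same way from the planimetric offsets between image-measured and GPS-surveyed point positions.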
Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J
2016-10-01
To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast-enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1, which is obtained prior to fitting a tracer kinetic model. As T1 mapping and tracer kinetic signal models are both a function of precontrast T1, it was hypothesized that its joint estimation would improve the accuracy and precision of both precontrast T1 and tracer kinetic model parameters. Accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. Methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially. Magn Reson Med 76:1270-1281, 2016. © 2015 Wiley Periodicals, Inc.
Duke-Novakovski, Tanya; Ambros, Barbara; Feng, Cindy; Carr, Anthony P
2017-05-01
To determine the accuracy of high-definition oscillometry (HDO) for arterial pressure measurement during injectable or inhalation anesthesia in horses. Prospective, clinical study. Twenty-four horses anesthetized for procedures requiring lateral recumbency. Horses were premedicated with xylazine, and anesthesia was induced with diazepam-ketamine. Anesthesia was maintained with a xylazine-ketamine-guaifenesin combination [TripleDrip (TD; n = 12)] or isoflurane (ISO; n = 12). HDO was used to obtain systolic (SAP), mean (MAP) and diastolic (DAP) arterial pressures, and heart rate (HR) using an 8-cm-wide cuff around the proximal tail. Invasive blood pressure (IBP) SAP, MAP, DAP and HR were recorded during HDO cycling. Bland-Altman analysis for repeated measures was used to compare HDO and IBP for all measurements. A generalized additive model was used to determine whether the mean differences between HDO and IBP were similar between anesthetic protocols for all measurements. There were >110 paired samples for each variable. There was no effect of anesthetic choice on HDO performance, but more variability was present in TD compared with ISO. Skewed data required log-transformation for statistical comparison. Using raw data and standard Bland-Altman analysis, HDO overestimated SAP (TD, 3.8 ± 28.3 mmHg; ISO, 3.5 ± 13.6 mmHg), MAP (TD, 4.0 ± 23.3 mmHg; ISO, 6.3 ± 10.0 mmHg) and DAP (TD, 4.0 ± 21.2 mmHg; ISO, 7.8 ± 13.6 mmHg). In TD, 26-40% of HDO measurements were within 10 mmHg of IBP, compared with 60-74% in ISO. Differences between HDO and IBP for all measurements were similar between anesthetic protocols. The numerical difference between IBP and HDO measurements for SAP, MAP and DAP decreased significantly as the cuff width:tail girth ratio increased toward 40%. More variability in HDO occurred during TD. The cuff width:tail girth ratio is important for the accuracy of HDO.
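Standard Bland-Altman analysis, as used above, summarises paired readings from two methods by the mean difference (bias) and the 95% limits of agreement (bias ± 1.96 SD). A minimal sketch; the sample pressures in the test are invented, and the repeated-measures adjustment used in the study is deliberately omitted.

```python
import math

def bland_altman(method_a, method_b):
    """Return (bias, lower limit, upper limit) of agreement for paired
    measurements from two methods (e.g., HDO vs. invasive pressure)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    # Sample standard deviation of the differences.
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

When subjects contribute multiple paired readings, as in the study above, the variance estimate must additionally account for within-subject correlation (Bland and Altman's repeated-measures variant).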
Copyright © 2017 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. Published by Elsevier Ltd. All rights reserved.
Automated artery-venous classification of retinal blood vessels based on structural mapping method
NASA Astrophysics Data System (ADS)
Joshi, Vinayak S.; Garvin, Mona K.; Reinhardt, Joseph M.; Abramoff, Michael D.
2012-03-01
Retinal blood vessels show morphologic modifications in response to various retinopathies. The specific responses exhibited by arteries and veins, however, may provide more precise diagnostic information; e.g., diabetic retinopathy may be detected more accurately from venous dilatation than from average vessel dilatation. In order to analyze vessel-type-specific morphologic modifications, classification of a vessel network into arteries and veins is required. We previously described a method for identification and separation of retinal vessel trees, i.e., structural mapping. Here we propose artery-venous classification based on structural mapping and identification of color properties prominent to the vessel types. The mean and standard deviation of the green channel intensity and of the hue channel intensity are analyzed in a region of interest around each centerline pixel of a vessel. Using the vector of color properties extracted from each centerline pixel, the pixel is classified into one of two clusters (artery and vein), obtained by fuzzy C-means clustering. According to the proportion of clustered centerline pixels in a particular vessel, and utilizing the artery-venous crossing property of retinal vessels, each vessel is assigned the label artery or vein. The classification results are compared with a manually annotated ground truth (gold standard). We applied the proposed method to a dataset of 15 retinal color fundus images, resulting in an accuracy of 88.28% correctly classified vessel pixels. The automated classification results match well with the gold standard, suggesting its potential for artery-venous classification and the respective morphology analysis.
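The fuzzy C-means step above alternates between updating soft memberships and membership-weighted cluster centres. A minimal one-dimensional sketch (the paper clusters multi-dimensional color-feature vectors; the scalar version and the min/max initialisation for two clusters are simplifications for illustration):

```python
def fuzzy_c_means(values, c=2, m=2.0, iters=50):
    """1-D fuzzy C-means: returns (centres, memberships) where
    memberships[i][k] is the degree to which values[k] belongs to cluster i."""
    centres = [min(values), max(values)]  # simple initialisation; assumes c=2
    u = [[0.0] * len(values) for _ in range(c)]
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        for k, x in enumerate(values):
            d = [abs(x - ctr) or 1e-12 for ctr in centres]  # avoid div by zero
            for i in range(c):
                u[i][k] = 1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(c))
        # Centre update: membership-weighted mean of the data.
        centres = [
            sum((u[i][k] ** m) * x for k, x in enumerate(values))
            / sum(u[i][k] ** m for k in range(len(values)))
            for i in range(c)
        ]
    return centres, u
```

Each centerline pixel then takes the cluster (artery or vein) in which its membership is largest, and per-vessel majority voting follows as described in the abstract.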
Implications of allometric model selection for county-level biomass mapping
Laura Duncanson; Wenli Huang; Kristofer Johnson; Anu Swatantran; Ronald E. McRoberts; Ralph Dubayah
2017-01-01
Background: Carbon accounting in forests remains a large area of uncertainty in the global carbon cycle. Forest aboveground biomass is therefore an attribute of great interest for the forest management community, but the accuracy of aboveground biomass maps depends on the accuracy of the underlying field estimates used to calibrate models. These field estimates depend...
The accuracy of the National Land Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or a...
cudaMap: a GPU accelerated program for gene expression connectivity mapping.
McArt, Darragh G; Bankhead, Peter; Dunne, Philip D; Salto-Tellez, Manuel; Hamilton, Peter; Zhang, Shu-Dong
2013-10-11
Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU assisted performance and CPU executions as the computational load increases for high accuracy evaluation of statistical significance. Emerging 'omics' technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap.
Automating lexical cross-mapping of ICNP to SNOMED CT.
Kim, Tae Youn
2016-01-01
The purpose of this study was to examine the feasibility of automating lexical cross-mapping of a logic-based nursing terminology (ICNP) to SNOMED CT using the Unified Medical Language System (UMLS) maintained by the U.S. National Library of Medicine. A two-stage approach included pattern identification, followed by application and evaluation of an automated term matching procedure. The performance of the automated procedure was evaluated using a test set against a gold standard (i.e. a concept equivalency table) created independently by terminology experts. There were lexical similarities between ICNP diagnostic concepts and SNOMED CT. The automated term matching procedure was reliable, with a recall of 65%, precision of 79%, accuracy of 82%, F-measure of 0.71, and an area under the receiver operating characteristic (ROC) curve of 0.78 (95% CI 0.73-0.83). When the automated procedure was not able to retrieve lexically matched concepts, it was also unlikely for terminology experts to identify a matched SNOMED CT concept. Although further research is warranted to enhance the automated matching procedure, the combination of cross-maps from UMLS and the automated procedure is useful for generating candidate mappings and thus assists the ongoing maintenance of mappings, which is a significant burden to terminology developers.
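The reported figures follow directly from the confusion counts of the matching procedure; the helper below (illustrative, not the study's code) shows how recall, precision, accuracy, and F-measure relate:

```python
def evaluation_metrics(tp, fp, fn, tn):
    """Standard retrieval metrics from confusion counts."""
    recall = tp / (tp + fn)            # matched concepts actually found
    precision = tp / (tp + fp)         # found matches that were correct
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_measure = 2 * precision * recall / (precision + recall)
    return recall, precision, accuracy, f_measure
```

With the study's values the harmonic mean gives F = 2·0.65·0.79/(0.65+0.79) ≈ 0.71, matching the reported F-measure.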
NASA Astrophysics Data System (ADS)
Stumpf, A.; Lachiche, N.; Malet, J.; Kerle, N.; Puissant, A.
2011-12-01
VHR satellite images have become a primary source for landslide inventory mapping after major triggering events such as earthquakes and heavy rainfall. Visual image interpretation is still the prevailing standard method for operational purposes but is time-consuming and not well suited to fully exploit the increasingly rich supply of remote sensing data. Recent studies have addressed the development of more automated image analysis workflows for landslide inventory mapping. In particular, object-oriented approaches that account for spatial and textural image information have been demonstrated to be more adequate than pixel-based classification, but manually elaborated rule-based classifiers are difficult to adapt under changing scene characteristics. Machine learning algorithms allow learning classification rules for complex image patterns from labelled examples and can be adapted straightforwardly with available training data. In order to reduce the amount of costly training data, active learning (AL) has evolved as a key concept to guide the sampling for many applications. The underlying idea of AL is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and data structure to iteratively select the most valuable samples that should be labelled by the user. With relatively few queries and labelled samples, an AL strategy yields higher accuracies than an equivalent classifier trained with many randomly selected samples. This study addressed the development of an AL method for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. Our approach [1] is based on the Random Forest algorithm and considers the classifier uncertainty as well as the variance of potential sampling regions to guide the user towards the most valuable sampling areas.
The algorithm explicitly searches for compact regions and thereby avoids the spatially disperse sampling pattern inherent to most other AL methods. The accuracy, the sampling time and the computational runtime of the algorithm were evaluated on multiple satellite images capturing recent large-scale landslide events. Sampling 1-4% of the study areas achieved accuracies between 74% and 80%, whereas standard sampling schemes yielded accuracies of only 28% to 50% at equal sampling cost. Compared to commonly used point-wise AL algorithms, the proposed approach significantly reduces the number of iterations and hence the computational runtime. Since the user can focus on relatively few compact areas (rather than on hundreds of distributed points), the overall labeling time is reduced by more than 50% compared to point-wise queries. An experimental evaluation of multiple expert mappings demonstrated strong relationships between the uncertainties of the experts and the machine learning model. It revealed that the achieved accuracies are within the range of the inter-expert disagreement and that it will be indispensable to consider ground-truth uncertainties to achieve further enhancements in the future. The proposed method is generally applicable to a wide range of optical satellite images and landslide types. [1] A. Stumpf, N. Lachiche, J.-P. Malet, N. Kerle, and A. Puissant, Active learning in the spatial domain for remote sensing image classification, IEEE Transactions on Geoscience and Remote Sensing, 2013, DOI 10.1109/TGRS.2013.2262052.
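A minimal uncertainty-sampling loop with a Random Forest can be sketched on synthetic data; the paper's regional/compactness criterion is omitted here, and scikit-learn plus the feature layout are assumptions of the sketch:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 4))                       # stand-in for per-pixel image features
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # stand-in for landslide / no-landslide

labelled = list(range(10))                               # small initial training set
unlabelled = list(range(10, 500))

for _ in range(5):                                       # five active-learning iterations
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_pool[labelled], y_pool[labelled])
    proba = clf.predict_proba(X_pool[unlabelled])
    uncertainty = 1.0 - proba.max(axis=1)                # low max-probability = uncertain
    query = unlabelled[int(np.argmax(uncertainty))]
    labelled.append(query)                               # simulate the user labelling it
    unlabelled.remove(query)
```

The region-based variant described above queries compact spatial batches instead of single pixels, which is what cuts the interaction time in practice.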
Toward accelerating landslide mapping with interactive machine learning techniques
NASA Astrophysics Data System (ADS)
Stumpf, André; Lachiche, Nicolas; Malet, Jean-Philippe; Kerle, Norman; Puissant, Anne
2013-04-01
Despite important advances in the development of more automated methods for landslide mapping from optical remote sensing images, the elaboration of inventory maps after major triggering events still remains a tedious task. Image classification with expert-defined rules typically still requires significant manual labour for the elaboration and adaptation of rule sets for each particular case. Machine learning algorithms, on the contrary, have the ability to learn and identify complex image patterns from labelled examples but may require relatively large amounts of training data. In order to reduce the amount of required training data, active learning has evolved as a key concept to guide the sampling for applications such as document classification, genetics and remote sensing. The general underlying idea of most active learning approaches is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and/or the data structure to iteratively select the most valuable samples that should be labelled by the user and added to the training set. With relatively few queries and labelled samples, an active learning strategy should ideally yield at least the same accuracy as an equivalent classifier trained with many randomly selected samples. Our study was dedicated to the development of an active learning approach for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. The developed approach is a region-based query heuristic that guides the user's attention towards a few compact spatial batches rather than distributed points, resulting in time savings of 50% and more compared to standard active learning techniques. The approach was tested with multi-temporal and multi-sensor satellite images capturing recent large-scale triggering events in Brazil and China and demonstrated balanced user's and producer's accuracies between 74% and 80%.
The assessment also included an experimental evaluation of the uncertainties of manual mappings from multiple experts and demonstrated strong relationships between the uncertainty of the experts and the machine learning model.
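Inter-expert disagreement of this kind is commonly quantified with a chance-corrected agreement statistic; the sketch below uses Cohen's kappa for two raters, which is an assumed choice rather than the paper's stated measure:

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equal-length label sequences."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_obs = sum(x == y for x, y in zip(rater_a, rater_b)) / n          # observed agreement
    p_exp = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)        # agreement by chance
                for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)
```

A kappa of 1 means perfect agreement; 0 means agreement no better than chance, which is the relevant baseline when comparing experts against each other or against a model.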
Advancing UAS methods for monitoring coastal environments
NASA Astrophysics Data System (ADS)
Ridge, J.; Seymour, A.; Rodriguez, A. B.; Dale, J.; Newton, E.; Johnston, D. W.
2017-12-01
Utilizing fixed-wing Unmanned Aircraft Systems (UAS), we are working to improve coastal monitoring by increasing the accuracy, precision, temporal resolution, and spatial coverage of habitat distribution maps. Generally, multirotor aircraft are preferred for precision imaging, but recent advances in fixed-wing technology have greatly increased their capabilities and application for fine-scale (decimeter-centimeter) measurements. Present mapping methods employed by North Carolina coastal managers involve expensive, time consuming and localized observation of coastal environments, which often lack the necessary frequency to make timely management decisions. For example, it has taken several decades to fully map oyster reefs along the NC coast, making it nearly impossible to track trends in oyster reef populations responding to harvesting pressure and water quality degradation. It is difficult for the state to employ manned flights for collecting aerial imagery to monitor intertidal oyster reefs, because flights are usually conducted after seasonal increases in turbidity. In addition, post-storm monitoring of coastal erosion from manned platforms is often conducted days after the event and collects oblique aerial photographs which are difficult to use for accurately measuring change. Here, we describe how fixed wing UAS and standard RGB sensors can be used to rapidly quantify and assess critical coastal habitats (e.g., barrier islands, oyster reefs, etc.), providing for increased temporal frequency to isolate long-term and event-driven (storms, harvesting) impacts. Furthermore, drone-based approaches can accurately image intertidal habitats as well as resolve information such as vegetation density and bathymetry from shallow submerged areas. We obtain UAS imagery of a barrier island and oyster reefs under ideal conditions (low tide, turbidity, and sun angle) to create high resolution (cm scale) maps and digital elevation models to assess habitat condition. 
Concurrently, we test the accuracy of UAS platforms and image analysis tools against traditional high-resolution mapping equipment (GPS and terrestrial lidar) and in situ sampling (density quadrats) to conduct error analysis of UAS orthoimagery and data processing.
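Checkpoint-based error analysis of this kind typically reports an RMSE against surveyed GPS points and, following the National Standard for Spatial Data Accuracy (NSSDA), a 95% horizontal accuracy statistic. A sketch, with illustrative function and variable names:

```python
import math

def nssda_horizontal(errors_xy):
    """errors_xy: (dx, dy) offsets between orthoimage points and GPS checkpoints, in metres."""
    n = len(errors_xy)
    rmse_x = math.sqrt(sum(dx * dx for dx, _ in errors_xy) / n)
    rmse_y = math.sqrt(sum(dy * dy for _, dy in errors_xy) / n)
    rmse_r = math.sqrt(rmse_x ** 2 + rmse_y ** 2)
    # NSSDA 95% horizontal accuracy statistic (valid when rmse_x is close to rmse_y)
    return 1.7308 * rmse_r
```

The returned value is the radius within which 95% of well-defined points are expected to fall, which makes UAS products directly comparable to maps assessed under the same standard.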
NASA Technical Reports Server (NTRS)
Townsend, Philip A.; Helmers, David P.; Kingdon, Clayton C.; McNeil, Brenden E.; de Beurs, Kirsten M.; Eshleman, Keith N.
2009-01-01
Surface mining and reclamation is the dominant driver of land cover land use change (LCLUC) in the Central Appalachian Mountain region of the Eastern U.S. Accurate quantification of the extent of mining activities is important for assessing how this LCLUC affects ecosystem services such as aesthetics, biodiversity, and mitigation of flooding. We used Landsat imagery from 1976, 1987, 1999 and 2006 to map the extent of surface mines and mine reclamation for eight large watersheds in the Central Appalachian region of West Virginia, Maryland and Pennsylvania. We employed standard image processing techniques in conjunction with a temporal decision tree and GIS maps of mine permits and wetlands to map active and reclaimed mines and track changes through time. For the entire study area, active surface mine extent was highest in 1976, prior to implementation of the Surface Mine Control and Reclamation Act in 1977, with 1.76% of the study area in active mines, declining to 0.44% in 2006. The most extensively mined watershed, Georges Creek in Maryland, was 5.45% active mines in 1976, declining to 1.83% in 2006. For the entire study area, the area of reclaimed mines increased from 1.35% to 4.99% from 1976 to 2006, and from 4.71% to 15.42% in Georges Creek. Land cover conversion to mines and then reclaimed mines after 1976 was almost exclusively from forest. Accuracy levels for mined and reclaimed cover were above 85% for all time periods, and were generally above 80% for mapping active and reclaimed mines separately, especially for the later time periods in which good accuracy assessment data were available. Among other implications, the mapped patterns of LCLUC are likely to significantly affect watershed hydrology, as mined and reclaimed areas have lower infiltration capacity and thus more rapid runoff than unmined forest watersheds, leading to greater potential for extreme flooding during heavy rainfall events.
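A temporal decision tree of this kind can be pictured as rules over a pixel's label trajectory across image dates. The rules below are hypothetical stand-ins for illustration, not the study's actual rule set:

```python
def mine_states(trajectory):
    """Label the transitions in a per-pixel land cover sequence, e.g.
    ['forest', 'barren', 'grass'] across successive image dates."""
    states, mined = [], False
    for cur in trajectory[1:]:
        if cur == 'barren':
            mined = True
            states.append('active mine')        # cover cleared to bare ground
        elif mined and cur in ('grass', 'shrub'):
            states.append('reclaimed mine')     # revegetated after mining
        else:
            states.append(cur)
    return states
```

Combining per-date classifications this way is what lets a single map product distinguish active mines from reclaimed ones, as the study does across 1976-2006.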
Application of an adaptive neuro-fuzzy inference system to ground subsidence hazard mapping
NASA Astrophysics Data System (ADS)
Park, Inhye; Choi, Jaewon; Jin Lee, Moung; Lee, Saro
2012-11-01
We constructed hazard maps of ground subsidence around abandoned underground coal mines (AUCMs) in Samcheok City, Korea, using an adaptive neuro-fuzzy inference system (ANFIS) and a geographical information system (GIS). To evaluate the factors related to ground subsidence, a spatial database was constructed from topographic, geologic, mine tunnel, land use, and ground subsidence maps. An attribute database was also constructed from field investigations and reports on existing ground subsidence areas at the study site. Five major factors causing ground subsidence were extracted: (1) depth of drift; (2) distance from drift; (3) slope gradient; (4) geology; and (5) land use. The adaptive ANFIS model with different types of membership functions (MFs) was then applied for ground subsidence hazard mapping in the study area. Two ground subsidence hazard maps were prepared using the different MFs. Finally, the resulting ground subsidence hazard maps were validated using the ground subsidence test data which were not used for training the ANFIS. The validation results showed 95.12% accuracy using the generalized bell-shaped MF model and 94.94% accuracy using the Sigmoidal2 MF model. These accuracy results show that an ANFIS can be an effective tool in ground subsidence hazard mapping. Analysis of ground subsidence with the ANFIS model suggests that quantitative analysis of ground subsidence near AUCMs is possible.
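The generalized bell MF used by the better-performing model has a simple closed form; a sketch, with parameter names following the usual a, b, c convention:

```python
def gbell_mf(x, a, b, c):
    """Generalized bell membership function: 1 / (1 + |(x - c)/a|^(2b)).
    c is the centre, a the half-width, b controls the steepness of the shoulders."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))
```

Membership is 1 at the centre and exactly 0.5 at c ± a, which makes the parameters easy to interpret when ANFIS tunes them against the subsidence training data.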
Survey methods for assessing land cover map accuracy
Nusser, S.M.; Klaas, E.E.
2003-01-01
The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land cover classes and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.
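Under a stratified design, the overall accuracy is a weighted combination of per-stratum rates. The estimator below is a minimal sketch, not the study's actual estimator, which additionally accounts for the two-stage clustering:

```python
def stratified_accuracy(strata):
    """strata: (weight, n_sampled, n_correct) per stratum; weights sum to 1."""
    acc = sum(w * (c / n) for w, n, c in strata)
    # approximate standard error, ignoring within-cluster correlation
    var = sum(w ** 2 * (c / n) * (1 - c / n) / (n - 1) for w, n, c in strata)
    return acc, var ** 0.5
```

Weighting by stratum size is what keeps rare classes from being drowned out while still producing a design-unbiased overall figure.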
Bourgeau-Chavez, Laura L.; Kowalski, Kurt P.; Carlson Mazur, Martha L.; Scarbrough, Kirk A.; Powell, Richard B.; Brooks, Colin N.; Huberty, Brian; Jenkins, Liza K.; Banda, Elizabeth C.; Galbraith, David M.; Laubach, Zachary M.; Riordan, Kevin
2013-01-01
The invasive variety of Phragmites australis (common reed) forms dense stands that can cause negative impacts on coastal Great Lakes wetlands including habitat degradation and reduced biological diversity. Early treatment is key to controlling Phragmites; therefore, a map of the current distribution is needed. ALOS PALSAR imagery was used to produce the first basin-wide distribution map showing the extent of large, dense invasive Phragmites-dominated habitats in wetlands and other coastal ecosystems along the U.S. shore of the Great Lakes. PALSAR is a satellite imaging radar sensor that is sensitive to differences in plant biomass and inundation patterns, allowing for the detection and delineation of these tall (up to 5 m), high density, high biomass invasive Phragmites stands. Classification was based on multi-season ALOS PALSAR L-band (23 cm wavelength) HH and HV polarization data. Seasonal (spring, summer, and fall) datasets were used to improve discrimination of Phragmites by taking advantage of phenological changes in vegetation and inundation patterns over the seasons. Extensive field collections of training and randomly selected validation data were conducted in 2010–2011 to aid in mapping and for accuracy assessments. Overall basin-wide map accuracy was 87%, with 86% producer's accuracy and 43% user's accuracy for invasive Phragmites. The invasive Phragmites maps are being used to identify major environmental drivers of this invader's distribution, to assess areas vulnerable to new invasion, and to provide information to regional stakeholders through a decision support tool.
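Overall, producer's, and user's accuracies all derive from the same error matrix; a compact sketch of the bookkeeping, where the row/column convention is an assumption of the example:

```python
def map_accuracies(conf):
    """conf[i][j]: pixels mapped as class i whose reference class is j."""
    n = len(conf)
    total = sum(sum(row) for row in conf)
    overall = sum(conf[i][i] for i in range(n)) / total
    users = [conf[i][i] / sum(conf[i]) for i in range(n)]            # rows = map labels
    producers = [conf[i][i] / sum(conf[j][i] for j in range(n))      # columns = reference
                 for i in range(n)]
    return overall, users, producers
```

The combination reported above (86% producer's but 43% user's accuracy for Phragmites) indicates omission errors were rare while commission errors were common: most true stands were found, but many mapped stands were something else.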
Cadastral Positioning Accuracy Improvement: a Case Study in Malaysia
NASA Astrophysics Data System (ADS)
Hashim, N. M.; Omar, A. H.; Omar, K. M.; Abdullah, N. M.; Yatim, M. H. M.
2016-09-01
Cadastral map is a parcel-based information product which is specifically designed to define the limits of boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatial technologies, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. The cadastral modernization will result in the new cadastral database no longer being based on single, static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, this reform has raised unexpected issues that remain essential to address. The main focus of this study is to review the issues that have been generated by this transition. The transformed cadastral database should be additionally treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. The results of this review will serve as a foundation for investigating systematic and effective methods for Positional Accuracy Improvement (PAI) in cadastral database modernization.
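A common first step in PAI is fitting a conformal (four-parameter Helmert) transformation between legacy parcel coordinates and the new satellite-based datum. The least-squares sketch below is illustrative, not DSMM's procedure:

```python
import numpy as np

def fit_helmert_2d(src, dst):
    """Fit dst = scaled-rotation * src + translation via least squares.
    Unknowns: a = s*cos(theta), b = s*sin(theta), tx, ty."""
    A, L = [], []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, -y, 1, 0]); L.append(X)
        A.append([y,  x, 0, 1]); L.append(Y)
    params, *_ = np.linalg.lstsq(np.array(A, float), np.array(L, float), rcond=None)
    return params                                  # a, b, tx, ty

def apply_helmert_2d(p, params):
    a, b, tx, ty = params
    x, y = p
    return a * x - b * y + tx, b * x + a * y + ty
```

Residuals at control points after the fit then indicate which parcels still need local treatment beyond the global transformation.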
Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin
2013-11-13
A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
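The essence of NCC-based fingerprinting is comparing the on-line RSS vector against each reference point's stored vector; a minimal sketch, omitting the FNCC speed-ups and the Kalman/map filter, with fabricated radio-map values:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two RSS vectors (in dBm)."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def locate(online_rss, radio_map):
    """radio_map: reference point -> stored RSS vector; return the best match."""
    return max(radio_map, key=lambda rp: ncc(online_rss, radio_map[rp]))

radio_map = {'RP1': np.array([-40.0, -60.0, -70.0]),
             'RP2': np.array([-70.0, -60.0, -40.0])}
```

Because NCC removes the mean, it is insensitive to a constant RSS offset between devices, which is one reason correlation-based matching can outperform plain nearest-neighbour distance on RSS means.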
Using known map category marginal frequencies to improve estimates of thematic map accuracy
NASA Technical Reports Server (NTRS)
Card, D. H.
1982-01-01
By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.
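Card's idea can be sketched in a few lines: with sampling stratified by map class, cell probabilities are rescaled by the known map marginals before accuracies are computed. Variable names are illustrative:

```python
def card_adjusted(conf, map_props):
    """conf[i][j]: samples mapped as class i with reference class j, sampled
    independently within map classes; map_props[i]: known map proportion of class i."""
    n = len(conf)
    p = [[map_props[i] * conf[i][j] / sum(conf[i]) for j in range(n)]
         for i in range(n)]                     # p_ij = pi_i * n_ij / n_i
    overall = sum(p[i][i] for i in range(n))
    # producer's accuracy uses the estimated reference-class marginals p_.j
    producers = [p[j][j] / sum(p[i][j] for i in range(n)) for j in range(n)]
    return overall, producers
```

Because the estimates are valid for any within-stratum sample sizes, the correction works regardless of how effort was allocated across map classes, which is the point made in the abstract.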
The Documentation of Historic Maps of World Heritage Site City Suzhou
NASA Astrophysics Data System (ADS)
Guangwei, Z.
2013-07-01
Documentation and analysis of historic maps enhance understanding of temporal and spatial interactions between events and the evolution of the physical canals upon which they occurred. The challenge of this work lies in carefully sifting information from maps drawn with only relative accuracy by traditional cartographical principles, before the emergence of scientific survey. This research project focuses on sorting out the evolution of the historic city of Suzhou in a spatio-temporal view. The investigation was conducted through an in-depth analysis of historic maps. Re-projection of the geographical elements of the city onto one single georeference, that is to say a standard base map, helps acquire an actual sense of scale and facilitates recognition of the city's evolution in clear detail. An important contribution of this work is the coordination of variously distorted geographical information from nineteen periods, spanning 1229 to 2013, into a single research resource. Through work both quantitative and qualitative, a clear vision of the evolution and characteristics of the urban structure of ancient Suzhou is achieved. Meanwhile, in the process of projecting the historical geometrical information onto the topographic map, historical bibliographic and cartographic records are key to the data coordination and readjustment; this also encourages cautious use of historical materials from ancient times in recording and documentation work.
Bezrukov, Ilja; Schmidt, Holger; Gatidis, Sergios; Mantlik, Frédéric; Schäfer, Jürgen F; Schwenzer, Nina; Pichler, Bernd J
2015-07-01
Pediatric imaging is regarded as a key application for combined PET/MR imaging systems. Because existing MR-based attenuation-correction methods were not designed specifically for pediatric patients, we assessed the impact of 2 potentially influential factors: inter- and intrapatient variability of attenuation coefficients and anatomic variability. Furthermore, we evaluated the quantification accuracy of 3 methods for MR-based attenuation correction without (SEGbase) and with bone prediction using an adult and a pediatric atlas (SEGwBONEad and SEGwBONEpe, respectively) on PET data of pediatric patients. The variability of attenuation coefficients between and within pediatric (5-17 y, n = 17) and adult (27-66 y, n = 16) patient collectives was assessed on volumes of interest (VOIs) in CT datasets for different tissue types. Anatomic variability was assessed on SEGwBONEad/pe attenuation maps by computing mean differences to CT-based attenuation maps for regions of bone tissue, lungs, and soft tissue. PET quantification was evaluated on VOIs with physiologic uptake and on 80% isocontour VOIs with elevated uptake in the thorax and abdomen/pelvis. Inter- and intrapatient variability of the bias was assessed for each VOI group and method. Statistically significant differences in mean VOI Hounsfield unit values and linear attenuation coefficients between adult and pediatric collectives were found in the lungs and femur. The prediction of attenuation maps using the pediatric atlas showed a reduced error in bone tissue and better delineation of bone structure. Evaluation of PET quantification accuracy showed statistically significant mean errors in mean standardized uptake values of -14% ± 5% and -23% ± 6% in bone marrow and femur-adjacent VOIs with physiologic uptake for SEGbase, which could be reduced to 0% ± 4% and -1% ± 5% using SEGwBONEpe attenuation maps. Bias in soft-tissue VOIs was less than 5% for all methods. 
Lung VOIs showed high SDs in the range of 15% for all methods. For VOIs with elevated uptake, mean and SD were less than 5% except in the thorax. The use of a dedicated atlas for the pediatric patient collective resulted in improved attenuation map prediction in osseous regions and reduced interpatient bias variation in femur-adjacent VOIs. For the lungs, in which intrapatient variation was higher for the pediatric collective, a patient- or group-specific attenuation coefficient might improve attenuation map accuracy. Mean errors of -14% and -23% in bone marrow and femur-adjacent VOIs can affect PET quantification in these regions when bone tissue is ignored. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
A method to correct coordinate distortion in EBSD maps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y.B., E-mail: yubz@dtu.dk; Elbrønd, A.; Lin, F.X.
2014-10-15
Drift during electron backscatter diffraction mapping leads to coordinate distortions in resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method is discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is most efficient to correct different local distortions in the electron backscatter diffraction maps. Highlights: • A new method is suggested to correct nonlinear spatial distortion in EBSD maps. • The method corrects EBSD maps more precisely than presently available methods. • Errors less than 1–2 pixels are typically obtained. • Direct quantitative analysis of dynamic data are available after this correction.
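A thin-plate-spline correction of this kind can be prototyped with SciPy's radial-basis interpolator; the control points below are fabricated for illustration and do not come from the paper:

```python
import numpy as np
from scipy.interpolate import Rbf

# control points: distorted map coordinates and their true reference coordinates
xd = np.array([0.0, 1.0, 0.0, 1.0, 0.5])
yd = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
xt = np.array([0.0, 1.1, 0.0, 1.1, 0.55])      # drift has stretched x by ~10%
yt = np.array([0.0, 0.0, 1.0, 1.0, 0.5])

# one thin-plate spline per output coordinate
fx = Rbf(xd, yd, xt, function='thin_plate')
fy = Rbf(xd, yd, yt, function='thin_plate')

x_corr, y_corr = float(fx(0.5, 0.5)), float(fy(0.5, 0.5))   # correct any map pixel
```

The spline interpolates the control points exactly and varies smoothly between them, which is what lets it absorb different local distortions that a single global affine transform cannot.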
CrowdMapping: A Crowdsourcing-Based Terminology Mapping Method for Medical Data Standardization.
Mao, Huajian; Chi, Chenyang; Huang, Boyu; Meng, Haibin; Yu, Jinghui; Zhao, Dongsheng
2017-01-01
Standardized terminology is the prerequisite of data exchange in analysis of clinical processes. However, data from different electronic health record systems are based on idiosyncratic terminology systems, especially when the data come from different hospitals and healthcare organizations. Terminology standardization is necessary for medical data analysis. We propose a crowdsourcing-based terminology mapping method, CrowdMapping, to standardize the terminology in medical data. CrowdMapping uses a confidential model to determine how terminologies are mapped to a standard system, like ICD-10. The model uses mappings from different health care organizations and evaluates the diversity of the mappings to determine a more sophisticated mapping rule. Further, the CrowdMapping model enables users to rate the mapping result and interact with the model evaluation. CrowdMapping is a work-in-progress system; we present initial results of mapping terminologies.
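The crowd-aggregation step can be pictured as a vote over candidate mappings, with the agreement ratio serving as a confidence score. This is a toy sketch with hypothetical codes, not CrowdMapping's actual model:

```python
from collections import Counter

def aggregate_mappings(candidates):
    """candidates: standard codes proposed by different organizations for one
    local term; returns the winning code and its agreement ratio."""
    code, votes = Counter(candidates).most_common(1)[0]
    return code, votes / len(candidates)
```

A low agreement ratio flags terms whose mapping should be routed back to users for rating, mirroring the interactive evaluation loop described above.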
Speckle interferometry using fiber optic phase stepping
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.; Beheim, Glenn
1989-01-01
A system employing closed-loop phase-stepping is used to measure the out-of-plane deformation of a diffusely reflecting object. Optical fibers are used to provide reference and object beam illumination for a standard two-beam speckle interferometer, providing set-up flexibility and ease of alignment. Piezoelectric fiber-stretchers and a phase-measurement/servo system are used to provide highly accurate phase steps. Intensity data is captured with a charge-injection-device camera, and is converted into a phase map using a desktop computer. The closed-loop phase-stepping system provides 90 deg phase steps which are accurate to 0.02 deg, greatly improving this system relative to open-loop interferometers. The system is demonstrated on a speckle interferometer, measuring the rigid-body translation of a diffusely reflecting object with an accuracy of ±10 deg, or roughly ±15 nanometers. This accuracy is achieved without the use of a pneumatically mounted optics table.
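With four 90 deg steps, the phase at each pixel follows from the standard four-step formula; a per-pixel sketch (the servo control loop itself is not modelled):

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Phase from four frames taken with 0/90/180/270-degree reference steps:
    I_k = A + B*cos(phi + k*90deg)  ->  phi = atan2(i4 - i2, i1 - i3)."""
    return math.atan2(i4 - i2, i1 - i3)
```

The differences cancel the background intensity A and modulation B, so only the step accuracy limits the recovered phase, which is why the 0.02 deg closed-loop steps matter.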
Land use/land cover mapping using multi-scale texture processing of high resolution data
NASA Astrophysics Data System (ADS)
Wong, S. N.; Sarker, M. L. R.
2014-02-01
Land use/land cover (LULC) maps are useful for many purposes, and for a long time remote sensing techniques have been used for LULC mapping using different types of data and image processing techniques. In this research, high resolution satellite data from IKONOS was used to perform land use/land cover mapping in Johor Bahru city and adjacent areas (Malaysia). Spatial image processing was carried out using six texture algorithms (mean, variance, contrast, homogeneity, entropy, and GLDV angular second moment) with five different window sizes (from 3×3 to 11×11). Three different classifiers, i.e. Maximum Likelihood Classifier (MLC), Artificial Neural Network (ANN) and Support Vector Machine (SVM), were used to classify the texture parameters of different spectral bands individually and all bands together using the same training and validation samples. Results indicated that texture parameters of all bands together generally showed better performance (overall accuracy = 90.10%) for LULC mapping, whereas a single spectral band achieved an overall accuracy of only 72.67%. This research also found an improvement of the overall accuracy (OA) using a single-texture multi-scale approach (OA = 89.10%) and a single-scale multi-texture approach (OA = 90.10%) compared with all original bands (OA = 84.02%), because of the complementary information from different bands and different texture algorithms. On the other hand, all three classifiers showed high accuracy when using different texture approaches, but SVM generally showed higher accuracy (90.10%) than MLC (89.10%) and ANN (89.67%), especially for complex classes such as urban and road.
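Texture measures like those used here derive from a grey-level co-occurrence matrix computed in a moving window; a minimal single-offset sketch (quantisation and the full window sweep are omitted):

```python
import numpy as np

def glcm_features(win, levels, dx=1, dy=0):
    """Co-occurrence counts of quantised grey levels at offset (dx, dy),
    with the contrast and homogeneity statistics."""
    glcm = np.zeros((levels, levels))
    h, w = win.shape
    for r in range(h - dy):
        for c in range(w - dx):
            glcm[win[r, c], win[r + dy, c + dx]] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = float(((i - j) ** 2 * glcm).sum())
    homogeneity = float((glcm / (1.0 + (i - j) ** 2)).sum())
    return glcm, contrast, homogeneity
```

Running this at several window sizes and offsets produces the multi-scale texture bands that, stacked with the spectral bands, drive the accuracy gains reported above.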
Can color-coded parametric maps improve dynamic enhancement pattern analysis in MR mammography?
Baltzer, P A; Dietzel, M; Vag, T; Beger, S; Freiberg, C; Herzog, A B; Gajda, M; Camara, O; Kaiser, W A
2010-03-01
Post-contrast enhancement characteristics (PEC) are a major criterion for differential diagnosis in MR mammography (MRM). Manual placement of regions of interest (ROIs) to obtain time/signal intensity curves (TSIC) is the standard approach to assess dynamic enhancement data. Computers can automatically calculate the TSIC in every lesion voxel and combine this data to form one color-coded parametric map (CCPM). Thus, the TSIC of the whole lesion can be assessed. This investigation was conducted to compare the diagnostic accuracy (DA) of CCPM with TSIC for the assessment of PEC. 329 consecutive patients with 469 histologically verified lesions were examined. MRM was performed according to a standard protocol (1.5 T, 0.1 mmol/kgbw Gd-DTPA). ROIs were drawn manually within any lesion to calculate the TSIC. CCPMs were created in all patients using dedicated software (CAD Sciences). Both methods were rated by 2 observers in consensus on an ordinal scale. Receiver operating characteristics (ROC) analysis was used to compare both methods. The area under the curve (AUC) was significantly (p=0.026) higher for CCPM (0.829) than TSIC (0.749). The sensitivity was 88.5% (CCPM) vs. 82.8% (TSIC), whereas equal specificity levels were found (CCPM: 63.7%, TSIC: 63.0%). The color-coded parametric maps (CCPMs) showed a significantly higher DA compared to TSIC, in particular the sensitivity could be increased. Therefore, the CCPM method is a feasible approach to assessing dynamic data in MRM and condenses several imaging series into one parametric map. © Georg Thieme Verlag KG Stuttgart · New York.
Geospatial interpolation and mapping of tropospheric ozone pollution using geostatistics.
Kethireddy, Swatantra R; Tchounwou, Paul B; Ahmad, Hafiz A; Yerramilli, Anjaneyulu; Young, John H
2014-01-10
Tropospheric ozone (O3) pollution is a major problem worldwide, including in the United States of America (USA), particularly during the summer months. Ozone oxidative capacity and its impact on human health have attracted the attention of the scientific community. In the USA, sparse spatial observations for O3 may not provide a reliable source of data over a geo-environmental region. Geostatistical Analyst in ArcGIS has the capability to interpolate values in unmonitored geo-spaces of interest. In this study of eastern Texas O3 pollution, hourly episodes for spring and summer 2012 were selectively identified. To visualize the O3 distribution, geostatistical techniques were employed in ArcMap. Using ordinary Kriging, geostatistical layers of O3 for all the studied hours were predicted and mapped at a spatial resolution of 1 kilometer. A decent level of prediction accuracy was achieved and was confirmed from cross-validation results. The mean prediction error was close to 0, the root mean-standardized-prediction error was close to 1, and the root mean square and average standard errors were small. O3 pollution map data can be further used in analysis and modeling studies. Kriging results and O3 decadal trends indicate that the populace in Houston-Sugar Land-Baytown, Dallas-Fort Worth-Arlington, Beaumont-Port Arthur, San Antonio, and Longview are repeatedly exposed to high levels of O3-related pollution, and are prone to the corresponding respiratory and cardiovascular health effects. Optimization of the monitoring network proves to be an added advantage for the accurate prediction of exposure levels.
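Ordinary kriging, the interpolation method used in the study above, can be sketched compactly. The exponential semivariogram and the toy monitor readings below are assumptions for illustration, not the study's fitted model:

```python
import numpy as np

def semivariogram(h, sill=1.0, a=2.0):
    # exponential model (assumed for illustration; no nugget)
    return sill * (1.0 - np.exp(-h / a))

def ordinary_kriging(xy, z, x0):
    """Ordinary kriging prediction at x0 from samples (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = semivariogram(d)
    A[n, :n] = A[:n, n] = 1.0           # unbiasedness constraint
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = semivariogram(np.linalg.norm(xy - x0, axis=-1))
    b[n] = 1.0
    w = np.linalg.solve(A, b)[:n]       # kriging weights (sum to 1)
    return float(w @ z)

# four hypothetical O3 monitors (locations in km, readings in ppb)
xy = np.array([[0.0, 0.0], [0.0, 4.0], [4.0, 0.0], [4.0, 4.0]])
z = np.array([40.0, 55.0, 50.0, 60.0])
z_center = ordinary_kriging(xy, z, np.array([2.0, 2.0]))   # interpolated value
z_at_site = ordinary_kriging(xy, z, xy[0])                 # exact at a sample
```

With no nugget effect, kriging is an exact interpolator, so the prediction at a monitor location reproduces that monitor's reading; predictions elsewhere are weighted averages of nearby samples.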
Poynton, Clare B; Chen, Kevin T; Chonde, Daniel B; Izquierdo-Garcia, David; Gollub, Randy L; Gerstner, Elizabeth R; Batchelor, Tracy T; Catana, Ciprian
2014-01-01
We present a new MRI-based attenuation correction (AC) approach for integrated PET/MRI systems that combines both segmentation- and atlas-based methods by incorporating dual-echo ultra-short echo-time (DUTE) and T1-weighted (T1w) MRI data and a probabilistic atlas. Segmented atlases were constructed from CT training data using a leave-one-out framework and combined with T1w, DUTE, and CT data to train a classifier that computes the probability of air/soft tissue/bone at each voxel. This classifier was applied to segment the MRI of the subject of interest and attenuation maps (μ-maps) were generated by assigning specific linear attenuation coefficients (LACs) to each tissue class. The μ-maps generated with this "Atlas-T1w-DUTE" approach were compared to those obtained from DUTE data using a previously proposed method. For validation of the segmentation results, segmented CT μ-maps were considered the "silver standard"; the segmentation accuracy was assessed qualitatively and quantitatively through calculation of the Dice similarity coefficient (DSC). Relative change (RC) maps between the CT and MRI-based attenuation corrected PET volumes were also calculated for a global voxel-wise assessment of the reconstruction results. The μ-maps obtained using the Atlas-T1w-DUTE classifier agreed well with those derived from CT; the mean DSCs for the Atlas-T1w-DUTE-based μ-maps across all subjects were higher than those for DUTE-based μ-maps; the atlas-based μ-maps also showed a lower percentage of misclassified voxels across all subjects. RC maps from the atlas-based technique also demonstrated improvement in the PET data compared to the DUTE method, both globally and regionally.
Forest and range mapping in the Houston area with ERTS-1
NASA Technical Reports Server (NTRS)
Heath, G. R.; Parker, H. D.
1973-01-01
ERTS-1 data acquired over the Houston area has been analyzed for applications to forest and range mapping. In the field of forestry the Sam Houston National Forest (Texas) was chosen as a test site, (Scene ID 1037-16244). Conventional imagery interpretation as well as computer processing methods were used to make classification maps of timber species, condition and land-use. The results were compared with timber stand maps which were obtained from aircraft imagery and checked in the field. The preliminary investigations show that conventional interpretation techniques indicated an accuracy in classification of 63 percent. The computer-aided interpretations made by a clustering technique gave 70 percent accuracy. Computer-aided and conventional multispectral analysis techniques were applied to range vegetation type mapping in the gulf coast marsh. Two species of salt marsh grasses were mapped.
NASA Astrophysics Data System (ADS)
Adelabu, Samuel; Mutanga, Onisimo; Adam, Elhadi; Cho, Moses Azong
2013-01-01
Classification of different tree species in semiarid areas can be challenging as a result of changes in leaf structure and orientation due to soil moisture constraints. Tree species mapping is, however, a key parameter for forest management in semiarid environments. In this study, we examined the suitability of 5-band RapidEye satellite data for the classification of five tree species in mopane woodland of Botswana using machine learning algorithms with limited training samples. We performed classification using random forest (RF) and support vector machines (SVM) based on EnMap box. The overall accuracies for classifying the five tree species were 88.75% and 85% for SVM and RF, respectively. We also demonstrated that the new red-edge band in the RapidEye sensor has potential for classifying tree species in semiarid environments when integrated with other standard bands. Similarly, we observed that where there are limited training samples, SVM is preferred over RF. Finally, we demonstrated that the two accuracy measures of quantity and allocation disagreement are simpler and more helpful for the vast majority of remote sensing classification processes than the kappa coefficient. Overall, high species classification accuracy can be achieved using strategically located RapidEye bands integrated with advanced processing algorithms.
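The quantity and allocation disagreement measures favored over the kappa coefficient can be computed directly from a confusion matrix. A minimal sketch following the Pontius-and-Millones-style definitions (the confusion matrix itself is invented):

```python
import numpy as np

def quantity_allocation_disagreement(cm):
    """Quantity and allocation disagreement from a confusion
    matrix cm[true, predicted] (Pontius & Millones style)."""
    p = cm / cm.sum()                 # cell proportions
    row = p.sum(axis=1)               # reference class totals
    col = p.sum(axis=0)               # mapped class totals
    diag = np.diag(p)
    quantity = 0.5 * np.abs(row - col).sum()
    alloc = 0.5 * (2.0 * np.minimum(row - diag, col - diag)).sum()
    return quantity, alloc

# hypothetical 3-class confusion matrix (rows = reference, cols = map)
cm = np.array([[30,  5,  5],
               [ 5, 35,  0],
               [ 0,  5, 15]])
q, a = quantity_allocation_disagreement(cm)
total_disagreement = 1.0 - np.trace(cm) / cm.sum()   # q + a equals this
```

The two components always sum to the total disagreement (one minus overall accuracy), which is what makes them easy to interpret: quantity disagreement is mismatch in class proportions, allocation disagreement is mismatch in where classes are placed.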
A comparative study on different methods of automatic mesh generation of human femurs.
Viceconti, M; Bellingeri, L; Cristofolini, L; Toni, A
1998-01-01
The aim of this study was to evaluate comparatively five methods for automating mesh generation (AMG) when used to mesh a human femur. The five AMG methods considered were: mapped mesh, which provides hexahedral elements through a direct mapping of the element onto the geometry; tetra mesh, which generates tetrahedral elements from a solid model of the object geometry; voxel mesh, which builds cubic 8-node elements directly from CT images; and hexa mesh, which automatically generates hexahedral elements from a surface definition of the femur geometry. The various methods were tested against two reference models: a simplified geometric model and a proximal femur model. The first model was useful to assess the inherent accuracy of the meshes created by the AMG methods, since an analytical solution was available for the elastic problem of the simplified geometric model. The femur model was used to test the AMG methods in a more realistic condition. The femoral geometry was derived from a reference model (the "standardized femur") and the finite element analyses predictions were compared to experimental measurements. All methods were evaluated in terms of human and computer effort needed to carry out the complete analysis, and in terms of accuracy. The comparison demonstrated that each tested method deserves attention and may be the best for specific situations. The mapped AMG method requires a significant human effort but is very accurate and it allows a tight control of the mesh structure. The tetra AMG method requires a solid model of the object to be analysed but is widely available and accurate. The hexa AMG method requires a significant computer effort but can also be used on polygonal models and is very accurate. The voxel AMG method requires a huge number of elements to reach an accuracy comparable to that of the other methods, but it does not require any pre-processing of the CT dataset to extract the geometry and in some cases may be the only viable solution.
High-Resolution airborne color video data were used to evaluate the accuracy of a land cover map of the upper San Pedro River watershed, derived from June 1997 Landsat Thematic Mapper data. The land cover map was interpreted and generated by Instituto del Medio Ambiente y el Bes...
Thomas, Cibu; Ye, Frank Q; Irfanoglu, M Okan; Modi, Pooja; Saleem, Kadharbatcha S; Leopold, David A; Pierpaoli, Carlo
2014-11-18
Tractography based on diffusion-weighted MRI (DWI) is widely used for mapping the structural connections of the human brain. Its accuracy is known to be limited by technical factors affecting in vivo data acquisition, such as noise, artifacts, and data undersampling resulting from scan time constraints. It generally is assumed that improvements in data quality and implementation of sophisticated tractography methods will lead to increasingly accurate maps of human anatomical connections. However, assessing the anatomical accuracy of DWI tractography is difficult because of the lack of independent knowledge of the true anatomical connections in humans. Here we investigate the future prospects of DWI-based connectional imaging by applying advanced tractography methods to an ex vivo DWI dataset of the macaque brain. The results of different tractography methods were compared with maps of known axonal projections from previous tracer studies in the macaque. Despite the exceptional quality of the DWI data, none of the methods demonstrated high anatomical accuracy. The methods that showed the highest sensitivity showed the lowest specificity, and vice versa. Additionally, anatomical accuracy was highly dependent upon parameters of the tractography algorithm, with different optimal values for mapping different pathways. These results suggest that there is an inherent limitation in determining long-range anatomical projections based on voxel-averaged estimates of local fiber orientation obtained from DWI data that is unlikely to be overcome by improvements in data acquisition and analysis alone.
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
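The core idea, using PSO to search the weight space that BP would otherwise reach by gradient descent from a random initialization, can be sketched without Hadoop. Below, a minimal single-machine global-best PSO optimizes a tiny 2-2-1 network on XOR; all hyperparameters and the toy task are illustrative, not the paper's MapReduce configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# tiny 2-2-1 network; PSO searches its 9-dimensional weight vector
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])                       # XOR targets

def loss(v):
    W1, b1 = v[:4].reshape(2, 2), v[4:6]
    W2, b2 = v[6:8], v[8]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))       # sigmoid output
    return float(np.mean((out - y) ** 2))

# standard global-best PSO (inertia + cognitive + social terms)
n_particles, dim, iters = 30, 9, 300
pos = rng.uniform(-2, 2, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.72 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

final_loss = loss(gbest)
```

In the parallel design described above, the per-particle loss evaluations are the part that MapReduce distributes across the cluster; the swarm update itself is cheap.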
Zytoon, Mohamed A
2016-05-13
As traffic and other environmental noise-generating activities grow in the Kingdom of Saudi Arabia (KSA), adverse health and other impacts are expected to develop. Managing this problem involves many actions, of which noise mapping has proven to be a helpful approach. The objective of the current study was to test the adequacy of the available data in KSA municipalities for generating urban noise maps and to verify the applicability of available environmental noise mapping and noise annoyance models for KSA. Therefore, noise maps were produced for Al-Fayha District in Jeddah City, KSA using commercially available noise mapping software and applying the French national computation method "NMPB" for traffic noise. Most of the data required for traffic noise prediction and annoyance analysis were available, either in the Municipality GIS department or in other governmental authorities. The predicted noise levels during the three time periods, i.e., daytime, evening, and nighttime, were found to be higher than the maximum recommended levels established in KSA environmental noise standards. Annoyance analysis revealed that high percentages of the District inhabitants were highly annoyed, depending on the type of planning zone and period of interest. These results reflect the urgent need to consider environmental noise reduction in KSA national plans. The accuracy of the predicted noise levels and the availability of most of the necessary data should encourage further studies on the use of noise mapping as part of noise reduction plans.
NASA Astrophysics Data System (ADS)
Lee, Donghoon; Kim, Ye-seul; Choi, Sunghoon; Lee, Haenghwa; Choi, Seungyeon; Kim, Hee-Joung
2016-03-01
Breast cancer is one of the most common malignancies in women. For years, mammography has been used as the gold standard for localizing breast cancer, despite its limitation in determining cancer composition. Therefore, the purpose of this simulation study is to confirm the feasibility of obtaining tumor composition using dual energy digital mammography. To generate X-ray sources for dual energy mammography, 26 kVp and 39 kVp voltages were generated for low and high energy beams, respectively. Additionally, the energy subtraction and inverse mapping functions were applied to provide compositional images. The resultant images showed that the breast composition obtained by the inverse mapping function with cubic fitting achieved the highest accuracy and the least noise. Furthermore, breast density analysis with cubic fitting showed less than 10% error compared to true values. In conclusion, this study demonstrated the feasibility of creating individual compositional images and the capability of analyzing breast density effectively.
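A cubic inverse mapping from detector signal back to composition can be sketched as a polynomial fit. The exponential forward model below is an invented stand-in for real dual-energy calibration measurements, not the study's beam physics:

```python
import numpy as np

# assumed forward model: detector signal as a smooth nonlinear function
# of glandular fraction g (stand-in for phantom calibration data)
g_cal = np.linspace(0.0, 1.0, 20)            # known phantom compositions
signal = np.exp(-(0.8 + 0.6 * g_cal))        # simulated noiseless signals

# cubic inverse mapping: signal -> glandular fraction
coef = np.polyfit(signal, g_cal, deg=3)
g_est = np.polyval(coef, signal)
max_err = float(np.abs(g_est - g_cal).max()) # fit error on calibration points

# apply the mapping to a new measurement (true composition 0.37)
g_new = float(np.polyval(coef, np.exp(-(0.8 + 0.6 * 0.37))))
```

On a smooth, monotonic signal-composition relationship like this, a cubic inverse fit recovers composition to well under the 10% error level quoted above; real data would add noise and a second energy channel.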
The current and ideal state of anatomic pathology patient safety.
Raab, Stephen Spencer
2014-01-01
An anatomic pathology diagnostic error may be secondary to a number of active and latent technical and/or cognitive components, which may occur anywhere along the total testing process in clinical and/or laboratory domains. For the pathologist interpretive steps of diagnosis, we examine Kahneman's framework of slow and fast thinking to explain different causes of error in precision (agreement) and in accuracy (truth). The pathologist cognitive diagnostic process involves image pattern recognition and a slow thinking error may be caused by the application of different rationally-constructed mental maps of image criteria/patterns by different pathologists. This type of error is partly related to a system failure in standardizing the application of these maps. A fast thinking error involves the flawed leap from image pattern to incorrect diagnosis. In the ideal state, anatomic pathology systems would target these cognitive error causes as well as the technical latent factors that lead to error.
Estes, Lyndon; Chen, Peng; Debats, Stephanie; Evans, Tom; Ferreira, Stefanus; Kuemmerle, Tobias; Ragazzo, Gabrielle; Sheffield, Justin; Wolf, Adam; Wood, Eric; Caylor, Kelly
2018-01-01
Land cover maps increasingly underlie research into socioeconomic and environmental patterns and processes, including global change. It is known that map errors impact our understanding of these phenomena, but quantifying these impacts is difficult because many areas lack adequate reference data. We used a highly accurate, high-resolution map of South African cropland to assess (1) the magnitude of error in several current generation land cover maps, and (2) how these errors propagate in downstream studies. We first quantified pixel-wise errors in the cropland classes of four widely used land cover maps at resolutions ranging from 1 to 100 km, and then calculated errors in several representative "downstream" (map-based) analyses, including assessments of vegetative carbon stocks, evapotranspiration, crop production, and household food security. We also evaluated maps' spatial accuracy based on how precisely they could be used to locate specific landscape features. We found that cropland maps can have substantial biases and poor accuracy at all resolutions (e.g., at 1 km resolution, up to ∼45% underestimates of cropland (bias) and nearly 50% mean absolute error (MAE, describing accuracy); at 100 km, up to 15% underestimates and nearly 20% MAE). National-scale maps derived from higher-resolution imagery were most accurate, followed by multi-map fusion products. Constraining mapped values to match survey statistics may be effective at minimizing bias (provided the statistics are accurate). Errors in downstream analyses could be substantially amplified or muted, depending on the values ascribed to cropland-adjacent covers (e.g., with forest as adjacent cover, carbon map error was 200%-500% greater than in input cropland maps, but ∼40% less for sparse cover types). The average locational error was 6 km (600%). 
These findings provide deeper insight into the causes and potential consequences of land cover map error, and suggest several recommendations for land cover map users. © 2017 John Wiley & Sons Ltd.
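The bias and MAE measures used in the assessment above can be reproduced in miniature: aggregate binary cropland maps to coarse-resolution fractions, then compare. The maps below are synthetic; a candidate map that misses half of the true cropland pixels shows the expected negative bias:

```python
import numpy as np

def bias_and_mae(reference, candidate, block=10):
    """Aggregate two binary cropland maps to coarse cropland fractions,
    then return bias (mean error) and mean absolute error, in percent."""
    h, w = reference.shape
    rf = reference[:h // block * block, :w // block * block]
    cf = candidate[:h // block * block, :w // block * block]
    shape = (rf.shape[0] // block, block, rf.shape[1] // block, block)
    ref_frac = rf.reshape(shape).mean(axis=(1, 3))    # coarse-cell fractions
    cand_frac = cf.reshape(shape).mean(axis=(1, 3))
    err = cand_frac - ref_frac
    return 100.0 * err.mean(), 100.0 * np.abs(err).mean()

rng = np.random.default_rng(2)
truth = (rng.random((100, 100)) < 0.3).astype(float)      # ~30% cropland
candidate = truth * (rng.random((100, 100)) < 0.5)        # misses half of it
bias, mae = bias_and_mae(truth, candidate)
```

By construction MAE is always at least as large as the absolute bias, which is why the paper reports both: a map can have near-zero bias (omissions and commissions cancel) while still placing cropland in the wrong cells.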
A Servicewide Benthic Mapping Program for National Parks
Moses, Christopher S.; Nayegandhi, Amar; Beavers, Rebecca; Brock, John
2010-01-01
In 2007, the National Park Service (NPS) Inventory and Monitoring Program directed the initiation of a benthic habitat mapping program in ocean and coastal parks in alignment with the NPS Ocean Park Stewardship 2007-2008 Action Plan. With 74 ocean and Great Lakes parks stretching over more than 5,000 miles of coastline across 26 States and territories, this Servicewide Benthic Mapping Program (SBMP) is essential. This program will deliver benthic habitat maps and their associated inventory reports to NPS managers in a consistent, servicewide format to support informed management and protection of 3 million acres of submerged National Park System natural and cultural resources. The NPS and the U.S. Geological Survey (USGS) convened a workshop June 3-5, 2008, in Lakewood, Colo., to discuss the goals and develop the design of the NPS SBMP with an assembly of experts (Moses and others, 2010) who identified park needs and suggested best practices for inventory and mapping of bathymetry, benthic cover, geology, geomorphology, and some water-column properties. The recommended SBMP protocols include servicewide standards (such as gap analysis, minimum accuracy, final products) as well as standards that can be adapted to fit network and park unit needs (for example, minimum mapping unit, mapping priorities). SBMP Mapping Process. The SBMP calls for a multi-step mapping process for each park, beginning with a gap assessment and data mining to determine data resources and needs. An interagency announcement of intent to acquire new data will provide opportunities to leverage partnerships. Prior to new data acquisition, all involved parties should be included in a scoping meeting held at network scale. Data collection will be followed by processing and interpretation, and finally expert review and publication. After publication, all digital materials will be archived in a common format. SBMP Classification Scheme. 
The SBMP will map using the Coastal and Marine Ecological Classification Standard (CMECS) that is being modified to include all NPS needs, such as lacustrine ecosystems and submerged cultural resources. CMECS Version III (Madden and others, 2010) includes components for water column, biotic cover, surface geology, sub-benthic, and geoform. SBMP Data Archiving. The SBMP calls for the storage of all raw data and final products in common-use data formats. The concept of 'collect once, use often' is essential to efficient use of mapping resources. Data should also be shared with other agencies and the public through various digital clearing houses, such as Geospatial One-Stop (http://gos2.geodata.gov/wps/portal/gos). To be most useful for managing submerged resources, the SBMP advocates the inventory and mapping of the five components of marine ecosystems: surface geology, biotic cover, geoform, sub-benthic, and water column. A complete benthic inventory of a park would include maps of bathymetry and the five components of CMECS. The completion of mapping for any set of components, such as bathymetry and surface geology, or a particular theme (for example, submerged aquatic vegetation) should also include a printed report.
Testing of the Apollo 15 Metric Camera System.
NASA Technical Reports Server (NTRS)
Helmering, R. J.; Alspaugh, D. H.
1972-01-01
Description of tests conducted (1) to assess the quality of Apollo 15 Metric Camera System data and (2) to develop production procedures for total block reduction. Three strips of metric photography over the Hadley Rille area were selected for the tests. These photographs were utilized in a series of evaluation tests culminating in an orbitally constrained block triangulation solution. Results show that film deformations up to 25 and 5 microns are present in the mapping and stellar materials, respectively. Stellar reductions can provide mapping camera orientations with an accuracy that is consistent with the accuracies of other parameters in the triangulation solutions. Pointing accuracies of 4 to 10 microns can be expected for the mapping camera materials, depending on variations in resolution caused by changing sun angle conditions.
Improving Precision, Maintaining Accuracy, and Reducing Acquisition Time for Trace Elements in EPMA
NASA Astrophysics Data System (ADS)
Donovan, J.; Singer, J.; Armstrong, J. T.
2016-12-01
Trace element precision in electron probe micro analysis (EPMA) is limited by intrinsic random variation in the x-ray continuum. Traditionally we characterize background intensity by measuring on either side of the emission line and interpolating the intensity underneath the peak to obtain the net intensity. Alternatively, we can measure the background intensity at the on-peak spectrometer position using a number of standard materials that do not contain the element of interest. This so-called mean atomic number (MAN) background calibration (Donovan, et al., 2016) uses a set of standard measurements, covering an appropriate range of average atomic number, to iteratively estimate the continuum intensity for the unknown composition (and hence average atomic number). We will demonstrate that, at least for materials with a relatively simple matrix such as SiO2, TiO2, ZrSiO4, etc. where one may obtain a matrix matched standard for use in the so-called "blank correction", we can obtain trace element accuracy comparable to traditional off-peak methods, and with improved precision, in about half the time. Donovan, Singer and Armstrong, "A New EPMA Method for Fast Trace Element Analysis in Simple Matrices", American Mineralogist, v101, p1839-1853, 2016. Figure 1: Uranium concentration line profiles from quantitative x-ray maps (20 keV, 100 nA, 5 um beam size and 4000 msec per pixel), for both off-peak and MAN background methods without (a), and with (b), the blank correction applied. We see precision significantly improved compared with traditional off-peak measurements while, in this case, the blank correction provides a small but discernible improvement in accuracy.
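The MAN idea, predicting the on-peak continuum intensity of an unknown from the mean atomic number (Z-bar) of analyte-free standards, reduces to a low-order regression. All calibration numbers below are invented for illustration:

```python
import numpy as np

# assumed calibration: on-peak continuum intensity measured on standards
# that do not contain the analyte, versus their mean atomic number (Z-bar)
zbar_std = np.array([10.8, 11.4, 12.5, 14.1, 16.9, 20.2])
bkg_std = np.array([3.1, 3.4, 3.9, 4.7, 6.2, 8.3])   # counts/s/nA (made up)

# MAN calibration curve: low-order polynomial of background vs Z-bar
coef = np.polyfit(zbar_std, bkg_std, deg=2)

# predict the continuum under the peak for an unknown with Z-bar = 13.0,
# then subtract it from the measured on-peak intensity (assumed 5.0)
bkg_unknown = float(np.polyval(coef, 13.0))
net = 5.0 - bkg_unknown
```

Because the background is regressed from many standards rather than measured off-peak on each unknown, half of the counting time goes to the peak, which is the source of the precision and speed gains reported above.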
Spatial predictive mapping using artificial neural networks
NASA Astrophysics Data System (ADS)
Noack, S.; Knobloch, A.; Etzold, S. H.; Barth, A.; Kallmeier, E.
2014-11-01
The modelling or prediction of complex geospatial phenomena (like the formation of geo-hazards) is one of the most important tasks for geoscientists. But in practice it faces various difficulties, caused mainly by the complexity of relationships between the phenomena themselves and the controlling parameters, as well as by limitations of our knowledge about the nature of the physical/mathematical relationships, and by restrictions regarding accuracy and availability of data. In this situation, methods of artificial intelligence, like artificial neural networks (ANN), offer a meaningful alternative modelling approach compared to exact mathematical modelling. In the past, the application of ANN technologies in geosciences was limited primarily by difficulties in integrating it into geo-data processing algorithms. Against this background, the software advangeo® was developed to provide a normal GIS user with a powerful tool for using ANNs for prediction mapping and data preparation within his standard ESRI ArcGIS environment. In many case studies, such as land use planning, geo-hazards analysis and prevention, mineral potential mapping, and agriculture & forestry, advangeo® has shown its capabilities and strengths. The approach is able to add considerable value to existing data.
Edwards, Stefan M.; Sørensen, Izel F.; Sarup, Pernille; Mackay, Trudy F. C.; Sørensen, Peter
2016-01-01
Predicting individual quantitative trait phenotypes from high-resolution genomic polymorphism data is important for personalized medicine in humans, plant and animal breeding, and adaptive evolution. However, this is difficult for populations of unrelated individuals when the number of causal variants is low relative to the total number of polymorphisms and causal variants individually have small effects on the traits. We hypothesized that mapping molecular polymorphisms to genomic features such as genes and their gene ontology categories could increase the accuracy of genomic prediction models. We developed a genomic feature best linear unbiased prediction (GFBLUP) model that implements this strategy and applied it to three quantitative traits (startle response, starvation resistance, and chill coma recovery) in the unrelated, sequenced inbred lines of the Drosophila melanogaster Genetic Reference Panel. Our results indicate that subsetting markers based on genomic features increases the predictive ability relative to the standard genomic best linear unbiased prediction (GBLUP) model. Both models use all markers, but GFBLUP allows differential weighting of the individual genetic marker relationships, whereas GBLUP weighs the genetic marker relationships equally. Simulation studies show that it is possible to further increase the accuracy of genomic prediction for complex traits using this model, provided the genomic features are enriched for causal variants. Our GFBLUP model using prior information on genomic features enriched for causal variants can increase the accuracy of genomic predictions in populations of unrelated individuals and provides a formal statistical framework for leveraging and evaluating information across multiple experimental studies to provide novel insights into the genetic architecture of complex traits. PMID:27235308
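The difference between GBLUP's equal marker weighting and GFBLUP's feature-based weighting can be sketched as weighted ridge regression on simulated genotypes. Everything below (marker counts, effect sizes, the feature set, the shrinkage parameter) is an assumption for illustration, not the DGRP analysis:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 200, 500
G = rng.integers(0, 3, (n, m)).astype(float)      # toy genotypes (0/1/2)
G = (G - G.mean(0)) / (G.std(0) + 1e-9)           # standardize markers

causal = np.arange(20)                            # "genomic feature" marker set
beta = np.zeros(m)
beta[causal] = rng.normal(0.0, 1.0, causal.size)  # causal effects
y = G @ beta + rng.normal(0.0, 3.0, n)            # phenotype = signal + noise

def predict_accuracy(weights, lam=50.0):
    """Weighted ridge regression (GBLUP-equivalent); returns the
    correlation between predicted and observed phenotypes on a held-out set."""
    W = G * np.sqrt(weights)                      # per-marker weighting
    tr, te = slice(0, 150), slice(150, None)
    b_hat = np.linalg.solve(W[tr].T @ W[tr] + lam * np.eye(m), W[tr].T @ y[tr])
    return float(np.corrcoef(W[te] @ b_hat, y[te])[0, 1])

acc_gblup = predict_accuracy(np.ones(m))          # equal marker weights
w = np.full(m, 0.1)
w[causal] = 10.0                                  # upweight the feature set
acc_gfblup = predict_accuracy(w)
```

When the feature set really is enriched for causal variants, as simulated here, the differential weighting concentrates the prior on the right markers and the prediction accuracy rises, mirroring the paper's conclusion.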
Eppenhof, Koen A J; Pluim, Josien P W
2018-04-01
Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
A Synergy Cropland of China by Fusing Multiple Existing Maps and Statistics.
Lu, Miao; Wu, Wenbin; You, Liangzhi; Chen, Di; Zhang, Li; Yang, Peng; Tang, Huajun
2017-07-12
Accurate information on cropland extent is critical for scientific research and resource management. Several cropland products from remotely sensed datasets are available. Nevertheless, significant inconsistency exists among these products and the cropland areas estimated from these products differ considerably from statistics. In this study, we propose a hierarchical optimization synergy approach (HOSA) to develop a hybrid cropland map of China, circa 2010, by fusing five existing cropland products, i.e., GlobeLand30, Climate Change Initiative Land Cover (CCI-LC), GlobCover 2009, MODIS Collection 5 (MODIS C5), and MODIS Cropland, and sub-national statistics of cropland area. HOSA simplifies the widely used method of score assignment into two steps, including determination of optimal agreement level and identification of the best product combination. The accuracy assessment indicates that the synergy map has higher accuracy of spatial locations and better consistency with statistics than the five existing datasets individually. This suggests that the synergy approach can improve the accuracy of cropland mapping and enhance consistency with statistics.
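A simplified HOSA-style fusion rule, choosing the agreement level whose mapped cropland area best matches the statistic, can be sketched as follows. The five input maps here are synthetic perturbations of a known truth, not the real products named above:

```python
import numpy as np

def synergy_map(maps, target_area):
    """Vote-count fusion: pick the smallest-gap agreement level whose
    cropland area is closest to the statistical target area (in pixels)."""
    score = np.sum(maps, axis=0)                 # 0..n_maps votes per pixel
    best_level, best_gap = None, np.inf
    for level in range(1, len(maps) + 1):
        area = float((score >= level).sum())
        gap = abs(area - target_area)
        if gap < best_gap:
            best_level, best_gap = level, gap
    return (score >= best_level).astype(int), best_level

rng = np.random.default_rng(4)
truth = rng.random((50, 50)) < 0.4               # ~40% cropland
# five imperfect products: truth with independent 10% pixel noise each
maps = np.array([(truth ^ (rng.random((50, 50)) < 0.1)).astype(int)
                 for _ in range(5)])
fused, level = synergy_map(maps, target_area=float(truth.sum()))
agreement_with_truth = (fused == truth.astype(int)).mean()
```

Calibrating the vote threshold against an area statistic is the simplest version of the two-step idea described above; the full HOSA additionally identifies the best product combination per region.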
Sanchez, Richard D.; Hothem, Larry D.
2002-01-01
High-resolution airborne and satellite image sensor systems integrated with onboard data collection based on the Global Positioning System (GPS) and inertial navigation systems (INS) may offer a quick and cost-effective way to gather accurate topographic map information without ground control or aerial triangulation. The Applanix Corporation's Position and Orientation Solutions for Direct Georeferencing of aerial photography was used in this project to examine the positional accuracy of integrated GPS/INS for terrain mapping in Glen Canyon, Arizona. The research application in this study yielded important information on the usefulness and limits of airborne integrated GPS/INS data-capture systems for mapping.
Sea ice motion measurements from Seasat SAR images
NASA Technical Reports Server (NTRS)
Leberl, F.; Raggam, J.; Elachi, C.; Campbell, W. J.
1983-01-01
Data from the Seasat synthetic aperture radar (SAR) experiment are analyzed in order to determine the accuracy of this information for mapping the distribution of sea ice and its motion. Data from observations of sea ice in the Beaufort Sea from seven sequential orbits of the satellite were selected to study the capabilities and limitations of spaceborne radar application to sea-ice mapping. Results show that there is no difficulty in identifying homologous ice features on sequential radar images, and the accuracy is entirely controlled by the accuracy of the orbit data and the geometric calibration of the sensor. Conventional radargrammetric methods are found to serve well for satellite radar ice mapping, while ground control points can be used to calibrate the ice location and motion measurements in cases where orbit data and sensor calibration are lacking. The ice motion was determined to be approximately 6.4 ± 0.5 km/day. In addition, pixel location accuracy was assessed over land areas. The use of one control point per 10,000 sq km produced an accuracy of about ±150 m, while with a higher density of control points (7 per 1000 sq km) the location accuracy improves to the image resolution of ±25 m. This is found to be applicable to both optical and digital data.
Harnish, Roy; Prevrhal, Sven; Alavi, Abass; Zaidi, Habib; Lang, Thomas F
2014-07-01
To determine if metal artefact reduction (MAR) combined with a priori knowledge of prosthesis material composition can be applied to obtain CT-based attenuation maps with sufficient accuracy for quantitative assessment of (18)F-fluorodeoxyglucose uptake in lesions near metallic prostheses. A custom hip prosthesis phantom with a lesion-sized cavity filled with 0.2 ml (18)F-FDG solution having an activity of 3.367 MBq adjacent to a prosthesis bore was imaged twice with a chrome-cobalt steel hip prosthesis and a plastic replica, respectively. Scanning was performed on a clinical hybrid PET/CT system equipped with an additional external (137)Cs transmission source. PET emission images were reconstructed from both phantom configurations with CT-based attenuation correction (CTAC) and with CT-based attenuation correction using MAR (MARCTAC). To compare results with the attenuation-correction method extant prior to the advent of PET/CT, we also carried out attenuation correction with (137)Cs transmission-based attenuation correction (TXAC). CTAC and MARCTAC images were scaled to attenuation coefficients at 511 keV using a trilinear function that mapped the highest CT values to the prosthesis alloy attenuation coefficient. Accuracy and spatial distribution of the lesion activity was compared between the three reconstruction schemes. Compared to the reference activity of 3.37 MBq, the estimated activity quantified from the PET image corrected by TXAC was 3.41 MBq. The activity estimated from PET images corrected by MARCTAC was similar in accuracy at 3.32 MBq. CTAC corrected PET images resulted in nearly 40 % overestimation of lesion activity at 4.70 MBq. Comparison of PET images obtained with the plastic and metal prostheses in place showed that CTAC resulted in a marked distortion of the (18)F-FDG distribution within the lesion, whereas application of MARCTAC and TXAC resulted in lesion distributions similar to those observed with the plastic replica. 
MAR combined with a trilinear CT number mapping for PET attenuation correction resulted in estimates of lesion activity comparable in accuracy to that obtained with (137)Cs transmission-based attenuation correction, and far superior to estimates made without attenuation correction or with a standard CT attenuation map. The ability to use CT images for attenuation correction is a potentially important development because it obviates the need for a (137)Cs transmission source, which entails extra scan time, logistical complexity and expense.
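The trilinear CT-number mapping described above can be sketched as a piecewise-linear function with three segments, where the highest CT values map to the prosthesis alloy coefficient; the breakpoints and attenuation coefficients below are illustrative assumptions, not the calibration used in the study:

```python
def hu_to_mu_511(hu, mu_water=0.096, mu_bone=0.17, mu_alloy=0.40,
                 hu_bone=1000.0, hu_metal_max=3071.0):
    """Piecewise-linear (trilinear) mapping from CT number (HU) to a linear
    attenuation coefficient at 511 keV (cm^-1). All breakpoints and
    coefficients here are illustrative, not the paper's calibration."""
    if hu <= 0:
        # segment 1: air (-1000 HU, mu ~ 0) to water (0 HU)
        return max(0.0, mu_water * (hu + 1000.0) / 1000.0)
    if hu <= hu_bone:
        # segment 2: water to dense bone
        return mu_water + (mu_bone - mu_water) * hu / hu_bone
    # segment 3: bone up to the highest CT values, mapped to the alloy coefficient
    frac = min(1.0, (hu - hu_bone) / (hu_metal_max - hu_bone))
    return mu_bone + (mu_alloy - mu_bone) * frac
```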
NASA Astrophysics Data System (ADS)
Drzewiecki, Wojciech
2017-12-01
We evaluated the performance of nine machine learning regression algorithms and their ensembles for sub-pixel estimation of impervious area coverage from Landsat imagery. The accuracy of imperviousness mapping at individual time points was assessed based on RMSE, MAE and R2. These measures were also used for the assessment of imperviousness change intensity estimations. The applicability for detection of relevant changes in impervious area coverage at the sub-pixel level was evaluated using overall accuracy, F-measure and ROC Area Under Curve. The results showed that the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. Stochastic gradient boosting of regression trees (GBM) may also be considered for this purpose. However, the Random Forest algorithm is endorsed for both imperviousness change detection and mapping of its intensity. In all applications the heterogeneous model ensembles performed at least as well as the best individual models or better. They may be recommended for improving the quality of sub-pixel imperviousness and imperviousness change mapping. The study also revealed limitations of the investigated methodology for detection of subtle changes of imperviousness inside the pixel. None of the tested approaches was able to reliably classify changed and non-changed pixels if the relevant change threshold was set at one or three percent. Also, for the five percent change threshold, most algorithms did not ensure that the accuracy of the change map was higher than that of a random classifier. For a relevant change threshold of ten percent, all approaches performed satisfactorily.
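The change-detection evaluation relies on overall accuracy and F-measure computed from a confusion matrix of changed versus non-changed pixels; a minimal sketch (the counts are illustrative):

```python
def change_metrics(tp, fp, fn, tn):
    """Overall accuracy and F-measure for a changed/non-changed pixel map,
    given true/false positives and negatives from a confusion matrix."""
    total = tp + fp + fn + tn
    overall = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return overall, f

# Illustrative counts for a relevant-change map
oa, f1 = change_metrics(tp=40, fp=10, fn=20, tn=130)
print(round(oa, 3), round(f1, 3))
```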
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2016-09-01
In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor has become increasingly popular in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational optimization algorithm to map the optical flow fields computed from the different wavelength images. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against ground truth.
Mapping soil particle-size fractions: A comparison of compositional kriging and log-ratio kriging
NASA Astrophysics Data System (ADS)
Wang, Zong; Shi, Wenjiao
2017-03-01
Soil particle-size fractions (psf), as basic physical variables, frequently need to be predicted accurately for regional hydrological, ecological, geological, agricultural and environmental studies. Several methods have been proposed to interpolate the spatial distributions of soil psf, but the relative performance of compositional kriging and different log-ratio kriging methods is still unclear. Four log-ratio transformations, including additive log-ratio (alr), centered log-ratio (clr), isometric log-ratio (ilr), and symmetry log-ratio (slr), combined with ordinary kriging (log-ratio kriging: alr_OK, clr_OK, ilr_OK and slr_OK), were compared with compositional kriging (CK) for the spatial prediction of soil psf in Tianlaochi of the Heihe River Basin, China. Root mean squared error (RMSE), Aitchison's distance (AD), standardized residual sum of squares (STRESS) and the right ratio of the predicted soil texture types (RR) were chosen to evaluate the accuracy of the different interpolators. The results showed that CK had better accuracy than the four log-ratio kriging methods. The RMSE (sand, 9.27%; silt, 7.67%; clay, 4.17%), AD (0.45) and STRESS (0.60) of CK were the lowest, and its RR (58.65%) was the highest among the five interpolators. The clr_OK achieved relatively better performance than the other log-ratio kriging methods. In addition, CK presented reasonable and smooth transitions in mapping soil psf according to the environmental factors. The study gives insights into mapping soil psf accurately by comparing different methods for compositional data interpolation. Further research on methods combined with ancillary variables is needed to improve the interpolation performance.
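The log-ratio transformations being compared are simple functions of a positive composition; a minimal sketch of the alr and clr transforms (the sand/silt/clay fractions below are illustrative):

```python
import math

def clr(x):
    """Centered log-ratio transform of a composition (all parts positive):
    log of each part divided by the geometric mean. Components sum to zero."""
    g = math.exp(sum(math.log(v) for v in x) / len(x))  # geometric mean
    return [math.log(v / g) for v in x]

def alr(x):
    """Additive log-ratio transform, using the last part as the divisor;
    maps a D-part composition to D-1 unconstrained coordinates."""
    return [math.log(v / x[-1]) for v in x[:-1]]

# Sand/silt/clay fractions of one sample (illustrative, sum to 1)
psf = [0.45, 0.35, 0.20]
print([round(v, 3) for v in clr(psf)])
```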
NASA Astrophysics Data System (ADS)
Hasaan, Zahra
2016-07-01
Remote sensing is very useful for producing land use and land cover statistics that help determine the distribution of land uses. Using remote sensing techniques to develop land use classification mapping is a convenient and detailed way to improve the selection of areas designated for agricultural, urban and/or industrial use in a region. In Islamabad city and its surroundings, land use has been changing; every day new developments (urban, industrial, commercial and agricultural) emerge, leading to a decrease in vegetation cover. The purpose of this work was to map the land use of Islamabad and its surrounding area, which is an important natural resource. The eCognition Developer 64 software was used to develop a land use classification from a SPOT 5 image of the year 2012. An object-based classification technique was used for image processing, and important land use features, i.e. vegetation cover, barren land, impervious surface, built-up area and water bodies, were extracted on the basis of object variation; the results were compared with the CDA Master Plan. A great increase was found in built-up area and impervious surface area, while vegetation cover and barren area followed a declining trend. Accuracy assessment of the classification yielded 92% accuracy for the final land cover/land use maps. In addition, these improved land cover/land use maps, produced by the remote sensing technique of class definition, meet the growing need for legend standardization.
Zhang, Zhiming; Ouyang, Zhiyun; Xiao, Yi; Xiao, Yang; Xu, Weihua
2017-06-01
Increasing exploitation of karst resources is causing severe environmental degradation because of the fragility and vulnerability of karst areas. By integrating principal component analysis (PCA) with annual seasonal trend analysis (ASTA), this study assessed karst rocky desertification (KRD) within a spatial context. We first produced fractional vegetation cover (FVC) data from a moderate-resolution imaging spectroradiometer normalized difference vegetation index using a dimidiate pixel model. Then, we generated three main components of the annual FVC data using PCA. Subsequently, we generated the slope image of the annual seasonal trends of FVC using median trend analysis. Finally, we combined the three PCA components and annual seasonal trends of FVC with the incidence of KRD for each type of carbonate rock to classify KRD into one of four categories based on K-means cluster analysis: high, moderate, low, and none. The results of accuracy assessments indicated that this combination approach produced greater accuracy and more reasonable KRD mapping than the average FVC based on the vegetation coverage standard. The KRD map for 2010 indicated that the total area of KRD was 78.76 × 10³ km², which constitutes about 4.06% of the eight southwest provinces of China. The largest KRD areas were found in Yunnan province. The combined PCA and ASTA approach was demonstrated to be an easily implemented, robust, and flexible method for the mapping and assessment of KRD, which can be used to enhance regional KRD management schemes or to address assessment of other environmental issues.
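The dimidiate pixel model mentioned above derives fractional vegetation cover by linearly unmixing NDVI between bare-soil and full-vegetation endmembers; a sketch, with assumed endmember values rather than the study's calibration:

```python
def fvc_dimidiate(ndvi, ndvi_soil=0.05, ndvi_veg=0.85):
    """Fractional vegetation cover from NDVI via the dimidiate pixel model:
    FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil), clipped to [0, 1].
    The endmember values here are illustrative, not the study's."""
    f = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return min(1.0, max(0.0, f))
```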
Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank
2017-01-01
Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.
Modelling Multi Hazard Mapping in Semarang City Using GIS-Fuzzy Method
NASA Astrophysics Data System (ADS)
Nugraha, A. L.; Awaluddin, M.; Sasmito, B.
2018-02-01
One important aspect of disaster mitigation planning is hazard mapping. Hazard mapping can provide spatial information on the distribution of locations that are threatened by disaster. Semarang City, the capital of Central Java Province, is one of the cities with high natural disaster intensity. Natural disasters that frequently strike Semarang city include tidal floods, riverine floods, landslides, and droughts. Therefore, Semarang City needs spatial information from multi-hazard mapping to support its disaster mitigation planning. The multi-hazard map model can be derived from parameters such as slope, rainfall, land use, and soil type. This modelling is done using a GIS method with scoring and overlay techniques. However, the accuracy of the modelling is better if the GIS method is combined with fuzzy logic techniques to provide a good classification when determining disaster threats. The GIS-Fuzzy method builds a multi-hazard map of Semarang city that delivers results with good accuracy and an appropriate spread of threat classes, providing disaster information for the city's disaster mitigation planning. The multi-hazard modelling with GIS-Fuzzy showed that the Gaussian membership function had the best accuracy, with the smallest RMSE (0.404) and the largest VAF (72.909%) among the membership functions tested.
The use of Sentinel-2 imagery for seagrass mapping: Kalloni Gulf (Lesvos Island, Greece) case study
NASA Astrophysics Data System (ADS)
Topouzelis, Konstantinos; Charalampis Spondylidis, Spyridon; Papakonstantinou, Apostolos; Soulakellis, Nikolaos
2016-08-01
Seagrass meadows play a significant role in ecosystems by stabilizing sediment and improving water clarity, which enhances seagrass growing conditions. Mapping and protecting them is a high priority of EU legislation. Medium-spatial-resolution satellite imagery, e.g. Landsat-8 (30 m), has traditionally been very useful for mapping seagrass meadows on a regional scale. However, data from Sentinel-2, ESA's recent satellite carrying the Multi-Spectral Instrument (MSI), are expected to improve mapping accuracy. The MSI was designed to improve coastal studies through enhanced spatial and spectral capabilities, e.g. optical bands with 10 m spatial resolution. The present work examines the quality of Sentinel-2 images for seagrass mapping, the ability of each band to detect and discriminate different habitats, and the accuracy of seagrass mapping. After pre-processing steps, e.g. radiometric calibration and atmospheric correction, the image was classified into four classes. The classes described bottom composition, e.g. seagrass, soft bottom, and hard bottom. Vectors delineating the areas covered by seagrass were extracted from a high-resolution satellite image and used as in situ reference measurements. The methodology was applied in the Gulf of Kalloni (Lesvos Island, Greece). Results showed that Sentinel-2 images can be robustly used for seagrass mapping due to their spatial resolution, band availability and radiometric accuracy.
NASA Astrophysics Data System (ADS)
Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.
2012-07-01
Stereovision-based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment, which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image-based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case of the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed that allow validating and updating the INS/GNSS-based trajectory with independently estimated positions during prolonged GNSS signal outages, in order to increase the georeferencing accuracy up to the project requirements.
Coastal flood inundation monitoring with Satellite C-band and L-band Synthetic Aperture Radar data
Ramsey, Elijah W.; Rangoonwala, Amina; Bannister, Terri
2013-01-01
Satellite Synthetic Aperture Radar (SAR) was evaluated as a method to operationally monitor the occurrence and distribution of storm- and tidal-related flooding of spatially extensive coastal marshes within the north-central Gulf of Mexico. Maps representing the occurrence of marsh surface inundation were created from available Advanced Land Observation Satellite (ALOS) Phased Array type L-Band SAR (PALSAR) (L-band) (21 scenes with HH polarizations in Wide Beam [100 m]) data and Environmental Satellite (ENVISAT) Advanced SAR (ASAR) (C-band) data (24 scenes with VV and HH polarizations in Wide Swath [150 m]) during 2006-2009 covering 500 km of the Louisiana coastal zone. Mapping was primarily based on a decrease in backscatter between reference and target scenes, and as an extension of previous studies, the flood inundation mapping performance was assessed by the degree of correspondence between inundation mapping and inland water levels. Both PALSAR- and ASAR-based mapping at times were based on suboptimal reference scenes; however, ASAR performance seemed more sensitive to reference-scene quality and other types of scene variability. Related to water depth, PALSAR and ASAR mapping accuracies tended to be lower when water depths were shallow and increased as water levels decreased below or increased above the ground surface, but this pattern was more pronounced with ASAR. Overall, PALSAR-based inundation accuracies averaged 84% (n = 160), while ASAR-based mapping accuracies averaged 62% (n = 245).
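Mapping "based on a decrease in backscatter between reference and target scenes" reduces to a thresholded change test per pixel; a sketch, with an assumed threshold rather than the study's calibration:

```python
def inundated(ref_db, target_db, threshold_db=3.0):
    """Flag a marsh pixel as inundated when SAR backscatter drops from the
    reference scene to the target scene by more than a threshold (dB).
    The 3 dB default is illustrative, not the study's calibration."""
    return (ref_db - target_db) > threshold_db

# Backscatter drop from -8 dB (dry reference) to -13 dB (flooded target)
print(inundated(-8.0, -13.0))
```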
Tabor, Rowland W.; Haugerud, Ralph A.; Haeussler, Peter J.; Clark, Kenneth P.
2011-01-01
This map is an interpretation of a 6-ft-resolution (2-m-resolution) lidar (light detection and ranging) digital elevation model combined with the geology depicted on the Geologic Map of the Wildcat Lake 7.5' quadrangle, Kitsap and Mason Counties, Washington (Haeussler and Clark, 2000). Haeussler and Clark described, interpreted, and located the geology on the 1:24,000-scale topographic map of the Wildcat Lake 7.5' quadrangle. This map, derived from 1951 aerial photographs, has 20-ft contours, nominal horizontal resolution of approximately 40 ft (12 m), and nominal mean vertical accuracy of approximately 10 ft (3 m). Similar to many geologic maps, much of the geology in the Haeussler and Clark (2000) map-especially the distribution of surficial deposits-was interpreted from landforms portrayed on the topographic map. In 2001, the Puget Sound lidar Consortium obtained a lidar-derived digital elevation model (DEM) for Kitsap Peninsula including all of the Wildcat Lake 7.5' quadrangle. This new DEM has a horizontal resolution of 6 ft (2 m) and a mean vertical accuracy of about 1 ft (0.3 m). The greater resolution and accuracy of the lidar DEM compared to topography constructed from air photo stereo models have much improved the interpretation of geology in this heavily vegetated landscape, especially the distribution and relative age of some surficial deposits. Many contacts of surficial deposits are adapted unmodified or slightly modified from Haugerud (2009).
Lidar-revised geologic map of the Des Moines 7.5' quadrangle, King County, Washington
Tabor, Rowland W.; Booth, Derek B.
2017-11-06
This map is an interpretation of a modern lidar digital elevation model combined with the geology depicted on the Geologic Map of the Des Moines 7.5' Quadrangle, King County, Washington (Booth and Waldron, 2004). Booth and Waldron described, interpreted, and located the geology on the 1:24,000-scale topographic map of the Des Moines 7.5' quadrangle. The base map that they used was originally compiled in 1943 and revised using 1990 aerial photographs; it has 25-ft contours, nominal horizontal resolution of about 40 ft (12 m), and nominal mean vertical accuracy of about 10 ft (3 m). Similar to many geologic maps, much of the geology in the Booth and Waldron (2004) map was interpreted from landforms portrayed on the topographic map. In 2001, the Puget Sound Lidar Consortium obtained a lidar-derived digital elevation model (DEM) for much of the Puget Sound area, including the entire Des Moines 7.5' quadrangle. This new DEM has a horizontal resolution of about 6 ft (2 m) and a mean vertical accuracy of about 1 ft (0.3 m). The greater resolution and accuracy of the lidar DEM compared to topography constructed from air-photo stereo models have much improved the interpretation of geology, even in this heavily developed area, especially the distribution and relative age of some surficial deposits. For a brief description of the light detection and ranging (lidar) remote sensing method and this data acquisition program, see Haugerud and others (2003).
Mapping urban forest tree species using IKONOS imagery: preliminary results.
Pu, Ruiliang
2011-01-01
A stepwise masking system with high-resolution IKONOS imagery was developed to identify and map urban forest tree species/groups in the City of Tampa, Florida, USA. The eight species/groups consist of sand live oak (Quercus geminata), laurel oak (Quercus laurifolia), live oak (Quercus virginiana), magnolia (Magnolia grandiflora), pine (species group), palm (species group), camphor (Cinnamomum camphora), and red maple (Acer rubrum). The system was implemented with a soil-adjusted vegetation index (SAVI) threshold, textural information after running a low-pass filter, and a brightness threshold on the NIR band, to separate tree canopies from non-vegetated areas and other vegetation types (e.g., grass/lawn) and to separate the tree canopies into sunlit and shadowed areas. A maximum likelihood classifier was used to identify and map forest type and species. After the IKONOS imagery was preprocessed, a total of nine spectral features were generated, including four spectral bands, three hue-intensity-saturation indices, one SAVI, and one texture image. The identified and mapped results were examined with independent ground survey data. The experimental results indicate that when classifying all eight tree species/groups with the high-resolution IKONOS image data, the identification accuracy was very low and could not reach a practical application level, and when merging the eight species/groups into four major species/groups, the average accuracy was still low (average accuracy = 73%, overall accuracy = 86%, and κ = 0.76 with sunlit test samples). Such low accuracy in identifying and mapping the urban tree species/groups is attributable to the low spatial resolution of the IKONOS image data relative to tree crown size, to the impact of complex and variable background spectra on crown spectra, and to shadow/shading effects.
The preliminary results imply that, to improve tree species identification accuracy and achieve a practical application level in urban areas, multi-temporal (multi-seasonal) or hyperspectral image data should be considered for use in the future.
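The SAVI threshold used in the masking system comes from the standard soil-adjusted vegetation index formula; a minimal sketch (L = 0.5 is the conventional default, not necessarily the study's choice):

```python
def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index from NIR and red reflectance:
    SAVI = (NIR - Red) * (1 + L) / (NIR + Red + L), with the soil-brightness
    correction factor L = 0.5 by convention."""
    return (nir - red) * (1 + L) / (nir + red + L)

# Typical vegetation reflectances: high NIR, low red
print(round(savi(0.4, 0.1), 3))
```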
Mapping Disturbance Dynamics in Wet Sclerophyll Forests Using Time Series Landsat
NASA Astrophysics Data System (ADS)
Haywood, A.; Verbesselt, J.; Baker, P. J.
2016-06-01
In this study, we characterised the temporal-spectral patterns associated with identifying acute-severity disturbances and low-severity disturbances between 1985 and 2011, with the objective of testing whether different disturbance agents within these categories can be identified with annual Landsat time series data. We analysed a representative State forest within the Central Highlands which has been exposed to a range of disturbances over the last 30 years, including timber harvesting (clearfell, selective and thinning) and fire (wildfire and prescribed burning). We fitted spectral time series models to the annual normalised burn ratio (NBR) and Tasseled Cap Indices (TCI), from which we extracted a range of disturbance and recovery metrics. With these metrics, three hierarchical random forest models were trained to 1) distinguish acute-severity disturbances from low-severity disturbances; 2a) attribute the disturbance agents most likely within the acute-severity class; and 2b) attribute the disturbance agents most likely within the low-severity class. Disturbance types (acute-severity and low-severity) were successfully mapped with an overall accuracy of 72.9 %, and the individual disturbance types were successfully attributed with overall accuracies ranging from 53.2 % to 64.3 %. Low-severity disturbance agents were successfully mapped with an overall accuracy of 80.2 %, and individual agents were successfully attributed with overall accuracies ranging from 25.5 % to 95.1 %. Acute-severity disturbance agents were successfully mapped with an overall accuracy of 95.4 %, and individual agents were successfully attributed with overall accuracies ranging from 94.2 % to 95.2 %. Spectral metrics describing the disturbance magnitude were more important for distinguishing the disturbance agents than the post-disturbance response slope. Spectral changes associated with planned burning disturbances generally had lower magnitudes than those of selective harvesting.
This study demonstrates the potential of Landsat time series mapping of fire and timber harvesting disturbances at the agent level and highlights the need to distinguish between agents to fully capture their impacts on ecosystem processes.
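The NBR underlying the spectral time series can be computed directly from NIR and SWIR-2 reflectance, with disturbance magnitude expressed as the drop between pre- and post-disturbance values; a minimal sketch:

```python
def nbr(nir, swir2):
    """Normalised burn ratio: (NIR - SWIR2) / (NIR + SWIR2).
    Healthy vegetation gives high values; burnt or harvested areas drop."""
    return (nir - swir2) / (nir + swir2)

def delta_nbr(pre, post):
    """Disturbance magnitude as the drop in NBR between the pre- and
    post-disturbance observations in the annual time series."""
    return pre - post

# Pre-disturbance vs. post-fire reflectances (illustrative)
print(round(delta_nbr(nbr(0.3, 0.1), nbr(0.2, 0.18)), 3))
```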
An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale.
Obuchowski, Nancy A
2006-02-15
ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared.
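The proposed measure's interpretation, analogous to the area under the ROC curve, is a pairwise concordance probability between the test and the continuous-scale gold standard; a sketch of that idea (not the paper's exact non-parametric estimator):

```python
from itertools import combinations

def concordance(test, gold):
    """Fraction of patient pairs that the diagnostic test orders the same
    way as a continuous-scale gold standard; tied test values count 1/2,
    and pairs the gold standard cannot order are skipped. Analogous to an
    AUC; a sketch, not the paper's exact estimator."""
    num, pairs = 0.0, 0
    for (t1, g1), (t2, g2) in combinations(zip(test, gold), 2):
        if g1 == g2:
            continue  # gold standard gives no ordering for this pair
        pairs += 1
        if (t1 - t2) * (g1 - g2) > 0:
            num += 1.0   # test agrees with the gold-standard ordering
        elif t1 == t2:
            num += 0.5   # tie on the test
    return num / pairs if pairs else float("nan")

# Serum iron measured by a quick test vs. a reference assay (illustrative)
print(concordance([1.1, 2.3, 3.0], [10.0, 21.0, 33.0]))
```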
NASA Astrophysics Data System (ADS)
Jahncke, Raymond; Leblon, Brigitte; Bush, Peter; LaRocque, Armand
2018-06-01
Wetland maps currently in use by the Province of Nova Scotia, namely the Department of Natural Resources (DNR) wetland inventory map and the swamp wetland classes of the DNR forest map, need to be updated. In this study, wetlands were mapped in an area southwest of Halifax, Nova Scotia by classifying a combination of multi-date and multi-beam RADARSAT-2 C-band polarimetric SAR (polSAR) images with spring Lidar and fall QuickBird optical data using the Random Forests (RF) classifier. The resulting map has five wetland classes (open-water/marsh complex, open bog, open fen, shrub/treed fen/bog, swamp), plus lakes and various upland classes. Its accuracy was assessed using data from 156 GPS wetland sites collected in 2012 and compared to the accuracy obtained with the current wetland map of Nova Scotia. The best overall classification (89.2%) was obtained using a combination of Lidar, RADARSAT-2 HH, HV, VH and VV intensity with polarimetric variables, and QuickBird multispectral data. The classified image was compared to the GPS validation sites to assess the mapping accuracy of the wetlands. This was first done with all wetland classes, including lakes, grouped together. Only 69.9% of the wetland sites were correctly identified when the QuickBird classified image alone was used. With the addition of variables derived from Lidar, the proportion of correctly identified wetlands increased to 88.5%. The accuracy remained the same with the addition of RADARSAT-2 (88.5%). When we tested the accuracy of identifying individual wetland classes (e.g. marsh complex vs. open bog) instead of grouped wetlands, the resulting wetland map performed best (66%) with either QuickBird and Lidar, or QuickBird, Lidar, and RADARSAT-2. The Province of Nova Scotia's current wetland inventory and its associated wetland classes (aerial-photo interpreted) were also assessed against the GPS wetland sites.
This provincial inventory correctly identified 62.2% of the grouped wetlands and only 18.6% of the wetland classes. The current inventory's poor performance demonstrates the value of incorporating a combination of new data sources into the provincial wetland mapping.
NASA Astrophysics Data System (ADS)
Akay, S. S.; Sertel, E.
2016-06-01
Urban land cover/use changes such as urbanization and urban sprawl have been impacting urban ecosystems significantly; determining urban land cover/use changes is therefore an important task for understanding the trends and status of urban ecosystems, supporting urban planning, and aiding decision-making for urban-based projects. High resolution satellite images can be used to accurately, periodically and quickly map urban land cover/use and its changes over time. This paper aims to determine urban land cover/use changes in the Gaziantep city centre between 2010 and 2015 using object-based image analysis and high resolution SPOT 5 and SPOT 6 images. A 2.5 m SPOT 5 image acquired on 5 June 2010 and a 1.5 m SPOT 6 image acquired on 7 July 2015 were used in this research to precisely determine land changes over the five-year period. In addition to the satellite images, various ancillary data, namely Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI) maps, cadastral maps, OpenStreetMap data, road maps and land cover maps, were integrated into the classification process to produce high accuracy urban land cover/use maps for these two years. Both images were geometrically corrected to fulfil the 1/10,000 scale geometric accuracy. Decision tree based object oriented classification was applied to identify twenty different urban land cover/use classes defined in the European Urban Atlas project. Not only satellite images and satellite image-derived indices but also different thematic maps were integrated into the decision tree analysis to create rule sets for accurate mapping of each class. The rule sets of each satellite image for the object based classification involve spectral, spatial and geometric parameters to automatically produce an urban map of the city centre region. The total area of each class per year and the changes over the five-year period were determined, and change trends in terms of class transformations were presented.
Classification accuracy assessment was conducted by creating a confusion matrix to illustrate the thematic accuracy of each class.
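A confusion matrix of the kind described can be built directly from paired reference and classified labels; the class names and counts below are made up for illustration.

```python
# Build a confusion matrix from (reference, classified) label pairs and
# derive the overall thematic accuracy (trace / total).

def confusion_matrix(reference, classified, classes):
    m = {(r, c): 0 for r in classes for c in classes}
    for r, c in zip(reference, classified):
        m[(r, c)] += 1
    return m

def overall_accuracy(m, classes):
    correct = sum(m[(k, k)] for k in classes)
    return correct / sum(m.values())

classes = ["urban", "veg", "water"]
ref = ["urban", "urban", "veg", "veg", "water", "water"]
cls = ["urban", "veg",   "veg", "veg", "water", "urban"]
m = confusion_matrix(ref, cls, classes)
print(overall_accuracy(m, classes))  # correct pixels: 4 of 6
```

Per-class user's and producer's accuracies follow from the same matrix by dividing the diagonal by column and row totals, respectively.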
Accuracy assessment of ALOS optical instruments: PRISM and AVNIR-2
NASA Astrophysics Data System (ADS)
Tadono, Takeo; Shimada, Masanobu; Iwata, Takanori; Takaku, Junichi; Kawamoto, Sachi
2017-11-01
This paper describes the updated results of calibration and validation to assess the accuracies of the optical instruments onboard the Advanced Land Observing Satellite (ALOS, nicknamed "Daichi"), which was successfully launched on January 24th, 2006 and has been operating continuously. ALOS carries an L-band Synthetic Aperture Radar called PALSAR and two optical instruments, i.e., the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) and the Advanced Visible and Near Infrared Radiometer type-2 (AVNIR-2). PRISM consists of three radiometers and is used to derive a digital surface model (DSM) with high spatial resolution, which is an objective of the ALOS mission. Geometric calibration is therefore important for generating a precise DSM from PRISM stereo pair images. AVNIR-2 has four radiometric bands from blue to near infrared and is used for regional environmental and disaster monitoring. Radiometric calibration and image quality evaluation are also important for AVNIR-2 as well as PRISM. This paper describes updated results of geometric calibration including geolocation determination accuracy evaluations of PRISM and AVNIR-2, image quality evaluation of PRISM, and validation of the generated PRISM DSM. This work will continue throughout the ALOS mission life as operational calibration to maintain the absolute accuracies of the standard products.
NASA Astrophysics Data System (ADS)
Adams, Marc S.; Bühler, Yves; Fromm, Reinhard
2017-12-01
Reliable and timely information on the spatio-temporal distribution of snow in alpine terrain plays an important role for a wide range of applications. Unmanned aerial system (UAS) photogrammetry is increasingly applied to cost-efficiently map snow depth at very high resolution with flexible applicability. However, crucial questions regarding the quality and repeatability of this technique are still under discussion. Here we present a multitemporal accuracy and precision assessment of UAS photogrammetry for snow depth mapping at the slope scale. We mapped a 0.12 km² snow-covered study site, located in a high-alpine valley in Western Austria. 12 UAS flights were performed to acquire imagery at 0.05 m ground sampling distance in visible (VIS) and near-infrared (NIR) wavelengths with a modified commercial, off-the-shelf sensor mounted on a custom-built fixed-wing UAS. The imagery was processed with structure-from-motion photogrammetry software to generate orthophotos, digital surface models (DSMs) and snow depth maps (SDMs). The accuracy of the DSMs and SDMs was assessed with terrestrial laser scanning and manual snow depth probing, respectively. The results show that under good illumination conditions (study site in full sunlight), the DSMs and SDMs were acquired with accuracies of ≤ 0.25 and ≤ 0.29 m (both at 1σ), respectively. In the case of poorly illuminated snow surfaces (study site shadowed), the NIR imagery provided higher accuracy (0.19 m; 0.23 m) than the VIS imagery (0.49 m; 0.37 m). The precision of the UAS SDMs was 0.04 m for a small, stable area and below 0.33 m for the whole study site (both at 1σ).
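Snow depth mapping by DSM differencing, and the 1σ accuracy against probing, can be sketched as follows; the grids and probe values are synthetic and the workflow is simplified (no co-registration or outlier handling).

```python
# SDM = snow-covered DSM minus snow-free DSM; accuracy is summarised as
# the standard deviation (1 sigma) of differences vs. manual probing.
import math

def snow_depth_map(dsm_snow, dsm_bare):
    return [[s - b for s, b in zip(rs, rb)]
            for rs, rb in zip(dsm_snow, dsm_bare)]

def one_sigma(errors):
    mean = sum(errors) / len(errors)
    return math.sqrt(sum((e - mean) ** 2 for e in errors) / len(errors))

dsm_bare = [[100.0, 100.5], [101.0, 101.2]]   # snow-free surface (m a.s.l.)
dsm_snow = [[101.2, 101.6], [102.3, 102.4]]   # snow-covered surface
sdm = snow_depth_map(dsm_snow, dsm_bare)

probe = [1.25, 1.05, 1.28, 1.18]              # manual probing at the 4 cells
depths = [d for row in sdm for d in row]
errors = [d - p for d, p in zip(depths, probe)]
sigma = one_sigma(errors)
print(round(sigma, 3))
```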
An Atlas of ShakeMaps for Landslide and Liquefaction Modeling
NASA Astrophysics Data System (ADS)
Johnson, K. L.; Nowicki, M. A.; Mah, R. T.; Garcia, D.; Harp, E. L.; Godt, J. W.; Lin, K.; Wald, D. J.
2012-12-01
The human consequences of a seismic event are often a result of subsequent hazards induced by the earthquake, such as landslides. While the United States Geological Survey (USGS) ShakeMap and Prompt Assessment of Global Earthquakes for Response (PAGER) systems are, in conjunction, capable of estimating the damage potential of earthquake shaking in near-real time, they do not currently provide estimates for the potential of further damage by secondary processes. We are developing a sound basis for providing estimates of the likelihood and spatial distribution of landslides for any global earthquake under the PAGER system. Here we discuss several important ingredients in this effort. First, we report on the development of a standardized hazard layer from which to calibrate observed landslide distributions; in contrast, prior studies have used a wide variety of means for estimating the hazard input. This layer now takes the form of a ShakeMap, a standardized approach for computing geospatial estimates for a variety of shaking metrics (both peak ground motions and shaking intensity) from any well-recorded earthquake. We have created ShakeMaps for about 20 historical landslide "case history" events, significant in terms of their landslide occurrence, as part of an updated release of the USGS ShakeMap Atlas. We have also collected digitized landslide data from open-source databases for many of the earthquake events of interest. When these are combined with up-to-date topographic and geologic maps, we have the basic ingredients for calibrating landslide probabilities for a significant collection of earthquakes. In terms of modeling, rather than focusing on mechanistic models of landsliding, we adopt a strictly statistical approach to quantify landslide likelihood. 
We incorporate geology, slope, peak ground acceleration, and landslide data as variables in a logistic regression, selecting the best explanatory variables given the standardized new hazard layers (see Nowicki et al., this meeting, for more detail on the regression). To make the ShakeMap and PAGER systems more comprehensive in terms of secondary losses, we are working to calibrate a similarly constrained regression for liquefaction estimation using a suite of well-studied earthquakes for which detailed, digitized liquefaction datasets are available; here variants of wetness index and soil strength replace geology and slope. We expect that this Atlas of ShakeMaps for landslide and liquefaction case history events, which will soon be publicly available via the internet, will aid in improving the accuracy of loss-modeling systems such as PAGER, as well as allow for a common framework for numerous other mechanistic and empirical studies.
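The logistic-regression form of the landslide likelihood model can be sketched as below; the predictor set follows the abstract (slope, peak ground acceleration, geology), but the coefficients are illustrative placeholders, not fitted values from the regression described.

```python
# Logistic regression form for landslide likelihood: probability is a
# logistic function of explanatory layers. Coefficients are invented
# for illustration, not calibrated values.
import math

def landslide_probability(slope_deg, pga_g, weak_geology,
                          b0=-4.0, b_slope=0.08, b_pga=3.0, b_geo=1.2):
    z = b0 + b_slope * slope_deg + b_pga * pga_g + b_geo * weak_geology
    return 1.0 / (1.0 + math.exp(-z))

gentle = landslide_probability(slope_deg=5, pga_g=0.1, weak_geology=0)
steep = landslide_probability(slope_deg=35, pga_g=0.6, weak_geology=1)
print(gentle, steep)  # steep, strongly shaken, weak-rock cell is far likelier
```

In the calibrated system, the ShakeMap layers supply PGA (or intensity) per grid cell, and the observed landslide inventories provide the binary response used to fit the coefficients.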
NASA Technical Reports Server (NTRS)
Rudasill-Neigh, Christopher S.; Bolton, Douglas K.; Diabate, Mouhamad; Williams, Jennifer J.; Carvalhais, Nuno
2014-01-01
Forests contain a majority of the aboveground carbon (C) found in ecosystems, and understanding biomass lost from disturbance is essential to improve our C-cycle knowledge. Our study region in the Wisconsin and Minnesota Laurentian Forest had a strong decline in Normalized Difference Vegetation Index (NDVI) from 1982 to 2007, observed with the National Ocean and Atmospheric Administration's (NOAA) series of Advanced Very High Resolution Radiometer (AVHRR). To understand the potential role of disturbances in the terrestrial C-cycle, we developed an algorithm to map forest disturbances from either harvest or insect outbreak for Landsat time-series stacks. We merged two image analysis approaches into one algorithm to monitor forest change: (1) multiple disturbance index thresholds to capture clear-cut harvest; and (2) a spectral trajectory-based image analysis with multiple confidence interval thresholds to map insect outbreak. We produced 20 maps and evaluated classification accuracy with air photos and insect air-survey data to understand the performance of our algorithm. We achieved overall accuracies ranging from 65% to 75%, with an average accuracy of 72%. Producer's and user's accuracies ranged from 32% to 70% for insect disturbance, from 60% to 76% for insect mortality, and from 82% to 88% for harvested forest, which was the dominant disturbance agent. Forest disturbances accounted for 22% of the total forested area (7349 km2). Our algorithm provides a basic approach to map disturbance history where large impacts to forest stands have occurred and highlights the limited spectral sensitivity of Landsat time series to outbreaks of defoliating insects. We found that only harvest and insect mortality events can be mapped with adequate accuracy with a non-annual Landsat time series. This limited our land cover understanding of the NDVI decline drivers.
We demonstrate that to capture more subtle disturbances with spectral trajectories, future observations must be temporally dense to distinguish between type and frequency in heterogeneous landscapes.
NASA Astrophysics Data System (ADS)
Bratic, G.; Brovelli, M. A.; Molinari, M. E.
2018-04-01
The availability of thematic maps has increased significantly over the last few years. Validation of these maps is a key factor in assessing their suitability for different applications. The evaluation of the accuracy of classified data is carried out through a comparison with a reference dataset and the generation of a confusion matrix, from which many quality indexes can be derived. In this work, an ad hoc free and open source Python tool was implemented to automatically compute all the confusion matrix-derived accuracy indexes proposed in the literature. The tool was integrated into the GRASS GIS environment and successfully applied to evaluate the quality of three high-resolution global datasets (GlobeLand30, Global Urban Footprint, Global Human Settlement Layer Built-Up Grid) in the Lombardy Region area (Italy). In addition to the most commonly used accuracy measures, e.g. overall accuracy and Kappa, the tool made it possible to compute and investigate lesser-known indexes such as the Ground Truth Index and the Classification Success Index. The promising tool will be further extended with spatial autocorrelation analysis functions and made available to the researcher and user communities.
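Two of the confusion-matrix-derived indexes such a tool computes can be sketched in a few lines; here the Classification Success Index is taken as the mean over classes of (user's accuracy + producer's accuracy − 1), one common formulation, and the matrix is invented for illustration.

```python
# Cohen's kappa and the Classification Success Index (CSI) from a
# confusion matrix whose rows are reference classes and whose columns
# are classified classes.

def kappa(matrix):
    n = sum(sum(row) for row in matrix)
    k = len(matrix)
    po = sum(matrix[i][i] for i in range(k)) / n  # observed agreement
    pe = sum(  # chance agreement from row/column marginals
        (sum(matrix[i]) / n) * (sum(matrix[r][i] for r in range(k)) / n)
        for i in range(k)
    )
    return (po - pe) / (1 - pe)

def csi(matrix):
    k = len(matrix)
    vals = []
    for i in range(k):
        row = sum(matrix[i])                        # producer's denominator
        col = sum(matrix[r][i] for r in range(k))   # user's denominator
        pa = matrix[i][i] / row
        ua = matrix[i][i] / col
        vals.append(ua + pa - 1.0)
    return sum(vals) / k

m = [[50, 5], [10, 35]]
print(round(kappa(m), 3), round(csi(m), 3))
```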
Li, Wenhuan; Zhu, Xiaolian; Li, Jing; Peng, Cheng; Chen, Nan; Qi, Zhigang; Yang, Qi; Gao, Yan; Zhao, Yang; Sun, Kai; Li, Kuncheng
2014-12-01
The sensitivity and specificity of 5 different image sets of dual-energy computed tomography (DECT) for the detection of first-pass myocardial perfusion defects have not been systematically compared using positron emission tomography (PET) as a reference standard. Forty-nine consecutive patients with known or strongly suspected coronary artery disease were prospectively enrolled in our study. Cardiac DECT was performed at rest using a second-generation 128-slice dual-source CT. The DECT data were reconstructed to iodine maps, monoenergetic images, 100 kV images, nonlinearly blended images, and linearly blended images by different postprocessing techniques. Myocardial perfusion defects on the DECT images were visually assessed by 5 observers using the standard 17-segment model. The diagnostic accuracy of the 5 image sets was assessed using nitrogen-13 ammonia PET as the gold standard. Discrimination was quantified using the area under the receiver operating characteristic curve (AUC), and AUCs were compared using the method of DeLong. The DECT and PET examinations were successfully completed in 30 patients, and a total of 90 territories and 510 segments were analyzed. Cardiac PET revealed myocardial perfusion defects in 56 territories (62%) and 209 segments (41%). The AUCs of iodine maps, monoenergetic images, 100 kV images, nonlinearly blended images, and linearly blended images were 0.986, 0.934, 0.913, 0.881, and 0.871, respectively, on a per-territory basis. These values were 0.922, 0.813, 0.779, 0.763, and 0.728, respectively, on a per-segment basis. DECT iodine maps show high sensitivity and specificity and are superior to the other DECT image sets for the detection of myocardial perfusion defects in first-pass myocardial perfusion.
Stehman, S.V.; Wickham, J.D.; Smith, J.H.; Yang, L.
2003-01-01
The accuracy of the 1992 National Land-Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or alternate reference label determined for a sample pixel and a mode class of the mapped 3×3 block of pixels centered on the sample pixel. Results are reported for each of the four regions comprising the eastern United States for both Anderson Level I and II classifications. Overall accuracies for Levels I and II are 80% and 46% for New England, 82% and 62% for New York/New Jersey (NY/NJ), 70% and 43% for the Mid-Atlantic, and 83% and 66% for the Southeast.
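The agreement rule can be sketched directly: a sample pixel matches if its primary or alternate reference label equals a mode class of the mapped 3×3 block. The labels below are illustrative.

```python
# NLCD-style agreement: match if the primary or alternate reference
# label equals a mode (most frequent) class of the mapped 3x3 block
# centred on the sample pixel.
from collections import Counter

def mode_classes(block3x3):
    counts = Counter(v for row in block3x3 for v in row)
    top = max(counts.values())
    return {c for c, n in counts.items() if n == top}

def agrees(primary, alternate, block3x3):
    modes = mode_classes(block3x3)
    return primary in modes or alternate in modes

block = [["forest", "forest", "water"],
         ["forest", "urban",  "water"],
         ["forest", "water",  "water"]]
print(agrees("water", "urban", block))  # water ties forest as a mode -> True
```

This block-mode definition deliberately tolerates small geolocation errors between the map and the reference sample pixel.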
Automating the selection of standard parallels for conic map projections
NASA Astrophysics Data System (ADS)
Šavrič, Bojan; Jenny, Bernhard
2016-05-01
Conic map projections are appropriate for mapping regions at medium and large scales with east-west extents at intermediate latitudes. Conic projections are appropriate for these cases because they show the mapped area with less distortion than other projections. In order to minimize the distortion of the mapped area, the two standard parallels of conic projections need to be selected carefully. Rules of thumb exist for placing the standard parallels based on the width-to-height ratio of the map. These rules of thumb are simple to apply, but do not result in maps with minimum distortion. There also exist more sophisticated methods that determine standard parallels such that distortion in the mapped area is minimized. These methods are computationally expensive and cannot be used for real-time web mapping and GIS applications where the projection is adjusted automatically to the displayed area. This article presents a polynomial model that quickly provides the standard parallels for the three most common conic map projections: the Albers equal-area, the Lambert conformal, and the equidistant conic projection. The model defines the standard parallels with polynomial expressions based on the spatial extent of the mapped area. The spatial extent is defined by the length of the mapped central meridian segment, the central latitude of the displayed area, and the width-to-height ratio of the map. The polynomial model was derived from 3825 maps, each with a different spatial extent and with computationally determined standard parallels that minimize the mean scale distortion index. The resulting model is computationally simple and can be used for the automatic selection of the standard parallels of conic map projections in GIS software and web mapping applications.
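For contrast with the polynomial model, the rule of thumb mentioned above is easy to state in code; shown here is the common "one-sixth" placement, which ignores the map's width-to-height ratio and is exactly the kind of approximation the article's model improves on.

```python
# Classic rule of thumb: place the two standard parallels one sixth of
# the latitude range inside the southern and northern map limits.

def standard_parallels_one_sixth(lat_south, lat_north):
    span = lat_north - lat_south
    return lat_south + span / 6.0, lat_north - span / 6.0

phi1, phi2 = standard_parallels_one_sixth(30.0, 60.0)
print(phi1, phi2)  # 35.0 55.0
```

The article's model instead evaluates polynomials in the central latitude, the central meridian segment length, and the width-to-height ratio, so the parallels adapt to the displayed area in real time.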
ResearcherMap: a tool for visualizing author locations using Google Maps.
Rastegar-Mojarad, Majid; Bales, Michael E; Yu, Hong
2013-01-01
We present ResearcherMap, a tool to visualize the locations of authors of scholarly papers. In response to a query, the system returns a map of author locations. To develop the system, we first populated a database of author locations, geocoding institution locations for all available institutional affiliation data in our database. The database includes all authors of Medline papers from 1990 to 2012. We conducted a formative heuristic usability evaluation of the system and measured its accuracy and performance. The system identifies the correct address with 97.5% accuracy.
Mapping river bathymetry with a small footprint green LiDAR: Applications and challenges
Kinzel, Paul J.; Legleiter, Carl; Nelson, Jonathan M.
2013-01-01
Environmental conditions and postprocessing algorithms can influence the accuracy and utility of these surveys and must be given consideration. These factors can lead to mapping errors that have a direct bearing on derivative analyses such as hydraulic modeling and habitat assessment. We discuss the water and substrate characteristics of the sites, compare the conventional and remotely sensed river-bed topographies, and investigate the laser waveforms reflected from submerged targets to evaluate the suitability and accuracy of the EAARL system and associated processing algorithms for riverine mapping applications.
Volumetric calibration of a plenoptic camera.
Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S
2018-02-01
The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creating a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creating the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
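The flavor of a polynomial mapping calibration can be shown in one dimension: fit a polynomial from distorted coordinates to known target coordinates and apply it as a correction. The quadratic, the dot spacing and the distortion values are illustrative; the paper's mapping is three-dimensional and of higher order.

```python
# Fit y = a*x^2 + b*x + c exactly through three calibration points
# (Lagrange form collapsed to coefficients), then use it to map
# distorted readings back to true target coordinates.

def fit_quadratic(xs, ys):
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = (-y0 * (x1 + x2) / d0 - y1 * (x0 + x2) / d1
         - y2 * (x0 + x1) / d2)
    c = (y0 * x1 * x2 / d0 + y1 * x0 * x2 / d1
         + y2 * x0 * x1 / d2)
    return a, b, c

distorted = [0.0, 5.2, 10.8]   # readings of dot positions (mm)
true_pos = [0.0, 5.0, 10.0]    # known dot-card positions (mm)
a, b, c = fit_quadratic(distorted, true_pos)
correct = lambda x: a * x * x + b * x + c
print(round(correct(5.2), 6))  # -> 5.0 at a calibration point
```

In practice many more dots than coefficients are observed and the polynomial is fitted by least squares rather than exact interpolation.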
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy, and the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform, using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets at 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987
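A minimal particle swarm optimisation loop of the kind used to seed the BP network's initial weights can be sketched as follows; a simple quadratic stands in for the network's training error, and all hyperparameters are generic choices rather than the paper's settings.

```python
# Minimal PSO: particles track personal bests and a global best; the
# returned position would seed the BP network's initial weights.
import random

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

sphere = lambda v: sum(x * x for x in v)  # stand-in for training error
best, best_cost = pso(sphere, dim=3)
print(best_cost)
```

In the paper's MapReduce design, fitness evaluations over data partitions are what get parallelised; the swarm update itself is cheap.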
Zhiliang Zhu; Limin Yang; Stephen V. Stehman; Raymond L. Czaplewski
2000-01-01
The U.S. Geological Survey, in cooperation with other government and private organizations, is producing a conterminous U.S. land-cover map using Landsat Thematic Mapper 30-meter data for the Federal regions designated by the U.S. Environmental Protection Agency. Accuracy assessment is to be conducted for each Federal region to estimate overall and class-specific...
Multi-Autonomous Ground-robotic International Challenge (MAGIC) 2010
2010-12-14
SLAM technique since this setup, having a LIDAR with long-range high-accuracy measurement capability, allows accurate localization and mapping more...achieve the accuracy of 25 cm due to the use of multi-dimensional information. OGM is, similarly to SLAM, carried out by using LIDAR data. The OGM...a result of the development and implementation of the hybrid feature-based/scan-matching Simultaneous Localization and Mapping (SLAM) technique, the
Raymond L. Czaplewski
2000-01-01
Consider the following example of an accuracy assessment. Landsat data are used to build a thematic map of land cover for a multicounty region. The map classifier (e.g., a supervised classification algorithm) assigns each pixel into one category of land cover. The classification system includes 12 different types of forest and land cover: black spruce, balsam fir,...
NASA Astrophysics Data System (ADS)
Blaser, S.; Nebiker, S.; Cavegn, S.
2017-05-01
Image-based mobile mapping systems enable the efficient acquisition of georeferenced image sequences, which can later be exploited in cloud-based 3D geoinformation services. In order to provide 360° coverage with accurate 3D measuring capabilities, we present a novel 360° stereo panoramic camera configuration. By using two 360° panorama cameras tilted forward and backward in combination with conventional forward- and backward-looking stereo camera systems, we achieve full 360° multi-stereo coverage. We furthermore developed a fully operational new mobile mapping system based on our proposed approach, which fulfils our high accuracy requirements. We successfully implemented a rigorous sensor and system calibration procedure, which allows calibrating all stereo systems with superior accuracy compared to previous work. Our study delivered absolute 3D point accuracies in the range of 4 to 6 cm and relative accuracies of 3D distances in the range of 1 to 3 cm. These results were achieved in a challenging urban area. Furthermore, we automatically reconstructed a 3D city model of our study area by employing all captured and georeferenced mobile mapping imagery. The result is a highly detailed and almost complete 3D city model of the street environment.
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
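The core of the probabilistic atlas idea, posterior tissue probabilities weighting per-tissue linear attenuation coefficients to give a continuous-valued μ, can be sketched as below; the coefficient values are typical 511 keV numbers used for illustration, and the voxel posterior is invented.

```python
# Continuous-valued mu as a posterior-probability-weighted sum of tissue
# linear attenuation coefficients (cm^-1, typical 511 keV values), plus
# the absolute relative change (RC, %) used to compare PET volumes.

MU = {"air": 0.0, "soft": 0.096, "bone": 0.151}

def mu_value(posterior):
    """Weight each tissue coefficient by its posterior probability."""
    return sum(posterior[t] * MU[t] for t in MU)

def relative_change(pet_mr, pet_ct):
    """Absolute RC (%) between MR-based and CT-based corrected values."""
    return abs(pet_mr - pet_ct) / pet_ct * 100.0

voxel = {"air": 0.05, "soft": 0.25, "bone": 0.70}  # skull-edge voxel
print(round(mu_value(voxel), 4))
print(round(relative_change(101.5, 100.0), 2))
```

The weighting is what makes the μ-map continuous-valued: a voxel that is probably but not certainly bone receives an intermediate coefficient rather than a hard class value.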
Thieler, E. Robert; Danforth, William W.
1994-01-01
A new, state-of-the-art method for mapping historical shorelines from maps and aerial photographs, the Digital Shoreline Mapping System (DSMS), has been developed. The DSMS is a freely available, public domain software package that meets the cartographic and photogrammetric requirements of precise coastal mapping, and provides a means to quantify and analyze different sources of error in the mapping process. The DSMS is also capable of resolving imperfections in aerial photography that commonly are assumed to be nonexistent. The DSMS utilizes commonly available computer hardware and software, and permits the entire shoreline mapping process to be executed rapidly by a single person in a small lab. The DSMS generates output shoreline position data that are compatible with a variety of Geographic Information Systems (GIS). A second suite of programs, the Digital Shoreline Analysis System (DSAS) has been developed to calculate shoreline rates-of-change from a series of shoreline data residing in a GIS. Four rate-of-change statistics are calculated simultaneously (end-point rate, average of rates, linear regression and jackknife) at a user-specified interval along the shoreline using a measurement baseline approach. An example of DSMS and DSAS application using historical maps and air photos of Punta Uvero, Puerto Rico provides a basis for assessing the errors associated with the source materials as well as the accuracy of computed shoreline positions and erosion rates. The maps and photos used here represent a common situation in shoreline mapping: marginal-quality source materials. The maps and photos are near the usable upper limit of scale and accuracy, yet the shoreline positions are still accurate ±9.25 m when all sources of error are considered. This level of accuracy yields a resolution of ±0.51 m/yr for shoreline rates-of-change in this example, and is sufficient to identify the short-term trend (36 years) of shoreline change in the study area.
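Two of the four DSAS rate-of-change statistics are straightforward to compute at a single transect from dated shoreline positions; the years and positions below are synthetic.

```python
# End-point rate uses only the oldest and youngest shorelines; the
# linear regression rate is the least-squares slope through all of them.

def end_point_rate(years, positions):
    return (positions[-1] - positions[0]) / (years[-1] - years[0])

def linear_regression_rate(years, positions):
    n = len(years)
    my = sum(years) / n
    mp = sum(positions) / n
    num = sum((y - my) * (p - mp) for y, p in zip(years, positions))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = [1936, 1950, 1972, 1994]
pos = [120.0, 112.0, 101.0, 90.0]  # distance from baseline (m); retreating
print(round(end_point_rate(years, pos), 3))  # -0.517 m/yr
print(round(linear_regression_rate(years, pos), 3))
```

The average-of-rates and jackknife statistics mentioned in the abstract combine pairwise or leave-one-out rates from the same positions.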
Template‐based field map prediction for rapid whole brain B0 shimming
Shi, Yuhang; Vannesjo, S. Johanna; Miller, Karla L.
2017-01-01
Purpose In typical MRI protocols, time is spent acquiring a field map to calculate the shim settings for best image quality. We propose a fast template‐based field map prediction method that yields near‐optimal shims without measuring the field. Methods The template‐based prediction method uses prior knowledge of the B0 distribution in the human brain, based on a large database of field maps acquired from different subjects, together with subject‐specific structural information from a quick localizer scan. The shimming performance of using the template‐based prediction is evaluated in comparison to a range of potential fast shimming methods. Results Static B0 shimming based on predicted field maps performed almost as well as shimming based on individually measured field maps. In experimental evaluations at 7 T, the proposed approach yielded a residual field standard deviation in the brain of on average 59 Hz, compared with 50 Hz using measured field maps and 176 Hz using no subject‐specific shim. Conclusions This work demonstrates that shimming based on predicted field maps is feasible. The field map prediction accuracy could potentially be further improved by generating the template from a subset of subjects, based on parameters such as head rotation and body mass index. Magn Reson Med 80:171–180, 2018. PMID:29193340
GIM-TEC adaptive ionospheric weather assessment and forecast system
NASA Astrophysics Data System (ADS)
Gulyaeva, T. L.; Arikan, F.; Hernandez-Pajares, M.; Stanislawska, I.
2013-09-01
The Ionospheric Weather Assessment and Forecast (IWAF) system is a computer software package designed to assess and predict the world-wide representation of 3-D electron density profiles from the Global Ionospheric Maps of Total Electron Content (GIM-TEC). The unique system products include daily-hourly numerical global maps of the F2 layer critical frequency (foF2) and the peak height (hmF2) generated with the International Reference Ionosphere extended to the plasmasphere, IRI-Plas, upgraded by importing the daily-hourly GIM-TEC as a new model driving parameter. Since GIM-TEC maps are provided with 1- or 2-day latency, the global map forecasts for 1 day and 2 days ahead are derived using a harmonic analysis applied to the temporal changes of TEC, foF2 and hmF2 at the 5112 grid points of a map encapsulated in IONEX format (-87.5°:2.5°:87.5°N in latitude, -180°:5°:180°E in longitude). The system provides online ionospheric disturbance warnings in the global W-index map, establishing categories of ionospheric weather from the quiet state (W=±1) to intense storm (W=±4) according to thresholds set for instant TEC perturbations relative to the quiet reference median of the preceding 7 days. The accuracy of IWAF system predictions of TEC, foF2 and hmF2 maps is superior to the standard persistence model, whose prediction equals the most recent ‘true’ map. The paper presents outcomes of the new service expressed by the global ionospheric foF2, hmF2 and W-index maps, demonstrating the process of origin and propagation of positive and negative ionosphere disturbances in space and time and their forecast under different scenarios.
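The persistence-beating idea of the harmonic forecast can be illustrated with a toy: represent the recent hourly TEC history at one grid point as a mean plus a single 24-hour harmonic, then extrapolate the fit one day ahead. The real IWAF system uses more harmonics and all 5112 IONEX grid points; the data and sampling below are assumptions for illustration only.

```python
import math

def fit_daily_harmonic(tec_hourly):
    """Fit y(t) = a0 + a1*cos(w*t) + b1*sin(w*t), w = 2*pi/24 h^-1, to hourly
    data spanning a whole number of days (the orthogonal sampling makes each
    coefficient a simple DFT-style projection)."""
    n = len(tec_hourly)
    w = 2 * math.pi / 24
    a0 = sum(tec_hourly) / n
    a1 = 2 / n * sum(y * math.cos(w * t) for t, y in enumerate(tec_hourly))
    b1 = 2 / n * sum(y * math.sin(w * t) for t, y in enumerate(tec_hourly))
    return a0, a1, b1

def forecast(coeffs, t):
    """Evaluate the fitted harmonic model at hour t (t may lie in the future)."""
    a0, a1, b1 = coeffs
    w = 2 * math.pi / 24
    return a0 + a1 * math.cos(w * t) + b1 * math.sin(w * t)

# Two days of synthetic TEC (TECU) with a clear diurnal cycle peaking at 14:00.
history = [20 + 8 * math.cos(2 * math.pi * (t - 14) / 24) for t in range(48)]
coeffs = fit_daily_harmonic(history)
next_noon_peak = forecast(coeffs, 48 + 14)   # 14:00 LT, one day ahead
```

For this noise-free diurnal signal the one-day-ahead forecast recovers the 28 TECU afternoon peak exactly, whereas a persistence model would simply repeat yesterday's map.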
EnviroAtlas -- Fresno, California -- One Meter Resolution Urban Land Cover Data (2010)
The Fresno, CA EnviroAtlas One-Meter-scale Urban Land Cover Data were generated via supervised classification of combined aerial photography and LiDAR data. The air photos were United States Department of Agriculture (USDA) National Agricultural Imagery Program (NAIP) four-band (red, green, blue, and near infrared) aerial photography at 1-m spatial resolution. Aerial photography ('imagery') was collected on multiple dates in summer 2010. Seven land cover classes were mapped: Water, impervious surfaces (Impervious), soil and barren (Soil), trees and forest (Tree), grass and herbaceous non-woody vegetation (Grass), agriculture (Ag), and Orchards. An accuracy assessment of 500 completely random and 103 stratified random points yielded an overall User's fuzzy accuracy of 81.1 percent. The area mapped is defined by the US Census Bureau's 2010 Urban Statistical Area for Fresno, CA plus a 1-km buffer. Where imagery was available, additional areas outside the 1-km boundary were also mapped but not included in the accuracy assessment. We expect the accuracy of the areas outside of the 1-km boundary to be consistent with those within. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use mapping application to view and analyze multiple ecosystem services for the contiguous United States.
EnviroAtlas -- Fresno, California -- One Meter Resolution Urban Land Cover Data (2010) Web Service
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The Fresno, CA EnviroAtlas One-Meter-scale Urban Land Cover Data were generated via supervised classification of combined aerial photography and LiDAR data. The air photos were United States Department of Agriculture (USDA) National Agricultural Imagery Program (NAIP) four-band (red, green, blue, and near infrared) aerial photography at 1-m spatial resolution. Aerial photography ('imagery') was collected on multiple dates in summer 2010. Seven land cover classes were mapped: Water, impervious surfaces (Impervious), soil and barren (Soil), trees and forest (Tree), grass and herbaceous non-woody vegetation (Grass), agriculture (Ag), and Orchards. An accuracy assessment of 500 completely random and 103 stratified random points yielded an overall User's fuzzy accuracy of 81.1 percent. The area mapped is defined by the US Census Bureau's 2010 Urban Statistical Area for Fresno, CA plus a 1-km buffer. Where imagery was available, additional areas outside the 1-km boundary were also mapped but not included in the accuracy assessment. We expect the accuracy of the areas outside of the 1-km boundary to be consistent with those within. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas allows the user to interact with a web-based, easy-to-use mapping application to view and analyze multiple ecosystem services for the contiguous United States.
Maize Cropping Systems Mapping Using RapidEye Observations in Agro-Ecological Landscapes in Kenya.
Richard, Kyalo; Abdel-Rahman, Elfatih M; Subramanian, Sevgan; Nyasani, Johnson O; Thiel, Michael; Jozani, Hosein; Borgemeister, Christian; Landmann, Tobias
2017-11-03
Cropping systems information at explicit scales is an important but rarely available variable in many crop modeling routines, and of utmost importance for understanding pest and disease propagation mechanisms in agro-ecological landscapes. In this study, high spatial and temporal resolution RapidEye bi-temporal data were utilized within a novel 2-step hierarchical random forest (RF) classification approach to map areas of mono- and mixed maize cropping systems. A small-scale maize farming site in Machakos County, Kenya was used as a study site. Within the study site, field data were collected during the satellite acquisition period on general land use/land cover (LULC) and the two cropping systems. In the first classification step, a LULC map was produced and non-cropland areas were masked out using the LULC mapping result. Subsequently, in the second classification step, an optimized RF model was applied to the cropland layer to map the two cropping systems. An overall accuracy of 93% was attained for the LULC classification, while the class accuracies (PA: producer's accuracy and UA: user's accuracy) for the two cropping systems were consistently above 85%. We concluded that explicit mapping of different cropping systems is feasible in complex and highly fragmented agro-ecological landscapes if high-resolution and multi-temporal satellite data, such as 5 m RapidEye data, are employed. Further research is needed on the feasibility of using freely available 10-20 m Sentinel-2 data for wide-area assessment of cropping systems as an important variable in numerous crop productivity models.
NASA Astrophysics Data System (ADS)
Bellini, A.; Anderson, J.; van der Marel, R. P.; King, I. R.; Piotto, G.; Bedin, L. R.
2017-06-01
We take advantage of the exquisite quality of the Hubble Space Telescope astro-photometric catalog of the core of ω Cen presented in the first paper of this series to derive a high-resolution, high-precision, high-accuracy differential-reddening map of the field. The map has a spatial resolution of 2 × 2 arcsec² over a total field of view of about 4.3 × 4.3 arcmin². The differential reddening itself is estimated via an iterative procedure using five distinct color-magnitude diagrams, which provided consistent results to within the 0.1% level. Assuming an average reddening value E(B - V) = 0.12, the differential reddening within the cluster’s core can vary by up to ±10%, with a typical standard deviation of about 4%. Our differential-reddening map is made available to the astronomical community in the form of a multi-extension FITS file. This differential-reddening map is essential for a detailed understanding of the multiple stellar populations of ω Cen, as presented in the next paper in this series. Moreover, it provides unique insight into the level of small spatial-scale extinction variations in the Galactic foreground. Based on archival observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
Digital terrain tapes: user guide
1980-01-01
DMATC's digital terrain tapes are a by-product of the agency's efforts to streamline the production of raised-relief maps. In the early 1960s DMATC developed the Digital Graphics Recorder (DGR) system that introduced new digitizing techniques and processing methods into the field of three-dimensional mapping. The DGR system consisted of an automatic digitizing table and a computer system that recorded a grid of terrain elevations from traces of the contour lines on standard topographic maps. A sequence of computer accuracy checks was performed, and then the elevations of grid points not intersected by contour lines were interpolated. The DGR system produced computer magnetic tapes which controlled the carving of plaster forms used to mold raised-relief maps. It was realized almost immediately that this relatively simple tool for carving plaster molds had enormous potential for storing, manipulating, and selectively displaying (either graphically or numerically) a vast number of terrain elevations. As the demand for the digital terrain tapes increased, DMATC began developing increasingly advanced digitizing systems and now operates the Digital Topographic Data Collection System (DTDCS). With DTDCS, two types of data (elevations as contour lines and points, and stream and ridge lines) are sorted, matched, and resorted to obtain a grid of elevation values for every 0.01 inch on each map (approximately 200 feet on the ground). Undefined points on the grid are found by either linear or planar interpolation.
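The gap-filling step described above can be sketched in its one-dimensional form: grid cells not crossed by a contour line (marked `None` below) receive elevations interpolated linearly between the nearest defined neighbours along a scan line. This is an illustrative sketch, not DTDCS code; the planar variant works analogously in two dimensions, and the sketch assumes each gap is bounded by defined cells.

```python
def fill_scan_line(row):
    """Linearly interpolate None gaps between defined elevations in one row.
    Assumes the first and last cells of the row are defined."""
    out = list(row)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                       # find end of the undefined run
            left, right = out[i - 1], out[j]
            span = j - (i - 1)
            for k in range(i, j):
                out[k] = left + (right - left) * (k - (i - 1)) / span
            i = j
        i += 1
    return out

# Toy scan line: elevations where contour lines crossed, None elsewhere.
row = [100.0, None, None, 130.0, None, 150.0]
filled = fill_scan_line(row)   # gaps filled by straight-line interpolation
```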
Accuracy and coverage of the modernized Polish Maritime differential GPS system
NASA Astrophysics Data System (ADS)
Specht, Cezary
2011-01-01
The DGPS navigation service augments the NAVSTAR Global Positioning System by providing localized pseudorange corrections and ancillary information broadcast from selected marine reference stations. The DGPS service's position and integrity information satisfies requirements in coastal navigation and hydrographic surveys. The Polish Maritime DGPS system was established in 1994 and modernized in 2009 to meet the requirements set out in the IMO resolution for a future GNSS, while preserving backward signal compatibility of user equipment. After installation of the new L1/L2 reference equipment was finalized, performance tests were carried out. The paper presents results of coverage modeling and an accuracy measurement campaign based on long-term signal analyses of the DGPS reference station Rozewie, performed over 26 days in July 2009. The final results made it possible to verify the coverage area of the differential signal from the reference station and to calculate the repeatable and absolute accuracy of the system after the technical modernization. The obtained field-strength coverage and position statistics (215,000 fixes) were compared to past measurements performed in 2002 (coverage) and 2005 (accuracy), when the previous system infrastructure was in operation. So far, no campaigns have been performed on differential Galileo. However, its signals, signal processing, and receiver techniques are comparable to those known from DGPS, and because all satellite differential GNSS systems use the same transmission standard (RTCM) and maritime DGPS radiobeacons are standardized in all radio communication aspects (frequency, binary rate, modulation), the accuracy of differential Galileo can be expected to be similar to that of DGPS. Coverage of the reference station was calculated using dedicated software that computes the signal strength level from transmitter parameters or from a field signal-strength measurement campaign conducted at representative points. The software is based on a Baltic Sea vector map, ground electrical parameters, and a model of the atmospheric noise level in the transmission band.
Automatic Recognition of Fetal Facial Standard Plane in Ultrasound Image via Fisher Vector.
Lei, Baiying; Tan, Ee-Leng; Chen, Siping; Zhuo, Liu; Li, Shengli; Ni, Dong; Wang, Tianfu
2015-01-01
Acquisition of the standard plane is the prerequisite of biometric measurement and diagnosis during the ultrasound (US) examination. In this paper, a new algorithm is developed for the automatic recognition of the fetal facial standard planes (FFSPs) such as the axial, coronal, and sagittal planes. Specifically, densely sampled root scale invariant feature transform (RootSIFT) features are extracted and then encoded by Fisher vector (FV). The Fisher network with multi-layer design is also developed to extract spatial information to boost the classification performance. Finally, automatic recognition of the FFSPs is implemented by support vector machine (SVM) classifier based on the stochastic dual coordinate ascent (SDCA) algorithm. Experimental results using our dataset demonstrate that the proposed method achieves an accuracy of 93.27% and a mean average precision (mAP) of 99.19% in recognizing different FFSPs. Furthermore, the comparative analyses reveal the superiority of the proposed method based on FV over the traditional methods.
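The RootSIFT features mentioned above are a well-known, simple transform of ordinary SIFT descriptors: L1-normalise the descriptor, then take the element-wise square root, so that Euclidean distance between the results approximates the Hellinger kernel on the original histograms. A minimal sketch (with a toy 8-bin descriptor standing in for a real 128-bin SIFT vector):

```python
import math

def root_sift(descriptor):
    """L1-normalise a non-negative descriptor, then take element-wise sqrt.
    The output always has unit L2 norm (when the input is non-zero)."""
    s = sum(abs(v) for v in descriptor)
    if s == 0:
        return [0.0] * len(descriptor)
    return [math.sqrt(v / s) for v in descriptor]

desc = [4.0, 1.0, 0.0, 4.0, 0.0, 0.0, 0.0, 0.0]  # toy 8-bin histogram
r = root_sift(desc)
```

Because the squared entries of the output sum to one, the transformed descriptors feed directly into Fisher vector encoding or a linear SVM without further normalisation.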
Lee, Cholyoung; Kim, Kyehyun; Lee, Hyuk
2018-01-15
Impervious surfaces are mainly artificial structures such as rooftops, roads, and parking lots that are covered by impenetrable materials. These surfaces are becoming the major causes of nonpoint source (NPS) pollution in urban areas. The rapid progress of urban development is increasing the total amount of impervious surfaces and NPS pollution. Therefore, many cities worldwide have adopted a stormwater utility fee (SUF) that generates funds needed to manage NPS pollution. The amount of SUF is estimated based on the impervious ratio, which is calculated by dividing the total impervious surface area by the net area of an individual land parcel. Hence, in order to identify the exact impervious ratio, large-scale impervious surface maps (ISMs) are necessary. This study proposes and assesses various methods for generating large-scale ISMs for urban areas by using existing GIS data. Bupyeong-gu, a district in the city of Incheon, South Korea, was selected as the study area. Spatial data that were freely offered by national/local governments in S. Korea were collected. First, three types of ISMs were generated by using the land-cover map, digital topographic map, and orthophotographs, to validate three methods that had been proposed conceptually by Korea Environment Corporation. Then, to generate an ISM of higher accuracy, an integration method using all data was proposed. Error matrices were made and Kappa statistics were calculated to evaluate the accuracy. Overlay analyses were performed to examine the distribution of misclassified areas. From the results, the integration method delivered the highest accuracy (Kappa statistic of 0.99) compared to the three methods that use a single type of spatial data. However, a longer production time and higher cost were limiting factors. Among the three methods using a single type of data, the land-cover map showed the highest accuracy with a Kappa statistic of 0.91. 
Thus, it was judged that the mapping method using the land-cover map is more appropriate than the others. In conclusion, it is desirable to apply the integration method when generating the ISM with the highest accuracy. However, if time and cost are constrained, it would be effective to primarily use the land-cover map. Copyright © 2017 Elsevier Ltd. All rights reserved.
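The accuracy measures used above, overall accuracy and Cohen's Kappa, are both computed from an error (confusion) matrix whose rows are mapped classes and columns are reference classes. A minimal sketch with an illustrative toy matrix (not the study's data):

```python
def overall_accuracy(m):
    """Fraction of reference points whose mapped class matches."""
    total = sum(sum(row) for row in m)
    diag = sum(m[i][i] for i in range(len(m)))
    return diag / total

def kappa(m):
    """Cohen's Kappa: agreement corrected for chance agreement."""
    total = sum(sum(row) for row in m)
    po = overall_accuracy(m)
    # expected chance agreement from row and column marginals
    pe = sum(sum(m[i]) * sum(row[i] for row in m)
             for i in range(len(m))) / total ** 2
    return (po - pe) / (1 - pe)

# rows: mapped impervious / pervious; columns: reference impervious / pervious
matrix = [[45, 5],
          [5, 45]]
oa = overall_accuracy(matrix)   # 0.9
k = kappa(matrix)               # 0.8
```

Kappa is lower than overall accuracy because it discounts the agreement one would get by chance from the class proportions, which is why the study reports Kappa (e.g. 0.99 for the integration method) rather than raw accuracy alone.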
Bucci, Monica; Mandelli, Maria Luisa; Berman, Jeffrey I.; Amirbekian, Bagrat; Nguyen, Christopher; Berger, Mitchel S.; Henry, Roland G.
2013-01-01
Introduction: Diffusion MRI tractography has been increasingly used to delineate white matter pathways in vivo, for which the leading clinical application is presurgical mapping of eloquent regions. However, there is rarely an opportunity to quantify the accuracy or sensitivity of these approaches to delineate white matter fiber pathways in vivo due to the lack of a gold standard. Intraoperative electrical stimulation (IES) provides a gold standard for the location and existence of functional motor pathways that can be used to determine the accuracy and sensitivity of fiber tracking algorithms. In this study we used intraoperative stimulation from brain tumor patients as a gold standard to estimate the sensitivity and accuracy of diffusion tensor MRI (DTI) and q-ball models of diffusion with deterministic and probabilistic fiber tracking algorithms for delineation of motor pathways. Methods: We used preoperative high angular resolution diffusion MRI (HARDI) data (55 directions, b = 2000 s/mm²) acquired in a clinically feasible time frame from 12 patients who underwent a craniotomy for resection of a cerebral glioma. The corticospinal fiber tracts were delineated with DTI and q-ball models using deterministic and probabilistic algorithms. We used cortical and white matter IES sites as a gold standard for the presence and location of functional motor pathways. Sensitivity was defined as the true positive rate of delineating fiber pathways based on cortical IES stimulation sites. For accuracy and precision of the course of the fiber tracts, we measured the distance between the subcortical stimulation sites and the tractography result. The positive predictive rate of the delineated tracts was assessed by comparing subcortical IES motor function (upper extremity, lower extremity, face) with the connection of the tractography pathway in the motor cortex. Results: We obtained 21 cortical and 8 subcortical IES sites from intraoperative mapping of motor pathways.
Probabilistic q-ball had the best sensitivity (79%) as determined from cortical IES, compared to deterministic q-ball (50%), probabilistic DTI (36%), and deterministic DTI (10%). The sensitivity using the q-ball algorithm (65%) was significantly higher than using DTI (23%) (p < 0.001), and the probabilistic algorithms (58%) were more sensitive than deterministic approaches (30%) (p = 0.003). Probabilistic q-ball fiber tracks had the smallest offset to the subcortical stimulation sites. The offsets between diffusion fiber tracks and subcortical IES sites increased significantly for those cases where the diffusion fiber tracks were visibly thinner than expected. There was perfect concordance between the subcortical IES function (e.g., hand stimulation) and the cortical connection of the nearest diffusion fiber track (e.g., upper extremity cortex). Discussion: This study highlights the tremendous utility of intraoperative stimulation sites to provide a gold standard from which to evaluate diffusion MRI fiber tracking methods, and has provided an objective standard for evaluation of different diffusion models and approaches to fiber tracking. Probabilistic q-ball fiber tractography was significantly better than DTI methods in terms of sensitivity and accuracy of the course through the white matter. The commonly used DTI fiber tracking approach was shown to have very poor sensitivity (as low as 10% for deterministic DTI fiber tracking) for delineation of the lateral aspects of the corticospinal tract in our study. Effects of the tumor/edema resulted in significantly larger offsets between the subcortical IES sites and the preoperative fiber tracks. The data provided show that probabilistic HARDI tractography is the most objective and reproducible analysis, but given the small sample size and number of stimulation points, generalizations from our results should be made with caution. Indeed, our results inform the capabilities of preoperative diffusion fiber tracking and indicate that such data should be used carefully when making pre-surgical and intra-operative management decisions. PMID:24273719
Log-polar mapping-based scale space tracking with adaptive target response
NASA Astrophysics Data System (ADS)
Li, Dongdong; Wen, Gongjian; Kuai, Yangliu; Zhang, Ximing
2017-05-01
Correlation filter-based tracking has exhibited impressive robustness and accuracy in recent years. Standard correlation filter-based trackers are restricted to translation estimation and equipped with a fixed target response. These trackers produce inferior performance when confronted with significant scale variation or appearance change. We propose a log-polar mapping-based scale space tracker with an adaptive target response. This tracker transforms the scale variation of the target in Cartesian space into a shift along the logarithmic axis in log-polar space. A one-dimensional scale correlation filter is learned online to estimate the shift along the logarithmic axis. With the log-polar representation, scale estimation is achieved accurately without a multiresolution pyramid. To achieve an adaptive target response, the variance of the Gaussian target response is computed from the response map and updated online with a learning rate parameter. Our log-polar mapping-based scale correlation filter and adaptive target response can be combined with any correlation filter-based tracker. In addition, the scale correlation filter can be extended to a two-dimensional correlation filter to achieve joint estimation of scale variation and in-plane rotation. Experiments performed on the OTB50 benchmark demonstrate that our tracker achieves superior performance against state-of-the-art trackers.
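The key identity behind the tracker above is that, in log-polar coordinates, a scale change of the target becomes a pure shift along the logarithmic radial axis, so a 1-D correlation filter that estimates the shift directly recovers the scale factor. A minimal numeric illustration (the sampling step `DELTA` is an assumed value, not a parameter from the paper):

```python
import math

DELTA = 0.05   # radial sampling step of the log-polar grid (assumed value)

def to_log_axis(r):
    """Continuous index of radius r on the log-sampled radial axis."""
    return math.log(r) / DELTA

scale = 1.30                                            # target grew by 30 percent
shift = to_log_axis(scale * 12.0) - to_log_axis(12.0)   # same shift for every radius
recovered_scale = math.exp(shift * DELTA)               # invert the mapping
```

Because the shift is the same at every radius, a single 1-D filter response peak along the log axis suffices, with no multiresolution pyramid.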
Multiscale Reconstruction for Magnetic Resonance Fingerprinting
Pierre, Eric Y.; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A.
2015-01-01
Purpose: To reduce the acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting (MRF). Methods: An iterative denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template, then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with the desired spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in-vivo data using a highly undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold standard parameter maps were used to quantify the performance of each method. Results: The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD) and B0 field variations in the brain was achieved in vivo for a 256×256 matrix in a total acquisition time of 10.2 s, representing a 3-fold reduction in acquisition time. Conclusions: The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. PMID:26132462
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch
Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems mandated the widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim to improve the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bones and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified in three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated.
Future prospects and potential clinical applications of these techniques and their integration in commercial systems will also be discussed.
Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib
2016-03-01
Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems mandated the widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim to improve the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bones and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified in three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated. 
Future prospects and potential clinical applications of these techniques and their integration in commercial systems will also be discussed.
Vanderhoof, Melanie; Distler, Hayley; Mendiola, Di Ana; Lang, Megan
2017-01-01
Natural variability in surface-water extent and associated characteristics presents a challenge to gathering timely, accurate information, particularly in environments that are dominated by small and/or forested wetlands. This study mapped inundation extent across the Upper Choptank River Watershed on the Delmarva Peninsula, occurring within both Maryland and Delaware. We integrated six quad-polarized Radarsat-2 images, Worldview-3 imagery, and an enhanced topographic wetness index in a random forest model. Output maps were filtered using light detection and ranging (lidar)-derived depressions to maximize the accuracy of forested inundation extent. Overall accuracy within the integrated and filtered model was 94.3%, with 5.5% and 6.0% errors of omission and commission for inundation, respectively. The accuracy of inundation maps obtained using Radarsat-2 alone was likely detrimentally affected by less than ideal angles of incidence and recent precipitation, but was likely improved by targeting the period between snowmelt and leaf-out for imagery collection. Across the six Radarsat-2 dates, filtering inundation outputs by lidar-derived depressions slightly elevated errors of omission for water (+1.0%), but decreased errors of commission (−7.8%), resulting in an average increase of 5.4% in overall accuracy. Depressions were derived from lidar datasets collected under both dry and average wetness conditions. Although antecedent wetness conditions influenced the abundance and total area mapped as depression, the two versions of the depression datasets showed a similar ability to reduce error in the inundation maps. Accurate mapping of surface water is critical to predicting and monitoring the effect of human-induced change and interannual variability on water quantity and quality.
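The depression-filtering step described above reduces to a cell-wise logical AND: pixels classified as inundated are kept only where they fall inside a lidar-derived depression, trading a small rise in omission error for a larger drop in commission error. A minimal sketch with toy boolean grids (not the study's data):

```python
def filter_by_depressions(inundation, depressions):
    """Keep an inundated pixel only if it lies inside a mapped depression."""
    return [[inundation[i][j] and depressions[i][j]
             for j in range(len(inundation[0]))]
            for i in range(len(inundation))]

# Toy 2x3 grids: random-forest water classification and lidar depressions.
rf_water = [[True, True,  False],
            [True, False, True]]
depress  = [[True, False, False],
            [True, True,  True]]

filtered = filter_by_depressions(rf_water, depress)
```

Here the pixel at row 0, column 1 is dropped (water mapped outside any depression, a likely commission error), while all water pixels inside depressions survive.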
NASA Astrophysics Data System (ADS)
Li-Chee-Ming, J.; Armenakis, C.
2014-11-01
This paper presents the ongoing development of a small unmanned aerial mapping system (sUAMS) that in the future will track its trajectory and perform 3D mapping in near-real time. As both mapping and tracking algorithms require powerful computational capabilities and large data storage facilities, we propose to use the RoboEarth Cloud Engine (RCE) to offload heavy computation and store data in secure computing environments in the cloud. While the RCE's capabilities have been demonstrated with terrestrial robots in indoor environments, this paper explores the feasibility of using the RCE in mapping and tracking applications in outdoor environments by small UAMS. The experiments presented in this work assess the data processing strategies and evaluate the attainable tracking and mapping accuracies using the data obtained by the sUAMS. Testing was performed with an Aeryon Scout quadcopter. It flew over York University, up to approximately 40 metres above the ground. The quadcopter was equipped with a single-frequency GPS receiver providing positioning to about 3-meter accuracy, an AHRS (Attitude and Heading Reference System) estimating the attitude to about 3 degrees, and an FPV (First Person Viewing) camera. Video images captured from the onboard camera were processed using VisualSFM and SURE, which are being reformed as an Application-as-a-Service via the RCE. The 3D virtual building model of York University was used as a known environment to georeference the point cloud generated from the sUAMS' sensor data. The estimated position and orientation parameters of the video camera show increases in accuracy when compared to the sUAMS' autopilot solution, derived from the onboard GPS and AHRS. The paper presents the proposed approach and the results, along with their accuracies.
NASA Astrophysics Data System (ADS)
Sturm, M.; Nolan, M.; Larsen, C. F.
2014-12-01
A long-standing goal in snow hydrology has been to map snow cover in detail, either mapping snow depth or snow water equivalent (SWE) at sub-meter resolution. Airborne LiDAR and air photogrammetry have been used successfully for this purpose, but both require significant investments in equipment and substantial processing effort. Here we detail a relatively inexpensive and simple airborne photogrammetric technique that can be used to measure snow depth. The main airborne hardware consists of a consumer-grade digital camera attached to a survey-quality, dual-frequency GPS. Photogrammetric processing is done using commercially available Structure from Motion (SfM) software that does not require ground control points. Digital elevation models (DEMs) are made from snow-free acquisitions in the summer and snow-covered acquisitions in winter, and the maps are then differenced to arrive at snow thickness. We tested the accuracy and precision of snow depths measured using this system through (1) a comparison with airborne scanning LiDAR, (2) a comparison of results from two independent and slightly different photogrammetric systems, and (3) a comparison to extensive on-the-ground measured snow depths. Vertical accuracy and precision are on the order of ±30 cm and ±8 cm, respectively. The accuracy can be made to approach the precision if suitable snow-free ground control points exist and are used to co-register the summer and winter DEM maps. Final snow depth accuracy from our series of tests was on the order of ±15 cm. This photogrammetric method substantially lowers the economic and expertise barriers to entry for mapping snow.
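The depth retrieval described above is a cell-wise DEM difference, optionally corrected by a constant co-registration offset estimated over snow-free ground. A minimal sketch with illustrative toy grids and an assumed 0.22 m bias (all numbers are made up for the example):

```python
def snow_depth(winter, summer, bias=0.0):
    """Snow depth = winter (snow-on) DEM minus summer (snow-off) DEM,
    minus a constant co-registration bias between the two surveys."""
    return [[w - s - bias for w, s in zip(wrow, srow)]
            for wrow, srow in zip(winter, summer)]

# Toy 2x2 DEM tiles, elevations in metres.
summer_dem = [[101.2, 101.5],
              [100.9, 101.1]]
winter_dem = [[102.4, 102.9],   # snow-on surface; includes a +0.22 m offset
              [102.1, 102.6]]

# Bias would be estimated where depth is known to be zero (e.g. a plowed road).
depths = snow_depth(winter_dem, summer_dem, bias=0.22)
```

Estimating and removing the bias over snow-free control points is exactly the co-registration step the abstract credits with pushing accuracy toward the ±8 cm precision figure.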
Zytoon, Mohamed A.
2016-01-01
As traffic and other environmental noise-generating activities grow in the Kingdom of Saudi Arabia (KSA), adverse health and other impacts are expected to develop. The management of this problem involves many actions, of which noise mapping has proven to be a helpful approach. The objective of the current study was to test the adequacy of the available data in KSA municipalities for generating urban noise maps and to verify the applicability of available environmental noise mapping and noise annoyance models for KSA. Therefore, noise maps were produced for Al-Fayha District in Jeddah City, KSA, using commercially available noise mapping software and applying the French national computation method “NMPB” for traffic noise. Most of the data required for traffic noise prediction and annoyance analysis were available, either in the Municipality GIS department or in other governmental authorities. The predicted noise levels during the three time periods, i.e., daytime, evening, and nighttime, were found to be higher than the maximum recommended levels established in KSA environmental noise standards. Annoyance analysis revealed that high percentages of the District inhabitants were highly annoyed, depending on the type of planning zone and period of interest. These results reflect the urgent need to consider environmental noise reduction in KSA national plans. The accuracy of the predicted noise levels and the availability of most of the necessary data should encourage further studies on the use of noise mapping as part of noise reduction plans. PMID:27187438
Schuenke, Patrick; Windschuh, Johannes; Roeloffs, Volkert; Ladd, Mark E; Bachert, Peter; Zaiss, Moritz
2017-02-01
Together with the development of MRI contrasts that are inherently small in magnitude, increased magnetic field accuracy is also required. Hence, mapping of the static magnetic field (B0) and the excitation field (B1) is important not only to feed back to shim algorithms, but also for postprocessing contrast-correction procedures. A novel field-inhomogeneity mapping method is presented that allows simultaneous mapping of the water shift and B1 (WASABI) using an off-resonant rectangular preparation pulse. The induced Rabi oscillations lead to a sinc-like spectrum in the frequency-offset dimension and allow for determination of B0 by its symmetry axis and of B1 by its oscillation frequency. The stability of the WASABI method with regard to the influences of T1, T2, magnetization transfer, and repetition time was investigated and its convergence interval was verified. B0 and B1 maps obtained simultaneously by means of WASABI in the human brain at 3 T and 7 T compete well with maps obtained by standard methods. Finally, the method was applied successfully for B0 and B1 correction of chemical exchange saturation transfer MRI (CEST-MRI) data of the human brain. The proposed WASABI method yields simultaneous B0 and B1 mapping within 1 min that is robust and easy to implement. Magn Reson Med 77:571-580, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
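The Rabi-oscillation readout behind WASABI can be illustrated with a minimal two-level simulation (relaxation neglected; the pulse amplitude, duration, and B0 offset below are assumed, illustrative values, not the protocol's):

```python
import numpy as np

# Off-resonant rectangular pulse on a two-level spin: the standard Rabi formula
# gives the flip probability (w1^2 / weff^2) * sin^2(weff * tp / 2).
gamma = 267.513e6          # proton gyromagnetic ratio (rad s^-1 T^-1)
tp = 5e-3                  # rectangular pulse duration (s), assumed
b1 = 3.7e-6                # B1 amplitude (T), assumed
db0 = 48.0                 # B0 offset (rad/s), assumed

offsets = np.linspace(-4000, 4000, 2001)          # frequency-offset axis (rad/s)
w1 = gamma * b1
weff = np.sqrt((offsets - db0) ** 2 + w1 ** 2)    # effective field at each offset
z = 1.0 - (w1 ** 2 / weff ** 2) * np.sin(weff * tp / 2.0) ** 2  # Z-spectrum

# B0 is read off as the symmetry axis of the sinc-like spectrum; here the
# deepest point sits at the assumed offset db0.
est_b0 = float(offsets[np.argmin(z)])
```

The symmetry axis of the resulting spectrum recovers the assumed B0 offset; the oscillation frequency of the lobes likewise encodes B1.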
Lang, M; Vain, A; Bunce, R G H; Jongman, R H G; Raet, J; Sepp, K; Kuusemets, V; Kikas, T; Liba, N
2015-03-01
Habitat surveillance and subsequent monitoring at a national level is usually carried out by recording data from in situ sample sites located according to predefined strata. This paper describes the application of remote sensing to the extension of such field data recorded in 1-km squares to adjacent squares, in order to increase sample number without further field visits. Habitats were mapped in eight central squares in northeast Estonia in 2010 using a standardized recording procedure. Around one of the squares, a special study site was established which consisted of the central square and eight surrounding squares. A Landsat-7 Enhanced Thematic Mapper Plus (ETM+) image was used for correlation with in situ data. An airborne light detection and ranging (lidar) vegetation height map was also included in the classification. A series of tests were carried out by including the lidar data and contrasting analytical techniques, which are described in detail in the paper. Training accuracy in the central square varied from 75 to 100%. In the extrapolation procedure to the surrounding squares, accuracy varied from 53.1 to 63.1%, which improved by 10% with the inclusion of lidar data. The reasons for this relatively low classification accuracy were mainly inherent variability in the spectral signatures of habitats but also differences between the dates of imagery acquisition and field sampling. Improvements could therefore be made by better synchronization of the field survey and image acquisition as well as by dividing general habitat categories (GHCs) into units which are more likely to have similar spectral signatures. However, the increase in the number of sample kilometre squares compensates for the loss of accuracy in the measurements of individual squares. The methodology can be applied in other studies as the procedures used are readily available.
Conducting Retrospective Ontological Clinical Trials in ICD-9-CM in the Age of ICD-10-CM.
Venepalli, Neeta K; Shergill, Ardaman; Dorestani, Parvaneh; Boyd, Andrew D
2014-01-01
To quantify the impact of the International Classification of Disease 10th Revision Clinical Modification (ICD-10-CM) transition in cancer clinical trials by comparing coding accuracy and data discontinuity in backward ICD-10-CM to ICD-9-CM mapping via two tools, and to develop a standard ICD-9-CM and ICD-10-CM bridging methodology for retrospective analyses. While the transition to ICD-10-CM has been delayed until October 2015, its impact on cancer-related studies utilizing ICD-9-CM diagnoses has been inadequately explored. Three high-impact journals with broad national and international readerships were reviewed for cancer-related studies utilizing ICD-9-CM diagnosis codes in study design, methods, or results. Forward ICD-9-CM to ICD-10-CM mapping was performed using a translational methodology with the Motif web portal ICD-9-CM conversion tool. Backward mapping from ICD-10-CM to ICD-9-CM was performed using both Centers for Medicare and Medicaid Services (CMS) general equivalence mappings (GEMs) files and the Motif web portal tool. Generated ICD-9-CM codes were compared with the original ICD-9-CM codes to assess data accuracy and discontinuity. While both methods yielded additional ICD-9-CM codes, the CMS GEMs method provided incomplete coverage, with 16 of the original ICD-9-CM codes missing, whereas the Motif web portal method provided complete coverage. Of these 16 codes, 12 ICD-9-CM codes were present in 2010 Illinois Medicaid data and accounted for 0.52% of patient encounters and 0.35% of total Medicaid reimbursements. Extraneous ICD-9-CM codes from both methods (CMS GEMs, n = 161; Motif web portal, n = 246) in excess of the original ICD-9-CM codes accounted for 2.1% and 2.3% of total patient encounters and 3.4% and 4.1% of total Medicaid reimbursements in the 2010 Illinois Medicaid database.
Longitudinal data analyses post-ICD-10-CM transition will require backward ICD-10-CM to ICD-9-CM coding, and data comparison for accuracy. Researchers must be aware that all methods for backward coding are not comparable in yielding original ICD-9-CM codes. The mandated delay is an opportunity for organizations to better understand areas of financial risk with regards to data management via backward coding. Our methodology is relevant for all healthcare-related coding data, and can be replicated by organizations as a strategy to mitigate financial risk.
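Backward ICD-10-CM to ICD-9-CM translation of the kind compared here can be sketched with a GEMs-style lookup table; the mappings below are illustrative placeholders, not real GEMs entries:

```python
# Hypothetical backward mapping table (ICD-10-CM -> candidate ICD-9-CM codes).
gems_backward = {
    "C18.9": ["153.9"],     # illustrative pairing only
    "C34.90": ["162.9"],    # illustrative pairing only
    "Z85.3": ["V10.3"],     # illustrative pairing only
}

def backward_map(icd10_codes, table):
    """Collect ICD-9-CM candidates for a set of ICD-10-CM codes and report
    any codes the table cannot translate (coverage gaps)."""
    mapped, missing = set(), []
    for code in icd10_codes:
        if code in table:
            mapped.update(table[code])
        else:
            missing.append(code)
    return sorted(mapped), missing

icd9, gaps = backward_map(["C18.9", "C34.90", "Q99.9"], gems_backward)
```

Codes absent from the table surface as coverage gaps, mirroring the 16 original ICD-9-CM codes the CMS GEMs method failed to recover.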
Wide Swath Stereo Mapping from Gaofen-1 Wide-Field-View (WFV) Images Using Calibration
Chen, Shoubin; Liu, Jingbin; Huang, Wenchao
2018-01-01
The development of Earth observation systems has changed the nature of survey and mapping products, as well as the methods for updating maps. Among optical satellite mapping methods, the multiline array stereo and agile stereo modes are the most common methods for acquiring stereo images. However, differences in temporal resolution and spatial coverage limit their application. To address this issue, our study takes advantage of the wide spatial coverage and high revisit frequency of wide swath images and aims to verify the feasibility of stereo mapping in the wide swath stereo mode and to reach a reliable stereo accuracy level using calibration. In contrast with classic stereo modes, the wide swath stereo mode is characterized by both wide spatial coverage and high temporal resolution and is capable of obtaining a wide range of stereo images over a short period. In this study, Gaofen-1 (GF-1) wide-field-view (WFV) images, with total imaging widths of 800 km, multispectral resolutions of 16 m, and revisit periods of four days, are used for wide swath stereo mapping. To acquire a high-accuracy digital surface model (DSM), the nonlinear system distortion in the GF-1 WFV images is detected and compensated for in advance. With proper calibration, the elevation accuracy of the wide swath stereo mode of the GF-1 WFV images can be improved from 103 m to 30 m for a DSM, meeting the demands for 1:250,000 scale mapping and rapid topographic map updates and showing improved efficacy for satellite imaging. PMID:29494540
EnviroAtlas -Tampa, FL- One Meter Resolution Urban Land Cover (2010)
The EnviroAtlas Tampa, FL land cover map was generated from USDA NAIP (National Agricultural Imagery Program) four band (red, green, blue and near infrared) aerial photography from April-May 2010 at 1 m spatial resolution. Eight land cover classes were mapped: impervious surface, soil and barren, grass and herbaceous, trees and forest, water, agriculture, woody wetland, and emergent wetland. The area mapped is defined by the US Census Bureau's 2010 Urban Statistical Area for Tampa, and includes the cities of Clearwater and St. Petersburg, as well as additional outlying areas. An accuracy assessment using a stratified random sampling of 600 samples (100 per class) yielded an overall accuracy of 70.67 percent and an area weighted accuracy of 81.87 percent using a minimum mapping unit of 9 pixels (3x3 pixel window). This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
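Overall and per-class accuracies of the kind reported above are computed from an error (confusion) matrix built from the stratified random sample; the toy 3-class counts below are not the EnviroAtlas assessment data:

```python
import numpy as np

# Toy confusion matrix: rows = reference class, columns = mapped class.
cm = np.array([
    [80, 15,  5],
    [10, 85,  5],
    [ 5, 10, 85],
])

overall = np.trace(cm) / cm.sum()          # fraction of samples on the diagonal
producers = np.diag(cm) / cm.sum(axis=1)   # producer's accuracy (omission view)
users = np.diag(cm) / cm.sum(axis=0)       # user's accuracy (commission view)
```

Area-weighted accuracy would additionally weight each class by its mapped area proportion rather than its sample count.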
Gai, Neville D; Malayeri, Ashkan A; Bluemke, David A
2017-04-01
To develop and assess a new technique for three-dimensional (3D) full lung T1 and T2* mapping using a single free-breathing scan during a clinically feasible time. A 3D stack of dual-echo ultrashort echo time (UTE) radial acquisitions interleaved with and without a WET (water suppression enhanced through T1 effects) saturation pulse was used to map T1 and T2* simultaneously in a single scan. Correction for modulation due to multiple views per segment was derived. Bloch simulations were performed to study the saturation pulse excitation profile on lung tissue. Optimization of the saturation delay time (for T1 mapping) and echo time (for T2* mapping) was performed. Monte Carlo simulation was done to predict the accuracy and precision of the sequence, with the signal-to-noise ratio of in vivo images used in the simulation. A phantom study was carried out using the 3D interleaved saturation recovery with dual-echo ultrashort echo time imaging (ITSR-DUTE) sequence and the reference standard inversion recovery spin echo sequence (IR-SE) to compare the accuracy of the sequence. Nine healthy volunteers were imaged and the mean (SD) of T1 and T2* in lung parenchyma at 3T was estimated through manually assisted segmentation. 3D lung coverage with a resolution of 2.5 × 2.5 × 6 mm³ was performed and nominal scan time was recorded for the scans. Repeatability was assessed in three of the volunteers. Regional differences in T1/T2* values were also assessed. The phantom study showed the accuracy of T1 values to be within 2.3% of values obtained from IR-SE. Mean T1 value in lung parenchyma was 1002 ± 82 ms, while T2* was 0.85 ± 0.1 ms. Scan time was ∼10 min for volunteer scans. Mean coefficient of variation (CV) across slices was 0.057 and 0.09, respectively. Regional variation along the gravitational direction and between the right and left lung was not significant (P = 0.25 and P = 0.06, respectively) for T1. T2* showed significant variation (P = 0.03) along the gravitational direction.
Repeatability for three volunteers was within 0.7% for T1 and 1.9% for T2*. 3D T1 and T2* maps of the entire lung can be obtained in a single scan of ∼10 min with a resolution of 2.5 × 2.5 × 6 mm³. J. Magn. Reson. Imaging 2017;45:1097-1104. © 2016 International Society for Magnetic Resonance in Medicine.
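The saturation-recovery T1 estimation underlying the sequence can be sketched as follows (synthetic noiseless signals and a log-linear fit for illustration, not the ITSR-DUTE fitting procedure itself):

```python
import numpy as np

# Saturation recovery model: S(TD) = S0 * (1 - exp(-TD / T1)).
true_t1 = 1.0    # seconds, close to the ~1000 ms lung T1 reported above
s0 = 100.0
td = np.array([0.1, 0.3, 0.6, 1.0, 2.0, 4.0])   # saturation delays (s), assumed
signal = s0 * (1.0 - np.exp(-td / true_t1))

# ln(1 - S/S0) = -TD/T1 is linear in TD, so a degree-1 fit recovers -1/T1.
slope, _ = np.polyfit(td, np.log(1.0 - signal / s0), 1)
est_t1 = -1.0 / slope
```

With noisy data one would instead fit the exponential model directly (nonlinear least squares), but the linearized form shows why a handful of saturation delays suffices.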
Improving estimates of streamflow characteristics by using Landsat-1 imagery
Hollyday, Este F.
1976-01-01
Imagery from the first Earth Resources Technology Satellite (renamed Landsat-1) was used to discriminate physical features of drainage basins in an effort to improve equations used to estimate streamflow characteristics at gaged and ungaged sites. Records of 20 gaged basins in the Delmarva Peninsula of Maryland, Delaware, and Virginia were analyzed for 40 statistical streamflow characteristics. Equations relating these characteristics to basin characteristics were obtained by a technique of multiple linear regression. A control group of equations contains basin characteristics derived from maps. An experimental group of equations contains basin characteristics derived from maps and imagery. Characteristics from imagery were forest, riparian (streambank) vegetation, water, and combined agricultural and urban land use. These basin characteristics were isolated photographically by techniques of film-density discrimination. The area of each characteristic in each basin was measured photometrically. Comparison of equations in the control group with corresponding equations in the experimental group reveals that for 12 out of 40 equations the standard error of estimate was reduced by more than 10 percent. As an example, the standard error of estimate of the equation for the 5-year recurrence-interval flood peak was reduced from 46 to 32 percent. Similarly, the standard error of the equation for the mean monthly flow for September was reduced from 32 to 24 percent, the standard error for the 7-day, 2-year recurrence low flow was reduced from 136 to 102 percent, and the standard error for the 3-day, 2-year flood volume was reduced from 30 to 12 percent. It is concluded that data from Landsat imagery can substantially improve the accuracy of estimates of some streamflow characteristics at sites in the Delmarva Peninsula.
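The regression workflow (relate a streamflow characteristic to basin characteristics, then compare the standard error of estimate with and without an imagery-derived variable) can be sketched on synthetic data; the variables and coefficients below are assumptions, not the study's:

```python
import numpy as np

# Synthetic "basins": a map-derived predictor (area) and an imagery-derived
# predictor (forest fraction) driving a flood-peak response.
rng = np.random.default_rng(0)
n = 20                                    # 20 gaged basins, as in the study
area = rng.uniform(10, 200, n)            # map-derived basin characteristic
forest = rng.uniform(0, 1, n)             # imagery-derived characteristic
peak = 2.0 * area - 30.0 * forest + rng.normal(0, 5, n)   # synthetic response

def std_error(y, X):
    """Standard error of estimate for a least-squares fit y ~ X (with intercept)."""
    A = np.column_stack([np.ones_like(y), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    dof = len(y) - A.shape[1]
    return float(np.sqrt(resid @ resid / dof))

se_control = std_error(peak, area.reshape(-1, 1))                   # maps only
se_experimental = std_error(peak, np.column_stack([area, forest]))  # maps + imagery
```

Adding the imagery-derived predictor shrinks the standard error of estimate, which is the improvement the paper reports for 12 of its 40 equations.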
Performance Evaluation of sUAS Equipped with Velodyne HDL-32E LiDAR Sensor
NASA Astrophysics Data System (ADS)
Jozkow, G.; Wieczorek, P.; Karpina, M.; Walicka, A.; Borkowski, A.
2017-08-01
The Velodyne HDL-32E laser scanner is increasingly used as the main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. Accuracy assessment was conducted in four aspects: the impact of sensors on theoretical point cloud accuracy, trajectory reconstruction quality, and internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by calculating the 3D position error given the errors of the sensors used. The quality of trajectory reconstruction was assessed by comparing position and attitude differences from the forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted for planar surfaces. In addition, the absolute accuracy was also determined by calculating point 3D distances between the LiDAR UAS and reference TLS point clouds. Test data consisted of point clouds collected in two separate flights performed over the same area. The executed experiments showed that in the tested UAS, the trajectory reconstruction, especially attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of point clouds collected during both test flights was better than 10 cm; thus, the investigated UAS fits the mapping-grade category.
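The plane-fitting check used for the internal accuracy assessment can be sketched with an SVD fit on a synthetic planar patch (illustrative noise level, not HDL-32E data):

```python
import numpy as np

# Synthetic planar patch with ~3 cm vertical noise.
rng = np.random.default_rng(1)
xy = rng.uniform(-5, 5, size=(500, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 3.0 + rng.normal(0, 0.03, 500)
pts = np.column_stack([xy, z])

# Best-fit plane: centroid plus the direction of least variance from SVD.
centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)
normal = vt[-1]                        # plane normal = smallest singular vector
resid = (pts - centroid) @ normal      # signed point-to-plane distances
rms = float(np.sqrt((resid ** 2).mean()))
```

The RMS of the point-to-plane residuals is the per-patch internal accuracy figure; repeating this over several planar samples gives the kind of summary reported above.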
NASA Astrophysics Data System (ADS)
Jia, Duo; Wang, Cangjiao; Lei, Shaogang
2018-01-01
Mapping vegetation dynamic types in mining areas is significant for revealing the mechanisms of environmental damage and for guiding ecological construction. Dynamic types of vegetation can be identified by applying interannual normalized difference vegetation index (NDVI) time series. However, phase differences and time shifts in interannual time series decrease mapping accuracy in mining regions. To overcome these problems and to increase the accuracy of mapping vegetation dynamics, an interannual Landsat time series for optimum vegetation growing status was first constructed using the enhanced spatial and temporal adaptive reflectance fusion model algorithm. We then proposed a Markov random field optimized, semisupervised, Gaussian dynamic time warping kernel-based fuzzy c-means (FCM) clustering algorithm for interannual NDVI time series to map dynamic vegetation types in mining regions. The proposed algorithm has been tested in the Shengli and Shendong mining regions, which are typical representatives of China's open-pit and underground mining regions, respectively. Experiments show that the proposed algorithm can solve the problems of phase differences and time shifts to achieve better performance when mapping vegetation dynamic types. The overall accuracies for the Shengli and Shendong mining regions were 93.32% and 89.60%, respectively, with improvements of 7.32% and 25.84% when compared with the original semisupervised FCM algorithm.
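Dynamic time warping (DTW), the distance that makes the clustering tolerant to phase differences and time shifts, can be sketched with a plain dynamic-programming implementation (the NDVI-like values are illustrative):

```python
import numpy as np

def dtw(a, b):
    """Classic DTW distance between two 1-D sequences via dynamic programming."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

series = [0.2, 0.5, 0.8, 0.5, 0.2]
shifted = [0.2, 0.2, 0.5, 0.8, 0.5]   # same shape, shifted by one step
```

The warped distance between the series and its one-step-shifted copy stays small even though the pointwise difference is large, which is exactly the shift tolerance the kernel exploits.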
NASA Technical Reports Server (NTRS)
Tom, C.; Miller, L. D.; Christenson, J. W.
1978-01-01
A landscape model was constructed with 34 land-use, physiographic, socioeconomic, and transportation maps. A simple Markov land-use trend model was constructed from observed rates of change and nonchange derived from photointerpreted 1963 and 1970 airphotos. Seven multivariate land-use projection models predicting 1970 spatial land-use changes achieved accuracies from 42 to 57 percent. A final modeling strategy was designed, which combines both Markov trend and multivariate spatial projection processes. Landsat-1 image preprocessing included geometric rectification/resampling, spectral-band, and band/insolation ratioing operations. A new, systematic grid-sampled point training-set approach proved to be useful when tested on the four original MSS bands, ten image bands and ratios, and all 48 image and map variables (less land use). Ten-variable accuracy was raised by over 15 percentage points, from 38.4 to 53.9 percent, with the use of the 31 ancillary variables. A land-use classification map was produced with an optimal ten-channel subset of four image bands and six ancillary map variables. Point-by-point verification of 331,776 points against a 1972/1973 U.S. Geological Survey (USGS) land-use map prepared with airphotos and the same classification scheme showed average first-, second-, and third-order accuracies of 76.3, 58.4, and 33.0 percent, respectively.
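A first-order Markov land-use trend model of the kind described can be sketched in a few lines; the classes and transition rates below are illustrative, not the study's 1963/1970 estimates:

```python
import numpy as np

# Transition matrix P[i, j]: probability a cell in class i at the first photo
# date is in class j at the second date (rows sum to 1).
classes = ["urban", "agriculture", "forest"]
P = np.array([
    [0.95, 0.03, 0.02],   # from urban
    [0.10, 0.85, 0.05],   # from agriculture
    [0.04, 0.06, 0.90],   # from forest
])
state = np.array([0.20, 0.50, 0.30])   # class proportions at the first date

projected = state @ P                  # proportions one interval ahead
projection = dict(zip(classes, projected))
```

Repeated multiplication (`state @ np.linalg.matrix_power(P, k)`) projects the trend k intervals ahead, which is the "Markov trend" half of the combined strategy above.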
Scoping of Flood Hazard Mapping Needs for Belknap County, New Hampshire
2006-01-01
DEM Digital Elevation Model; DFIRM Digital Flood Insurance Rate Map; DOQ Digital Orthophoto Quadrangle; DOQQ Digital Ortho Quarter Quadrangle; DTM... Agriculture Imagery Program (NAIP) color Digital Orthophoto Quadrangles (DOQs)). Remote sensing, base map information, GIS data (for example, contour data... found on USGS topographic maps. More recently developed data were derived from digital orthophotos providing improved base map accuracy. NH GRANIT is
cudaMap: a GPU accelerated program for gene expression connectivity mapping
2013-01-01
Background: Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2 h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. Results: cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU-assisted performance and CPU executions as the computational load increases for high-accuracy evaluation of statistical significance. Conclusion: Emerging ‘omics’ technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy-duty connectivity mapping tasks, which are increasingly required in modern cancer research.
cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap. PMID:24112435
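The flavor of a connectivity-mapping connection score can be sketched with a simple signed-rank statistic; this is a deliberate simplification for illustration, not the exact sscMap/cudaMap score:

```python
def connection_score(signature, ranked_genes):
    """Toy connection score: genes in a reference profile are ranked from most
    down- to most up-regulated; a query signature of up (+1) and down (-1)
    genes scores by summed signed ranks, normalised to [-1, 1]."""
    n = len(ranked_genes)
    # centre ranks on zero: most down-regulated -> -(n-1)/2, most up -> +(n-1)/2
    rank = {g: i - (n - 1) / 2 for i, g in enumerate(ranked_genes)}
    raw = sum(sign * rank[g] for g, sign in signature.items())
    # normalise by the best achievable score for a signature of this size
    max_raw = sum(sorted((abs(r) for r in rank.values()), reverse=True)[:len(signature)])
    return raw / max_raw

profile = ["g1", "g2", "g3", "g4", "g5"]   # ranked down -> up in this profile
sig = {"g5": +1, "g1": -1}                 # up-gene at the top, down-gene at the bottom
```

The GPU win comes from evaluating this kind of score, and its permutation-based significance, across many reference profiles in parallel.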
NASA Technical Reports Server (NTRS)
Eppler, Dean B.; Bleacher, Jacob F.; Evans, Cynthia A.; Feng, Wanda; Gruener, John; Hurwitz, Debra M.; Skinner, J. A., Jr.; Whitson, Peggy; Janoiko, Barbara
2013-01-01
Geologic maps integrate the distributions, contacts, and compositions of rock and sediment bodies as a means to interpret local to regional formative histories. Applying terrestrial mapping techniques to other planets is challenging because data are collected primarily by orbiting instruments, with infrequent, spatially limited in situ human and robotic exploration. Although geologic maps developed using remote data sets and limited "Apollo-style" field access likely contain inaccuracies, the magnitude, type, and occurrence of these are only marginally understood. This project evaluates the interpretative and cartographic accuracy of both field- and remote-based mapping approaches by comparing two 1:24,000 scale geologic maps of the San Francisco Volcanic Field (SFVF), north-central Arizona. The first map is based on traditional field mapping techniques, while the second is based on remote data sets, augmented with limited field observations collected during NASA Desert Research & Technology Studies (RATS) 2010 exercises. The RATS mission used Apollo-style methods not only for pre-mission traverse planning but also to conduct geologic sampling as part of science operation tests. Cross-comparison demonstrates that the Apollo-style map identifies many of the same rock units and determines a similar broad history as the field-based map. However, field mapping techniques allow markedly improved discrimination of map units, particularly unconsolidated surficial deposits, and recognize a more complex eruptive history than was possible using Apollo-style data. Further, the distribution of unconsolidated surface units became more obvious to the field team in the remote sensing data only after the fieldwork had been conducted.
The study raises questions about the most effective approach to balancing mission costs with the rate of knowledge capture, suggesting that there is an inflection point in the "knowledge capture curve" beyond which additional resource investment yields progressively smaller gains in geologic knowledge.
Mapping Gnss Restricted Environments with a Drone Tandem and Indirect Position Control
NASA Astrophysics Data System (ADS)
Cledat, E.; Cucci, D. A.
2017-08-01
The problem of autonomously mapping highly cluttered environments, such as urban and natural canyons, is intractable with current UAV technology. The reason lies in the absence or unreliability of GNSS signals due to partial sky occlusion or multi-path effects. High-quality carrier-phase observations are also required in efficient mapping paradigms, such as Assisted Aerial Triangulation, to achieve high ground accuracy without the need for dense networks of ground control points. In this work we consider a drone tandem in which the first drone flies outside the canyon, where the GNSS constellation visibility is ideal, visually tracks the second drone, and provides an indirect position control for it. This enables both autonomous guidance and accurate mapping of GNSS-restricted environments without the need for ground control points. We address the technical feasibility of this concept by considering preliminary real-world experiments in comparable conditions, and we perform a mapping accuracy prediction based on a simulation scenario.
NASA Technical Reports Server (NTRS)
Huang, Dong; Yang, Wenze; Tan, Bin; Rautiainen, Miina; Zhang, Ping; Hu, Jiannan; Shabanov, Nikolay V.; Linder, Sune; Knyazikhin, Yuri; Myneni, Ranga B.
2006-01-01
The validation of moderate-resolution satellite leaf area index (LAI) products such as those operationally generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor data requires reference LAI maps developed from field LAI measurements and fine-resolution satellite data. Errors in field measurements and satellite data determine the accuracy of the reference LAI maps. This paper describes a method by which reference maps of known accuracy can be generated with knowledge of errors in fine-resolution satellite data. The method is demonstrated with data from an international field campaign in a boreal coniferous forest in northern Sweden, and Enhanced Thematic Mapper Plus images. The reference LAI map thus generated is used to assess modifications to the MODIS LAI/fPAR algorithm recently implemented to derive the next generation of the MODIS LAI/fPAR product for this important biome type.
NASA Astrophysics Data System (ADS)
Beaumont, Benjamin; Grippa, Tais; Lennert, Moritz; Vanhuysse, Sabine; Stephenne, Nathalie; Wolff, Eléonore
2017-07-01
Encouraged by the EU INSPIRE directive requirements and recommendations, the Walloon authorities, like other EU regional or national authorities, want to develop operational land-cover (LC) and land-use (LU) mapping methods using existing geodata. Urban planners and environmental monitoring stakeholders in Wallonia have to rely on outdated, mixed, and incomplete LC and LU information; the current reference map is 10 years old. Two object-based classification methods for detailed regional urban LC mapping are compared: a rule-based and a classifier-based method. The added value of using the different existing geospatial datasets in the process is assessed. This includes the comparison between satellite and aerial optical data in terms of mapping accuracies, visual quality of the map, costs, processing, data availability, and property rights. The combination of spectral, tridimensional, and vector data provides accuracy values close to 0.90 for mapping the LC into nine categories with a minimum mapping unit of 15 m². Such a detailed LC map offers opportunities for fine-scale environmental and spatial planning activities. Still, the regional application poses challenges regarding automation, big data handling, and processing time, which are discussed.
A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor.
Zhang, Liang; Shen, Peiyi; Zhu, Guangming; Wei, Wei; Song, Houbing
2015-08-14
Internet of Things (IoT) is driving innovation in an ever-growing set of application domains such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on the robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvement methods as follows. Firstly, the Oriented FAST and Rotated BRIEF (ORB) method is used in feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature matching. Then, the improved RANdom SAmple Consensus (RANSAC) estimation method is adopted in the motion transformation. In the meantime, high-precision Generalized Iterative Closest Points (GICP) is utilized to register a point cloud in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the G2O framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained from Freiburg University's datasets. The Dr Robot X80 equipped with a Kinect camera is also applied in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. These experiments show that the proposed algorithm achieves higher processing speed and better accuracy.
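The RANSAC idea used in the motion-transformation step can be sketched on a toy 2D translation problem with outlier matches; the paper's pipeline estimates a full 6-DoF transform, so this is only an illustration of the sample-and-score loop:

```python
import numpy as np

# Synthetic matched feature points: 45 true correspondences under a known
# translation, 15 wrong matches (outliers).
rng = np.random.default_rng(2)
src = rng.uniform(0, 100, size=(60, 2))
dst = src + np.array([5.0, -3.0])                # true translation
dst[:15] = rng.uniform(0, 100, size=(15, 2))     # 25% wrong matches

best_inliers, best_t = 0, None
for _ in range(100):
    i = rng.integers(len(src))                   # minimal sample: one match pair
    t = dst[i] - src[i]                          # candidate translation
    # score: how many matches agree with this candidate within a threshold
    inliers = int(np.sum(np.linalg.norm(dst - (src + t), axis=1) < 0.5))
    if inliers > best_inliers:
        best_inliers, best_t = inliers, t
```

Wrong matches propose translations that few other pairs agree with, so the consensus winner comes from the inlier set; a final least-squares refit over the inliers would polish the estimate.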
Rudolph, Abby E; Bazzi, Angela Robertson; Fish, Sue
2016-10-01
Analyses with geographic data can be used to identify "hot spots" and "health service deserts", examine associations between proximity to services and their use, and link contextual factors with individual-level data to better understand how environmental factors influence behaviors. Technological advancements in methods for collecting this information can improve the accuracy of contextually-relevant information; however, they have outpaced the development of ethical standards and guidance, particularly for research involving populations engaging in illicit/stigmatized behaviors. Thematic analysis identified ethical considerations for collecting geographic data using different methods and the extent to which these concerns could influence study compliance and data validity. In-depth interviews with 15 Baltimore residents (6 recruited via flyers and 9 via peer-referral) reporting recent drug use explored comfort with and ethics of three methods for collecting geographic information: (1) surveys collecting self-reported addresses/cross-streets, (2) surveys using web-based maps to find/confirm locations, and (3) geographical momentary assessments (GMA), which collect spatiotemporally referenced behavioral data. Survey methods for collecting geographic data (i.e., addresses/cross-streets and web-based maps) were generally acceptable; however, participants raised confidentiality concerns regarding exact addresses for illicit/stigmatized behaviors. Concerns specific to GMA included burden of carrying/safeguarding phones and responding to survey prompts, confidentiality, discomfort with being tracked, and noncompliance with study procedures. Overall, many felt that confidentiality concerns could influence the accuracy of location information collected for sensitive behaviors and study compliance. 
Concerns raised by participants could result in differential study participation and/or study compliance and questionable accuracy/validity of location data for sensitive behaviors. Copyright © 2016 Elsevier Ltd. All rights reserved.
Large scale Wyoming transportation data: a resource planning tool
O'Donnell, Michael S.; Fancher, Tammy S.; Freeman, Aaron T.; Ziegler, Abra E.; Bowen, Zachary H.; Aldridge, Cameron L.
2014-01-01
The U.S. Geological Survey Fort Collins Science Center created statewide roads data for the Bureau of Land Management Wyoming State Office using 2009 aerial photography from the National Agriculture Imagery Program. The updated roads data resolve known concerns of omission, commission, and inconsistent representation of map scale, attribution, and ground reference dates that were present in the original source data. To ensure a systematic and repeatable approach to capturing roads on the landscape using on-screen digitizing from true color National Agriculture Imagery Program imagery, we developed a photogrammetry key and quality assurance/quality control protocols. The updated statewide roads data will therefore support the Bureau of Land Management's resource management requirements with a standardized map product representing 2009 ground conditions. The updated Geographic Information System roads data set, represented at 1:4,000 with ±10 meters spatial accuracy, contains 425,275 kilometers of roads within eight attribute classes. Quality control of these products indicated 97.7 percent accuracy of aspatial (attribute) information and 98.0 percent accuracy of spatial locations. Approximately 48 percent of the updated roads data were corrected for spatial errors greater than 1 meter relative to the pre-existing road data; 26 percent involved correcting spatial errors greater than 5 meters, and 17 percent involved correcting errors greater than 9 meters. The Bureau of Land Management, other land managers, and researchers can use these new statewide roads data products to support studies and management decisions regarding land use change, transportation planning, transportation safety, wildlife applications, and other topics.
Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach
NASA Astrophysics Data System (ADS)
Xiao, T.
2012-12-01
One of the most important components of urban land cover mapping is accuracy assessment. Many statistical models have been developed to help design sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy, as well as the cost, of an assessment. Understanding cost and sample size is therefore crucial to implementing efficient and effective field data collection, yet few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design with sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
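The coupling of sample-size design with a cost model can be sketched as follows. This uses Cochran's standard sample-size formula for a proportion and invented per-sample costs, not the study's actual model:

```python
import math

# Sketch: couple a standard sample-size formula with a simple per-sample
# cost model of the kind described (transportation + field collection +
# laboratory analysis). All cost figures are made-up placeholders.

def sample_size(z: float, p: float, d: float) -> int:
    """Cochran's sample size for a proportion at confidence z, margin d."""
    return math.ceil(z * z * p * (1.0 - p) / (d * d))

def total_cost(n: int, transport: float, field: float, lab: float) -> float:
    """Total assessment cost for n samples (cost per sample in each phase)."""
    return n * (transport + field + lab)

n = sample_size(z=1.96, p=0.5, d=0.05)  # worst-case variance, p = 0.5
print(n)                                 # 385 samples
print(total_cost(n, transport=12.0, field=25.0, lab=8.0))
```

Plotting `total_cost` against the accuracy achieved for a range of `n` gives exactly the cost-vs-accuracy trade-off curve the abstract describes.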
Selkowitz, D.J.
2010-01-01
Shrub cover appears to be increasing across many areas of the Arctic tundra biome, and increasing shrub cover in the Arctic has the potential to significantly impact global carbon budgets and the global climate system. For most of the Arctic, however, there is no existing baseline inventory of shrub canopy cover, as existing maps of Arctic vegetation provide little information about the density of shrub cover at a moderate spatial resolution across the region. Remotely-sensed fractional shrub canopy maps can provide this necessary baseline inventory of shrub cover. In this study, we compare the accuracy of fractional shrub canopy (> 0.5 m tall) maps derived from multi-spectral, multi-angular, and multi-temporal datasets from Landsat imagery at 30 m spatial resolution, Moderate Resolution Imaging SpectroRadiometer (MODIS) imagery at 250 m and 500 m spatial resolution, and MultiAngle Imaging Spectroradiometer (MISR) imagery at 275 m spatial resolution for a 1067 km² study area in Arctic Alaska. The study area is centered at 69°N, ranges in elevation from 130 to 770 m, is composed primarily of rolling topography with gentle slopes less than 10°, and is free of glaciers and perennial snow cover. Shrubs > 0.5 m in height cover 2.9% of the study area and are primarily confined to patches associated with specific landscape features. Reference fractional shrub canopy is determined from in situ shrub canopy measurements and a high spatial resolution IKONOS image swath. Regression tree models are constructed to estimate fractional canopy cover at 250 m using different combinations of input data from Landsat, MODIS, and MISR. Results indicate that multi-spectral data provide substantially more accurate estimates of fractional shrub canopy cover than multi-angular or multi-temporal data.
Higher spatial resolution datasets also provide more accurate estimates of fractional shrub canopy cover (aggregated to moderate spatial resolutions) than lower spatial resolution datasets, an expected result for a study area where most shrub cover is concentrated in narrow patches associated with rivers, drainages, and slopes. Including the middle infrared bands available from Landsat and MODIS in the regression tree models (in addition to the four standard visible and near-infrared spectral bands) typically results in a slight boost in accuracy. Including the multi-angular red band data available from MISR in the regression tree models, however, typically boosts accuracy more substantially, resulting in moderate resolution fractional shrub canopy estimates approaching the accuracy of estimates derived from the much higher spatial resolution Landsat sensor. Given the poor availability of snow and cloud-free Landsat scenes in many areas of the Arctic and the promising results demonstrated here by the MISR sensor, MISR may be the best choice for large area fractional shrub canopy mapping in the Alaskan Arctic for the period 2000-2009.
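The regression tree models above combine many predictors, but their basic building block, a single split that minimizes squared error, can be sketched in isolation. The NDVI-like predictor values and cover fractions below are invented for illustration:

```python
# Illustrative single-split regression "tree" (a stump), standing in for
# the multi-predictor regression trees used to estimate fractional shrub
# canopy cover from spectral bands. Data are invented.

def fit_stump(x, y):
    """Find the threshold on x minimizing total squared error when y is
    predicted by its mean on each side of the split."""
    best = None
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not left or not right:
            continue
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    return best  # (sse, threshold, left_mean, right_mean)

# toy NDVI-like predictor vs. fractional shrub canopy (0-1)
ndvi  = [0.10, 0.15, 0.20, 0.55, 0.60, 0.70]
cover = [0.00, 0.02, 0.01, 0.30, 0.35, 0.40]
sse, t, ml, mr = fit_stump(ndvi, cover)
print(t, round(ml, 3), round(mr, 3))
```

A full regression tree applies this split search recursively over all candidate predictors, which is how the multi-spectral and multi-angular inputs compete within the models compared in the study.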
Evaluation of freely available ancillary data used for detailed soil mapping in Brazil
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Anjos, Lúcia; Vasques, Gustavo; Heuvelink, Gerard
2014-05-01
Brazil is one of the world's largest food producers and is home to both the largest rainforest and the largest supply of renewable fresh water on Earth. However, it lacks detailed soil information for extensive areas of the country. The best soil map covering the entire country was published at a scale of 1:5,000,000. Termination of governmental support for systematic soil mapping in the 1980s made detailed soil mapping of the whole country a very difficult task to accomplish. Nowadays, driven by new user demands (e.g. precision agriculture), most detailed soil maps are produced for small areas. Many of them rely on freely available ancillary data used as is, although the accuracy of these data is usually unreported or unknown. Results from a validation exercise that we performed using ground control points from a small hilly catchment (20 km²) in Southern Brazil (53.7995ºW, 29.6355ºS) indicate that most freely available ancillary data need some type of correction before use. Georeferenced and orthorectified RapidEye imagery (recently acquired by the Brazilian government) has a horizontal accuracy (root-mean-square error, RMSE) of 37 m, which is worse than the value published in the metadata (32 m). Like any remote sensing imagery, RapidEye imagery needs to be correctly registered before its use for soil mapping. Topographic maps produced by the Brazilian Army and geological maps derived from them (scale of 1:25,000) have a horizontal accuracy of 65 m, more than four times the maximum value allowed by Brazilian legislation (15 m). Worse results were found for geological maps derived from 1:50,000 topographic maps (RMSE = 147 m), for which the maximum allowed value is 30 m. In most cases positional errors are of systematic origin and can be easily corrected (e.g., with an affine transformation). ASTER GDEM has many holes and is very noisy, making it of little use in the studied area.
TOPODATA, an SRTM product kriged from the original 3 arc-seconds to 1 arc-second by the Brazilian National Institute for Space Research, has a vertical accuracy of 19 m and is strongly affected by double-oblique stripes that were intensified by the kriging. Many spurious sinks were created that are not easily corrected using either frequency filters or sink-filling algorithms. The exceptions are SRTM v4.1, the most vertically accurate DEM available (RMSE = 18.7 m), and Google Earth imagery compiled from various sources (positional accuracy RMSE = 8 m). It is likely that in the forthcoming years most mapping efforts will continue to be devoted to small areas to fulfill local user-driven demands, and many new techniques and technologies will likely be developed and employed for soil mapping. However, obtaining better quality ancillary data is still a challenge that must be overcome to produce high-quality soil information and allow better decision making and land use policy in Brazil.
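The validation exercise described above reduces to computing a horizontal RMSE over ground control points. A minimal sketch with invented coordinates, also reporting the NSSDA-style 95% horizontal accuracy statistic (1.7308 × RMSE_r, the factor the National Standard for Spatial Data Accuracy applies when easting and northing errors are comparable):

```python
import math

# Horizontal RMSE check of ancillary data against ground control points:
# radial RMSE over easting/northing residuals, plus the NSSDA 95%
# horizontal accuracy statistic. Coordinates below are invented.

def horizontal_rmse(ref, tst):
    """Root-mean-square radial error between reference and test points (m)."""
    sq = [(xr - xt) ** 2 + (yr - yt) ** 2
          for (xr, yr), (xt, yt) in zip(ref, tst)]
    return math.sqrt(sum(sq) / len(sq))

ref = [(100.0, 200.0), (300.0, 400.0), (500.0, 600.0)]
tst = [(103.0, 204.0), (297.0, 404.0), (505.0, 600.0)]

rmse = horizontal_rmse(ref, tst)
print(round(rmse, 2))            # 5.0 metres
print(round(1.7308 * rmse, 2))   # NSSDA 95% horizontal accuracy: 8.65 m
```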
Accuracy of lineaments mapping from space
NASA Technical Reports Server (NTRS)
Short, Nicholas M.
1989-01-01
The use of Landsat and other space imaging systems for lineament detection is analyzed in terms of their effectiveness in recognizing and mapping fractures and faults, and the results of several studies providing a quantitative assessment of lineament mapping accuracies are discussed. The cases under investigation include a Landsat image of the surface overlying a part of the Anadarko Basin of Oklahoma, Landsat images and selected radar imagery of major lineament systems distributed over much of the Canadian Shield, and space imagery covering a part of the East African Rift in Kenya. It is demonstrated that space imagery can detect a significant portion of a region's fracture pattern; however, significant fractions of the faults and fractures recorded on a field-produced geological map are missing from the imagery, as is evident in the Kenya case.
Sanchez, Richard D.
2004-01-01
High-resolution airborne digital cameras with onboard data collection based on Global Positioning System (GPS) and inertial navigation system (INS) technology may offer a real-time means to gather accurate topographic map information by reducing ground control and eliminating aerial triangulation. Past evaluations of this integrated system over relatively flat terrain have proven successful. The author uses the Emerge Digital Sensor System (DSS) combined with Applanix Corporation's Position and Orientation Solutions for Direct Georeferencing to examine positional mapping accuracy in rough terrain. The positional accuracy documented in this study did not meet large-scale mapping requirements owing to an apparent system mechanical failure. Nonetheless, the findings yield important information on a new approach for mapping in Antarctica and other remote or inaccessible areas of the world.
Estimating discharge in rivers using remotely sensed hydraulic information
Bjerklie, D.M.; Moller, D.; Smith, L.C.; Dingman, S.L.
2005-01-01
A methodology to estimate in-bank river discharge exclusively from remotely sensed hydraulic data is developed. Water-surface width and maximum channel width measured from 26 aerial and digital orthophotos of 17 single-channel rivers and 41 SAR images of three braided rivers were coupled with channel slope data obtained from topographic maps to estimate the discharge. The standard error of the discharge estimates was within a factor of 1.5-2 (50-100%) of the observed values, with the mean estimate accuracy within 10%. This level of accuracy was achieved using calibration functions developed from observed discharge. The calibration functions use reach-specific geomorphic variables, the maximum channel width and the channel slope, to predict a correction factor, and are related to channel type. Surface velocity and width information obtained from a single C-band image acquired by the Jet Propulsion Laboratory's (JPL's) AirSAR was also used to estimate discharge for a reach of the Missouri River. Without a calibration function, the estimate accuracy was within 72% of the observed discharge, which is within the expected range of uncertainty for the method; however, using the observed velocity to calibrate the initial estimate improved the accuracy to within 10% of the observed value. Remotely sensed discharge estimates with the accuracies reported in this paper could be useful for regional or continental scale hydrologic studies, or in regions where ground-based data are lacking. © 2004 Elsevier B.V. All rights reserved.
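The estimate-then-calibrate structure can be sketched generically. The exponents and coefficients below are invented placeholders of the right general shape, not the fitted values from the paper:

```python
# Generic sketch of a remotely-sensed discharge estimate: a power law in
# water-surface width W (m) and channel slope S, followed by a
# reach/channel-type calibration factor of the kind derived from observed
# discharge. All coefficients here are invented, not the paper's.

def discharge_estimate(width_m: float, slope: float,
                       c: float = 1.0, a: float = 1.6, b: float = 0.3) -> float:
    """Uncalibrated power-law discharge estimate (m^3/s)."""
    return c * width_m ** a * slope ** b

def calibrated(q_raw: float, correction: float) -> float:
    """Apply a reach/channel-type correction factor to the raw estimate."""
    return q_raw * correction

q = discharge_estimate(width_m=120.0, slope=0.0005)
print(round(q, 1), round(calibrated(q, 1.3), 1))
```

The paper's contribution is precisely the `correction` term: a function of maximum channel width and slope, fitted per channel type against gauged discharge.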
Isotropic three-dimensional T2 mapping of knee cartilage: Development and validation.
Colotti, Roberto; Omoumi, Patrick; Bonanno, Gabriele; Ledoux, Jean-Baptiste; van Heeswijk, Ruud B
2018-02-01
1) To implement a higher-resolution isotropic 3D T2 mapping technique that uses sequential T2-prepared segmented gradient-recalled echo (Iso3DGRE) images for knee cartilage evaluation, and 2) to validate it both in vitro and in vivo in healthy volunteers and patients with knee osteoarthritis. The Iso3DGRE sequence with an isotropic 0.6 mm spatial resolution was developed on a clinical 3T MR scanner. Numerical simulations were performed to optimize the pulse sequence parameters. A phantom study was performed to validate the T2 estimation accuracy. The repeatability of the sequence was assessed in healthy volunteers (n = 7). T2 values were compared with those from a clinical standard 2D multislice multiecho (MSME) T2 mapping sequence in knees of healthy volunteers (n = 13) and in patients with knee osteoarthritis (OA, n = 5). The numerical simulations resulted in 100 excitations per segment and an optimal radiofrequency (RF) excitation angle of 15°. The phantom study demonstrated a good correlation of the technique with the reference standard (slope 0.9 ± 0.05, intercept 0.2 ± 1.7 msec, R² ≥ 0.99). Repeated measurements of cartilage T2 values in healthy volunteers showed a coefficient of variation of 5.6%. Both Iso3DGRE and MSME techniques found significantly higher cartilage T2 values (P < 0.03) in OA patients. Iso3DGRE precision was equal to that of MSME T2 mapping in healthy volunteers, and significantly higher in OA (P = 0.01). This study successfully demonstrated that high-resolution isotropic 3D T2 mapping for knee cartilage characterization is feasible, accurate, repeatable, and precise. The technique allows for multiplanar reformatting and thus T2 quantification in any plane of interest. Level of Evidence: 1. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:362-371. © 2017 International Society for Magnetic Resonance in Medicine.
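Underlying any T2 map, whether from a T2-prepared 3D acquisition or a multi-echo train, is a per-voxel mono-exponential fit, S(TE) = S0·exp(-TE/T2). A minimal log-linear least-squares sketch with synthetic, noise-free data:

```python
import math

# Per-voxel T2 estimation: fit a line through (TE, ln S); the slope is
# -1/T2. Echo times and signals below are synthetic.

def fit_t2(te_ms, signal):
    """Log-linear least-squares T2 estimate (ms) from signals at echo times."""
    n = len(te_ms)
    logs = [math.log(s) for s in signal]
    mx = sum(te_ms) / n
    my = sum(logs) / n
    slope = (sum((t - mx) * (y - my) for t, y in zip(te_ms, logs))
             / sum((t - mx) ** 2 for t in te_ms))
    return -1.0 / slope

# synthetic voxel with true T2 = 40 ms, S0 = 1000
te = [10.0, 20.0, 30.0, 40.0]
sig = [1000.0 * math.exp(-t / 40.0) for t in te]
print(round(fit_t2(te, sig), 1))  # recovers 40.0 ms on noise-free data
```

In practice a nonlinear fit with noise-floor handling is preferred at low SNR; the log-linear form is shown only because it makes the model explicit.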
Attenuation correction strategies for multi-energy photon emitters using SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pretorius, P.H.; King, M.A.; Pan, T.S.
1996-12-31
The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction, or whether it is better not to combine the projections because of differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations that challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation-maximization reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were then added and reconstructed using the following attenuation strategies: (1) the 93 keV attenuation map, (2) the 185 keV attenuation map, (3) a weighted mean of the 93 keV and 185 keV maps, and (4) an ordered-subset approach that combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2, and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 were under-estimated, although its TCRs were comparable to those of the other locations.
The weighted mean and ordered-subset strategies for attenuation correction were of comparable accuracy to reconstructing the windows separately.
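Strategy (3) amounts to a per-voxel weighted mean of the two attenuation maps. A minimal sketch with toy values; the weights below are illustrative placeholders, not the weighting used in the study:

```python
# Per-voxel weighted mean of two attenuation maps (strategy 3 in the
# abstract, as we read it). Weights and mu values (cm^-1) are invented.

def weighted_mu_map(mu_93, mu_185, w_93=0.6, w_185=0.4):
    """Combine two attenuation maps voxel-by-voxel with fixed weights."""
    return [w_93 * a + w_185 * b for a, b in zip(mu_93, mu_185)]

mu_93  = [0.17, 0.15, 0.00]   # toy 1D "map" at 93 keV
mu_185 = [0.14, 0.12, 0.00]   # same voxels at 185 keV
print(weighted_mu_map(mu_93, mu_185))
```

A physically motivated choice of weights would follow the relative photon abundances of the Ga-67 photopeaks, but the abstract does not state the values used.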
Characterization and delineation of caribou habitat on Unimak Island using remote sensing techniques
NASA Astrophysics Data System (ADS)
Atkinson, Brian M.
The assessment of herbivore habitat quality is traditionally based on quantifying the forages available to the animal across its home range through ground-based techniques. While these methods are highly accurate, they can be time-consuming and expensive, especially for herbivores that occupy vast spatial landscapes. The Unimak Island caribou herd has been decreasing over the last decade at rates that have prompted discussion of management intervention, and frequent inclement weather in this region of Alaska has provided little opportunity to study caribou forage habitat on the island. The objectives of this study were two-fold: 1) to assess the feasibility of using high-resolution color and near-infrared aerial imagery to map the forage distribution of caribou habitat on Unimak Island, and 2) to assess the use of a new high-resolution multispectral satellite imagery platform, RapidEye, and the effect of its "red-edge" spectral band on vegetation classification accuracy. Maximum likelihood classification algorithms were used to create land cover maps from the aerial and satellite imagery. Accuracy assessments and transformed divergence values were produced to assess vegetative spectral information and classification accuracy. By using RapidEye and aerial digital imagery in a hierarchical supervised classification technique, we were able to produce a high-resolution land cover map of Unimak Island. We obtained an overall accuracy of 71.4 percent, which is comparable to other land cover maps using RapidEye imagery. The "red-edge" spectral band included in the RapidEye imagery provides additional spectral information that allows for a more accurate overall classification, raising overall accuracy by 5.2 percent.
Variance approximations for assessments of classification accuracy
R. L. Czaplewski
1994-01-01
Variance approximations are derived for the weighted and unweighted kappa statistics, the conditional kappa statistic, and conditional probabilities. These statistics are useful to assess classification accuracy, such as accuracy of remotely sensed classifications in thematic maps when compared to a sample of reference classifications made in the field. Published...
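The unweighted kappa statistic these variance approximations concern can be computed directly from a thematic-map confusion matrix (rows = map classes, columns = reference classes). A worked sketch with invented counts:

```python
# Unweighted kappa from a confusion matrix: agreement beyond chance,
# (po - pe) / (1 - pe). Counts below are invented.

def kappa(cm):
    """Cohen's kappa for a square confusion matrix of counts."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(k)) / n                # observed agreement
    row = [sum(r) for r in cm]
    col = [sum(cm[i][j] for i in range(k)) for j in range(k)]
    pe = sum(r * c for r, c in zip(row, col)) / (n * n)     # chance agreement
    return (po - pe) / (1.0 - pe)

cm = [[45, 5], [10, 40]]   # 2-class map vs. reference example
print(round(kappa(cm), 3))  # 0.7
```

The variance approximations in the paper quantify the sampling uncertainty of this statistic, which is what lets two maps' kappas be compared with a significance test.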
Li, Hongyi; Shi, Zhou; Sha, Jinming; Cheng, Jieliang
2006-08-01
In the present study, vegetation, soil brightness, and moisture indices were extracted from a Landsat ETM remote sensing image, heat indices were extracted from the MODIS land surface temperature product, and a climate index and other auxiliary geographical information were selected as the inputs of a neural network. The remote sensing eco-environmental background value of a standard region of interest evaluated in situ was selected as the network output, and a three-layer back propagation (BP) neural network prediction model was designed. The network was trained, and the remote sensing eco-environmental background value of Fuzhou, China was predicted using MATLAB. Class mapping of the remote sensing eco-environmental background values against the evaluation standard showed a total classification accuracy of 87.8%. This predict-first, classify-second scheme could provide acceptable results in accord with the regional eco-environment types.
Pustina, Dorian; Coslett, H. Branch; Turkeltaub, Peter E.; Tustison, Nicholas; Schwartz, Myrna F.; Avants, Brian
2015-01-01
The gold standard for identifying stroke lesions is manual tracing, a method that is known to be observer dependent and time consuming, and thus impractical for big data studies. We propose LINDA (Lesion Identification with Neighborhood Data Analysis), an automated segmentation algorithm capable of learning the relationship between existing manual segmentations and a single T1-weighted MRI. A dataset of 60 left-hemispheric chronic stroke patients is used to build the method and test it with k-fold and leave-one-out procedures. With respect to manual tracings, predicted lesion maps showed a mean Dice overlap of 0.696 ± 0.16, Hausdorff distance of 17.9 ± 9.8 mm, and average displacement of 2.54 ± 1.38 mm. The manual and predicted lesion volumes correlated at r = 0.961. An additional dataset of 45 patients was utilized to test LINDA with independent data, achieving high accuracy rates and confirming its cross-institutional applicability. To investigate the cost of moving from manual tracings to automated segmentation, we performed comparative lesion-to-symptom mapping (LSM) on five behavioral scores. Predicted and manual lesions produced similar neuro-cognitive maps, albeit with some discussed discrepancies. Of note, region-wise LSM was more robust to the prediction error than voxel-wise LSM. Our results show that, while several limitations exist, our current results compete with or exceed the state-of-the-art, producing consistent predictions, very low failure rates, and transferable knowledge between labs. This work also establishes a new viewpoint on evaluating automated methods not only with segmentation accuracy but also with brain-behavior relationships. LINDA is made available online with trained models from over 100 patients. PMID:26756101
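The Dice overlap used to score predicted lesion maps against manual tracings is simple to state: 2|A∩B| / (|A| + |B|) over binary voxel masks. A sketch with toy flattened masks:

```python
# Dice similarity coefficient between two binary masks (here flattened
# to 1D lists for brevity; toy values).

def dice(a, b):
    """2*|A intersect B| / (|A| + |B|) for binary masks of equal length."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * inter / (sum(a) + sum(b))

manual    = [1, 1, 1, 0, 0, 1, 0, 0]
predicted = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice(manual, predicted))  # 0.75
```

Dice penalizes both missed lesion voxels and false positives symmetrically, which is why it is paired in the paper with the Hausdorff distance, a boundary-sensitive complement.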
Invisible Base Electrode Coordinates Approximation for Simultaneous SPECT and EEG Data Visualization
NASA Astrophysics Data System (ADS)
Kowalczyk, L.; Goszczynska, H.; Zalewska, E.; Bajera, A.; Krolicki, L.
2014-04-01
This work was performed as part of a larger research effort concerning the feasibility of improving the localization of epileptic foci, as compared to the standard SPECT examination, by applying the technique of EEG mapping. The presented study extends our previous work on the development of a method for superposition of SPECT images and 3D EEG maps when these two examinations are performed simultaneously. Due to the lack of anatomical data in SPECT images, this is a much more difficult task than in an MRI/EEG study, where the electrodes are visible in the morphological images. Using an appropriate dose of radioisotope, we mark five base electrodes to make them visible in the SPECT image and then approximate the coordinates of the remaining electrodes using properties of the 10-20 electrode placement system and the proposed nine-ellipses model. This allows computing a sequence of 3D EEG maps spanning all electrodes. It happens, however, that not all five base electrodes can be reliably identified in the SPECT data. The aim of the current study was to develop a method for determining the coordinates of base electrodes missing from the SPECT image. An algorithm for coordinate approximation was developed and tested on data collected from three subjects with all electrodes visible. To increase the accuracy of the approximation we used head surface models: the freely available model from Oostenveld's research, based on data from the SPM package, and our own model, based on data from our EEG/SPECT studies. For data collected in four cases with one electrode not visible, we compared the invisible base electrode coordinate approximations obtained with the Oostenveld model and with our model. The results vary depending on the placement of the missing electrode, but application of the realistic head model significantly increases the accuracy of the approximation.
NASA Astrophysics Data System (ADS)
Feygels, Viktor I.; Park, Joong Yong; Wozencraft, Jennifer; Aitken, Jennifer; Macon, Christopher; Mathur, Abhinav; Payment, Andy; Ramnath, Vinod
2013-06-01
CZMIL is an integrated lidar-imagery system and software suite designed for highly automated generation of physical and environmental information products for coastal zone mapping in the framework of the US Army Corps of Engineers (USACE) National Coastal Mapping Program (NCMP). This paper presents the results of CZMIL system validation in turbid water conditions along the Gulf Coast of Mississippi and in relatively clear water conditions in Florida in late spring 2012, together with results of the USACE May-October 2012 mission in Green Bay, WI and Lake Erie. The system performance tests show that CZMIL successfully achieved 7-8 m depth in Mississippi with Kd = 0.46 m⁻¹ (Kd is the diffuse attenuation coefficient) and up to 41 m in Florida with Kd = 0.11 m⁻¹. Bathymetric accuracy of CZMIL was measured by comparing CZMIL depths with multi-beam sonar data from Cat Island, MS and from off the coast of Fort Lauderdale, FL. Validation demonstrated that CZMIL meets USACE specifications (two standard deviations, 2σ, ~30 cm). To measure topographic accuracy we made direct comparisons of CZMIL elevations to GPS-surveyed ground control points and vehicle-based lidar scans of topographic surfaces. Results confirmed that CZMIL meets the USACE topographic requirements (2σ, ~15 cm). The Green Bay and Lake Erie mission comprised 89 flights with 2231 flightlines; total aircraft engine time (excluding transit/ferry flights) was 441 hours, with 173 hours on survey flightlines. The 4.8 billion laser shots and 38.6 billion digitized waveforms covered over 1025 miles of shoreline.
Volumetric calibration of a plenoptic camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert
Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
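At the heart of both methods is a polynomial mapping function fit by least squares to known calibration targets. As a greatly simplified stand-in (the real calibration maps 3D object space to sensor coordinates with a 3D polynomial), a 1D quadratic fit via the normal equations, on invented noise-free data:

```python
# Least-squares polynomial fit via the normal equations, as a 1D toy
# version of the polynomial mapping function. Data are invented.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_quadratic(xs, ys):
    """Least-squares coefficients (a0, a1, a2) of y = a0 + a1*x + a2*x^2."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    return solve(A, b)

# "distorted" coordinates generated from y = 1 + 2x + 0.5x^2 (no noise)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0 + 2.0 * x + 0.5 * x * x for x in xs]
print([round(c, 6) for c in fit_quadratic(xs, ys)])
```

Because the mapping is fit from observed dot-card positions alone, no lens parameters enter the model, which is the property the paper exploits.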
Volumetric calibration of a plenoptic camera
Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...
2018-02-01
Prol, Fabricio dos Santos; El Issaoui, Aimad; Hakala, Teemu
2018-01-01
The use of Personal Mobile Terrestrial System (PMTS) has increased considerably for mobile mapping applications because these systems offer dynamic data acquisition with ground perspective in places where the use of wheeled platforms is unfeasible, such as forests and indoor buildings. PMTS has become more popular with emerging technologies, such as miniaturized navigation sensors and off-the-shelf omnidirectional cameras, which enable low-cost mobile mapping approaches. However, most of these sensors have not been developed for high-accuracy metric purposes and therefore require rigorous methods of data acquisition and data processing to obtain satisfactory results for some mapping applications. To contribute to the development of light, low-cost PMTS and potential applications of these off-the-shelf sensors for forest mapping, this paper presents a low-cost PMTS approach comprising an omnidirectional camera with off-the-shelf navigation systems and its evaluation in a forest environment. Experimental assessments showed that the integrated sensor orientation approach using navigation data as the initial information can increase the trajectory accuracy, especially in covered areas. The point cloud generated with the PMTS data had accuracy consistent with the Ground Sample Distance (GSD) range of omnidirectional images (3.5–7 cm). These results are consistent with those obtained for other PMTS approaches. PMID:29522467
Precise Ortho Imagery as the Source for Authoritative Airport Mapping
NASA Astrophysics Data System (ADS)
Howard, H.; Hummel, P.
2016-06-01
As the aviation industry moves from paper maps and charts to the digital cockpit and electronic flight bag, producers of these products need current and accurate data to ensure flight safety. The FAA (Federal Aviation Administration) and ICAO (International Civil Aviation Organization) require certified suppliers to follow a defined protocol to produce authoritative map data for the aerodrome. Typical airport maps have been produced to meet 5 m accuracy requirements; the new digital aviation world is moving to 1 m accuracy maps to provide better situational awareness on the aerodrome. The commercial availability of 0.5 m satellite imagery combined with accurate ground control is enabling the production of avionics-certified 0.85 m orthophotos of airports around the globe. CompassData maintains an archive of more than 400 airports as source data to support producers of 1 m certified Aerodrome Mapping Databases (AMDB) critical to flight safety and automated situational awareness. CompassData is a DO-200A certified supplier of authoritative orthoimagery, and attendees will learn how to utilize current airport imagery to build digital aviation mapping products.
A curvilinear, fully implicit, conservative electromagnetic PIC algorithm in multiple dimensions
Chacon, L.; Chen, G.
2016-04-19
Here, we extend a recently proposed fully implicit PIC algorithm for the Vlasov–Darwin model in multiple dimensions (Chen and Chacón (2015) [1]) to curvilinear geometry. As in the Cartesian case, the approach is based on a potential formulation (Φ, A), and overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. Conservation theorems for local charge and global energy are derived in curvilinear representation, and then enforced discretely by a careful choice of the discretization of field and particle equations. Additionally, the algorithm conserves canonical momentum in any ignorable direction, and preserves the Coulomb gauge ∇ · A = 0 exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. We demonstrate the accuracy and efficiency properties of the algorithm with numerical experiments in mapped meshes in 1D-3V and 2D-3V.
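The conservation and gauge properties named above can be stated compactly. A minimal restatement of the constraints the discrete scheme is designed to preserve (these are the standard potential-formulation relations, not taken verbatim from the paper):

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} = 0
  \quad \text{(local charge conservation)}, \qquad
\nabla \cdot \mathbf{A} = 0
  \quad \text{(Coulomb gauge, preserved exactly)},
```
```latex
\mathbf{E} = -\nabla \Phi - \frac{\partial \mathbf{A}}{\partial t}, \qquad
\mathbf{B} = \nabla \times \mathbf{A}.
```

Enforcing the first two identities discretely, together with an energy-conserving particle push, is what allows the implicit scheme to take cell sizes set by accuracy rather than stability.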
Adaptive Associative Scale-Free Maps for Fusing Human and Robotic Intelligences
2006-06-01
[Abstract unavailable: the record contains only list-of-figures residue describing sample-run plots (blue: accuracy, red: recall, black: ratio of new documents to all) and sample topic maps.]
Kato, Takahisa; Okumura, Ichiro; Kose, Hidekazu; Takagi, Kiyoshi; Hata, Nobuhiko
2016-04-01
The hysteresis operation is an outstanding issue in tendon-driven actuation, which is used in robot-assisted surgery, as it is incompatible with kinematic mapping for control and trajectory planning. Here, a new tendon-driven continuum robot, designed to fit existing neuroendoscopes, is presented with kinematic mapping for hysteresis operation. With attention to tension in tendons as a salient factor of the hysteresis operation, an extended forward kinematic mapping (FKM) has been developed. In the experiment, the significance of every component in the robot for the hysteresis operation has been investigated. Moreover, the prediction accuracy of postures by the extended FKM has been determined experimentally and compared with the piecewise constant curvature assumption. The tendons were the most predominant factor affecting the hysteresis operation of the robot. The extended FKM including friction in tendons predicted the postures in the hysteresis operation with improved accuracy (2.89 and 3.87 mm for the single and the antagonistic-tendons layouts, respectively). The measured accuracy was within the target value of 5 mm for planning of neuroendoscopic resection of intraventricle tumors. The friction in tendons was the most predominant factor for the hysteresis operation in the robot. The extended FKM including this factor can improve prediction accuracy of the postures in the hysteresis operation. The trajectory of the new robot can be planned within the target value for the neuroendoscopic procedure by using the extended FKM.
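The piecewise constant curvature assumption used here as the comparison baseline has a closed form. A minimal sketch of single-segment constant-curvature forward kinematics in the bending plane (illustrative only; the paper's extended FKM additionally models tendon tension and friction, which this sketch omits):

```python
import math

def pcc_tip(kappa, length):
    """Tip position (x, z) of one constant-curvature segment in its
    bending plane: x = (1 - cos(kappa*L)) / kappa, z = sin(kappa*L) / kappa.
    The straight-segment limit kappa -> 0 is handled explicitly."""
    if abs(kappa) < 1e-12:
        return 0.0, length
    return ((1.0 - math.cos(kappa * length)) / kappa,
            math.sin(kappa * length) / kappa)
```

For example, a segment of unit length bent through a half circle (kappa = pi) ends at x = 2/pi, z = 0, i.e. directly beside its base.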
NASA Astrophysics Data System (ADS)
Wilschut, L. I.; Addink, E. A.; Heesterbeek, J. A. P.; Dubyanskiy, V. M.; Davis, S. A.; Laudisoit, A.; Begon, M.; Burdelov, L. A.; Atshabar, B. B.; de Jong, S. M.
2013-08-01
Plague is a zoonotic infectious disease present in great gerbil populations in Kazakhstan. Infectious disease dynamics are influenced by the spatial distribution of the carriers (hosts) of the disease. The great gerbil, the main host in our study area, lives in burrows, which can be recognized on high resolution satellite imagery. In this study, using earth observation data at various spatial scales, we map the spatial distribution of burrows in a semi-desert landscape. The study area consists of various landscape types. To evaluate whether identification of burrows by classification is possible in these landscape types, the study area was subdivided into eight landscape units, on the basis of Landsat 7 ETM+ derived Tasselled Cap Greenness and Brightness, and SRTM derived standard deviation in elevation. In the field, 904 burrows were mapped. Using two segmented 2.5 m resolution SPOT-5 XS satellite scenes, reference object sets were created. Random Forests were built for both SPOT scenes and used to classify the images. Additionally, a stratified classification was carried out, by building separate Random Forests per landscape unit. Burrows were successfully classified in all landscape units. In the ‘steppe on floodplain’ areas, classification worked best: producer's and user's accuracy in those areas reached 88% and 100%, respectively. In the ‘floodplain’ areas with a more heterogeneous vegetation cover, classification worked least well; there, accuracies were 86 and 58% respectively. Stratified classification improved the results in all landscape units where comparison was possible (four), increasing kappa coefficients by 13, 10, 9 and 1%, respectively. In this study, an innovative stratification method using high- and medium resolution imagery was applied in order to map host distribution on a large spatial scale. 
The burrow maps we developed will help to detect changes in the distribution of great gerbil populations and, moreover, serve as a unique empirical data set which can be used as input for epidemiological plague models. This is an important step in understanding the dynamics of plague.
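The stratified strategy, one classifier per landscape unit, can be sketched as follows. All data are synthetic stand-ins, and scikit-learn's RandomForestClassifier is assumed in place of the authors' Random Forests implementation:

```python
# Hypothetical sketch: one Random Forest per landscape unit (stratified
# classification), with synthetic segment features standing in for the
# SPOT-derived image objects of the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 4))                 # segment features (illustrative)
unit = rng.integers(0, 2, size=n)           # landscape-unit label per segment
y = (X[:, 0] + unit * X[:, 1] > 0).astype(int)  # burrow / non-burrow (synthetic)

def stratified_fit(X, y, unit):
    """Fit a separate forest for each landscape unit."""
    models = {}
    for u in np.unique(unit):
        m = RandomForestClassifier(n_estimators=100, random_state=0)
        m.fit(X[unit == u], y[unit == u])
        models[u] = m
    return models

models = stratified_fit(X, y, unit)
pred = np.array([models[u].predict(x[None, :])[0] for u, x in zip(unit, X)])
```

Because each forest only has to separate burrows from background within one landscape type, the per-unit decision boundaries are simpler, which is consistent with the kappa improvements reported above.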
Sung, Yun J; Gu, C Charles; Tiwari, Hemant K; Arnett, Donna K; Broeckel, Ulrich; Rao, Dabeeru C
2012-07-01
Genotype imputation provides imputation of untyped single nucleotide polymorphisms (SNPs) that are present on a reference panel such as those from the HapMap Project. It is popular for increasing statistical power and comparing results across studies using different platforms. Imputation for African American populations is challenging because their linkage disequilibrium blocks are shorter and also because no ideal reference panel is available due to admixture. In this paper, we evaluated three imputation strategies for African Americans. The intersection strategy used a combined panel consisting of SNPs polymorphic in both CEU and YRI. The union strategy used a panel consisting of SNPs polymorphic in either CEU or YRI. The merge strategy merged results from two separate imputations, one using CEU and the other using YRI. Because recent investigators are increasingly using the data from the 1000 Genomes (1KG) Project for genotype imputation, we evaluated both 1KG-based imputations and HapMap-based imputations. We used 23,707 SNPs from chromosomes 21 and 22 on Affymetrix SNP Array 6.0 genotyped for 1,075 HyperGEN African Americans. We found that 1KG-based imputations provided a substantially larger number of variants than HapMap-based imputations, about three times as many common variants and eight times as many rare and low-frequency variants. This higher yield is expected because the 1KG panel includes more SNPs. Accuracy rates using 1KG data were slightly lower than those using HapMap data before filtering, but slightly higher after filtering. The union strategy provided the highest imputation yield with next highest accuracy. The intersection strategy provided the lowest imputation yield but the highest accuracy. The merge strategy provided the lowest imputation accuracy. We observed that SNPs polymorphic only in CEU had much lower accuracy, reducing the accuracy of the union strategy. 
Our findings suggest that 1KG-based imputations can facilitate discovery of significant associations for SNPs across the whole MAF spectrum. Because the 1KG Project is still under way, we expect that later versions will provide better imputation performance. © 2012 Wiley Periodicals, Inc.
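The three panel strategies reduce to simple set operations on SNP identifiers. A toy illustration with made-up rs numbers (not real HapMap content):

```python
# SNPs polymorphic in each reference panel (illustrative identifiers only).
ceu = {"rs1", "rs2", "rs3", "rs5"}
yri = {"rs2", "rs3", "rs4", "rs6"}

intersection_panel = ceu & yri   # intersection strategy: polymorphic in both
union_panel = ceu | yri          # union strategy: polymorphic in either
# The merge strategy imputes twice (once per panel) and combines results,
# so its candidate SNP set is also the union, but each SNP's accuracy
# depends on which panel(s) it was imputed from.
```

This makes the paper's trade-off concrete: the union maximizes yield but inherits the low accuracy of SNPs polymorphic only in CEU, while the intersection keeps only well-supported SNPs at the cost of yield.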
NASA Astrophysics Data System (ADS)
Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.
2016-06-01
For almost two decades mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble CenterPoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble CenterPoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping.
Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented which show that the post-processed Centerpoint RTX solution agrees with the DGNSS solution to better than 2.9 cm RMSE Horizontal and 5.5 cm RMSE Vertical. Such accuracies are sufficient to meet the requirements for a majority of airborne mapping applications.
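The horizontal and vertical RMSE figures quoted above can be computed as below; the trajectory arrays and east/north/up coordinate ordering are illustrative assumptions:

```python
import numpy as np

def rmse_h_v(est, ref):
    """Horizontal and vertical RMSE between two trajectories given as
    (N, 3) arrays of easting, northing, up coordinates in metres."""
    d = np.asarray(est, float) - np.asarray(ref, float)
    rmse_h = np.sqrt(np.mean(d[:, 0] ** 2 + d[:, 1] ** 2))
    rmse_v = np.sqrt(np.mean(d[:, 2] ** 2))
    return rmse_h, rmse_v
```

Running this over each flight's RTX and DGNSS solutions and aggregating would reproduce the kind of 2.9 cm horizontal / 5.5 cm vertical agreement statistics reported.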
Uncertainty quantification in volumetric Particle Image Velocimetry
NASA Astrophysics Data System (ADS)
Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos
2016-11-01
Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of reconstructed three-dimensional particle location that in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty that is estimated as an extension of the 2D PIV uncertainty framework. Finally the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources are also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.
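The final step, combining the particle-position and cross-correlation uncertainties into an overall measurement uncertainty, is at its simplest a root-sum-of-squares of independent standard uncertainties. A simplified stand-in for the paper's propagation equation (the full framework additionally weights terms by sensitivity coefficients):

```python
import math

def combined_uncertainty(components):
    """Combine independent standard-uncertainty components in quadrature.
    `components` is a sequence of standard uncertainties in the same units."""
    return math.sqrt(sum(c * c for c in components))

# e.g. triangulated-position uncertainty and 3D cross-correlation
# uncertainty, both expressed as velocity uncertainties (values made up):
u_total = combined_uncertainty([0.03, 0.04])
```

Quadrature addition is only valid when the components are uncorrelated; part of the paper's contribution is quantifying each component (calibration mapping, self-calibration, reconstruction) so this step is justified.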
NASA Astrophysics Data System (ADS)
Starkey, Andrew; Usman Ahmad, Aliyu; Hamdoun, Hassan
2017-10-01
This paper investigates a novel classification method, the Feature Weighted Self Organizing Map (FWSOM), which analyses the topology information of a converged standard Self Organizing Map (SOM) to automatically guide the selection of important inputs during training, improving the classification of data with redundant inputs. The method is examined against two traditional approaches, neural networks and Support Vector Machines (SVM), for the classification of EEG data as presented in previous work. In particular, the novel method looks to identify the features that are important for classification automatically, and in this way the important features can be used to improve the diagnostic ability of any of the above methods. The paper presents the results and shows how the automated identification successfully recovered the important features in the dataset, and how this results in an improvement of the classification results for all methods apart from linear discriminatory methods, which cannot separate the underlying nonlinear relationship in the data. The FWSOM, in addition to achieving higher classification accuracy, has given insights into which features are important in the classification of each class (left and right-hand movements), and these are corroborated by already published work in this area.
Land use mapping and modelling for the Phoenix Quadrangle
NASA Technical Reports Server (NTRS)
Place, J. L. (Principal Investigator)
1973-01-01
The author has identified the following significant results. The land use of the Phoenix Quadrangle in Arizona had been mapped previously from aerial photographs and recorded in a computer data bank. During the ERTS-1 experiment, changes in land use were detected using only the ERTS-1 images. The I2S color additive viewer was used as the principal image enhancement tool, operated in a multispectral mode. Hard copy color composite images of the best multiband combinations from ERTS-1 were made by photographic and diazo processes. The I2S viewer was also used to enhance changes between successive images by quick flip techniques or by registering with different color filters. More recently, a Bausch and Lomb zoom transferscope has been used for the same purpose. Improved interpretation of land use change resulted, and a map of changes within the Phoenix Quadrangle was compiled. The first level of a proposed standard land use classification system was successfully used. ERTS-1 underflight photography was used to check the accuracy of the ERTS-1 image interpretation. It was found that the total areas of change detected in the photos were comparable with the total areas of change detected in the ERTS-1 images.
Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y
Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average mu-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR derived μ-maps was also evaluated using computed tomography μ-maps as the gold-standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases.
Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce a quantitative μ-map/emission when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from the MR derived μ-map was accurate within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce a quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient databases, and it is feasible to obtain an accurate average μ-value using MR derived μ-maps with corrections as demonstrated in this work.
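The IAM initialization itself is simple: fill the object support with the prior average μ-value so the initial forward projection is already close in magnitude to the reference. A hypothetical sketch (the μ-values, grid size, and support mask are illustrative, not the paper's phantoms):

```python
import numpy as np

# Approximate linear attenuation coefficients at 511 keV, in 1/mm
# (illustrative values, not taken from the paper).
mu_water, mu_bone = 0.0096, 0.0172

support = np.zeros((64, 64), bool)
support[16:48, 16:48] = True            # object support, e.g. from MR or emission data

avg_mu = 0.5 * (mu_water + mu_bone)     # prior average mu-value (assumed known)
mu_map = np.where(support, avg_mu, 0.0) # IAM initial estimate for TOF-MLAA
```

Compared with the conventional all-water start, this initial estimate has roughly the right total attenuation along most lines of response, which is why the joint reconstruction converges with a smaller scale-factor offset.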
1989-01-01
This gazetteer lists antarctic names approved by the United States Board on Geographic Names and by the Secretary of the Interior. The Board is the interagency body created by law to standardize and promulgate geographic names for official purposes. As the official standard for names in Antarctica, the gazetteer assures accuracy and uniformity for the specialist and the general user alike. Unlike the last (1981) edition, now out of print, the book contains neither historical notes nor textual descriptions of features. The gazetteer contains names of features in Antarctica and the area extending northward to the Antarctic Convergence that have been approved by the Board as recently as mid-1989. It supersedes previous Board gazetteers for the area. For each geographic feature, the book contains the name, cross references if any, and latitude and longitude. Coverage corresponds to that of maps at the scale of 1:250,000 or larger for islands, coastal Antarctica, and mountains and ranges of the continent. Much of the interior of Antarctica, an ice plateau, has been mapped at a smaller scale and is nearly devoid of features and toponyms. All of the names are for natural features; scientific stations are not listed. For the names of submarine features, reference should be made to the Gazetteer of Undersea Features, U.S. Board on Geographic Names (1981).
Genome-based prediction of test cross performance in two subsequent breeding cycles.
Hofheinz, Nina; Borchardt, Dietrich; Weissleder, Knuth; Frisch, Matthias
2012-12-01
Genome-based prediction of genetic values is expected to overcome shortcomings that limit the application of QTL mapping and marker-assisted selection in plant breeding. Our goal was to study the genome-based prediction of test cross performance with genetic effects that were estimated using genotypes from the preceding breeding cycle. In particular, our objectives were to employ a ridge regression approach that approximates best linear unbiased prediction of genetic effects, compare cross validation with validation using genetic material of the subsequent breeding cycle, and investigate the prospects of genome-based prediction in sugar beet breeding. We focused on the traits sugar content and standard molasses loss (ML) and used a set of 310 sugar beet lines to estimate genetic effects at 384 SNP markers. In cross validation, correlations >0.8 between observed and predicted test cross performance were observed for both traits. However, in validation with 56 lines from the next breeding cycle, a correlation of 0.8 could only be observed for sugar content; for standard ML the correlation reduced to 0.4. We found that ridge regression based on preliminary estimates of the heritability provided a very good approximation of best linear unbiased prediction and was not accompanied by a loss in prediction accuracy. We conclude that prediction accuracy assessed with cross validation within one cycle of a breeding program cannot be used as an indicator for the accuracy of predicting lines of the next cycle. Prediction of lines of the next cycle seems promising for traits with high heritabilities.
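The ridge regression approximation to BLUP can be sketched in a few lines. Marker effects are shrunk by a penalty tied to a preliminary heritability estimate; lambda = m(1 - h2)/h2 is one common RR-BLUP-style choice, and all data below are synthetic (the dimensions mirror the study's 310 lines and 384 SNPs, nothing else is taken from it):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 310, 384                                    # lines, SNP markers
Z = rng.integers(0, 3, size=(n, m)).astype(float)  # genotype codes 0/1/2
beta_true = rng.normal(0, 0.1, size=m)
y = Z @ beta_true + rng.normal(0, 1.0, size=n)     # test-cross phenotype (synthetic)

h2 = 0.5                                           # preliminary heritability estimate
lam = m * (1.0 - h2) / h2                          # ridge shrinkage parameter
beta_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ (y - y.mean()))
y_hat = Z @ beta_hat + y.mean()
r = np.corrcoef(y, y_hat)[0, 1]                    # in-sample predictive correlation
```

Note that `r` here is an in-sample (training) correlation; the paper's central point is precisely that such within-cycle figures overstate the accuracy obtained on lines from the next breeding cycle.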
SnoMAP: Pioneering the Path for Clinical Coding to Improve Patient Care.
Lawley, Michael; Truran, Donna; Hansen, David; Good, Norm; Staib, Andrew; Sullivan, Clair
2017-01-01
The increasing demand for healthcare and the static resources available necessitate data driven improvements in healthcare at large scale. The SnoMAP tool was rapidly developed to provide an automated solution that transforms and maps clinician-entered data to provide data which is fit for both administrative and clinical purposes. Accuracy of data mapping was maintained.
NASA Astrophysics Data System (ADS)
Ai, Jinquan; Gao, Wei; Gao, Zhiqiang; Shi, Runhe; Zhang, Chao
2017-04-01
Spartina alterniflora is an aggressive invasive plant species that replaces native species, changes the structure and function of the ecosystem across coastal wetlands in China, and is thus a major conservation concern. Mapping the spread of its invasion is a necessary first step for the implementation of effective ecological management strategies. The performance of a phenology-based approach for S. alterniflora mapping is explored in the coastal wetland of the Yangtze Estuary using a time series of GaoFen satellite no. 1 wide field of view camera (GF-1 WFV) imagery. First, a time series of the normalized difference vegetation index (NDVI) was constructed to evaluate the phenology of S. alterniflora. Two phenological stages (the senescence stage from November to mid-December and the green-up stage from late April to May) were determined as important for S. alterniflora detection in the study area based on NDVI temporal profiles, spectral reflectance curves of S. alterniflora and its coexistent species, and field surveys. Three phenology feature sets representing three major phenology-based detection strategies were then compared to map S. alterniflora: (1) the single-date imagery acquired within the optimal phenological window, (2) the multitemporal imagery, including four images from the two important phenological windows, and (3) the monthly NDVI time series imagery. Support vector machines and maximum likelihood classifiers were applied on each phenology feature set at different training sample sizes. For all phenology feature sets, the overall results were produced consistently with high mapping accuracies under sufficient training samples sizes, although significantly improved classification accuracies (10%) were obtained when the monthly NDVI time series imagery was employed. The optimal single-date imagery had the lowest accuracies of all detection strategies. 
The multitemporal analysis demonstrated little reduction in the overall accuracy compared with the use of monthly NDVI time series imagery. These results show the importance of considering the phenological stage for image selection for mapping S. alterniflora using GF-1 WFV imagery. Furthermore, in light of the better tradeoff between the number of images and classification accuracy when using multitemporal GF-1 WFV imagery, we suggest using multitemporal imagery acquired at appropriate phenological windows for S. alterniflora mapping at regional scales.
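The NDVI time series underpinning all three detection strategies is computed per image from the red and near-infrared bands. A minimal sketch (band values are illustrative; the small epsilon guarding division by zero is an implementation convenience, not from the paper):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-12)

# Stacking one NDVI layer per month yields the time-series feature set
# used by the third detection strategy (synthetic 2-pixel example):
nir_months = [np.array([0.50, 0.20]), np.array([0.60, 0.20])]
red_months = [np.array([0.10, 0.20]), np.array([0.10, 0.20])]
feats = np.stack([ndvi(n, r) for n, r in zip(nir_months, red_months)], axis=-1)
```

Each pixel's row in `feats` is its NDVI trajectory; species with distinct phenology, like S. alterniflora's late senescence, separate along these trajectories even when single-date spectra overlap.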
Mathieu, Renaud; Aryal, Jagannath; Chong, Albert K
2007-11-20
Effective assessment of biodiversity in cities requires detailed vegetation maps. To date, most remote sensing of urban vegetation has focused on thematically coarse land cover products. Detailed habitat maps are created by manual interpretation of aerial photographs, but this is time consuming and costly at large scale. To address this issue, we tested the effectiveness of object-based classifications that use automated image segmentation to extract meaningful ground features from imagery. We applied these techniques to very high resolution multispectral Ikonos images to produce vegetation community maps in Dunedin City, New Zealand. An Ikonos image was orthorectified and a multi-scale segmentation algorithm used to produce a hierarchical network of image objects. The upper level included four coarse strata: industrial/commercial (commercial buildings), residential (houses and backyard private gardens), vegetation (vegetation patches larger than 0.8/1 ha), and water. We focused on the vegetation stratum that was segmented at a more detailed level to extract and classify fifteen classes of vegetation communities. The first classification yielded a moderate overall classification accuracy (64%, κ = 0.52), which led us to consider a simplified classification with ten vegetation classes. The overall classification accuracy from the simplified classification was 77% with a κ value close to the excellent range (κ = 0.74). These results compared favourably with similar studies in other environments. We conclude that this approach does not provide maps as detailed as those produced by manually interpreting aerial photographs, but it can still extract ecologically significant classes. It is an efficient way to generate accurate and detailed maps in significantly shorter time. The final map accuracy could be improved by integrating segmentation, automated and manual classification in the mapping process, especially when considering important vegetation classes with limited spectral contrast.
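Overall accuracy and κ values like those reported above are derived from a confusion matrix of reference versus mapped classes. A minimal computation (the 2 × 2 matrix is synthetic, chosen only to make the arithmetic checkable):

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: mapped classes)."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1.0 - pe)

po, kappa = overall_accuracy_and_kappa([[40, 10], [10, 40]])
```

κ discounts the agreement expected by chance from the class proportions, which is why a 77% overall accuracy can correspond to κ = 0.74 when classes are many and balanced, but to a much lower κ when one class dominates.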
Evaluation of airborne image data for mapping riparian vegetation within the Grand Canyon
Davis, Philip A.; Staid, Matthew I.; Plescia, Jeffrey B.; Johnson, Jeffrey R.
2002-01-01
This study examined various types of remote-sensing data that have been acquired during a 12-month period over a portion of the Colorado River corridor to determine the type of data and conditions for data acquisition that provide the optimum classification results for mapping riparian vegetation. Issues related to vegetation mapping included time of year, number and positions of wavelength bands, and spatial resolution for data acquisition to produce accurate vegetation maps versus cost of data. Image data considered in the study consisted of scanned color-infrared (CIR) film, digital CIR, and digital multispectral data, whose resolutions ranged from 11 cm (photographic film) to 100 cm (multispectral), acquired during the Spring, Summer, and Fall seasons in 2000 for five long-term monitoring sites containing riparian vegetation. Results show that digitally acquired data produce higher and more consistent classification accuracies for mapping vegetation units than do film products. The highest accuracies were obtained from nine-band multispectral data; however, a four-band subset of these data, that did not include short-wave infrared bands, produced comparable mapping results. The four-band subset consisted of the wavelength bands 0.52-0.59 µm, 0.59-0.62 µm, 0.67-0.72 µm, and 0.73-0.85 µm. Use of only three of these bands that simulate digital CIR sensors produced accuracies for several vegetation units that were 10% lower than those obtained using the full multispectral data set. Classification tests using band ratios produced lower accuracies than those using band reflectance for scanned film data; a result attributed to the relatively poor radiometric fidelity maintained by the film scanning process, whereas calibrated multispectral data produced similar classification accuracies using band reflectance and band ratios.
This suggests that the intrinsic band reflectance of the vegetation is more important than inter-band reflectance differences in attaining high mapping accuracies. These results also indicate that radiometrically calibrated sensors that record a wide range of radiance produce superior results and that such sensors should be used for monitoring purposes. When texture (spatial variance) at near-infrared wavelength is combined with spectral data in classification, accuracy increased most markedly (20-30%) for the highest resolution (11-cm) CIR film data, but decreased in its effect on accuracy in lower-resolution multi-spectral image data; a result observed in previous studies (Franklin and McDermid 1993, Franklin et al. 2000, 2001). While many classification unit accuracies obtained from the 11-cm film CIR band with texture data were in fact higher than those produced using the 100-cm, nine-band multispectral data with texture, the 11-cm film CIR data produced much lower accuracies than the 100-cm multispectral data for the more sparsely populated vegetation units due to saturation of picture elements during the film scanning process in vegetation units with a high proportion of alluvium. Overall classification accuracies obtained from spectral band and texture data range from 36% to 78% for all databases considered, from 57% to 71% for the 11-cm film CIR data, and from 54% to 78% for the 100-cm multispectral data. Classification results obtained from 20-cm film CIR band and texture data, which were produced by applying a Gaussian filter to the 11-cm film CIR data, showed increases in accuracy due to texture that were similar to those observed using the original 11-cm film CIR data. This suggests that data can be collected at the lower resolution and still retain the added power of vegetation texture. 
Classification accuracies for the riparian vegetation units examined in this study do not appear to be influenced by the season of data acquisition, although data acquired under direct sunlight produced higher overall accuracies than data acquired under overcast conditions. The latter observation, in addition to the importance of band reflectance for classification, implies that data should be acquired near the summer solstice, when sun elevation and reflectance are highest and when shadows cast by steep canyon walls are minimized.
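The texture measure described above (spatial variance) can be illustrated as a per-pixel local-variance layer stacked with the spectral bands before classification. This is a minimal sketch with random stand-in data; the window size and the variance measure are illustrative choices, not the study's exact procedure.

```python
import numpy as np

def local_variance(band, win=3):
    """Per-pixel local variance (a simple texture measure) over a
    square moving window, with reflective padding at the edges."""
    pad = win // 2
    padded = np.pad(band, pad, mode="reflect")
    h, w = band.shape
    out = np.empty_like(band, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + win, j:j + win].var()
    return out

# Stack a NIR texture layer with the spectral bands before classifying.
rng = np.random.default_rng(0)
bands = rng.random((4, 8, 8))           # 4 spectral bands, 8x8 pixels
nir_texture = local_variance(bands[3])  # texture from the NIR band
features = np.vstack([bands, nir_texture[None]])  # 5 feature layers
```

A classifier then sees five features per pixel instead of four, which is the mechanism by which texture raised accuracy for the high-resolution film data.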
CNN universal machine as classification platform: an ART-like clustering algorithm.
Bálya, David
2003-12-01
Fast and robust classification of feature vectors is a crucial task in a number of real-time systems. A cellular neural/nonlinear network universal machine (CNN-UM) can be very efficient as a feature detector. The next step is to post-process the results for object recognition. This paper shows how a robust classification scheme based on adaptive resonance theory (ART) can be mapped to the CNN-UM. Moreover, this mapping is general enough to include different types of feed-forward neural networks. The designed analogic CNN algorithm is capable of classifying the extracted feature vectors while keeping the advantages of ART networks, such as robust, plastic, and fault-tolerant behavior. An analogic algorithm is presented for unsupervised classification with tunable sensitivity and automatic new class creation, and the algorithm is extended to supervised classification. The binary feature vector classification is implemented on existing standard CNN-UM chips for fast classification. The experimental evaluation shows promising performance, with 100% accuracy on the training set.
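The vigilance-driven "automatic new class creation" behavior described above can be sketched in plain software terms, independently of the CNN-UM hardware. The routine below is a simplified ART-1-style clusterer for binary feature vectors: the match rule and AND-style prototype learning are standard ART-1 ingredients, but this exact scheme is an illustrative assumption, not the paper's analogic algorithm.

```python
import numpy as np

def art_cluster(vectors, vigilance=0.7):
    """Simplified ART-1-style unsupervised clustering of binary feature
    vectors. Each input resonates with the best-matching prototype if
    the match ratio exceeds the vigilance; otherwise a new class is
    created automatically."""
    prototypes, labels = [], []
    for v in vectors:
        v = np.asarray(v, dtype=bool)
        best, best_score = None, -1.0
        for k, p in enumerate(prototypes):
            match = (v & p).sum() / max(v.sum(), 1)
            if match > best_score:
                best, best_score = k, match
        if best is not None and best_score >= vigilance:
            prototypes[best] = prototypes[best] & v   # learn: AND update
            labels.append(best)
        else:
            prototypes.append(v)                      # new class creation
            labels.append(len(prototypes) - 1)
    return labels

labels = art_cluster([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]])
# the first two vectors resonate with one class; the third opens a new one
```

Lowering the vigilance makes the clusterer lump more vectors into existing classes, which is the "tunable sensitivity" mentioned in the abstract.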
Magnetic space-based field measurements
NASA Technical Reports Server (NTRS)
Langel, R. A.
1981-01-01
Because the near-Earth magnetic field is a complex combination of fields from outside the Earth, fields from its core, and fields from its crust, measurements from space prove to be the only practical way to obtain timely, global surveys. Due to the difficulty of making accurate vector measurements, early satellites such as Sputnik and Vanguard measured only the field magnitude. The attitude accuracy was 20 arc sec. Both the Earth's core fields and the fields arising from its crust were mapped from satellite data. The standard model of the core consists of a scalar potential represented by a spherical harmonic series. Models of the crustal field are relatively new. Mathematical representation is achieved in localized areas by arrays of dipoles appropriately located in the Earth's crust. Measurements of the Earth's field are used in navigation, to map charged particles in the magnetosphere, to study fluid properties in the Earth's core, to infer the conductivity of the upper mantle, and to delineate regional-scale geological features.
Unmixing AVHRR Imagery to Assess Clearcuts and Forest Regrowth in Oregon
NASA Technical Reports Server (NTRS)
Hlavka, Christine A.; Spanner, Michael A.
1995-01-01
Advanced Very High Resolution Radiometer (AVHRR) imagery provides frequent and low-cost coverage of the earth, but its coarse spatial resolution (approx. 1.1 km by 1.1 km) does not lend itself to standard techniques of automated categorization of land cover classes because the pixels are generally mixed; that is, the extent of a pixel includes several land use/cover classes. Unmixing procedures were developed to extract land use/cover class signatures from mixed pixels, using Landsat Thematic Mapper data as a source for the training set, and to estimate fractions of class coverage within pixels. Application of these unmixing procedures to mapping forest clearcuts and regrowth in Oregon indicated that unmixing is a promising approach for mapping major trends in land cover with AVHRR bands 1 and 2. Including thermal bands by unmixing AVHRR bands 1-4 did not lead to significant improvements in accuracy, but experiments with unmixing these four bands did indicate that use of weighted least squares techniques might lead to improvements in other applications of unmixing.
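The unmixing step described above amounts to solving a small linear system per pixel: the observed spectrum is modeled as a fractional combination of pure class signatures (endmembers). A minimal least-squares sketch, with invented endmember reflectances rather than the study's Thematic Mapper-derived signatures:

```python
import numpy as np

# Linear mixture model: a coarse AVHRR pixel's spectrum is a fractional
# combination of pure class signatures. Reflectances here are invented.
endmembers = np.array([[0.05, 0.40],   # forest   (band 1, band 2)
                       [0.30, 0.22]])  # clearcut (band 1, band 2)

def unmix(pixel, E):
    """Estimate class fractions by least squares, then clip negative
    fractions and renormalize so the fractions sum to one."""
    f, *_ = np.linalg.lstsq(E.T, pixel, rcond=None)
    f = np.clip(f, 0.0, None)
    return f / f.sum()

mixed = 0.7 * endmembers[0] + 0.3 * endmembers[1]   # a 70/30 mixed pixel
fractions = unmix(mixed, endmembers)                # -> [0.7, 0.3]
```

The weighted least squares variant the abstract mentions would simply scale each band's residual by an inverse-noise weight before solving.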
Shen, Dayong; Liu, Yuling; Huang, Shengli
2012-01-01
The estimation of ice/snow accumulation is of great significance in quantifying the mass balance of ice sheets and variation in water resources. Improving the accuracy and reducing the uncertainty of annual accumulation estimates over the Greenland ice sheet has been a challenge. In this study, we kriged and analyzed the spatial pattern of accumulation based on an observation series comprising the 315 points used in recent research, plus 101 ice cores and snow pits and newly compiled data from 23 coastal weather stations. The estimated annual accumulation over the Greenland ice sheet is 31.2 g cm⁻² yr⁻¹, with a standard error of 0.9 g cm⁻² yr⁻¹. The main differences between the improved map developed in this study and recently published accumulation maps are in the coastal areas, especially the southeast and southwest regions. The analysis of accumulation versus elevation reveals the distribution patterns of accumulation over the Greenland ice sheet.
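The kriging interpolation underlying such accumulation maps can be sketched compactly. The following is a textbook ordinary-kriging solver with an assumed exponential variogram; the sill, range, and the three observations are illustrative values, not quantities fitted to the Greenland data.

```python
import numpy as np

def ordinary_kriging(xy, z, x0, sill=1.0, range_=50.0):
    """Ordinary-kriging estimate at location x0 from scattered
    observations, using an exponential variogram model."""
    gamma = lambda h: sill * (1.0 - np.exp(-h / range_))
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0                 # unbiasedness (weights sum to one)
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - x0, axis=-1))
    w = np.linalg.solve(A, b)     # kriging weights + Lagrange multiplier
    return float(w[:n] @ z)

# Three made-up accumulation observations (g/cm^2/yr) at km coordinates.
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
z = np.array([30.0, 32.0, 31.0])
est = ordinary_kriging(xy, z, np.array([2.0, 2.0]))
```

Ordinary kriging is an exact interpolator: evaluated at an observation point it returns that observation, and the same linear system also yields the kriging variance behind the standard errors quoted above.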
NASA Astrophysics Data System (ADS)
Melville, Bethany; Lucieer, Arko; Aryal, Jagannath
2018-04-01
This paper presents a random forest classification approach for identifying and mapping three types of lowland native grassland communities found in the Tasmanian Midlands region. Due to the high conservation priority assigned to these communities, there has been an increasing need to identify appropriate datasets that can be used to derive accurate and frequently updatable maps of community extent. Therefore, this paper proposes a method employing repeat classification and statistical significance testing as a means of identifying the most appropriate dataset for mapping these communities. Two datasets were acquired and analysed: a Landsat ETM+ scene and a WorldView-2 scene, both from 2010. Training and validation data were randomly subset using a k-fold (k = 50) approach from a pre-existing field dataset. Poa labillardierei, Themeda triandra and lowland native grassland complex communities were identified, in addition to dry woodland and agriculture. For each subset of randomly allocated points, a random forest model was trained on each dataset and then used to classify the corresponding imagery. Validation was performed using the reciprocal points from the independent subset that had not been used to train the model. Final training and classification accuracies were reported as per-class means for each satellite dataset. Analysis of Variance (ANOVA) was undertaken to determine whether classification accuracy differed between the two datasets, as well as between classifications. Results showed mean class accuracies between 54% and 87%. Class accuracy only differed significantly between datasets for the dry woodland and Themeda grassland classes, with the WorldView-2 dataset showing higher mean classification accuracies.
The results of this study indicate that remote sensing is a viable method for the identification of lowland native grassland communities in the Tasmanian Midlands, and that repeat classification and statistical significance testing can be used to identify optimal datasets for vegetation community mapping.
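The repeat-classification protocol described above can be sketched with off-the-shelf tools: train a random forest on each of k random splits, collect per-split accuracies for each dataset, and compare the accuracy distributions with one-way ANOVA. Synthetic data stand in for the Landsat ETM+ and WorldView-2 scenes, and k is reduced from 50 to 10 for brevity.

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

def repeated_accuracies(X, y, k=10, seed=0):
    """Accuracy of a freshly trained random forest on each of k splits."""
    accs = []
    splitter = StratifiedKFold(k, shuffle=True, random_state=seed)
    for fold, (tr, va) in enumerate(splitter.split(X, y)):
        rf = RandomForestClassifier(n_estimators=50, random_state=fold)
        rf.fit(X[tr], y[tr])
        accs.append(rf.score(X[va], y[va]))
    return np.array(accs)

# Two synthetic "datasets" of differing difficulty stand in for the scenes.
X1, y1 = make_classification(n_samples=300, n_informative=6, random_state=1)
X2, y2 = make_classification(n_samples=300, n_informative=2, random_state=2)
acc1 = repeated_accuracies(X1, y1)
acc2 = repeated_accuracies(X2, y2)
F, p = f_oneway(acc1, acc2)   # small p -> accuracies differ significantly
```

Treating each split's accuracy as one observation is what makes the ANOVA comparison between datasets possible.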
Approach for Improving the Integrated Sensor Orientation
NASA Astrophysics Data System (ADS)
Mitishita, E.; Ercolin Filho, L.; Graça, N.; Centeno, J.
2016-06-01
The direct determination of exterior orientation parameters (EOP) of aerial images via integration of an Inertial Measurement Unit (IMU) and GPS is often used in photogrammetric mapping nowadays. The accuracies of the EOP depend on accurate sensor mounting parameters determined when the job is performed (offsets of the IMU relative to the projection centre and the boresight misalignment angles between the IMU and the photogrammetric coordinate system). In principle, when the EOP values do not achieve the required accuracies for the photogrammetric application, the approach known as Integrated Sensor Orientation (ISO) is used to refine the direct EOP. The ISO approach requires accurate Interior Orientation Parameters (IOP) and standard deviations of the EOP under flight conditions. This paper investigates the feasibility of using in situ camera calibration to obtain these requirements. The camera calibration uses a small sub-block of images extracted from the entire block. A digital Vexcel UltraCam XP camera connected to an APPLANIX POS AV™ system was used to acquire the two small blocks of images used in this study. The blocks have different flight heights and opposite flight directions. The proposed methodology significantly improved the vertical and horizontal accuracies of the 3D point intersection. Using a minimum set of control points, the horizontal and vertical accuracies achieved nearly one image pixel of resolution on the ground (GSD). The experimental results are shown and discussed.
Construct Maps as a Foundation for Standard Setting
ERIC Educational Resources Information Center
Wyse, Adam E.
2013-01-01
Construct maps are tools that display how the underlying achievement construct upon which one is trying to set cut-scores is related to other information used in the process of standard setting. This article reviews what construct maps are, uses construct maps to provide a conceptual framework to view commonly used standard-setting procedures (the…
Quantifying the tibiofemoral joint space using x-ray tomosynthesis.
Kalinosky, Benjamin; Sabol, John M; Piacsek, Kelly; Heckel, Beth; Gilat Schmidt, Taly
2011-12-01
Digital x-ray tomosynthesis (DTS) has the potential to provide 3D information about the knee joint in a load-bearing posture, which may improve diagnosis and monitoring of knee osteoarthritis compared with projection radiography, the current standard of care. Manually quantifying and visualizing the joint space width (JSW) from 3D tomosynthesis datasets may be challenging. This work developed a semiautomated algorithm for quantifying the 3D tibiofemoral JSW from reconstructed DTS images. The algorithm was validated through anthropomorphic phantom experiments and applied to three clinical datasets. A user-selected volume of interest within the reconstructed DTS volume was enhanced with 1D multiscale gradient kernels. The edge-enhanced volumes were divided by polarity into tibial and femoral edge maps and combined across kernel scales. A 2D connected components algorithm was performed to determine candidate tibial and femoral edges. A 2D JSW map was constructed to represent the 3D tibiofemoral joint space. To quantify the algorithm accuracy, an adjustable knee phantom was constructed, and eleven posterior-anterior (PA) and lateral DTS scans were acquired with the medial minimum JSW of the phantom set to 0-5 mm in 0.5 mm increments (VolumeRad™, GE Healthcare, Chalfont St. Giles, United Kingdom). The accuracy of the algorithm was quantified by comparing the minimum JSW in a region of interest in the medial compartment of the JSW map to the measured phantom setting for each trial. In addition, the algorithm was applied to DTS scans of a static knee phantom and the JSW map compared to values estimated from a manually segmented computed tomography (CT) dataset. The algorithm was also applied to three clinical DTS datasets of osteoarthritic patients. The algorithm segmented the JSW and generated a JSW map for all phantom and clinical datasets.
For the adjustable phantom, the estimated minimum JSW values were plotted against the measured values for all trials. A linear fit estimated a slope of 0.887 (R² = 0.962) and a mean error across all trials of 0.34 mm for the PA phantom data. The estimated minimum JSW values for the lateral adjustable-phantom acquisitions were found to have low correlation with the measured values (R² = 0.377), with a mean error of 2.13 mm. The error in the lateral adjustable-phantom datasets appeared to be caused by artifacts due to unrealistic features in the phantom bones. JSW maps generated by DTS and CT varied by a mean of 0.6 mm and 0.8 mm across the knee joint for PA and lateral scans, respectively. The tibial and femoral edges were successfully segmented and JSW maps determined for PA and lateral clinical DTS datasets. A semiautomated method is presented for quantifying the 3D joint space in a 2D JSW map using tomosynthesis images. The proposed algorithm quantified the JSW across the knee joint to sub-millimeter accuracy for PA tomosynthesis acquisitions. Overall, the results suggest that x-ray tomosynthesis may be beneficial for diagnosing and monitoring disease progression or treatment of osteoarthritis by providing quantitative images of JSW in the load-bearing knee.
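The edge-polarity idea at the core of the algorithm (opposite gradient signs mark the femoral and tibial edges) can be illustrated on a single 1D intensity profile through the joint. This toy sketch is far simpler than the paper's multiscale 3D method; the profile and pixel pitch are invented.

```python
import numpy as np

def jsw_from_profile(profile, pixel_mm=0.5):
    """Joint space width along one profile: the bone-to-space transition
    and the space-to-bone transition have opposite gradient signs, and
    the distance between them is the local JSW."""
    g = np.gradient(profile.astype(float))
    femur_edge = int(np.argmin(g))   # strongest negative edge
    tibia_edge = int(np.argmax(g))   # strongest positive edge
    return abs(tibia_edge - femur_edge) * pixel_mm

# Synthetic profile: bright femur, dark 4-pixel joint space, bright tibia.
profile = np.array([9, 9, 9, 1, 1, 1, 1, 9, 9, 9])
width_mm = jsw_from_profile(profile)   # 4 pixels * 0.5 mm = 2.0 mm
```

Repeating this over every profile in the volume of interest yields a 2D map of widths, the analogue of the paper's JSW map.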
A multi-temporal analysis approach for land cover mapping in support of nuclear incident response
NASA Astrophysics Data System (ADS)
Sah, Shagan; van Aardt, Jan A. N.; McKeown, Donald M.; Messinger, David W.
2012-06-01
Remote sensing can be used to rapidly generate land use maps for assisting emergency response personnel with resource deployment decisions and impact assessments. In this study we focus on constructing accurate land cover maps of the impacted area in the case of a nuclear material release. The proposed methodology integrates results from two different approaches to increase classification accuracy. The data used included RapidEye scenes over Nine Mile Point Nuclear Power Station (Oswego, NY). The first step was building a coarse-scale land cover map from freely available, high-temporal-resolution MODIS data using a time-series approach. In the case of a nuclear accident, high-spatial-resolution commercial satellites such as RapidEye or IKONOS can acquire images of the affected area. Land use maps from the two image sources were integrated using a probability-based approach. Classification results were obtained for four land classes (forest, urban, water, and vegetation) using Euclidean and Mahalanobis distances as metrics. Despite the coarse resolution of MODIS pixels, acceptable accuracies were obtained using time-series features. The overall accuracies using the fusion-based approach were in the neighborhood of 80% when compared with GIS data sets from New York State. The classifications were augmented using this fused approach, with a few supplementary advantages such as correction for cloud cover and independence from time of year. We concluded that this method would generate highly accurate land maps using coarse-spatial-resolution time-series satellite imagery and a single-date, high-spatial-resolution multispectral image.
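The Mahalanobis-distance metric mentioned above can be sketched as a minimum-distance rule over per-class means and covariances estimated from training pixels. The two-band reflectance values below are invented for illustration, not drawn from the study's imagery.

```python
import numpy as np

def mahalanobis_classify(pixel, class_stats):
    """Assign a pixel to the class whose (mean, covariance) summary
    gives the smallest squared Mahalanobis distance."""
    best, best_d = None, np.inf
    for name, (mu, cov) in class_stats.items():
        diff = pixel - mu
        d = float(diff @ np.linalg.inv(cov) @ diff)
        if d < best_d:
            best, best_d = name, d
    return best

# Synthetic training pixels for two classes in two bands.
rng = np.random.default_rng(3)
water = rng.normal([0.05, 0.02], 0.01, size=(100, 2))
forest = rng.normal([0.04, 0.30], 0.02, size=(100, 2))
stats = {name: (s.mean(axis=0), np.cov(s.T))
         for name, s in [("water", water), ("forest", forest)]}
label = mahalanobis_classify(np.array([0.05, 0.28]), stats)
```

Replacing the covariance with the identity matrix recovers the Euclidean metric, the other distance the study compared.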
NASA Technical Reports Server (NTRS)
Card, Don H.; Strong, Laurence L.
1989-01-01
An application of a classification accuracy assessment procedure is described for a vegetation and land cover map prepared by digital image processing of LANDSAT multispectral scanner data. A statistical sampling procedure called Stratified Plurality Sampling was used to assess the accuracy of portions of a map of the Arctic National Wildlife Refuge coastal plain. Results are tabulated as percent correct classification overall as well as per category with associated confidence intervals. Although values of percent correct were disappointingly low for most categories, the study was useful in highlighting sources of classification error and demonstrating shortcomings of the plurality sampling method.
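Percent correct with an associated confidence interval, as tabulated in the assessment above, is commonly computed from a binomial model of the number of correctly classified samples. A minimal sketch using the normal approximation; the stratified plurality estimator in the study is more involved, and the counts here are invented.

```python
import math

def percent_correct_ci(correct, total, z=1.96):
    """Proportion correct with an approximate 95% confidence interval
    from the normal approximation to the binomial distribution."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# e.g. 42 of 60 reference samples correctly classified in one category
p, lo, hi = percent_correct_ci(correct=42, total=60)
```

The same computation applied per category, with each category's own sample count, yields the per-class intervals reported alongside the overall figure.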
Forest cover type analysis of New England forests using innovative WorldView-2 imagery
NASA Astrophysics Data System (ADS)
Kovacs, Jenna M.
For many years, remote sensing has been used to generate land cover type maps to create a visual representation of what is occurring on the ground. One significant use of remote sensing is the identification of forest cover types. New England forests are notorious for their especially complex forest structure and as a result have been, and continue to be, a challenge when classifying forest cover types. To most accurately depict forest cover types occurring on the ground, it is essential to utilize image data that have a suitable combination of both spectral and spatial resolution. The WorldView-2 (WV2) commercial satellite, launched in 2009, is the first of its kind, having both high spectral and spatial resolutions. WV2 records eight bands of multispectral imagery, four more than the usual high spatial resolution sensors, and has a pixel size of 1.85 meters at nadir. These additional bands have the potential to improve classification detail and classification accuracy of forest cover type maps. For this reason, WV2 imagery was utilized on its own, and in combination with Landsat 5 TM (LS5) multispectral imagery, to evaluate whether these image data could more accurately classify forest cover types. In keeping with recent developments in image analysis, an Object-Based Image Analysis (OBIA) approach was used to segment images of Pawtuckaway State Park and nearby private lands, an area representative of the typical complex forest structure found in the New England region. A Classification and Regression Tree (CART) analysis was then used to classify image segments at two levels of classification detail. Accuracies for each forest cover type map produced were generated using traditional and area-based error matrices, and additional standard accuracy measures (e.g., kappa) were generated.
The results from this study show that there is value in analyzing imagery with both high spectral and spatial resolutions, and that WV2's new and innovative bands can be useful for the classification of complex forest structures.
An IDL-based analysis package for COBE and other skycube-formatted astronomical data
NASA Technical Reports Server (NTRS)
Ewing, J. A.; Isaacman, Richard B.; Gales, J. M.
1992-01-01
UIMAGE is a data analysis package written in IDL for the Cosmic Background Explorer (COBE) project. COBE has extraordinarily stringent accuracy requirements: 1 percent mid-infrared absolute photometry, 0.01 percent submillimeter absolute spectrometry, and 0.0001 percent submillimeter relative photometry. Thus, many of the transformations and image enhancements common to analysis of large data sets must be done with special care. UIMAGE is unusual in this sense in that it performs as many of its operations as possible on the data in its native format and projection, which in the case of COBE is the quadrilateralized spherical cube ('skycube'). That is, after reprojecting the data, e.g., onto an Aitoff map, the user who performs an operation such as taking a crosscut or extracting data from a pixel is transparently acting upon the skycube data from which the projection was made, thereby preserving the accuracy of the result. Current plans call for formatting external databases such as CO maps into the skycube format with a high-accuracy transformation, thereby allowing Guest Investigators to use UIMAGE for direct comparison of the COBE maps with those at other wavelengths from other instruments. It is completely menu-driven so that its use requires no knowledge of IDL. Its functionality includes I/O from the COBE archives, FITS files, and IDL save sets as well as standard analysis operations such as smoothing, reprojection, zooming, statistics of areas, spectral analysis, etc. One of UIMAGE's more advanced and attractive features is its terminal independence. Most of the operations (e.g., menu-item selection or pixel selection) that are driven by the mouse on an X-windows terminal are also available using arrow keys and keyboard entry (e.g., pixel coordinates) on VT200 and Tektronix-class terminals. Even limited grey scales of images are available this way.
Obviously, image processing is very limited on this type of terminal, but it is nonetheless surprising how much analysis can be done on that medium. Such flexibility has the virtue of expanding the user community to those who must work remotely on non-image terminals, e.g., via modem.
Fallati, Luca; Savini, Alessandra; Sterlacchini, Simone; Galli, Paolo
2017-08-01
The Maldives islands in recent decades have experienced dramatic land-use change. Uninhabited islands were turned into new resort islands; evergreen tropical forests were cut, to be replaced by fields and new built-up areas. All these changes happened without a proper monitoring and urban planning strategy from the Maldivian government due to the lack of national land-use and land-cover (LULC) data. This study aimed to produce the first land-use map of the entire Maldives archipelago and to detect land-use and land-cover change (LULCC) using high-resolution satellite images and socioeconomic data. Due to the peculiar geographic and environmental features of the archipelago, the land-use map was obtained by visual interpretation and manual digitization of land-use patches. The images used, dated 2011, were obtained from DigitalGlobe's WorldView-1 and WorldView-2 satellites. Nine land-use classes and 18 subclasses were identified and mapped. During a field survey, ground control points were collected to test the geographic and thematic accuracy of the land-use map. The final product's overall accuracy was 85%. Once the accuracy of the map had been checked, LULCC maps were created using images from the early 2000s derived from Google Earth historical imagery. Post-classification comparison of the classified maps showed that growth of built-up and agricultural areas resulted in decreases in forest land and shrubland. The LULCC maps also revealed an increase in land reclamation inside lagoons near inhabited islands, resulting in environmental impacts on fragile reef habitat. The LULC map of the Republic of the Maldives produced in this study can be used by government authorities to make sustainable land-use planning decisions and to provide better management of land use and land cover.
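The post-classification comparison mentioned above reduces to cross-tabulating the class labels of the two dates into a change matrix, whose off-diagonal cells are the class transitions. A minimal sketch with invented labels:

```python
import numpy as np

# Per-pixel class labels at the two dates (synthetic illustration).
classes = ["forest", "built-up", "agriculture"]
t1 = np.array([0, 0, 0, 1, 2, 2])   # labels at the earlier date
t2 = np.array([0, 1, 2, 1, 2, 2])   # labels at the later date

n = len(classes)
change = np.zeros((n, n), dtype=int)
np.add.at(change, (t1, t2), 1)
# change[i, j] = pixels that moved from class i (date 1) to class j (date 2);
# here one forest pixel became built-up and one became agriculture.
```

Multiplying the counts by the pixel area converts the matrix into the area-change figures typically reported for each transition.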
Zhang, Shengwei; Arfanakis, Konstantinos
2012-01-01
Purpose: To investigate the effect of standardized and study-specific human brain diffusion tensor templates on the accuracy of spatial normalization, without ignoring the important roles of data quality and registration algorithm effectiveness. Materials and Methods: Two groups of diffusion tensor imaging (DTI) datasets, with and without visible artifacts, were normalized to two standardized diffusion tensor templates (IIT2, ICBM81) as well as study-specific templates, using three registration approaches. The accuracy of inter-subject spatial normalization was compared across templates, using the most effective registration technique for each template and group of data. Results: It was demonstrated that, for DTI data with visible artifacts, the study-specific template resulted in significantly higher spatial normalization accuracy than standardized templates. However, for data without visible artifacts, the study-specific template and the standardized template of higher quality (IIT2) resulted in similar normalization accuracy. Conclusion: For DTI data with visible artifacts, a carefully constructed study-specific template may achieve higher normalization accuracy than that of standardized templates. However, as DTI data quality improves, a high-quality standardized template may be more advantageous than a study-specific template, since in addition to high normalization accuracy, it provides a standard reference across studies, as well as automated localization/segmentation when accompanied by anatomical labels. PMID:23034880
The Advantage of the Second Military Survey in Fluvial Measures
NASA Astrophysics Data System (ADS)
Kovács, G.
2009-04-01
The Second Military Survey of the Habsburg Empire, completed in the 19th century, can be very useful in different scientific investigations owing to its accuracy and data content. The fact that the mapmakers used a geodetic projection, together with the high accuracy of the survey, guarantees that scientists can use these maps and that the represented objects can be evaluated in retrospective studies. Among others, the hydrological information of the map sheets is valuable. The streams were drawn with very thin lines, which also ensures the high accuracy of their location, provided that the geodetic position of the sheet can be reconstructed with high accuracy. After geocoding these maps we confirmed the high accuracy of the line elements. Not only the locations of these lines but also the forms of the creeks are usually almost the same as their recent shapes. The goal of our study was the neotectonic evaluation of the western part of the Pannonian Basin, bordered by the Pinka, Rába and Répce Rivers. Watercourses, especially alluvial ones, react very sensitively to tectonic forcing. However, the present-day courses of the creeks and rivers are mostly regulated and are therefore unsuitable for such studies. Consequently, the watercourses should be reconstructed from maps surveyed prior to the main water control measures. The Second Military Survey is a perfect source for such studies because it is the first survey drawn in a geodetic projection, made before the creeks were regulated. The maps show intensive agricultural cultivation and silviculture in the study area. Grazing cultivation in the precincts of the streams is especially important for us. That phenomenon, and data from other sources, prove that the streams had not been regulated at that time. The streams were able to meander and flood their banks, and only natural levees are present. The general morphology south of the Kőszegi Mountains shows typical SSE slopes with low relief, cut off by 30-60-meter-high scarps followed by streams.
This suggested that we investigate the neotectonic features, which are also indicated by the alternating meanders of the surveyed streams. After geocoding the maps of the area, the streams were digitised and their sinuosity values calculated. In places, significant differences in sinuosity were observed along the streams; these can be considered indicators of differential uplift or subsidence of the bedrock/alluvium. This method can be useful in general if the watercourses mapped in the historical map can be assumed to be unaffected by human activity.
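The sinuosity index used above is simply the channel path length divided by the straight-line distance between the endpoints of a reach. A minimal sketch for a digitized stream, with invented vertex coordinates:

```python
import numpy as np

def sinuosity(points):
    """Sinuosity of a digitized stream reach: channel path length
    divided by the straight-line distance between its endpoints."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    chord = np.linalg.norm(pts[-1] - pts[0])
    return seg / chord

# A zig-zag reach: the path is longer than the endpoint chord.
reach = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
s = sinuosity(reach)   # sqrt(2) here; values near 1 mean a straight reach
```

Computed over a sliding window along each digitized stream, abrupt changes in this ratio are the along-stream sinuosity differences the study interprets tectonically.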
Technical Report: Unmanned Helicopter Solution for Survey-Grade Lidar and Hyperspectral Mapping
NASA Astrophysics Data System (ADS)
Kaňuk, Ján; Gallay, Michal; Eck, Christoph; Zgraggen, Carlo; Dvorný, Eduard
2018-05-01
Recent development of light-weight unmanned airborne vehicles (UAV) and miniaturization of sensors provide new possibilities for remote sensing and high-resolution mapping. Mini-UAV platforms are emerging, but powerful UAV platforms of higher payload capacity are required to carry the sensors for survey-grade mapping. In this paper, we demonstrate a technological solution and application of two different payloads for highly accurate and detailed mapping. The unmanned airborne system (UAS) comprises a Scout B1-100 autonomously operating UAV helicopter powered by a gasoline two-stroke engine with a maximum take-off weight of 75 kg. The UAV allows for integration of up to 18 kg of customized payload. Our technological solution comprises two types of payload completely independent of the platform. The first payload contains a VUX-1 laser scanner (Riegl, Austria) and a Sony A6000 E-Mount photo camera. The second payload integrates a hyperspectral push-broom scanner AISA Kestrel 10 (Specim, Finland). The two payloads need to be alternated if mapping with both is required. Both payloads include an inertial navigation system xNAV550 (Oxford Technical Solutions Ltd., United Kingdom), a separate data link, and a power supply unit. Such a configuration allowed for achieving high accuracy of the flight line post-processing in two test missions: the standard deviation was 0.02 m (XY) and 0.025 m (Z). The intended application of the UAS was for high-resolution mapping and monitoring of landscape dynamics (landslides, erosion, flooding, or crop growth). The legal regulations for such UAV applications in Switzerland and Slovakia are also discussed.
Space Radar Image of Long Valley, California
1999-05-01
An area near Long Valley, California, was mapped by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar aboard the space shuttle Endeavour on April 13, 1994, during the first flight of the radar instrument, and on October 4, 1994, during its second flight. The orbital configurations of the two data sets were ideal for interferometric combination, that is, overlaying the data from one image onto a second image of the same area to create an elevation map and obtain estimates of topography. Once the topography is known, any radar-induced distortions can be removed and the radar data can be geometrically projected directly onto a standard map grid for use in a geographical information system. The 50 kilometer by 50 kilometer (31 miles by 31 miles) map shown here is entirely derived from SIR-C L-band radar (horizontally transmitted and received) results. The color shown in this image is produced from the interferometrically determined elevations, while the brightness is determined by the radar backscatter. The map is in Universal Transverse Mercator (UTM) coordinates. Elevation contour lines are shown every 50 meters (164 feet). Crowley Lake is the dark feature near the south edge of the map. The Adobe Valley in the north and the Long Valley in the south are separated by the Glass Mountain Ridge, which runs through the center of the image. The height accuracy of the interferometrically derived digital elevation model is estimated to be 20 meters (66 feet) in this image. http://photojournal.jpl.nasa.gov/catalog/PIA01749
Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization.
Wang, Changqing; Zhang, Xinyuan; Liu, Xiaoyun; He, Taigang; Chen, Wufan; Feng, Qianjin; Feng, Yanqiu
2018-08-01
To improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Magnetic resonance imaging R2* mapping remains challenging because the serial images have low signal-to-noise ratios. In this study, we proposed to exploit the neighboring pixels as regularization terms and adaptively determine the regularization parameters according to the inter-pixel signal similarity. The proposed algorithm, called pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal-means-filter-based NLS algorithms on simulated, phantom, and in vivo data. Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal-means-filter-based NLS algorithms. For high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the NLS and nonlocal-means-filter-based NLS algorithms in accuracy and precision, in terms of the mean and standard deviation of R2* measurements in selected regions of interest, respectively. The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
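For context, the conventional pixel-wise NLS step that PCANR regularizes fits a mono-exponential decay S(TE) = S0 * exp(-TE * R2*) to the signal at each echo time. A minimal sketch on noiseless synthetic data; the echo times and true value are illustrative, and the neighborhood regularization itself is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_r2star(te_ms, signal):
    """Pixel-wise NLS fit of the mono-exponential decay model
    S(TE) = S0 * exp(-TE * R2*); returns R2* in s^-1."""
    model = lambda te, s0, r2s: s0 * np.exp(-te * r2s)
    (s0, r2s), _ = curve_fit(model, te_ms, signal, p0=(signal[0], 0.05))
    return r2s * 1000.0   # convert 1/ms to s^-1

te = np.array([1.0, 2.0, 4.0, 6.0, 9.0, 12.0])   # echo times in ms
true_r2s = 0.2                                    # 1/ms, i.e. 200 s^-1
sig = 100.0 * np.exp(-te * true_r2s)              # noiseless synthetic decay
r2star = fit_r2star(te, sig)
```

PCANR augments the least-squares cost at each pixel with similarity-weighted terms from neighboring pixels, which is what stabilizes the fit when such signals are noisy.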
NASA Technical Reports Server (NTRS)
Poulton, C. E.; Faulkner, D. P.; Johnson, J. R.; Mouat, D. A.; Schrumpf, B. J.
1971-01-01
A high altitude photomosaic resource map of Site 29 was produced which provided an opportunity to test photo interpretation accuracy of natural vegetation resource features when mapped at a small (1:133,400) scale. Helicopter reconnaissance over 144 previously selected test points revealed a highly adequate level of photo interpretation accuracy. In general, the reasons for errors could be accounted for. The same photomosaic resource map enabled construction of interpretive land use overlays. Based on features of the landscape, including natural vegetation types, judgements for land use suitability were made and have been presented for two types of potential land use. These two, agriculture and urbanization, represent potential land use conflicts.
D'Iorio, M.; Jupiter, S.D.; Cochran, S.A.; Potts, D.C.
2007-01-01
In 1902, the Florida red mangrove, Rhizophora mangle L., was introduced to the island of Molokai, Hawaii, and has since colonized nearly 25% of the south coast shoreline. By classifying three kinds of remote sensing imagery, we compared abilities to detect invasive mangrove distributions and to discriminate mangroves from surrounding terrestrial vegetation. Using three analytical techniques, we compared mangrove mapping accuracy for various sensor-technique combinations. ANOVA of accuracy assessments demonstrated significant differences among techniques, but no significant differences among the three sensors. We summarize advantages and disadvantages of each sensor and technique for mapping mangrove distributions in tropical coastal environments.
Alexandridis, Thomas K; Tamouridou, Afroditi Alexandra; Pantazi, Xanthoula Eirini; Lagopodi, Anastasia L; Kashefi, Javid; Ovakoglou, Georgios; Polychronos, Vassilios; Moshou, Dimitrios
2017-09-01
In the present study, the detection and mapping of Silybum marianum (L.) Gaertn. weed using novelty detection classifiers is reported. A multispectral camera (green-red-NIR) on board a fixed-wing unmanned aerial vehicle (UAV) was employed for obtaining high-resolution images. Four novelty detection classifiers were used to identify S. marianum among other vegetation in a field: One Class Support Vector Machine (OC-SVM), One Class Self-Organizing Maps (OC-SOM), Autoencoders, and One Class Principal Component Analysis (OC-PCA). The three spectral bands and texture were used as input features to the novelty detection classifiers. Identification of S. marianum using OC-SVM reached an overall accuracy of 96%. The results show the feasibility of effective S. marianum mapping by means of novelty detection classifiers acting on multispectral UAV imagery.
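A one-class classifier of the kind used here can be sketched with scikit-learn's `OneClassSVM`: train on target-class pixels only, then flag dissimilar pixels as "other vegetation". The feature values and class separation below are synthetic assumptions, not the study's data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# synthetic per-pixel features: green, red, NIR reflectance plus one texture
# measure; the values and separation are assumptions, not the study's data
target = rng.normal([0.10, 0.10, 0.50, 2.0], 0.02, size=(200, 4))  # S. marianum-like
other = rng.normal([0.10, 0.20, 0.30, 1.0], 0.02, size=(200, 4))   # other vegetation

# train on the target class only, then flag dissimilar pixels as novelties
scaler = StandardScaler().fit(target)
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(scaler.transform(target))
pred_target = clf.predict(scaler.transform(target))  # +1 = inlier (weed)
pred_other = clf.predict(scaler.transform(other))    # -1 = novelty (not weed)
```

The `nu` parameter bounds the fraction of training pixels treated as outliers; no labeled "other vegetation" samples are needed at training time, which is the point of novelty detection.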
NASA Astrophysics Data System (ADS)
Luo, Juhua; Duan, Hongtao; Ma, Ronghua; Jin, Xiuliang; Li, Fei; Hu, Weiping; Shi, Kun; Huang, Wenjiang
2017-05-01
Spatial information of the dominant species of submerged aquatic vegetation (SAV) is essential for restoration projects in eutrophic lakes, especially eutrophic Taihu Lake, China. Mapping the distribution of SAV species is very challenging and difficult using only multispectral satellite remote sensing. In this study, we proposed an approach to map the distribution of seven dominant species of SAV in Taihu Lake. Our approach involved information on the life histories of the seven SAV species and eight distribution maps of SAV from February to October. The life history information of the dominant SAV species was summarized from the literature and field surveys. Eight distribution maps of the SAV were extracted from eight 30 m HJ-CCD images from February to October in 2013 based on classification tree models, and the overall classification accuracies for the SAV were greater than 80%. Finally, the spatial distribution of the SAV species in Taihu in 2013 was mapped using a multilayer erasing approach. Based on validation, the overall classification accuracy for the seven species was 68.4%, and kappa was 0.6306, which suggests that larger differences in life histories between species can produce higher identification accuracies. The classification results show that Potamogeton malaianus was the most widely distributed species in Taihu Lake, followed by Myriophyllum spicatum, Potamogeton maackianus, Potamogeton crispus, Elodea nuttallii, Ceratophyllum demersum and Vallisneria spiralis. The information is useful for planning shallow-water habitat restoration projects.
Bricher, Phillippa K.; Lucieer, Arko; Shaw, Justine; Terauds, Aleks; Bergstrom, Dana M.
2013-01-01
Monitoring changes in the distribution and density of plant species often requires accurate and high-resolution baseline maps of those species. Detecting such change at the landscape scale is often problematic, particularly in remote areas. We examine a new technique to improve accuracy and objectivity in mapping vegetation, combining species distribution modelling and satellite image classification on a remote sub-Antarctic island. In this study, we combine spectral data from very high resolution WorldView-2 satellite imagery and terrain variables from a high resolution digital elevation model to improve mapping accuracy, in both pixel- and object-based classifications. Random forest classification was used to explore the effectiveness of these approaches on mapping the distribution of the critically endangered cushion plant Azorella macquariensis Orchard (Apiaceae) on sub-Antarctic Macquarie Island. Both pixel- and object-based classifications of the distribution of Azorella achieved very high overall validation accuracies (91.6–96.3%, κ = 0.849–0.924). Both two-class and three-class classifications were able to accurately and consistently identify the areas where Azorella was absent, indicating that these maps provide a suitable baseline for monitoring expected change in the distribution of the cushion plants. Detecting such change is critical given the threats this species is currently facing under altering environmental conditions. The method presented here has applications to monitoring a range of species, particularly in remote and isolated environments. PMID:23940805
Lee, Chang Min; Park, Sungsoo; Park, Seong-Heum; Jung, Sung Woo; Choe, Jung Wan; Sul, Ji-Young; Jang, You Jin; Mok, Young-Jae; Kim, Jong-Han
2017-04-01
The aim of this study was to investigate the feasibility of sentinel node mapping using a fluorescent dye and visible light in patients with gastric cancer. Recently, fluorescent imaging technology offers improved visibility with the possibility of better sensitivity or accuracy in sentinel node mapping. Twenty patients with early gastric cancer, for whom laparoscopic distal gastrectomy with standard lymphadenectomy had been planned, were enrolled in this study. Before lymphadenectomy, the patients received a gastrofiberoscopic peritumoral injection of fluorescein solution. The sentinel basin was investigated via laparoscopic fluorescent imaging under blue light (wavelength of 440-490 nm) emitted from an LED curing light. The detection rate and lymph node status were analyzed in the enrolled patients. In addition, short-term clinical outcomes were also investigated. No hypersensitivity to the dye was identified in any enrolled patients. Sentinel nodes were detected in 19 of 20 enrolled patients (95.0%), and metastatic lymph nodes were found in 2 patients. The latter lymph nodes belonged to the sentinel basin of each patient. Meanwhile, 1 patient (5.0%) experienced a postoperative complication that was unrelated to sentinel node mapping. No mortality was recorded among enrolled cases. Sentinel node mapping with visible light fluorescence was a feasible method for visualizing sentinel nodes in patients with early gastric cancer. In addition, this method is advantageous in terms of visualizing the concrete relationship between the sentinel nodes and surrounding structures.
Language mapping with verbs and sentences in awake surgery: a review.
Rofes, Adrià; Miceli, Gabriele
2014-06-01
Intraoperative language mapping in awake surgery is typically conducted by asking the patient to produce automatic speech and to name objects. These tasks might not map language with sufficient accuracy, as some linguistic processes can only be triggered by tasks that use verbs and sentences. Verb and sentence processing tasks are currently used during surgery, albeit sparsely. Medline, PubMed, and Web of Science records were searched to retrieve studies focused on language mapping with verbs/sentences in awake surgery. We review the tasks reported in the published literature, spell out the language processes assessed by each task, list the cortical and subcortical regions whose stimulation inhibited language processing, and consider the types of errors elicited by stimulation in each region. We argue that using verb tasks allows a more thorough evaluation of language functions. We also argue that verb tasks are preferable to object naming tasks in the case of frontal lesions, as lesion and neuroimaging data demonstrate that these regions play a critical role in verb and sentence processing. We discuss the clinical value of these tasks and the current limitations of the procedure, and provide some guidelines for their development. Future research should aim toward a differentiated approach to language mapping - one that includes the administration of standardized and customizable tests and the use of longitudinal neurocognitive follow-up studies. Further work will allow researchers and clinicians to understand brain and language correlates and to improve the current surgical practice.
A method for mapping corn using the US Geological Survey 1992 National Land Cover Dataset
Maxwell, S.K.; Nuckols, J.R.; Ward, M.H.
2006-01-01
Long-term exposure to elevated nitrate levels in community drinking water supplies has been associated with an elevated risk of several cancers, including non-Hodgkin's lymphoma, colon cancer, and bladder cancer. To estimate human exposure to nitrate, specific crop type information is needed because fertilizer application rates vary widely by crop type. Corn requires the highest application of nitrogen fertilizer of the crops grown in the Midwest US. We developed a method to refine the US Geological Survey National Land Cover Dataset (NLCD) (including the map and original Landsat images) to distinguish corn from other crops. Overall average agreement between the resulting "corn and other row crops" class and ground reference data was a kappa coefficient of 0.79, with individual Landsat images ranging from 0.46 to 0.93. The highest accuracies occurred in regions where corn was the single dominant crop (greater than 80.0%) and the crop vegetation conditions at the time of image acquisition were optimum for separating corn from all other crops. Factors that resulted in lower accuracies included the accuracy of the NLCD map, the accuracy of corn areal estimates, crop mixture, crop condition at the time of Landsat overpass, and Landsat scene anomalies.
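The kappa coefficient used for this agreement assessment falls directly out of an error matrix. The 2×2 matrix below is a made-up example, not the study's validation data:

```python
import numpy as np

def kappa(cm):
    """Cohen's kappa from a square error matrix (rows = reference, cols = map)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # agreement expected by chance
    return (po - pe) / (1 - pe)

# made-up 2x2 matrix: corn vs. other row crops
k = kappa([[80, 20], [10, 90]])  # → 0.70
```

Unlike simple percent agreement, kappa discounts the agreement that two maps would show by chance alone, which is why it is preferred for crop-map validation.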
Calculation of laser pulse distribution maps for corneal reshaping with a scanning beam
NASA Astrophysics Data System (ADS)
Manns, Fabrice; Shen, Jin-Hui; Soederberg, Per G.; Matsui, Takaaki; Parel, Jean-Marie A.
1995-05-01
A method for calculating pulse distribution maps for scanning laser corneal surgery is presented. The accuracy, the smoothness of the corneal shape, and the duration of surgery were evaluated for corrections of myopia by using computer simulations. The accuracy and the number of pulses were computed as a function of the beam diameter, the diameter of the treatment zone, and the amount of attempted flattening. The ablation is smooth when the spot overlap is 80% or more. The accuracy does not depend on the beam diameter or on the diameter of the ablation zone when the ablation zone is larger than 5 mm. With an overlap of 80% and an ablation zone larger than 5 mm, the error is 5% of the attempted flattening, and 610 pulses are needed per diopter of correction with a beam diameter of 1 mm. Pulse maps for the correction of astigmatism were computed and evaluated. The simulations show that with 60% overlap, a beam diameter of 1 mm, and a 5 mm treatment zone, 6 D of astigmatism can be corrected with an accuracy better than 1.8 D. This study shows that smooth and accurate ablations can be produced with a scanning spot.
Shafizadeh-Moghadam, Hossein; Tayyebi, Amin; Helbich, Marco
2017-06-01
Transition index maps (TIMs) are key products in urban growth simulation models; however, how best to operationalize them remains contested. Our aim was to compare the prediction accuracy of three TIM-based spatially explicit land cover change (LCC) models in the mega city of Mumbai, India. These LCC models include two data-driven approaches, namely artificial neural networks (ANNs) and weight of evidence (WOE), and one knowledge-based approach which integrates an analytical hierarchical process with fuzzy membership functions (FAHP). Using the relative operating characteristic (ROC), the performance of these three LCC models was evaluated. The results showed 85%, 75%, and 73% accuracy for the ANN, FAHP, and WOE models, respectively. The ANN was clearly superior to the other LCC models when simulating urban growth for the year 2010; hence, ANN was used to predict urban growth for 2020 and 2030. Projected urban growth maps were assessed using statistical measures, including figure of merit, average spatial distance deviation, producer accuracy, and overall accuracy. Based on our findings, we recommend ANNs as an accurate method for simulating future patterns of urban growth.
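ROC-based comparison of LCC models reduces to scoring each cell's transition index against the observed change map. A sketch with purely synthetic scores (all values below are assumptions, chosen only to show one informative and one weak model):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 1000)  # observed change (1) vs. no change (0) per cell

# hypothetical transition-index scores from two models, one more informative
score_strong = y + rng.normal(0.0, 0.6, 1000)
score_weak = y + rng.normal(0.0, 2.0, 1000)

auc_strong = roc_auc_score(y, score_strong)  # area under the ROC curve
auc_weak = roc_auc_score(y, score_weak)
```

The AUC summarizes how well the index ranks changed cells above unchanged ones across all thresholds, which is why it suits continuous TIM outputs.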
Map-based trigonometric parallaxes of open clusters - The Pleiades
NASA Technical Reports Server (NTRS)
Gatewood, George; Castelaz, Michael; Han, Inwoo; Persinger, Timothy; Stein, John
1990-01-01
The multichannel astrometric photometer and Thaw refractor of the University of Pittsburgh's Allegheny Observatory have been used to determine the trigonometric parallax of the Pleiades star cluster. The distance determined, 150 parsecs with a standard error of 18 parsecs, places the cluster slightly farther away than generally accepted. This suggests that the basis of many estimations of the cosmic distance scale is approximately 20 percent short. The accuracy of the determination is limited by the number and choice of reference stars. With careful attention to the selection of reference stars in several Pleiades regions, it should be possible to examine differences in the photometric and trigonometric distance moduli at a precision of 0.1 magnitudes.
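The arithmetic behind the distance-scale claim is compact: distance in parsecs is the reciprocal of the parallax in arcseconds, and a 20% distance revision shifts any distance modulus built on it by 5 log₁₀(1.2) ≈ 0.4 mag. The specific numbers below are illustrative:

```python
import math

def parallax_to_distance_pc(p_mas):
    """Distance in parsecs from a trigonometric parallax in milliarcseconds."""
    return 1000.0 / p_mas

d = parallax_to_distance_pc(6.67)  # ~150 pc, the distance reported here

# if earlier estimates (~125 pc) were ~20% short, any distance modulus
# built on them shifts by about 0.4 mag:
delta_mod = 5 * math.log10(150.0 / 125.0)
```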
Life on the Edge of Chaos: Orbital Mechanics and Symplectic Integration
NASA Astrophysics Data System (ADS)
Newman, William I.; Hyman, James M.
1998-09-01
Symplectic mapping techniques have become very popular among celestial mechanicians and molecular dynamicists. The word "symplectic" was coined by Hermann Weyl (1939), exploiting the Greek root for a word meaning "complex," to describe a Lie group with special geometric properties. A symplectic integration method is one whose time-derivative satisfies Hamilton's equations of motion (Goldstein, 1980). When due care is paid to the standard computational triad of consistency, accuracy, and stability, a numerical method that is also symplectic offers some potential advantages. Varadarajan (1974) at UCLA was the first to formally explore, for a very restrictive class of problems, the geometric implications of symplectic splittings through the use of Lie series and group representations. Over the years, however, a "mythology" has emerged regarding the nature of symplectic mappings and what features are preserved. Some of these myths have already been shattered by the computational mathematics community. These results, together with new ones we present here for the first time, show where important pitfalls and misconceptions reside. These misconceptions include that: (a) symplectic maps preserve conserved quantities like the energy; (b) symplectic maps are equivalent to the exact computation of the trajectory of a nearby, time-independent Hamiltonian; (c) complicated splitting methods (i.e., "maps in composition") are not symplectic; (d) symplectic maps preserve the geometry associated with separatrices and homoclinic points; and (e) symplectic maps possess artificial resonances at triple and quadruple frequencies. We verify, nevertheless, that using symplectic methods together with traditional safeguards, e.g. convergence and scaling checks using reduced step sizes for integration schemes of sufficient order, can provide an important exploratory and development tool for Solar System applications.
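As a concrete illustration of misconception (a) — symplectic maps keep the energy error bounded but do not conserve energy exactly — here is the standard kick-drift-kick (Störmer-Verlet) map applied to a harmonic oscillator; the step size and potential are arbitrary choices for the sketch:

```python
import numpy as np

def leapfrog(q, p, dt, steps, grad_V):
    """Kick-drift-kick (Stormer-Verlet) map for H(q, p) = p^2/2 + V(q):
    second-order accurate and symplectic (unit mass assumed)."""
    for _ in range(steps):
        p = p - 0.5 * dt * grad_V(q)  # half kick
        q = q + dt * p                # drift
        p = p - 0.5 * dt * grad_V(q)  # half kick
    return q, p

# harmonic oscillator V(q) = q^2/2: over 10,000 steps the energy error stays
# bounded (it oscillates at O(dt^2)), but the energy is not conserved exactly
q0, p0 = 1.0, 0.0
E0 = 0.5 * (p0 ** 2 + q0 ** 2)
q1, p1 = leapfrog(q0, p0, 0.1, 10_000, lambda q: q)
E1 = 0.5 * (p1 ** 2 + q1 ** 2)
```

A non-symplectic method such as explicit Euler would instead show secular energy growth over the same interval, which is the practical motivation for symplectic splittings in long Solar System integrations.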
Improved methods for multi-trait fine mapping of pleiotropic risk loci.
Kichaev, Gleb; Roytman, Megan; Johnson, Ruth; Eskin, Eleazar; Lindström, Sara; Kraft, Peter; Pasaniuc, Bogdan
2017-01-15
Genome-wide association studies (GWAS) have identified thousands of regions in the genome that contain genetic variants that increase risk for complex traits and diseases. However, the variants uncovered in GWAS are typically not biologically causal, but rather, correlated to the true causal variant through linkage disequilibrium (LD). To discern the true causal variant(s), a variety of statistical fine-mapping methods have been proposed to prioritize variants for functional validation. In this work we introduce a new approach, fastPAINTOR, that leverages evidence across correlated traits, as well as functional annotation data, to improve fine-mapping accuracy at pleiotropic risk loci. To improve computational efficiency, we describe a new importance sampling scheme to perform model inference. First, we demonstrate in simulations that by leveraging functional annotation data, fastPAINTOR increases fine-mapping resolution relative to existing methods. Next, we show that jointly modeling pleiotropic risk regions improves fine-mapping resolution compared to standard single trait and pleiotropic fine mapping strategies. We report a reduction in the number of SNPs required for follow-up in order to capture 90% of the causal variants from 23 SNPs per locus using a single trait to 12 SNPs when fine-mapping two traits simultaneously. Finally, we analyze summary association data from a large-scale GWAS of lipids and show that these improvements are largely sustained in real data. The fastPAINTOR framework is implemented in the PAINTOR v3.0 package, which is publicly available to the research community at http://bogdan.bioinformatics.ucla.edu/software/paintor Contact: gkichaev@ucla.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
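The "from 23 SNPs per locus to 12" statistic refers to the size of a 90% credible set. A minimal sketch of how a ρ-level credible set is formed from per-SNP posterior causal probabilities (the posterior values below are made up for illustration):

```python
import numpy as np

def credible_set(posteriors, rho=0.9):
    """Indices of the smallest set of SNPs whose posterior causal
    probabilities sum to at least rho (a standard rho-level credible set)."""
    post = np.asarray(posteriors, dtype=float)
    order = np.argsort(post)[::-1]  # rank SNPs by posterior, descending
    k = int(np.searchsorted(np.cumsum(post[order]), rho)) + 1
    return order[:k]

# made-up per-SNP posteriors at one locus
cs = credible_set([0.55, 0.25, 0.08, 0.05, 0.04, 0.03])  # 4 SNPs cover 90%
```

Sharper posteriors (e.g., from adding functional annotations or a second trait) concentrate mass on fewer SNPs and so shrink the set that must be followed up experimentally.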
NASA Astrophysics Data System (ADS)
Sokolova, N.; Morrison, A.; Haakonsen, T. A.
2015-04-01
Recent advancement of land-based mobile mapping enables rapid and cost-effective collection of high-quality road-related spatial information. Mobile Mapping Systems (MMS) can provide spatial information with subdecimeter accuracy in nominal operation environments. However, performance in challenging environments such as tunnels is not well characterized. The Norwegian Public Roads Administration (NPRA) manages the country's public road network and its infrastructure, a large segment of which is represented by road tunnels (there are about 1,000 road tunnels in Norway with a combined length of 800 km). In order to adopt mobile mapping technology for streamlining road network and infrastructure management and maintenance tasks, it is important to ensure that the technology is mature enough to meet existing requirements for object positioning accuracy in all types of environments, and to provide homogeneous accuracy over the mapping perimeter. This paper presents results of a testing campaign performed within a project funded by the NPRA as a part of the SMarter road traffic with Intelligent Transport Systems (ITS) (SMITS) program. The testing campaign objective was performance evaluation of high-end commercial MMSs for inventory of public areas, focusing on Global Navigation Satellite System (GNSS) signal-degraded environments.
An automated approach for mapping persistent ice and snow cover over high latitude regions
Selkowitz, David J.; Forster, Richard R.
2016-01-01
We developed an automated approach for mapping persistent ice and snow cover (glaciers and perennial snowfields) from Landsat TM and ETM+ data across a variety of topography, glacier types, and climatic conditions at high latitudes (above ~65°N). Our approach exploits all available Landsat scenes acquired during the late summer (1 August–15 September) over a multi-year period and employs an automated cloud masking algorithm optimized for snow and ice covered mountainous environments. Pixels from individual Landsat scenes were classified as snow/ice covered or snow/ice free based on the Normalized Difference Snow Index (NDSI), and pixels consistently identified as snow/ice covered over a five-year period were classified as persistent ice and snow cover. The same NDSI and ratio of snow/ice-covered days to total days thresholds applied consistently across eight study regions resulted in persistent ice and snow cover maps that agreed closely in most areas with glacier area mapped for the Randolph Glacier Inventory (RGI), with a mean accuracy (agreement with the RGI) of 0.96, a mean precision (user’s accuracy of the snow/ice cover class) of 0.92, a mean recall (producer’s accuracy of the snow/ice cover class) of 0.86, and a mean F-score (a measure that considers both precision and recall) of 0.88. We also compared results from our approach to glacier area mapped from high spatial resolution imagery at four study regions and found similar results. Accuracy was lowest in regions with substantial areas of debris-covered glacier ice, suggesting that manual editing would still be required in these regions to achieve reasonable results. The similarity of our results to those from the RGI as well as glacier area mapped from high spatial resolution imagery suggests it should be possible to apply this approach across large regions to produce updated 30-m resolution maps of persistent ice and snow cover. 
In the short term, automated persistent ice and snow cover (PISC) maps can be used to rapidly identify areas where substantial changes in glacier area have occurred since the most recent conventional glacier inventories, highlighting areas where updated inventories are most urgently needed. From a longer-term perspective, the automated production of PISC maps represents an important step toward fully automated glacier extent monitoring using Landsat or similar sensors.
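The per-pixel rule described above — threshold NDSI in each scene, then require snow/ice in a high fraction of scenes — can be sketched as follows. The 0.4 NDSI threshold and 0.95 persistence ratio are common illustrative values, not necessarily the ones used in the study:

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index from green and shortwave-IR reflectance."""
    return (green - swir) / (green + swir)

def persistent_snow_ice(ndsi_stack, ndsi_thresh=0.4, ratio_thresh=0.95):
    """Flag pixels classified as snow/ice in at least `ratio_thresh` of the
    scenes in a (scenes, rows, cols) NDSI stack. Thresholds are illustrative
    assumptions, not the study's calibrated values."""
    snow = ndsi_stack > ndsi_thresh       # per-scene snow/ice classification
    return snow.mean(axis=0) >= ratio_thresh
```

Using the ratio of snow/ice-covered observations to total valid observations, rather than a single date, is what separates perennial cover from late-lying seasonal snow.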
NASA Technical Reports Server (NTRS)
Xiong, Jun; Thenkabail, Prasad S.; Tilton, James C.; Gumma, Murali K.; Teluguntla, Pardhasaradhi; Oliphant, Adam; Congalton, Russell G.; Yadav, Kamini; Gorelick, Noel
2017-01-01
A satellite-derived cropland extent map at high spatial resolution (30-m or better) is a must for food and water security analysis. Precise and accurate global cropland extent maps, indicating cropland and non-cropland areas, is a starting point to develop high-level products such as crop watering methods (irrigated or rainfed), cropping intensities (e.g., single, double, or continuous cropping), crop types, cropland fallows, as well as assessment of cropland productivity (productivity per unit of land), and crop water productivity (productivity per unit of water). Uncertainties associated with the cropland extent map have cascading effects on all higher-level cropland products. However, precise and accurate cropland extent maps at high spatial resolution over large areas (e.g., continents or the globe) are challenging to produce due to the small-holder dominant agricultural systems like those found in most of Africa and Asia. Cloud-based Geospatial computing platforms and multi-date, multi-sensor satellite image inventories on Google Earth Engine offer opportunities for mapping croplands with precision and accuracy over large areas that satisfy the requirements of broad range of applications. Such maps are expected to provide highly significant improvements compared to existing products, which tend to be coarser in resolution, and often fail to capture fragmented small-holder farms especially in regions with high dynamic change within and across years. To overcome these limitations, in this research we present an approach for cropland extent mapping at high spatial resolution (30-m or better) using the 10-day, 10 to 20-m, Sentinel-2 data in combination with 16-day, 30-m, Landsat-8 data on Google Earth Engine (GEE). First, nominal 30-m resolution satellite imagery composites were created from 36,924 scenes of Sentinel-2 and Landsat-8 images for the entire African continent in 2015-2016. 
These composites were generated using a median-mosaic of five bands (blue, green, red, near-infrared, NDVI) during each of the two periods (period 1: January-June 2016 and period 2: July-December 2015) plus a 30-m slope layer derived from the Shuttle Radar Topographic Mission (SRTM) elevation dataset. Second, we selected Cropland/Non-cropland training samples (sample size 9791) from various sources in GEE to create pixel-based classifications. Random Forest (RF) was used as the primary supervised classifier because of its efficiency; where RF over-fitted due to noise in the input training data, a Support Vector Machine (SVM) was applied to compensate in specific areas. Third, the Recursive Hierarchical Segmentation (RHSeg) algorithm was employed to generate an object-oriented segmentation layer based on spectral and spatial properties from the same input data. This layer was merged with the pixel-based classification to improve segmentation accuracy. Accuracies of the merged 30-m crop extent product were computed using an error matrix approach in which 1754 independent validation samples were used. In addition, a comparison was performed with other available cropland maps as well as with LULC maps to show spatial similarity. Finally, the cropland area results derived from the map were compared with UN FAO statistics. The independent accuracy assessment showed a weighted overall accuracy of 94%, with a producer's accuracy of 85.9% (omission error of 14.1%) and a user's accuracy of 68.5% (commission error of 31.5%) for the cropland class. The total net cropland area (TNCA) of Africa was estimated as 313 Mha for the nominal year 2015.
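Producer's and user's accuracies of the kind reported here fall directly out of the error matrix. The numbers below form a hypothetical 2×2 example chosen only to mimic the reported magnitudes, not the actual validation counts:

```python
import numpy as np

def class_accuracies(cm, cls):
    """Producer's and user's accuracy for class `cls` from an error matrix
    (rows = reference, cols = map)."""
    cm = np.asarray(cm, dtype=float)
    producers = cm[cls, cls] / cm[cls, :].sum()  # 1 - omission error
    users = cm[cls, cls] / cm[:, cls].sum()      # 1 - commission error
    return producers, users

# hypothetical matrix: cropland (row/col 0) vs. non-cropland, chosen only to
# mimic the magnitudes reported above
pa, ua = class_accuracies([[86, 14], [40, 860]], 0)
```

Producer's accuracy asks "of the reference cropland, how much did the map find?", while user's accuracy asks "of the mapped cropland, how much is really cropland?" — the gap between the two is exactly the omission/commission asymmetry reported for this product.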
Caggiano, Michael D; Tinkham, Wade T; Hoffman, Chad; Cheng, Antony S; Hawbaker, Todd J
2016-10-01
The wildland-urban interface (WUI), the area where human development encroaches on undeveloped land, is expanding throughout the western United States, resulting in increased wildfire risk to homes and communities. Although census-based mapping efforts have provided insights into the pattern of development and expansion of the WUI at regional and national scales, these approaches do not provide sufficient detail for fine-scale fire and emergency management planning, which requires maps of individual building locations. Although fine-scale maps of the WUI have been developed, they are often limited in their spatial extent, have unknown accuracies and biases, and are costly to update over time. In this paper we assess a semi-automated Object Based Image Analysis (OBIA) approach that utilizes 4-band multispectral National Aerial Image Program (NAIP) imagery for the detection of individual buildings within the WUI. We evaluate this approach by comparing the accuracy and overall quality of extracted buildings to a building footprint control dataset. In addition, we assessed the effects of buffer distance, topographic conditions, and building characteristics on the accuracy and quality of building extraction. The overall accuracy and quality of our approach were positively related to buffer distance, with accuracies ranging from 50 to 95% for buffer distances from 0 to 100 m. Our results also indicate that building detection was sensitive to building size, with smaller outbuildings (footprints less than 75 m²) having detection rates below 80% and larger residential buildings having detection rates above 90%. These findings demonstrate that this approach can successfully identify buildings in the WUI in diverse landscapes while achieving high accuracies at buffer distances appropriate for most fire management applications, and it overcomes cost and time constraints associated with traditional approaches.
This study is unique in that it evaluates the ability of an OBIA approach to extract highly detailed data on building locations in a WUI setting.
Melissa A. Thomas-Van Gundy
2014-01-01
LANDFIRE maps of fire regime groups are frequently used by land managers to help plan and execute prescribed burns for ecosystem restoration. Since LANDFIRE maps are generally applicable at coarse scales, questions often arise regarding their utility and accuracy. Here, the two recently published products from West Virginia, a rule-based and a witness tree-based model...
Evaluation criteria for software classification inventories, accuracies, and maps
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.
1976-01-01
Statistical criteria are presented for modifying the contingency table used to evaluate tabular classification results obtained from remote sensing and ground truth maps. The modified table retains information on the spatial complexity of the test site, on the relative location of classification errors, and on agreement of the classification maps with ground truth maps, while reducing back to the original information normally found in a contingency table.
Application of terrestrial laser scanning to the development and updating of the base map
NASA Astrophysics Data System (ADS)
Klapa, Przemysław; Mitka, Bartosz
2017-06-01
The base map provides basic information about land to individuals, companies, developers, design engineers, organizations, and government agencies. Its contents include spatial location data for control network points, buildings, land lots, infrastructure facilities, and topographic features. As the primary map of the country, it must be developed in accordance with specific laws and regulations and be continuously updated. The base map is a data source used for the development and updating of derivative maps and other large-scale cartographic materials such as thematic or topographic maps. Thanks to the advancement of science and technology, the quality of land surveys carried out by means of terrestrial laser scanning (TLS) matches that of traditional surveying methods in many respects. This paper discusses the potential application of output data from laser scanners (point clouds) to the development and updating of cartographic materials, taking Poland's base map as an example. A few research sites were chosen to present the method and the process of conducting a TLS land survey: a fragment of a residential area, a street, the surroundings of buildings, and an undeveloped area. The entire map that was drawn as a result of the survey was checked by comparing it to a map obtained from PODGiK (pol. Powiatowy Ośrodek Dokumentacji Geodezyjnej i Kartograficznej - Regional Centre for Geodetic and Cartographic Records) and by conducting a field inspection. An accuracy and quality analysis of the conducted fieldwork and deskwork yielded very good results, which provide solid grounds for concluding that cartographic materials based on a TLS point cloud are a reliable source of information about land. The contents of the map that had been created with the use of the obtained point cloud were very accurately located in space (x, y, z). The conducted accuracy analysis and the inspection of the performed works showed that TLS surveys are characterized by high quality.
The accuracy of determining the location of the various map contents has been estimated at 0.02-0.03 m. The map was developed in conformity with the applicable laws and regulations as well as with best practice requirements.
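The reported 0.02-0.03 m positional accuracy corresponds to a horizontal RMSE over check points; a minimal sketch of that computation, with hypothetical TLS and reference coordinates (not values from the survey):

```python
import numpy as np

# Hypothetical coordinates (metres): TLS-derived map points vs. reference
# points from the PODGiK map for the same surveyed details.
tls = np.array([[100.012, 200.008], [150.021, 250.017], [175.005, 300.011]])
ref = np.array([[100.000, 200.000], [150.000, 250.000], [175.000, 300.000]])

# Per-point horizontal position error and the overall RMSE.
errors = np.linalg.norm(tls - ref, axis=1)
rmse = np.sqrt(np.mean(errors ** 2))
print(f"RMSE = {rmse:.3f} m")  # prints: RMSE = 0.019 m
```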
NASA Astrophysics Data System (ADS)
Hafizt, M.; Manessa, M. D. M.; Adi, N. S.; Prayudha, B.
2017-12-01
Benthic habitat mapping using satellite data is a challenging task for practitioners and academics, as benthic objects are covered by a light-attenuating water column that obscures object discrimination. One common method to reduce this water-column effect is to use a depth-invariant index (DII) image. However, applying the correction in shallow coastal areas is challenging, as a dark object such as seagrass can have a very low pixel value, preventing its reliable identification and classification. This limitation can be addressed by applying the classification process separately to areas with different water depth levels. The water depth level can be extracted from satellite imagery using the Relative Water Depth Index (RWDI). This study proposed a new approach to improve mapping accuracy, particularly for dark benthic objects, by combining the DII of Lyzenga's water-column correction method and the RWDI of Stumpf's method. The research was conducted on Lintea Island, which has a high variation of benthic cover, using Sentinel-2A imagery. To assess the effectiveness of the proposed approach for benthic habitat mapping, two different classification procedures were implemented. The first is the commonly applied method in benthic habitat mapping, in which the DII image is used as input data for the entire coastal area regardless of depth variation. The second is the proposed new approach, which begins by separating the study area into shallow and deep waters using the RWDI image. The shallow area was then classified using the sunglint-corrected image as input data, and the deep area was classified using the DII image. The final classification maps of the two areas were merged into a single benthic habitat map. A confusion matrix was then applied to evaluate the mapping accuracy of the final map.
The result shows that the proposed mapping approach can be used to map all benthic objects across all depth ranges and achieves better accuracy than the classification map produced using only the DII.
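Lyzenga's depth-invariant index for one band pair can be sketched as below. The attenuation-ratio estimator follows the standard covariance formulation; the inputs are assumed to be already deep-water corrected, and the synthetic bottom samples are illustrative, not data from the study:

```python
import numpy as np

def depth_invariant_index(band_i, band_j, uniform_i, uniform_j):
    """Lyzenga depth-invariant index for a band pair (sketch).

    band_i, band_j       : deep-water-corrected radiances of the pixels to map
    uniform_i, uniform_j : radiance samples over a uniform bottom (e.g. sand)
                           at varying depth, used to estimate the attenuation
                           coefficient ratio k_i / k_j
    """
    xi, xj = np.log(uniform_i), np.log(uniform_j)
    C = np.cov(xi, xj)
    a = (C[0, 0] - C[1, 1]) / (2 * C[0, 1])
    ratio = a + np.sqrt(a ** 2 + 1)          # k_i / k_j
    # DII = ln(L_i) - (k_i / k_j) * ln(L_j): constant for a given bottom type
    # regardless of depth, which is what removes the water-column effect.
    return np.log(band_i) - ratio * np.log(band_j)
```

For pixels obeying an exponential attenuation model over the same bottom, the index is constant with depth, which is the property the classification relies on.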
Zhang, Geli; Xiao, Xiangming; Dong, Jinwei; Kou, Weili; Jin, Cui; Qin, Yuanwei; Zhou, Yuting; Wang, Jie; Menarguez, Michael Angelo; Biradar, Chandrashekhar
2016-01-01
Knowledge of the area and spatial distribution of paddy rice is important for assessment of food security, management of water resources, and estimation of greenhouse gas (methane) emissions. Paddy rice agriculture has expanded rapidly in northeastern China in the last decade, but there are no updated maps of paddy rice fields in the region. Existing algorithms for identifying paddy rice fields are based on the unique physical features of paddy rice during the flooding and transplanting phases and use vegetation indices that are sensitive to the dynamics of the canopy and surface water content. However, flooding phenomena in high-latitude areas could also result from spring snowmelt. We used land surface temperature (LST) data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor to determine the temporal window of flooding and rice transplantation over a year to improve the existing phenology-based approach. Other land cover types (e.g., evergreen vegetation, permanent water bodies, and sparse vegetation) with potential influences on paddy rice identification were removed (masked out) due to their different temporal profiles. The accuracy assessment using high-resolution images showed that the resultant MODIS-derived paddy rice map of northeastern China in 2010 had a high accuracy (producer and user accuracies of 92% and 96%, respectively). The MODIS-based map also had a comparable accuracy to the 2010 Landsat-based National Land Cover Dataset (NLCD) of China in terms of both area and spatial pattern. This study demonstrated that our improved algorithm, using both thermal and optical MODIS data, provides a robust, simple, and automated approach to identify and map paddy rice fields in temperate and cold temperate zones, the northern frontier of rice planting. PMID:27667901
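The core of the phenology-based approach, a flooding/transplanting signal screened by land surface temperature to exclude snowmelt, can be sketched as follows; the LSWI-EVI offset and the temperature threshold below are illustrative values, not the thresholds used in the paper:

```python
import numpy as np

def flooding_signal(lswi, evi, lst_celsius, lst_threshold=5.0):
    """Flag potential paddy flooding/transplanting observations (sketch).

    A pixel-date is flagged when surface water dominates the signal
    (LSWI + 0.05 >= EVI, a common phenology-based rule) AND the MODIS
    land surface temperature indicates the growing season has started,
    which screens out spring snowmelt flooding at high latitudes.
    The 0.05 offset and the 5 degC threshold are illustrative only.
    """
    flooded = lswi + 0.05 >= evi
    warm = lst_celsius >= lst_threshold
    return flooded & warm
```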
Mapping the Daily Progression of Large Wildland Fires Using MODIS Active Fire Data
NASA Technical Reports Server (NTRS)
Veraverbeke, Sander; Sedano, Fernando; Hook, Simon J.; Randerson, James T.; Jin, Yufang; Rogers, Brendan
2013-01-01
High temporal resolution information on burned area is a prerequisite for incorporating bottom-up estimates of wildland fire emissions in regional air transport models and for improving models of fire behavior. We used the Moderate Resolution Imaging Spectroradiometer (MODIS) active fire product (MO(Y)D14) as input to a kriging interpolation to derive continuous maps of the evolution of nine large wildland fires. For each fire, local input parameters for the kriging model were defined using variogram analysis. The accuracy of the kriging model was assessed using high resolution daily fire perimeter data available from the U.S. Forest Service. We also assessed the temporal reporting accuracy of the MODIS burned area products (MCD45A1 and MCD64A1). Averaged over the nine fires, the kriging method correctly mapped 73% of the pixels within the accuracy of a single day, compared to 33% for MCD45A1 and 53% for MCD64A1.
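The interpolation step can be sketched with a minimal ordinary-kriging estimator; the exponential variogram and its parameters below are illustrative stand-ins for the per-fire values the authors derived from variogram analysis:

```python
import numpy as np

def ordinary_kriging(xy, values, query, sill=1.0, vrange=2.0, nugget=0.01):
    """Minimal ordinary-kriging interpolator with an exponential variogram.

    xy     : (n, 2) coordinates of MODIS active-fire detections
    values : (n,) detection day for each point
    query  : (m, 2) grid locations at which to estimate the day of burning
    """
    def gamma(h):  # exponential variogram model (illustrative parameters)
        return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / vrange))

    n = len(xy)
    # Ordinary-kriging system: variogram matrix plus a Lagrange row/column
    # that enforces the unbiasedness constraint (weights sum to 1).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1))
    A[n, n] = 0.0

    out = np.empty(len(query))
    for k, q in enumerate(query):
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy - q, axis=1))
        w = np.linalg.solve(A, b)[:n]   # kriging weights
        out[k] = w @ values
    return out
```

The estimator honors the detections exactly and produces a continuous day-of-burning surface between them, which is the role kriging plays in the fire-progression mapping described above.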
Classification of urban features using airborne hyperspectral data
NASA Astrophysics Data System (ADS)
Ganesh Babu, Bharath
Accurate mapping and modeling of urban environments are critical for their efficient and successful management. Superior understanding of complex urban environments is made possible by using modern geospatial technologies. This research focuses on thematic classification of urban land use and land cover (LULC) using 248 bands of 2.0 meter resolution hyperspectral data acquired from an airborne imaging spectrometer (AISA+) on 24 July 2006 in and near Terre Haute, Indiana. Three distinct study areas comprising two commercial classes, two residential classes, and two urban parks/recreational classes were selected for classification and analysis. Four commonly used classification methods -- maximum likelihood (ML), extraction and classification of homogeneous objects (ECHO), spectral angle mapper (SAM), and iterative self-organizing data analysis (ISODATA) -- were applied to each data set. Accuracy assessment was conducted and overall accuracies were compared between the twenty-four resulting thematic maps. With the exception of SAM and ISODATA in a complex commercial area, all methods employed classified the designated urban features with more than 80% accuracy. The thematic classification from ECHO showed the best agreement with ground reference samples. The residential area, with relatively homogeneous composition, was classified consistently with the highest accuracy by all four classification methods; the average accuracy amongst the classifiers was 93.60% for this area. When individually observed, the complex recreational area (Deming Park) was classified with the highest accuracy by ECHO, at 96.80% accuracy with a 96.10% Kappa; the average accuracy amongst all the classifiers was 92.07%. The commercial area with relatively high complexity was classified with the least accuracy by all classifiers; the lowest accuracy, 63.90% with a 59.20% Kappa, was achieved by SAM and was the lowest in the entire analysis.
This study demonstrates the potential for using the visible and near infrared (VNIR) bands from AISA+ hyperspectral data in urban LULC classification. Based on their performance, the need for further research using ECHO and SAM is underscored. The importance of incorporating imaging spectrometer data in high resolution urban feature mapping is emphasized.
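Of the four classifiers, SAM is the simplest to sketch: each pixel spectrum is assigned to the reference spectrum with the smallest spectral angle, which makes the rule insensitive to overall brightness. A minimal, illustrative implementation (the two-band spectra used below are hypothetical, not AISA+ data):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    cos = pixel @ reference / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(image, endmembers):
    """Assign each pixel spectrum to the endmember with the smallest angle.

    image      : (n_pixels, n_bands) array of spectra
    endmembers : (n_classes, n_bands) reference spectra (e.g. class means)
    """
    angles = np.array([[spectral_angle(p, e) for e in endmembers] for p in image])
    return np.argmin(angles, axis=1)
```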
Zhou, Tao; Li, Zhaofu; Pan, Jianjun
2018-01-27
This paper focuses on evaluating the ability and contribution of backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification, and on comparing different multi-sensor land cover mapping methods to improve classification accuracy. Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size for the combination of all texture features was 9 × 9, and the optimal window size differed for each individual texture feature. Of the four feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy of up to 91.55% and a kappa coefficient of up to 0.8935. Among all combinations of Sentinel-1A-derived features, the combination of all four feature types gave the best classification result. Multi-sensor urban land cover mapping achieved higher classification accuracy: the combination of Sentinel-1A and Hyperion data yielded higher accuracy than the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient of up to 0.9889. When Sentinel-1A data were added to Hyperion images, the overall accuracy and kappa coefficient increased by 4.01% and 0.0519, respectively.
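The overall accuracy and kappa coefficient reported here are both derived from a confusion matrix. A minimal sketch of that computation, with hypothetical class labels:

```python
import numpy as np

def confusion_metrics(reference, predicted, n_classes):
    """Overall accuracy and kappa coefficient from a confusion matrix (sketch).

    reference, predicted : integer class labels per sample
    """
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for r, p in zip(reference, predicted):
        cm[r, p] += 1
    total = cm.sum()
    po = np.trace(cm) / total                            # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / total ** 2  # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa
```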
Fractional Snow Cover Mapping by Artificial Neural Networks and Support Vector Machines
NASA Astrophysics Data System (ADS)
Çiftçi, B. B.; Kuter, S.; Akyürek, Z.; Weber, G.-W.
2017-11-01
Snow is an important land cover whose distribution over space and time plays a significant role in various environmental processes. Hence, snow cover mapping with high accuracy is necessary for a real understanding of present and future climate, the water cycle, and ecological changes. This study aims to investigate and compare the design and use of artificial neural network (ANN) and support vector machine (SVM) algorithms for fractional snow cover (FSC) mapping from satellite data. ANN and SVM models with different model-building settings are trained using Moderate Resolution Imaging Spectroradiometer surface reflectance values of bands 1-7, the normalized difference snow index, and the normalized difference vegetation index as predictor variables. Reference FSC maps are generated from higher spatial resolution Landsat ETM+ binary snow cover maps. Results on the independent test data set indicate that the developed ANN model with a hyperbolic tangent transfer function in the output layer and the SVM model with a radial basis function kernel produce high FSC mapping accuracies, with corresponding values of R = 0.93 and R = 0.92, respectively.
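The reference-FSC step, deriving fractions from binary fine-resolution snow maps, can be sketched as a block aggregation; the block size below is an illustrative assumption, not the actual ETM+/MODIS scale ratio:

```python
import numpy as np

def binary_to_fsc(binary_map, block):
    """Aggregate a fine-resolution binary snow map into fractional snow cover.

    Each block x block window of binary (0/1) pixels becomes one coarse
    cell whose value is the snow fraction, mimicking how reference FSC
    is derived from Landsat ETM+ binary maps at the coarse-sensor scale.
    """
    h, w = binary_map.shape
    assert h % block == 0 and w % block == 0, "map must tile evenly into blocks"
    blocks = binary_map.reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))
```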
NASA Technical Reports Server (NTRS)
Sekhon, R.
1981-01-01
Digital SEASAT-1 synthetic aperture radar (SAR) data were used to enhance linear features to extract geologically significant lineaments in the Appalachian region. Comparison of lineaments thus mapped with an existing lineament map based on LANDSAT MSS images shows that appropriately processed SEASAT-1 SAR data can significantly improve the detection of lineaments. Merged MSS and SAR data sets were more useful for lineament detection and land cover classification than LANDSAT or SEASAT data alone. About 20 percent of the lineaments plotted from the SEASAT SAR image did not appear on the LANDSAT image. About 6 percent of minor lineaments or parts of lineaments present in the LANDSAT map were missing from the SEASAT map. Improvement in the land cover classification (acreage and spatial estimation accuracy) was attained by using MSS-SAR merged data. The areal estimation of residential/built-up and forest categories was improved. Accuracy in estimating the agricultural and water categories was slightly reduced.
Road Extraction from AVIRIS Using Spectral Mixture and Q-Tree Filter Techniques
NASA Technical Reports Server (NTRS)
Gardner, Margaret E.; Roberts, Dar A.; Funk, Chris; Noronha, Val
2001-01-01
Accurate road location and condition information are of primary importance in road infrastructure management. Additionally, spatially accurate and up-to-date road networks are essential in ambulance and rescue dispatch in emergency situations. However, accurate road infrastructure databases do not exist for vast areas, particularly in areas with rapid expansion. Currently, the US Department of Transportation (USDOT) extends great effort in field Global Positioning System (GPS) mapping and condition assessment to meet these informational needs. This methodology, though effective, is both time-consuming and costly, because every road within a DOT's jurisdiction must be field-visited to obtain accurate information. Therefore, the USDOT is interested in identifying new technologies that could help meet road infrastructure informational needs more effectively. Remote sensing provides one means by which large areas may be mapped with a high standard of accuracy and is a technology with great potential in infrastructure mapping. The goal of our research is to develop accurate road extraction techniques using high spatial resolution, fine spectral resolution imagery. Additionally, our research will explore the use of hyperspectral data in assessing road quality. Finally, this research aims to define the spatial and spectral requirements for remote sensing data to be used successfully for road feature extraction and road quality mapping. Our findings will facilitate the USDOT in assessing remote sensing as a new resource in infrastructure studies.
NASA Astrophysics Data System (ADS)
Mayr, W.
2011-09-01
This paper reports on first-hand experiences in operating an unmanned airborne system (UAS) for mapping purposes in the environment of a mapping company. Recently, a multitude of UAV activities has become visible, and there is growing interest in the commercial, industrial, and academic mapping user communities, and not only in those. As an introduction, the major components of a UAS are identified. The paper focuses on a 1.1 kg UAV that has been integrated into a UAS and applied on a day-to-day basis in standard aerial imaging tasks for more than two years. We present the unmanned airborne vehicle in some detail, as well as the overall system components such as the autopilot, ground station, flight mission planning and control, and first-level image processing. The paper continues by reporting on experiences gained with the constraints such a system needs to fulfill. Further on, operational aspects, with emphasis on unattended flight mission mode, are presented. Various examples show the applicability of UAS to geospatial tasks, proving that UAS can reliably deliver, e.g., orthomosaics, digital surface models, and more. Some remarks on achieved accuracies give an idea of the obtainable quality. A discussion of safety features sheds some light on important matters when entering unmanned flying activities and rounds out this paper. Conclusions summarize the state of the art of an operational UAS from the author's point of view.
Multiscale reconstruction for MR fingerprinting.
Pierre, Eric Y; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A
2016-06-01
To reduce the acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting. An iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with desirable spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in vivo data using the highly undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold standard parameter maps were used to quantify the performance of each method. The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD), and B0 field variations in the brain was achieved in vivo for a 256 × 256 matrix for a total acquisition time of 10.2 s, representing a three-fold reduction in acquisition time. The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. Magn Reson Med 75:2481-2492, 2016. © 2015 Wiley Periodicals, Inc.
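The template-matching step that turns the final image series into parameter maps can be sketched as a normalized inner-product search over the dictionary; the tiny dictionary and signals below are hypothetical, not simulated MRF evolutions:

```python
import numpy as np

def mrf_match(signals, dictionary):
    """Match measured MRF signal evolutions to dictionary templates (sketch).

    signals    : (n_pixels, n_timepoints) measured signal evolutions
    dictionary : (n_entries, n_timepoints) simulated templates; the index of
                 the best match maps back to a (T1, T2, ...) combination.
    Matching uses the magnitude of the normalized inner product, as in
    standard MRF template matching.
    """
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    corr = np.abs(s @ np.conj(d).T)
    return np.argmax(corr, axis=1)
```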