Reference-free ground truth metric for metal artifact evaluation in CT images.
Kratz, Bärbel; Ens, Svitlana; Müller, Jan; Buzug, Thorsten M
2011-07-01
In computed tomography (CT), metal objects in the region of interest introduce data inconsistencies during acquisition. Reconstructing these data results in an image with star-shaped artifacts induced by the metal inconsistencies. To enhance image quality, the influence of the metal objects can be reduced by different metal artifact reduction (MAR) strategies. For an adequate evaluation of new MAR approaches, a ground truth reference data set is needed. In technical evaluations, where phantoms can be measured with and without metal inserts, ground truth data can easily be obtained by a second reference acquisition. Obviously, this is not possible for clinical data. Here, an alternative evaluation method is presented that does not require an additionally acquired reference data set. The proposed metric is based on an inherent ground truth for metal artifacts as well as for the comparison of MAR methods, where no reference information in terms of a second acquisition is needed. The method is based on the forward projection of a reconstructed image, which is compared with the actually measured projection data. The new evaluation technique is performed on phantom and on clinical CT data with and without MAR. The metric results are then compared with methods using a reference data set as well as with an expert-based classification. It is shown that the new approach is an adequate quantification technique for artifact strength in reconstructed metal or MAR CT images. The presented method works solely on the original projection data, which yields some advantages compared with distance measures in the image domain using two data sets. Besides this, no parameters have to be chosen manually. The new metric is a useful evaluation alternative when no reference data are available.
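The projection-domain comparison described above can be illustrated with a minimal sketch (Python/NumPy). The toy linear forward projector and the RMS error choice here are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def projection_consistency_error(recon, sinogram, forward_project):
    """Reference-free artifact metric: reproject the reconstruction and
    compare it with the actually measured projection data."""
    reproj = forward_project(recon)
    return np.sqrt(np.mean((reproj - sinogram) ** 2))

# Toy example: the "scanner" is a small random linear system A.
rng = np.random.default_rng(0)
A = rng.random((20, 16))            # 20 measurements of a 4x4 image
truth = rng.random(16)
sino = A @ truth                    # measured projection data
perfect = truth                     # artifact-free reconstruction
corrupted = truth + 0.5             # reconstruction with artifacts

fp = lambda img: A @ img
err_good = projection_consistency_error(perfect, sino, fp)
err_bad = projection_consistency_error(corrupted, sino, fp)
```

A reconstruction consistent with the measured data yields a near-zero error, while artifact-bearing reconstructions score higher, which is the ranking behavior the metric relies on.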
Community annotation experiment for ground truth generation for the i2b2 medication challenge
Solti, Imre; Xia, Fei; Cadag, Eithon
2010-01-01
Objective Within the context of the Third i2b2 Workshop on Natural Language Processing Challenges for Clinical Records, the authors (also referred to as ‘the i2b2 medication challenge team’ or ‘the i2b2 team’ for short) organized a community annotation experiment. Design For this experiment, the authors released annotation guidelines and a small set of annotated discharge summaries. They asked the participants of the Third i2b2 Workshop to annotate 10 discharge summaries per person; each discharge summary was annotated by two annotators from two different teams, and a third annotator from a third team resolved disagreements. Measurements In order to evaluate the reliability of the annotations thus produced, the authors measured community inter-annotator agreement and compared it with the inter-annotator agreement of expert annotators when both the community and the expert annotators generated ground truth based on pooled system outputs. For this purpose, the pool consisted of the three most densely populated automatic annotations of each record. The authors also compared the community inter-annotator agreement with expert inter-annotator agreement when the experts annotated raw records without using the pool. Finally, they measured the quality of the community ground truth by comparing it with the expert ground truth. Results and conclusions The authors found that the community annotators achieved comparable inter-annotator agreement to expert annotators, regardless of whether the experts annotated from the pool. Furthermore, the ground truth generated by the community obtained F-measures above 0.90 against the ground truth of the experts, indicating the value of the community as a source of high-quality ground truth even on intricate and domain-specific annotation tasks. PMID:20819855
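The F-measure comparison of community ground truth against expert ground truth can be sketched as follows (a minimal illustration; representing annotations as sets of hypothetical mention/offset pairs is an assumption):

```python
def f_measure(system_annotations, reference_annotations):
    """F1 of one annotation set against another, treating each
    annotation as a hashable item (e.g. a mention/offset pair)."""
    sys_set, ref_set = set(system_annotations), set(reference_annotations)
    if not sys_set or not ref_set:
        return 0.0
    tp = len(sys_set & ref_set)
    if tp == 0:
        return 0.0
    precision = tp / len(sys_set)
    recall = tp / len(ref_set)
    return 2 * precision * recall / (precision + recall)

# Hypothetical medication mentions from community vs. expert annotators
community = {("lisinopril", 10), ("aspirin", 42), ("metformin", 77)}
expert = {("lisinopril", 10), ("aspirin", 42), ("warfarin", 90)}
score = f_measure(community, expert)   # 2 matches out of 3 on each side
```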
AN ASSESSMENT OF GROUND TRUTH VARIABILITY USING A "VIRTUAL FIELD REFERENCE DATABASE"
A "Virtual Field Reference Database (VFRDB)" was developed from field measurment data that included location and time, physical attributes, flora inventory, and digital imagery (camera) documentation foy 1,01I sites in the Neuse River basin, North Carolina. The sampling f...
First- and third-party ground truth for key frame extraction from consumer video clips
NASA Astrophysics Data System (ADS)
Costello, Kathleen; Luo, Jiebo
2007-02-01
Extracting key frames (KF) from video is of great interest in many applications, such as video summary, video organization, video compression, and prints from video. KF extraction is not a new problem. However, current literature has been focused mainly on sports or news video. In the consumer video space, the biggest challenges for key frame selection from consumer videos are the unconstrained content and lack of any preimposed structure. In this study, we conduct ground truth collection of key frames from video clips taken by digital cameras (as opposed to camcorders) using both first- and third-party judges. The goals of this study are: (1) to create a reference database of video clips reasonably representative of the consumer video space; (2) to identify associated key frames by which automated algorithms can be compared and judged for effectiveness; and (3) to uncover the criteria used by both first- and thirdparty human judges so these criteria can influence algorithm design. The findings from these ground truths will be discussed.
NASA Technical Reports Server (NTRS)
Jones, E. B.
1983-01-01
As remote sensing increasingly becomes more of an operational tool in the field of snow management and snow hydrology, there is need for some degree of standardization of ""snowpack ground truth'' techniques. This manual provides a first step in standardizing these procedures and was prepared to meet the needs of remote sensing researchers in planning missions requiring ground truth as well as those providing the ground truth. Focus is on ground truth for remote sensors primarily operating in the microwave portion of the electromagnetic spectrum; nevertheless, the manual should be of value to other types of sensor programs. This first edition of ground truth procedures must be updated as new or modified techniques are developed.
On Evaluating Brain Tissue Classifiers without a Ground Truth
Martin-Fernandez, Marcos; Ungar, Lida; Nakamura, Motoaki; Koo, Min-Seong; McCarley, Robert W.; Shenton, Martha E.
2009-01-01
In this paper, we present a set of techniques for the evaluation of brain tissue classifiers on a large data set of MR images of the head. Due to the difficulty of establishing a gold standard for this type of data, we focus our attention on methods which do not require a ground truth, but instead rely on a common agreement principle. Three different techniques are presented: the Williams’ index, a measure of common agreement; STAPLE, an Expectation Maximization algorithm which simultaneously estimates performance parameters and constructs an estimated reference standard; and Multidimensional Scaling, a visualization technique to explore similarity data. We apply these different evaluation methodologies to a set of eleven different segmentation algorithms on forty MR images. We then validate our evaluation pipeline by building a ground truth based on human expert tracings. The evaluations with and without a ground truth are compared. Our findings show that comparing classifiers without a gold standard can provide a great deal of useful information. In particular, outliers can be easily detected, strongly consistent or highly variable techniques can be readily discriminated, and the overall similarity between different techniques can be assessed. On the other hand, we also find that some information present in the expert segmentations is not captured by the automatic classifiers, suggesting that common agreement alone may not be sufficient for a precise performance evaluation of brain tissue classifiers. PMID:17532646
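The Williams' index mentioned above admits a compact sketch (Python/NumPy). Using the Dice coefficient as the pairwise agreement measure is an assumption for illustration; any pairwise agreement score could be substituted:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def williams_index(candidate, raters):
    """Williams' index: agreement of a candidate with a group of raters,
    normalized by the agreement within the group. Values near (or above)
    1 mean the candidate agrees with the group as well as its members
    agree with each other."""
    n = len(raters)
    num = (n - 1) * sum(dice(candidate, r) for r in raters)
    den = 2 * sum(dice(raters[j], raters[k])
                  for j in range(n) for k in range(j + 1, n))
    return num / den

# Hypothetical 1-D "segmentations" from three raters
raters = [np.array([1, 1, 1, 0, 0]),
          np.array([1, 1, 1, 0, 0]),
          np.array([1, 1, 0, 0, 0])]
wi_good = williams_index(np.array([1, 1, 1, 0, 0]), raters)
wi_bad = williams_index(np.array([0, 0, 0, 1, 1]), raters)
```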
Ground-truth collections at the MTI core sites
NASA Astrophysics Data System (ADS)
Garrett, Alfred J.; Kurzeja, Robert J.; Parker, Matthew J.; O'Steen, Byron L.; Pendergast, Malcolm M.; Villa-Aleman, Eliel
2001-08-01
The Savannah River Technology Center (SRTC) selected 13 sites across the continental US and one site in the western Pacific to serve as the primary or core site for collection of ground truth data for validation of MTI science algorithms. Imagery and ground truth data from several of these sites are presented in this paper. These sites are the Comanche Peak, Pilgrim and Turkey Point power plants, Ivanpah playas, Crater Lake, Stennis Space Center and the Tropical Western Pacific ARM site on the island of Nauru. Ground truth data includes water temperatures (bulk and skin), radiometric data, meteorological data and plant operating data. The organizations that manage these sites assist SRTC with its ground truth data collections and also give the MTI project a variety of ground truth measurements that they make for their own purposes. Collectively, the ground truth data from the 14 core sites constitute a comprehensive database for science algorithm validation.
Ground Truth Studies - A hands-on environmental science program for students, grades K-12
NASA Technical Reports Server (NTRS)
Katzenberger, John; Chappell, Charles R.
1992-01-01
The paper discusses the background and the objectives of the Ground Truth Studies (GTSs), an activity-based teaching program which integrates local environmental studies with global change topics, utilizing remotely sensed earth imagery. Special attention is given to the five key concepts around which the GTS programs are organized, the pilot program, the initial pilot study evaluation, and the GTS Handbook. The GTS Handbook contains a primer on global change and remote sensing, aerial and satellite images, student activities, glossary, and an appendix of reference material. Also described is a K-12 teacher training model. International participation in the program is to be initiated during the 1992-1993 school year.
Ground truth management system to support multispectral scanner /MSS/ digital analysis
NASA Technical Reports Server (NTRS)
Coiner, J. C.; Ungar, S. G.
1977-01-01
A computerized geographic information system for management of ground truth has been designed and implemented to relate MSS classification results to in situ observations. The ground truth system transforms, generalizes and rectifies ground observations to conform to the pixel size and shape of high resolution MSS aircraft data. These observations can then be aggregated for comparison to lower resolution sensor data. Construction of a digital ground truth array allows direct pixel by pixel comparison between classification results of MSS data and ground truth. By making comparisons, analysts can identify spatial distribution of error within the MSS data as well as usual figures of merit for the classifications. Use of the ground truth system permits investigators to compare a variety of environmental or anthropogenic data, such as soil color or tillage patterns, with classification results and allows direct inclusion of such data into classification operations. To illustrate the system, examples from classification of simulated Thematic Mapper data for agricultural test sites in North Dakota and Kansas are provided.
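The pixel-by-pixel comparison between classification results and the digital ground truth array can be sketched as a confusion-matrix computation (illustrative only; the integer class codes and array shapes are assumptions):

```python
import numpy as np

def pixelwise_confusion(classified, truth, n_classes):
    """Confusion matrix from a pixel-by-pixel comparison of a classified
    raster against a rectified ground truth raster of class codes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, c in zip(truth.ravel(), classified.ravel()):
        cm[t, c] += 1
    return cm

truth = np.array([[0, 0, 1], [1, 2, 2]])       # ground truth class codes
classified = np.array([[0, 1, 1], [1, 2, 0]])  # MSS classification result
cm = pixelwise_confusion(classified, truth, 3)
accuracy = np.trace(cm) / cm.sum()             # overall agreement
```

Off-diagonal entries of `cm` show where in class space the classifier errs, and mapping the disagreement mask back onto the raster reveals the spatial distribution of error the abstract describes.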
Validation of the Soil Moisture Active Passive mission using USDA-ARS experimental watersheds
USDA-ARS?s Scientific Manuscript database
The calibration and validation program of the Soil Moisture Active Passive mission (SMAP) relies upon an international cooperative of in situ networks to provide ground truth references across a variety of landscapes. The USDA Agricultural Research Service operates several experimental watersheds wh...
Initial validation of the Soil Moisture Active Passive mission using USDA-ARS watersheds
USDA-ARS?s Scientific Manuscript database
The Soil Moisture Active Passive (SMAP) Mission was launched in January 2015 to measure global surface soil moisture. The calibration and validation program of SMAP relies upon an international cooperative of in situ networks to provide ground truth references across a variety of landscapes. The U...
Development of mine explosion ground truth smart sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Steven R.; Harben, Phillip E.; Jarpe, Steve
Accurate seismo-acoustic source location is one of the fundamental aspects of nuclear explosion monitoring. Critical to improved location is the compilation of ground truth data sets for which origin time and location are accurately known. Substantial efforts by the National Laboratories and other seismic monitoring groups have been undertaken to acquire and develop ground truth catalogs that form the basis of location efforts (e.g. Sweeney, 1998; Bergmann et al., 2009; Waldhauser and Richards, 2004). In particular, more GT1 (Ground Truth 1 km) events are required to improve three-dimensional velocity models that are currently under development. Mine seismicity can form the basis of accurate ground truth datasets. Although the location of mining explosions can often be accurately determined using array methods (e.g. Harris, 1991) and from overhead observations (e.g. MacCarthy et al., 2008), accurate origin time estimation can be difficult. Occasionally, mine operators will share shot time, location, explosion size and even shot configuration, but this is rarely done, especially in foreign countries. Additionally, shot times provided by mine operators are often inaccurate. An inexpensive, ground truth event detector that could be mailed to a contact, placed in close proximity (< 5 km) to mining regions or earthquake aftershock regions, and that automatically transmits back ground-truth parameters, would greatly aid in the development of ground truth datasets that could be used to improve nuclear explosion monitoring capabilities. We are developing an inexpensive, compact, lightweight smart sensor unit (or units) that could be used in the development of ground truth datasets for the purpose of improving nuclear explosion monitoring capabilities.
The units must be easy to deploy, be able to operate autonomously for a significant period of time (> 6 months), and be inexpensive enough to be discarded after useful operations have expired (although this may not be part of our business plan). Key parameters to be automatically determined are event origin time (within 0.1 sec), location (within 1 km) and size (within 0.3 magnitude units) without any human intervention. The key parameter ground truth information from explosions greater than magnitude 2.5 will be transmitted to a recording and transmitting site. Because we have identified a limited-bandwidth, inexpensive two-way satellite communication (ORBCOMM), we have devised the concept of an accompanying Ground-Truth Processing Center that would enable calibration and ground-truth accuracy to improve over the duration of a deployment.
Localizing Ground Penetrating RADAR: A Step Towards Robust Autonomous Ground Vehicle Localization
2015-05-27
truth reference unit is coupled with a local base station that allows local 2 cm accuracy location measurements. The RT3003 uses a MEMS-based IMU and... of different electromagnetic properties; for example, the interface between soil and pipes, roots, or rocks. However, it is not these discrete... depth is determined by soil losses caused by Joule heating and dipole losses. High conductivity soils, such as those with high moisture and salinity
Ground truth crop proportion summaries for US segments, 1976-1979
NASA Technical Reports Server (NTRS)
Horvath, R. (Principal Investigator); Rice, D.; Wessling, T.
1981-01-01
The original ground truth data were collected, digitized, and registered to LANDSAT data for use in the LACIE and AgRISTARS projects. The numerous ground truth categories were consolidated into fewer classes of crops or crop conditions, and occurrences of these classes were counted for each segment. Tables are presented in which the individual entries are the percentage of total segment area assigned to a given class. The ground truth summaries were prepared from a 20% sample of the scene. An analysis indicates that a sample of this size provides sufficient accuracy for use of the data in initial segment screening.
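The adequacy of a 20% area sample for estimating class proportions can be illustrated with the standard error of a sampled proportion (a textbook formula, not the report's own analysis; the segment pixel count below is hypothetical):

```python
import math

def proportion_standard_error(p, n_sampled):
    """Standard error of a class proportion estimated from n sampled pixels."""
    return math.sqrt(p * (1.0 - p) / n_sampled)

# Hypothetical segment: 25,000 pixels, 20% sampled, true wheat proportion 0.30
se = proportion_standard_error(0.30, 5000)
```

With 5,000 sampled pixels the standard error is well under one percentage point, which is consistent with a 20% sample being adequate for coarse segment screening.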
Rice Crop Monitoring Using Microwave and Optical Remotely Sensed Image Data
NASA Astrophysics Data System (ADS)
Suga, Y.; Konishi, T.; Takeuchi, S.; Kitano, Y.; Ito, S.
Hiroshima Institute of Technology (HIT) is operating the direct down-links of microwave and optical satellite data in Japan. This study focuses on the validation of rice crop monitoring using microwave and optical remotely sensed image data acquired by satellites, referring to ground truth data such as height of crop, ratio of crop vegetation cover, and leaf area index in the test sites in Japan. ENVISAT-1 ASAR data has the capability to capture imagery regularly and to monitor the rice growing cycle using alternating cross-polarization mode images. However, ASAR data is influenced by several parameters, such as landcover structure, direction, and alignment of rice crop fields in the test sites. In this study, the validation was carried out by combining microwave and optical satellite image data with ground truth data on rice crop fields to investigate the above parameters. Multi-temporal, multi-direction (descending and ascending), and multi-angle ASAR alternating cross-polarization mode images were used to investigate the rice crop growing cycle. LANDSAT data were used to detect landcover structure, direction, and alignment of rice crop fields corresponding to the backscatter of ASAR. As the result of this study, it was indicated that rice crop growth can be precisely monitored using multiple remotely sensed data and ground truth data, considering spatial, spectral, temporal, and radiometric resolutions.
Caspi, Caitlin Eicher; Friebur, Robin
2016-03-17
A major concern in food environment research is the lack of accuracy in commercial business listings of food stores, which are convenient and commonly used. Accuracy concerns may be particularly pronounced in rural areas. Ground-truthing or on-site verification has been deemed the necessary standard to validate business listings, but researchers perceive this process to be costly and time-consuming. This study calculated the accuracy and cost of ground-truthing three town/rural areas in Minnesota, USA (an area of 564 miles, or 908 km), and simulated a modified validation process to increase efficiency without compromising accuracy. For traditional ground-truthing, all streets in the study area were driven, while the route and geographic coordinates of food stores were recorded. The process required 1510 miles (2430 km) of driving and 114 staff hours. The ground-truthed list of stores was compared with commercial business listings, which had an average positive predictive value (PPV) of 0.57 and sensitivity of 0.62 across the three sites. Using observations from the field, a modified process was proposed in which only the streets located within central commercial clusters (the 1/8 mile or 200 m buffer around any cluster of 2 stores) would be validated. Modified ground-truthing would have yielded an estimated PPV of 1.00 and sensitivity of 0.95, and would have resulted in a reduction of approximately 88 % of the mileage costs. We conclude that ground-truthing is necessary in town/rural settings. The modified ground-truthing process, with excellent accuracy at a fraction of the cost, suggests a new standard and warrants further evaluation.
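The positive predictive value and sensitivity reported above can be computed from a commercial listing and a ground-truthed store set as follows (a minimal sketch; the store identifiers are hypothetical):

```python
def listing_accuracy(listed_stores, observed_stores):
    """PPV and sensitivity of a commercial listing against the
    ground-truthed (on-site verified) set of stores."""
    listed, observed = set(listed_stores), set(observed_stores)
    true_positives = len(listed & observed)
    ppv = true_positives / len(listed)            # listed stores that exist
    sensitivity = true_positives / len(observed)  # existing stores that are listed
    return ppv, sensitivity

listed = {"storeA", "storeB", "storeC", "storeD"}    # commercial listing
observed = {"storeB", "storeC", "storeD", "storeE"}  # found by driving all streets
ppv, sens = listing_accuracy(listed, observed)
```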
Dsm Based Orientation of Large Stereo Satellite Image Blocks
NASA Astrophysics Data System (ADS)
d'Angelo, P.; Reinartz, P.
2012-07-01
High resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCP are automatically derived from lower resolution reference datasets (Landsat ETM+ Geocover and SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using a RPC correction based on DSM alignment, resulting in improved geolocation of both DSM and ortho images. A scene-based method and a bundle block adjustment based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 Cartosat-1 stereo pairs are available. Both methods are tested against independent ground truth. Checks against this ground truth indicate a lateral error of 10 meters.
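The affine correction estimated from automatically derived GCPs can be sketched as a 2-D least-squares fit (an illustration of the fitting step only; in practice the correction is applied in RPC image space, and the GCP coordinates below are hypothetical):

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares affine transform mapping src points to dst points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    design = np.hstack([src, np.ones((len(src), 1))])    # rows [x, y, 1]
    coef, *_ = np.linalg.lstsq(design, dst, rcond=None)  # 3x2 coefficient matrix
    return coef

def apply_affine_2d(coef, pts):
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ coef

# Hypothetical GCPs: RPC-projected positions vs. reference positions
projected = [[100.0, 200.0], [300.0, 220.0], [150.0, 400.0], [320.0, 390.0]]
measured = [[104.0, 198.5], [304.0, 218.5], [154.0, 398.5], [324.0, 388.5]]
coef = fit_affine_2d(projected, measured)
corrected = apply_affine_2d(coef, projected)
```

Four or more well-distributed GCPs over-determine the six affine parameters, so the fit also averages out noise in the automatically matched points.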
Liang, Jennifer J; Tsou, Ching-Huei; Devarakonda, Murthy V
2017-01-01
Natural language processing (NLP) holds the promise of effectively analyzing patient record data to reduce cognitive load on physicians and clinicians in patient care, clinical research, and hospital operations management. A critical need in developing such methods is the "ground truth" dataset needed for training and testing the algorithms. Beyond localizable, relatively simple tasks, ground truth creation is a significant challenge because medical experts, just as physicians in patient care, have to assimilate vast amounts of data in EHR systems. To mitigate the potential inaccuracies arising from these cognitive challenges, we present an iterative vetting approach for creating the ground truth for complex NLP tasks. In this paper, we present the methodology, and report on its use for an automated problem list generation task, its effect on the ground truth quality and system accuracy, and lessons learned from the effort.
Semi-automated based ground-truthing GUI for airborne imagery
NASA Astrophysics Data System (ADS)
Phan, Chung; Lydic, Rich; Moore, Tim; Trang, Anh; Agarwal, Sanjeev; Tiwari, Spandan
2005-06-01
Over the past several years, an enormous amount of airborne imagery in various formats has been collected, and collection will continue into the future, to support airborne mine/minefield detection processes, improve algorithm development, and aid in imaging sensor development. The ground-truthing of imagery is an essential part of the algorithm development process, helping to validate the detection performance of the sensor and to improve algorithm techniques. The GUI (Graphical User Interface), called SemiTruth, was developed using Matlab software, incorporating the signal processing, image processing, and statistics toolboxes, to aid in ground-truthing imagery. The semi-automated ground-truthing GUI is made possible by the current data collection method, which includes UTM/GPS (Universal Transverse Mercator/Global Positioning System) coordinate measurements of the mine target and fiducial locations on the given minefield layout to support identification of the targets in the raw imagery. This semi-automated ground-truthing effort was developed by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division, Airborne Application Branch, with support from the University of Missouri-Rolla.
A novel adaptive scoring system for segmentation validation with multiple reference masks
NASA Astrophysics Data System (ADS)
Moltz, Jan H.; Rühaak, Jan; Hahn, Horst K.; Peitgen, Heinz-Otto
2011-03-01
The development of segmentation algorithms for different anatomical structures and imaging protocols is an important task in medical image processing. The validation of these methods, however, is often treated as a subordinate task. Since manual delineations, which are widely used as a surrogate for the ground truth, exhibit an inherent uncertainty, it is preferable to use multiple reference segmentations for an objective validation. This requires a consistent framework that should fulfill three criteria: 1) it should treat all reference masks equally a priori and not demand consensus between the experts; 2) it should evaluate the algorithmic performance in relation to the inter-reference variability, i.e., be more tolerant where the experts disagree about the true segmentation; 3) it should produce results that are comparable for different test data. We show why current state-of-the-art frameworks such as the one used at several MICCAI segmentation challenges do not fulfill these criteria and propose a new validation methodology. A score is computed in an adaptive way for each individual segmentation problem, using a combination of volume- and surface-based comparison metrics. These are transformed into the score by relating them to the variability between the reference masks, which can be measured by comparing the masks with each other or with an estimated ground truth. We present examples from a study on liver tumor segmentation in CT scans where our score shows a more adequate assessment of the segmentation results than the MICCAI framework.
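The idea of relating algorithmic error to inter-reference variability can be sketched in a toy form. This scoring function is only an illustration of the principle (error scaled by reference disagreement), not the authors' actual formula, and the distance values are hypothetical:

```python
import statistics

def adaptive_score(alg_vs_refs, ref_vs_refs):
    """Toy adaptive score: 100 when the algorithm matches the references
    perfectly, falling toward 0 as its error grows beyond the
    inter-reference variability."""
    d_alg = statistics.mean(alg_vs_refs)  # e.g. mean surface distance to each mask
    d_ref = statistics.mean(ref_vs_refs)  # variability between reference masks
    return max(0.0, 100.0 * (1.0 - d_alg / (2.0 * d_ref)))

# Same algorithmic error, judged against loose vs. tight reference agreement
easy_case = adaptive_score([1.0, 1.2], [1.0, 1.1, 1.2])   # experts disagree a lot
hard_case = adaptive_score([1.0, 1.2], [0.2, 0.3, 0.25])  # experts agree tightly
```

The same algorithmic distance earns a much higher score where the references themselves disagree, which is exactly the tolerance criterion 2) asks for.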
A Method for Assessing Ground-Truth Accuracy of the 5DCT Technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dou, Tai H., E-mail: tdou@mednet.ucla.edu; Thomas, David H.; O'Connell, Dylan P.
2015-11-15
Purpose: To develop a technique that assesses the accuracy of the breathing phase-specific volume image generation process by a patient-specific breathing motion model, using the original free-breathing computed tomographic (CT) scans as ground truths. Methods: Sixteen lung cancer patients underwent a previously published protocol in which 25 free-breathing fast helical CT scans were acquired with a simultaneous breathing surrogate. A patient-specific motion model was constructed based on the tissue displacements determined by a state-of-the-art deformable image registration. The first image was arbitrarily selected as the reference image. The motion model was used, along with the free-breathing phase information of the original 25 image datasets, to generate a set of deformation vector fields that mapped the reference image to the 24 nonreference images. The high-pitch helically acquired original scans served as ground truths because they captured the instantaneous tissue positions during free breathing. Image similarity between the simulated and the original scans was assessed using deformable registration that evaluated the pointwise discordance throughout the lungs. Results: Qualitative comparisons using image overlays showed excellent agreement between the simulated images and the original images. Even large 2-cm diaphragm displacements were very well modeled, as was sliding motion across the lung–chest wall boundary. The mean error across the patient cohort was 1.15 ± 0.37 mm, and the mean 95th percentile error was 2.47 ± 0.78 mm. Conclusion: The proposed ground truth–based technique provided voxel-by-voxel accuracy analysis that could identify organ-specific or tumor-specific motion modeling errors for treatment planning.
Despite a large variety of breathing patterns and lung deformations during the free-breathing scanning session, the 5-dimensional CT technique was able to accurately reproduce the original helical CT scans, suggesting its applicability to a wide range of patients.
2010-09-01
MULTIPLE-ARRAY DETECTION, ASSOCIATION AND LOCATION OF INFRASOUND AND SEISMO-ACOUSTIC EVENTS – UTILIZATION OF GROUND TRUTH INFORMATION
Stephen J...
...and infrasound data from seismo-acoustic arrays and apply the methodology to regional networks for validation with ground truth information. In the initial year of the project, automated techniques for detecting, associating and locating infrasound signals were developed. Recently, the location...
Gupta, Rahul; Audhkhasi, Kartik; Jacokes, Zach; Rozga, Agata; Narayanan, Shrikanth
2018-01-01
Studies of time-continuous human behavioral phenomena often rely on ratings from multiple annotators. Since the ground truth of the target construct is often latent, the standard practice is to use ad-hoc metrics (such as averaging annotator ratings). Despite being easy to compute, such metrics may not provide accurate representations of the underlying construct. In this paper, we present a novel method for modeling multiple time series annotations over a continuous variable that computes the ground truth by modeling annotator-specific distortions. We condition the ground truth on a set of features extracted from the data and further assume that the annotators provide their ratings as modifications of the ground truth, with each annotator having specific distortion tendencies. We train the model using an Expectation-Maximization based algorithm and evaluate it on a study involving natural interaction between a child and a psychologist, to predict confidence ratings of the children's smiles. We compare and analyze the model against two baselines where: (i) the ground truth is considered to be the framewise mean of ratings from the various annotators, and (ii) each annotator is assumed to bear a distinct time delay in annotation and their annotations are aligned before computing the framewise mean.
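The second baseline — estimating each annotator's time delay and aligning the ratings before averaging — can be sketched with a cross-correlation lag search (Python/NumPy; a simplified illustration of the baseline, not the paper's EM model, with synthetic rating series):

```python
import numpy as np

def best_lag(annotation, reference, max_lag):
    """Lag (in frames) that maximizes the correlation of an annotator's
    rating series against a reference series."""
    def corr_at(lag):
        if lag >= 0:
            a, b = annotation[lag:], reference[:len(reference) - lag]
        else:
            a, b = annotation[:lag], reference[-lag:]
        return np.corrcoef(a, b)[0, 1]
    return max(range(-max_lag, max_lag + 1), key=corr_at)

rng = np.random.default_rng(1)
reference = rng.standard_normal(200)        # synthetic latent rating signal
delayed = np.r_[np.zeros(4), reference[:-4]]  # annotator reacting 4 frames late
lag = best_lag(delayed, reference, max_lag=8)
```

Once each annotator's lag is recovered, the series are shifted into alignment and the framewise mean is taken over the overlapping frames.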
Evaluating the state of the art in coreference resolution for electronic medical records
Bodnari, Andreea; Shen, Shuying; Forbush, Tyler; Pestian, John; South, Brett R
2012-01-01
Background The fifth i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records conducted a systematic review on resolution of noun phrase coreference in medical records. Informatics for Integrating Biology and the Bedside (i2b2) and the Veterans Affair (VA) Consortium for Healthcare Informatics Research (CHIR) partnered to organize the coreference challenge. They provided the research community with two corpora of medical records for the development and evaluation of the coreference resolution systems. These corpora contained various record types (ie, discharge summaries, pathology reports) from multiple institutions. Methods The coreference challenge provided the community with two annotated ground truth corpora and evaluated systems on coreference resolution in two ways: first, it evaluated systems for their ability to identify mentions of concepts and to link together those mentions. Second, it evaluated the ability of the systems to link together ground truth mentions that refer to the same entity. Twenty teams representing 29 organizations and nine countries participated in the coreference challenge. Results The teams' system submissions showed that machine-learning and rule-based approaches worked best when augmented with external knowledge sources and coreference clues extracted from document structure. The systems performed better in coreference resolution when provided with ground truth mentions. Overall, the systems struggled in solving coreference resolution for cases that required domain knowledge. PMID:22366294
An Enhanced Collaborative-Software Environment for Information Fusion at the Unit of Action
2007-12-07
[Figure 4: Semantic-Aggregation Hierarchy – diagram of the ground truth semantic-aggregation hierarchy (evaluation-use only), relating BSGs, GTGs, BSOs, GTOs, and reports.] ...Finally, GTOs can be aggregated into GTGs (ground-truth groups) using the provided ground-truth force structure hierarchy for GTOs. GTGs can only be...
Ground-truthing AVIRIS mineral mapping at Cuprite, Nevada
NASA Technical Reports Server (NTRS)
Swayze, Gregg; Clark, Roger N.; Kruse, Fred; Sutley, Steve; Gallagher, Andrea
1992-01-01
Mineral abundance maps of 18 minerals were made of the Cuprite Mining District using 1990 AVIRIS data and the Multiple Spectral Feature Mapping Algorithm (MSFMA) as discussed in Clark et al. This technique uses least-squares fitting between a scaled laboratory reference spectrum and ground calibrated AVIRIS data for each pixel. Multiple spectral features can be fitted for each mineral and an unlimited number of minerals can be mapped simultaneously. Quality of fit and depth from continuum numbers for each mineral are calculated for each pixel and the results displayed as a multicolor image.
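The per-pixel fitting step described above can be illustrated with a minimal least-squares sketch. The scale-plus-offset linear model, the function name, and the RMS fit-quality measure are illustrative assumptions, not the actual MSFMA implementation:

```python
def fit_scaled_reference(ref, pixel):
    """Least-squares fit of a scaled reference spectrum (a*ref + b) to a
    pixel spectrum over one absorption feature.
    Returns the scale a, offset b, and RMS residual as a fit-quality proxy."""
    n = len(ref)
    mr = sum(ref) / n
    mp = sum(pixel) / n
    sxx = sum((r - mr) ** 2 for r in ref)
    sxy = sum((r - mr) * (p - mp) for r, p in zip(ref, pixel))
    a = sxy / sxx                      # closed-form least-squares slope
    b = mp - a * mr                    # and intercept
    resid = [p - (a * r + b) for r, p in zip(ref, pixel)]
    rms = (sum(e * e for e in resid) / n) ** 0.5
    return a, b, rms
```

In a mapping pipeline this fit would be repeated per mineral and per pixel, with the RMS residual used to rank candidate minerals.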
NASA Astrophysics Data System (ADS)
Munoz, Joshua
The primary focus of this research is evaluation of the feasibility, applicability, and accuracy of Doppler Light Detection And Ranging (LIDAR) sensors as a non-contact means for measuring track speed, distance traveled, and curvature. Speed histories, currently measured with a rotary, wheel-mounted encoder, serve a number of useful purposes, one significant use involving derailment investigations. Distance calculation provides a spatial reference system for operators to locate track sections of interest. Railroad curves, whose curvature is measured with an IMU, are monitored to maintain track infrastructure within regulations. Speed measured with high accuracy leads to high-fidelity distance and curvature data through utilization of the processor clock rate and left- and right-rail speed differentials during curve navigation, respectively. Wheel-mounted encoders, or tachometers, provide a relatively low-resolution speed profile, exhibit increased noise with increasing speed, and are subject to the inertial behavior of the rail car, which affects output data. The IMU used to measure curvature is dependent on acceleration and yaw-rate sensitivity and experiences difficulty in low-speed conditions. Preliminary system tests onboard a "Hy-Rail" utility vehicle capable of traveling on rail show speed capture is possible using the rails as the reference moving target; furthermore, obtaining speed profiles from both rails allows for the calculation of speed differentials in curves to estimate degrees of curvature. Ground truth distance calibration and curve measurement were also carried out. Distance calibration involved placement of spatial landmarks detected by a sensor to synchronize distance measurements as a pre-processing procedure. Curvature ground truth measurements provided a reference system to confirm measurement results and observe alignment variation throughout a curve. 
Primary testing occurred onboard a track geometry rail car, measuring rail speed over substantial mileage in various weather conditions, providing high-accuracy data to further calculate distance and curvature along the test routes. Test results indicate the LIDAR system measures speed at higher accuracy than the encoder, free of the noise that grows with increasing speed. Distance calculation is also highly accurate, with results showing high correlation with encoder and ground truth data. Finally, curvature calculation using speed data is shown to have good correlation with IMU measurements and a resolution capable of revealing localized track alignments. Further investigations involve a curve measurement algorithm and a speed calibration method independent of external reference systems, namely the encoder and ground truth data. The speed calibration results show a high correlation with speed data from the track geometry vehicle. It is recommended that the study be extended to assess the LIDAR's sensitivity to car-body motion in order to better isolate that embedded behavior in the speed and curvature profiles. Furthermore, in the interest of progressing the system toward a commercially viable unit, methods for self-calibration and pre-processing that allow for fully independent operation are highly encouraged.
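The curvature-from-speed-differential idea can be sketched with a simplified rigid-body model: in a curve, the outer rail sweeps a larger radius than the inner rail at the same angular rate, so the radius follows from the speed ratio. The gauge value and the 100 ft chord degree-of-curvature formula are standard railroad conventions assumed here for illustration, not the thesis implementation:

```python
def curve_from_rail_speeds(v_left, v_right, gauge_ft=4.71):
    """Estimate curve radius (ft) and railroad degree of curvature from
    left/right rail speed measurements during curve navigation.
    Assumes rigid-body rotation about the curve center; gauge_ft is the
    assumed spacing between the two measurement points."""
    v_out, v_in = max(v_left, v_right), min(v_left, v_right)
    if v_out == v_in:
        return float("inf"), 0.0          # tangent (straight) track
    # v_out / (R + g/2) == v_in / (R - g/2)  =>  solve for R
    radius = (gauge_ft / 2.0) * (v_out + v_in) / (v_out - v_in)
    degree = 5729.58 / radius             # 100 ft chord approximation
    return radius, degree
```

A speed differential of only ~0.05 mph at 60 mph already corresponds to roughly a one-degree curve, which illustrates why high-accuracy speed measurement is a prerequisite for curvature estimation.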
NASA Technical Reports Server (NTRS)
Edgerton, A. T.; Trexler, D. T.; Sakamoto, S.; Jenkins, J. E.
1969-01-01
The field measurement program conducted at the NASA/USGS Southern California Test Site is discussed. Ground truth data and multifrequency microwave brightness data were acquired by a mobile field laboratory operating in conjunction with airborne instruments. The ground based investigations were performed at a number of locales representing a variety of terrains including open desert, cultivated fields, barren fields, portions of the San Andreas Fault Zone, and the Salton Sea. The measurements acquired ground truth data and microwave brightness data at wavelengths of 0.8 cm, 2.2 cm, and 21 cm.
Relating ground truth collection to model sensitivity
NASA Technical Reports Server (NTRS)
Amar, Faouzi; Fung, Adrian K.; Karam, Mostafa A.; Mougin, Eric
1993-01-01
The importance of collecting high-quality ground truth before a SAR mission over a forested area is twofold. First, the ground truth is used in the analysis and interpretation of the measured backscattering properties; second, it helps to justify the use of a scattering model to fit the measurements. Unfortunately, ground truth is often collected based on a visual assessment of what is perceived to be important, without regard to the mission itself. Sites are selected based on brief surveys of large areas, and the ground truth is collected by a process of selecting and grouping different scatterers. After the fact, it may turn out that some of the relevant parameters are missing. A three-layer canopy model based on the radiative transfer equations is used to determine, beforehand, the relevant parameters to be collected. Detailed analysis of the contribution to scattering and attenuation of various forest components is carried out. The goal is to identify the forest parameters which most influence the backscattering as a function of frequency (P-, L-, and C-bands) and incidence angle. The influence on backscattering and attenuation of branch diameters, lengths, angular distribution, and permittivity; trunk diameters, lengths, and permittivity; and needle sizes, their angular distribution, and permittivity is studied in order to maximize the efficiency of the ground truth collection efforts. Preliminary results indicate that while a scatterer may not contribute to the total backscattering, its contribution to attenuation may be significant depending on the frequency.
Validation of neural spike sorting algorithms without ground-truth information.
Barnett, Alex H; Magland, Jeremy F; Greengard, Leslie F
2016-05-01
The throughput of electrophysiological recording is growing rapidly, allowing thousands of simultaneous channels, and there is a growing variety of spike sorting algorithms designed to extract neural firing events from such data. This creates an urgent need for standardized, automatic evaluation of the quality of neural units output by such algorithms. We introduce a suite of validation metrics that assess the credibility of a given automatic spike sorting algorithm applied to a given dataset. By rerunning the spike sorter two or more times, the metrics measure stability under various perturbations consistent with variations in the data itself, making no assumptions about the internal workings of the algorithm, and minimal assumptions about the noise. We illustrate the new metrics on standard sorting algorithms applied to both in vivo and ex vivo recordings, including a time series with overlapping spikes. We compare the metrics to existing quality measures, and to ground-truth accuracy in simulated time series. We provide a software implementation. Metrics have until now relied on ground-truth, simulated data, internal algorithm variables (e.g. cluster separation), or refractory violations. By contrast, by standardizing the interface, our metrics assess the reliability of any automatic algorithm without reference to internal variables (e.g. feature space) or physiological criteria. Stability is a prerequisite for reproducibility of results. Such metrics could reduce the significant human labor currently spent on validation, and should form an essential part of large-scale automated spike sorting and systematic benchmarking of algorithms. Copyright © 2016 Elsevier B.V. All rights reserved.
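The run-to-run stability idea above can be sketched minimally: rerun the sorter, then score each unit by its best overlap with any unit from the second run. This toy version assumes exact spike-time matches and a Jaccard score; the actual metrics allow timing jitter and use perturbed reruns rather than simple repeats:

```python
def unit_stability(run_a, run_b):
    """Best-match agreement between two spike-sorting runs.
    run_a, run_b: dicts mapping unit label -> list of spike times.
    For each unit in run_a, find the run_b unit with the highest Jaccard
    overlap of spike times (exact-time matching; illustrative only).
    Returns dict: unit label -> stability score in [0, 1]."""
    def jaccard(a, b):
        sa, sb = set(a), set(b)
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0
    return {u: max((jaccard(ta, tb) for tb in run_b.values()), default=0.0)
            for u, ta in run_a.items()}
```

Units whose score stays near 1.0 across reruns would be considered stable; low scores flag units whose existence depends on algorithmic happenstance rather than the data.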
Soffientini, Chiara D; De Bernardi, Elisabetta; Casati, Rosangela; Baselli, Giuseppe; Zito, Felicia
2017-01-01
Design, realization, scan, and characterization of a phantom for PET Automatic Segmentation (PET-AS) assessment are presented. Radioactive zeolites immersed in a radioactive heterogeneous background simulate realistic wall-less lesions with known irregular shape and known homogeneous or heterogeneous internal activity. Three different zeolite families were evaluated in terms of radioactive uptake homogeneity, necessary to define the activity and contour ground truth. Heterogeneous lesions were simulated by the perfect matching of two portions of a broken zeolite, soaked in two different 18F-FDG radioactive solutions. Heterogeneous backgrounds were obtained with tissue paper balls and sponge pieces immersed in radioactive solutions. Natural clinoptilolite proved to be the most suitable zeolite for the construction of artificial objects mimicking homogeneous and heterogeneous uptakes in 18F-FDG PET lesions. Heterogeneous backgrounds showed a coefficient of variation equal to 269% and 443% of that of a uniform radioactive solution. The assembled phantom included eight lesions with volumes ranging from 1.86 to 7.24 ml and lesion-to-background contrasts ranging from 4.8:1 to 21.7:1. A novel phantom for the evaluation of PET-AS algorithms was developed. It is provided with both reference contours and activity ground truth, and it covers a wide range of volumes and lesion-to-background contrasts. The dataset is open to the community of PET-AS developers and utilizers. © 2016 American Association of Physicists in Medicine.
A photogrammetric technique for generation of an accurate multispectral optical flow dataset
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2017-06-01
The presence of an accurate dataset is the key requirement for successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets were developed in recent years and gave rise to many powerful algorithms. However, most of the datasets include only images captured in the visible spectrum. This paper is focused on the creation of a multispectral optical flow dataset with an accurate ground truth. The generation of an accurate ground truth optical flow is a rather complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modelling, or laser scanning. Such techniques either work only with a synthetic optical flow or provide only a sparse ground truth optical flow. In this paper a new photogrammetric method for generation of an accurate ground truth optical flow is proposed. The method combines the accuracy and density of synthetic optical flow datasets with the flexibility of laser-scanning-based techniques. A multispectral dataset including various image sequences was generated using the developed method. The dataset is freely available on the accompanying web site.
Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A
2018-01-01
The choice of reference for the electroencephalogram (EEG) is a long-standing unsolved issue resulting in inconsistent usage and endless debate. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes the prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated within this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Artificial EEGs generated with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. The analysis also reveals that realistic volume conductor models improve the performance of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting-state EEGs, rREST consistently yields the lowest GCV. 
This study provides a novel perspective to the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance.
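For concreteness, the plain average reference (AR) step can be sketched as below. The regularized variants (rAR, rREST) additionally require the noise-to-signal variance ratio and, for REST, a volume-conductor lead field, both of which are omitted from this toy sketch:

```python
def average_reference(eeg):
    """Re-reference an EEG sample matrix to the average reference (AR):
    subtract the across-channel mean from every channel at each time point,
    so the channel sum is zero at every sample.
    eeg: list of channels, each a list of samples (rows = channels)."""
    n_ch = len(eeg)
    n_t = len(eeg[0])
    means = [sum(ch[t] for ch in eeg) / n_ch for t in range(n_t)]
    return [[ch[t] - means[t] for t in range(n_t)] for ch in eeg]
```

In the paper's framework this corresponds to the non-informative-prior solution; REST replaces the simple mean with a model-based projection derived from the lead field.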
Causal Video Object Segmentation From Persistence of Occlusions
2015-05-01
Precision, recall, and F-measure are reported on the ground truth annotations converted to binary masks. Note we cannot evaluate "number of...to lack of occlusions.
A Supervised Approach to Windowing Detection on Dynamic Networks
2017-07-01
Benjamin Fish, University of Illinois at Chicago. ...Using this framework, we introduce windowing algorithms that take a supervised approach: they leverage ground truth on training data to find a good...windowing of the test data. We compare the supervised approach to previous approaches and several baselines on real data.
NASA Ground-Truthing Capabilities Demonstrated
NASA Technical Reports Server (NTRS)
Lopez, Isaac; Seibert, Marc A.
2004-01-01
NASA Research and Education Network (NREN) ground truthing is a method of verifying the scientific validity of satellite images and clarifying irregularities in the imagery. Ground-truthed imagery can be used to locate geological compositions of interest for a given area. On Mars, astronaut scientists could ground truth satellite imagery from the planet surface and then pinpoint optimum areas to explore. These astronauts would be able to ground truth imagery, get results back, and use the results during extravehicular activity without returning to Earth to process the data from the mission. NASA's first ground-truthing experiment, performed on June 25 in the Utah desert, demonstrated the ability to extend powerful computing resources to remote locations. Designed by Dr. Richard Beck of the Department of Geography at the University of Cincinnati, who is serving as the lead field scientist, and assisted by Dr. Robert Vincent of Bowling Green State University, the demonstration also involved researchers from the NASA Glenn Research Center and the NASA Ames Research Center, who worked with the university field scientists to design, perform, and analyze results of the experiment. As shown, real-time Hyperion satellite imagery (data) is sent to a mass storage facility, while scientists at a remote (Utah) site upload ground spectra (data) to a second mass storage facility. The grid pulls data from both mass storage facilities and performs up to 64 simultaneous band ratio conversions on the data. Moments later, the results from the grid are accessed by local scientists and sent directly to the remote science team. The results are used by the remote science team to locate and explore new critical compositions of interest. The process can be repeated as required to continue to validate the data set or to converge on alternate geophysical areas of interest.
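A band ratio conversion of the kind run on the grid can be sketched minimally. The helper below is hypothetical, not NREN's actual pipeline; band ratios highlight spectral contrast between two bands (e.g., for mineral discrimination) while suppressing overall illumination differences:

```python
def band_ratio(band_a, band_b, eps=1e-9):
    """Pixelwise ratio of two co-registered spectral bands.
    band_a, band_b: equal-length lists of pixel radiances for one scene.
    eps guards against division by zero in dark pixels."""
    return [a / (b + eps) for a, b in zip(band_a, band_b)]
```

Running 64 such conversions simultaneously, as described above, is embarrassingly parallel: each ratio touches only its own pair of bands.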
Morris, Alan; Burgon, Nathan; McGann, Christopher; MacLeod, Robert; Cates, Joshua
2013-01-01
Radiofrequency ablation is a promising procedure for treating atrial fibrillation (AF) that relies on accurate lesion delivery in the left atrial (LA) wall for success. Late Gadolinium Enhancement MRI (LGE MRI) at three months post-ablation has proven effective for noninvasive assessment of the location and extent of scar formation, which are important factors for predicting patient outcome and planning of redo ablation procedures. We have developed an algorithm for automatic classification in LGE MRI of scar tissue in the LA wall and have evaluated accuracy and consistency compared to manual scar classifications by expert observers. Our approach clusters voxels based on normalized intensity and was chosen through a systematic comparison of the performance of multivariate clustering on many combinations of image texture. Algorithm performance was determined by overlap with ground truth, using multiple overlap measures, and the accuracy of the estimation of the total amount of scar in the LA. Ground truth was determined using the STAPLE algorithm, which produces a probabilistic estimate of the true scar classification from multiple expert manual segmentations. Evaluation of the ground truth data set was based on both inter- and intra-observer agreement, with variation among expert classifiers indicating the difficulty of scar classification for a given a dataset. Our proposed automatic scar classification algorithm performs well for both scar localization and estimation of scar volume: for ground truth datasets considered easy, variability from the ground truth was low; for those considered difficult, variability from ground truth was on par with the variability across experts. PMID:24236224
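One of the overlap measures referred to above, the Dice coefficient, can be sketched as follows (an illustrative helper over flattened binary masks, not the study's evaluation code):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks given as flat 0/1 sequences:
    twice the intersection size divided by the sum of the mask sizes.
    Returns 1.0 for two empty masks by convention."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(1 for a in mask_a if a) + sum(1 for b in mask_b if b)
    return 2.0 * inter / size if size else 1.0
```

Comparing an automatic scar mask against a STAPLE-derived consensus mask with this measure gives a single agreement score in [0, 1].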
NASA Astrophysics Data System (ADS)
Perry, Daniel; Morris, Alan; Burgon, Nathan; McGann, Christopher; MacLeod, Robert; Cates, Joshua
2012-03-01
NASA Technical Reports Server (NTRS)
Freeman, Anthony; Dubois, Pascale; Leberl, Franz; Norikane, L.; Way, Jobea
1991-01-01
Viewgraphs on Geographic Information System for fusion and analysis of high-resolution remote sensing and ground truth data are presented. Topics covered include: scientific objectives; schedule; and Geographic Information System.
Methodology for Calculating Latency of GPS Probe Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhongxiang; Hamedi, Masoud; Young, Stanley
Crowdsourced GPS probe data, such as travel time on changeable-message signs and incident detection, have been gaining popularity in recent years as a source of real-time traffic information for driver operations and transportation systems management and operations. Efforts have been made to evaluate the quality of such data from different perspectives. Although such crowdsourced data are already in widespread use in many states, particularly the high-traffic areas on the Eastern seaboard, concerns about latency - the time between traffic being perturbed as a result of an incident and reflection of the disturbance in the outsourced data feed - have escalated in importance. Latency is critical for the accuracy of real-time operations, emergency response, and traveler information systems. This paper offers a methodology for measuring probe data latency relative to a selected reference source. Although Bluetooth reidentification data are used as the reference source, the methodology can be applied to any other ground truth data source of choice. The core of the methodology is an algorithm for maximum pattern matching that works with three fitness objectives. To test the methodology, sample field reference data were collected on multiple freeway segments for a 2-week period by using portable Bluetooth sensors as ground truth. Equivalent GPS probe data were obtained from a private vendor, and their latency was evaluated. Latency at different times of the day, the impact of the road segmentation scheme on latency, and the sensitivity of the latency to both speed-slowdown and recovery-from-slowdown episodes are also discussed.
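The latency-measurement idea can be illustrated with a deliberately simplified single-objective alignment: slide the probe series against the reference series and report the shift that minimizes mean absolute error. The paper's algorithm is maximum pattern matching with three fitness objectives; this one-objective variant is an assumption for illustration only:

```python
def estimate_latency(reference, probe, max_shift):
    """Estimate probe-data latency as the shift (in samples) that best
    aligns probe speeds to reference (e.g., Bluetooth) speeds.
    A result of k means the probe series lags the reference by k samples."""
    best_shift, best_err = 0, float("inf")
    for shift in range(0, max_shift + 1):
        pairs = list(zip(reference, probe[shift:]))
        if not pairs:
            continue
        err = sum(abs(r - p) for r, p in pairs) / len(pairs)
        if err < best_err:
            best_shift, best_err = shift, err
    return best_shift
```

With real speed data one would align per road segment and per slowdown episode, since latency during a slowdown can differ from latency during recovery.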
NASA Technical Reports Server (NTRS)
Anikouchine, W. A. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Radiance profiles drawn along cruise tracks have been examined for use in correlating digital radiance levels with ground truth data. Preliminary examination results are encouraging. Adding weighted levels from the 4 MSS bands appears to enhance specular surface reflections while rendering sensor noise white. Comparing each band signature to the added specular signature ought to enhance non-specular effects caused by ocean turbidity. Preliminary examination of radiance profiles and ground truth turbidity measurements revealed substantial correlation.
A Ground Truthing Method for AVIRIS Overflights Using Canopy Absorption Spectra
NASA Technical Reports Server (NTRS)
Gamon, John A.; Serrano, Lydia; Roberts, Dar A.; Ustin, Susan L.
1996-01-01
Remote sensing for ecological field studies requires ground truthing for accurate interpretation of remote imagery. However, traditional vegetation sampling methods are time consuming and hard to relate to the scale of an AVIRIS scene. The large errors associated with manual field sampling, the contrasting formats of remote and ground data, and problems with coregistration of field sites with AVIRIS pixels can lead to difficulties in interpreting AVIRIS data. As part of a larger study of fire risk in the Santa Monica Mountains of southern California, we explored a ground-based optical method of sampling vegetation using spectrometers mounted both above and below vegetation canopies. The goal was to use optical methods to provide a rapid, consistent, and objective means of "ground truthing" that could be related both to AVIRIS imagery and to conventional ground sampling (e.g., plot harvests and pigment assays).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Yang, Y; Young, L
Purpose: Radiomic texture features derived from oncologic PET have recently come under intense investigation within the context of patient stratification and treatment outcome prediction in a variety of cancer types; however, their validity has not yet been examined. This work aims to validate radiomic PET texture metrics through the use of realistic simulations in a ground truth setting. Methods: Simulation of FDG-PET was conducted by applying the Zubal phantom as an attenuation map to the SimSET software package, which employs Monte Carlo techniques to model the physical process of emission imaging. A total of 15 irregularly-shaped lesions featuring heterogeneous activity distribution were simulated. For each simulated lesion, 28 texture features in relation to the intensity histograms (GLIH), grey-level co-occurrence matrices (GLCOM), neighborhood difference matrices (GLNDM), and zone size matrices (GLZSM) were evaluated and compared with their respective values extracted from the ground truth activity map. Results: In reference to the values from the ground truth images, texture parameters appearing on the simulated data varied over a range of 0.73-3026.2% for GLIH-based, 0.02-100.1% for GLCOM-based, 1.11-173.8% for GLNDM-based, and 0.35-66.3% for GLZSM-based features. For the majority of the examined texture metrics (16/28), the values on the simulated data differed significantly from those from the ground truth images (P-value ranges from <0.0001 to 0.04). Features not exhibiting significant differences comprised GLIH-based standard deviation; GLCO-based energy and entropy; GLND-based coarseness and contrast; and GLZS-based low gray-level zone emphasis, high gray-level zone emphasis, short zone low gray-level emphasis, long zone low gray-level emphasis, long zone high gray-level emphasis, and zone size nonuniformity. Conclusion: The extent to which PET imaging disturbs texture appearance is feature-dependent and can be substantial. 
It is thus advised that the use of PET texture parameters for predictive and prognostic measurements in the oncologic setting await further systematic and critical evaluation.
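As an illustration of the GLCOM-derived metrics discussed above, the sketch below computes energy, contrast, and entropy for one horizontal neighbour offset. It is a toy example on a 2D list of integer grey levels, not the study's feature pipeline (which uses multiple offsets and normalizations):

```python
from collections import Counter
import math

def glcm_features(img):
    """Grey-level co-occurrence features for the dx=1 neighbour offset.
    img: 2D list of integer grey levels.
    Returns (energy, contrast, entropy) of the co-occurrence distribution."""
    pairs = Counter()
    for row in img:
        for a, b in zip(row, row[1:]):          # count horizontal neighbours
            pairs[(a, b)] += 1
    total = sum(pairs.values())
    probs = {k: v / total for k, v in pairs.items()}
    energy = sum(p * p for p in probs.values())
    contrast = sum(p * (i - j) ** 2 for (i, j), p in probs.items())
    entropy = -sum(p * math.log2(p) for p in probs.values())
    return energy, contrast, entropy
```

A perfectly uniform region yields maximal energy and zero contrast and entropy, which is why scanner blurring and noise can shift such features away from their ground-truth values.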
Tian, Jing; Varga, Boglarka; Tatrai, Erika; Fanni, Palya; Somfai, Gabor Mark; Smiddy, William E.
2016-01-01
Over the past two decades a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexities of the majority of widely available retinal features observed in clinical settings. In addition, there does not exist an appropriate OCT dataset with ground truth that reflects the realities of everyday retinal features observed in clinical settings. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, the validation of segmentation algorithms has usually been performed by comparison against manual labelings from each study, and there has been a lack of a common ground truth. Therefore, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research-oriented tools for automated segmentation of the retinal tissue on OCT images. It also evaluates and compares the performance of these software tools with a common ground truth. PMID:27159849
Karnowski, Thomas P; Govindasamy, V; Tobin, Kenneth W; Chaum, Edward; Abramoff, M D
2008-01-01
In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data were used to create post-processing filters to separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance and actual lesions were achieved on two data sets of 86 images and 1296 images.
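The reported evaluation can be reproduced in form with a small helper computing sensitivity and specificity over blob labels (an illustrative sketch, not the authors' code; 1 = true lesion, 0 = nuisance blob):

```python
def sensitivity_specificity(labels, predictions):
    """Sensitivity and specificity of a binary blob classifier.
    labels, predictions: equal-length sequences of 0/1 values."""
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 0)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec
```

Since the method over-segments, specificity here measures how well the post-processing filters reject nuisance blobs while sensitivity tracks retained true lesions.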
As-built design specification for segment map (Sgmap) program
NASA Technical Reports Server (NTRS)
Tompkins, M. A. (Principal Investigator)
1981-01-01
The segment map program (SGMAP), which is part of the CLASFYT package, is described in detail. This program is designed to output symbolic maps or numerical dumps from LANDSAT cluster/classification files or aircraft ground truth/processed ground truth files which are in 'universal' format.
Investigations using data from LANDSAT 2. [Bangladesh
NASA Technical Reports Server (NTRS)
Hossain, A. (Principal Investigator)
1978-01-01
The author has identified the following significant results. Ground truth data collected in coastal areas confirm the sedimentation base line at the five-fathom depth and less. Forestry ground truth at Supati in the Sundarbans was found to conform with stratifications in aerial photos and in some satellite imagery.
Satellite markers: a simple method for ground truth car pose on stereo video
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco
2018-04-01
Prediction of the future location of other cars is a must for advanced safety systems. The remote estimation of car pose, and particularly its heading angle, is key to predicting its future location. Stereo vision systems make it possible to obtain the 3D information of a scene. Ground truth in this specific context is associated with referential information about the depth, shape, and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is a method to generate ground truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system while it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for 3D ground truth generation. In our case study, we focus on accurate car heading angle estimation of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, as well as the instantaneous spatial orientation of each camera at frame level.
Comparison of thyroid segmentation techniques for 3D ultrasound
NASA Astrophysics Data System (ADS)
Wunderling, T.; Golla, B.; Poudel, P.; Arens, C.; Friebe, M.; Hansen, C.
2017-02-01
The segmentation of the thyroid in ultrasound images is a field of active research. The thyroid is a gland of the endocrine system and regulates several body functions. Measuring the volume of the thyroid is regular practice in diagnosing pathological changes. In this work, we compare three approaches for semi-automatic thyroid segmentation in freehand-tracked three-dimensional ultrasound images. The approaches are based on level set, graph cut, and feature classification. For validation, sixteen 3D ultrasound records were created with ground truth segmentations, which we make publicly available. The properties analyzed are the Dice coefficient when compared against the ground truth reference and the amount of required interaction. Our results show that in terms of Dice coefficient, all algorithms perform similarly. For interaction, however, each algorithm has advantages over the others. The graph cut-based approach gives the practitioner direct influence on the final segmentation. Level set and feature classifier require less interaction, but offer less control over the result. All three compared methods show promising results for future work and provide several possible extensions.
NASA Astrophysics Data System (ADS)
Bonaccorsi, R.; Stoker, C. R.; Marte Project Science Team
2007-03-01
The Mars Analog Rio Tinto Experiment (MARTE) performed a simulation of a Mars drilling experiment at the Rio Tinto (Spain). Ground-truth and contamination issues during the distribution of bulk organics and their CN isotopic composition in hematite and go
Detailed analysis of CAMS procedures for phase 3 using ground truth inventories
NASA Technical Reports Server (NTRS)
Carnes, J. G.
1979-01-01
The results of a study of Procedure 1 as used during LACIE Phase 3 are presented. The study was performed by comparing the Procedure 1 classification results with digitized ground-truth inventories. The proportion estimation accuracy, dot labeling accuracy, and clustering effectiveness are discussed.
Ground Truth Studies. Teacher Handbook. Second Edition.
ERIC Educational Resources Information Center
Boyce, Jesse; And Others
Ground Truth Studies is an interdisciplinary activity-based program that draws on the broad range of sciences that make up the study of global change and the complementary technology of remote sensing. It integrates local environmental issues with global change topics, such as the greenhouse effect, loss of biological diversity, and ozone…
The NRL 2011 Airborne Sea-Ice Thickness Campaign
NASA Astrophysics Data System (ADS)
Brozena, J. M.; Gardner, J. M.; Liang, R.; Ball, D.; Richter-Menge, J.
2011-12-01
In March of 2011, the US Naval Research Laboratory (NRL) performed a study focused on the estimation of sea-ice thickness from airborne radar, laser, and photogrammetric sensors. The study was funded by ONR to take advantage of the Navy's ICEX2011 ice-camp/submarine exercise, and to serve as a lead-in year for NRL's five-year basic research program on the measurement and modeling of sea ice scheduled to take place from 2012-2017. Researchers from the Army Cold Regions Research and Engineering Laboratory (CRREL) and NRL worked with the Navy Arctic Submarine Lab (ASL) to emplace a 9-km-long ground-truth line near the ice camp (see Richter-Menge et al., this session) along which ice and snow thickness were directly measured. Additionally, US Navy submarines collected ice draft measurements under the ground-truth line. Repeat passes directly over the ground-truth line were flown, and a grid surrounding the line was also flown to collect altimeter, LiDAR, and photogrammetry data. Five CRYOSAT-2 satellite tracks were underflown as well, coincident with satellite passage. Estimates of sea-ice thickness are calculated assuming local hydrostatic balance, and require the densities of water, ice, and snow, the snow depth, and the freeboard (defined as the elevation of sea ice, plus accumulated snow, above local sea level). Snow thickness is estimated from the difference between LiDAR and radar altimeter profiles, the latter of which is assumed to penetrate any snow cover. The concepts we used to estimate ice thickness are similar to those employed in NASA ICEBRIDGE sea-ice thickness estimation. Airborne sensors used for our experiment were a Riegl Q-560 scanning topographic LiDAR, a pulse-limited (2 ns), 10 GHz radar altimeter, and an Applanix DSS-439 digital photogrammetric camera (for lead identification). Flights were conducted on a Twin Otter aircraft from Pt. Barrow, AK, and averaged ~5 hours in duration.
It is challenging to directly compare results from the swath LiDAR with the pulse-limited radar altimeter that has a footprint that varies from a few meters to a few tens of meters depending on altitude and roughness of the reflective surface. Intercalibration of the two instruments was accomplished at leads in the ice and by multiple over-flights of four radar corner-cubes set ~ 2 m above the snow along the ground-truth line. Direct comparison of successive flights of the ground-truth line to flights done in a grid pattern over and adjacent to the line was complicated by the ~ 20-30 m drift of the ice-floe between successive flight-lines. This rapid ice movement required the laser and radar data be translated into an ice-fixed, rather than a geographic reference frame. This was facilitated by geodetic GPS receiver measurements at the ice-camp and Pt. Barrow. The NRL data set, in combination with the ground-truth line and submarine upward-looking sonar data, will aid in understanding the error budgets of our systems, the ICEBRIDGE airborne measurements (also flown over the ground-truth line), and the CRYOSAT-2 data over a wide range of ice types.
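The hydrostatic-balance calculation described above can be sketched as follows; the density values are nominal, literature-style assumptions, not the values NRL used:

```python
def ice_thickness(freeboard_m, snow_depth_m,
                  rho_water=1024.0, rho_ice=915.0, rho_snow=320.0):
    """Sea-ice thickness from local hydrostatic balance.

    freeboard_m:  elevation of the ice + snow surface above local sea level
                  (from the LiDAR profile)
    snow_depth_m: LiDAR minus radar-altimeter profile difference
    Densities are in kg/m^3 and are assumed nominal values.
    """
    return ((rho_water * freeboard_m
             - (rho_water - rho_snow) * snow_depth_m)
            / (rho_water - rho_ice))

# Hypothetical profile point: 0.45 m total freeboard, 0.20 m snow cover
print(round(ice_thickness(0.45, 0.20), 2))  # 2.94
```

The small density contrast between ice and water in the denominator is what makes the estimate so sensitive to freeboard and snow-depth errors, hence the emphasis on ground-truth lines and sensor intercalibration.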
Application-Driven No-Reference Quality Assessment for Dermoscopy Images With Multiple Distortions.
Xie, Fengying; Lu, Yanan; Bovik, Alan C; Jiang, Zhiguo; Meng, Rusong
2016-06-01
Dermoscopy images often suffer from blur and uneven-illumination distortions that occur during acquisition, which can adversely influence subsequent automatic image analysis of potential lesion objects. The purpose of this paper is to develop an algorithm that can automatically assess the quality of dermoscopy images. Such an algorithm could be used to direct image recapture or correction. We describe an application-driven no-reference image quality assessment (IQA) model for dermoscopy images affected by possibly multiple distortions. For this purpose, we created a multiple-distortion dataset of dermoscopy images impaired by varying degrees of blur and uneven illumination. The basis of this model is two single-distortion IQA metrics that are sensitive to blur and uneven illumination, respectively. The outputs of these two metrics are combined to predict the quality of multiply distorted dermoscopy images using a fuzzy neural network. Unlike traditional IQA algorithms, which use human subjective scores as ground truth, here the ground truth is driven by the application and generated according to the degree of influence of the distortions on lesion analysis. The experimental results reveal that the proposed model delivers accurate and stable quality predictions for dermoscopy images impaired by multiple distortions. The proposed model is effective for quality assessment of multiply distorted dermoscopy images. An application-driven concept for IQA is introduced, and at the same time a solution framework for the IQA of multiple distortions is proposed.
NASA Astrophysics Data System (ADS)
Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.
2018-04-01
In this work, we report a novel way of generating a ground-truth dataset for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating the new testing point clouds, a voting-based approach is applied to the labeled points within voxels at multiple resolutions, in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds from different sensors of the same scene, directly by considering the labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
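The voting step can be sketched with a uniform, single-resolution voxel grid; the coordinates and labels below are invented for illustration, and the paper's octree handles multiple resolutions rather than the single voxel size used here:

```python
from collections import Counter

def voxel_index(point, voxel_size):
    """Map a 3D point (x, y, z) to its integer voxel-grid index."""
    return tuple(int(c // voxel_size) for c in point)

def label_voxels(ref_points, ref_labels, voxel_size):
    """Assign each voxel the majority label of the reference points inside it."""
    votes = {}
    for p, lab in zip(ref_points, ref_labels):
        votes.setdefault(voxel_index(p, voxel_size), Counter())[lab] += 1
    return {v: c.most_common(1)[0][0] for v, c in votes.items()}

def annotate(points, voxel_labels, voxel_size):
    """Transfer voxel labels to a new, unlabeled point cloud
    (None for points that fall in unlabeled space)."""
    return [voxel_labels.get(voxel_index(p, voxel_size)) for p in points]

# Toy reference cloud: three "ground" points in one voxel, one "building" point
ref_points = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (0.9, 0.9, 0.9), (1.5, 0.1, 0.1)]
ref_labels = ["ground", "ground", "ground", "building"]
voxel_labels = label_voxels(ref_points, ref_labels, voxel_size=1.0)

# A new cloud from another sensor inherits labels from the shared 3D space:
new_cloud = [(0.5, 0.5, 0.5), (1.2, 0.3, 0.3)]
print(annotate(new_cloud, voxel_labels, voxel_size=1.0))
```

Because the labels live in the voxelized 3D space rather than on the points themselves, any registered point cloud of the same scene can be annotated without re-labeling.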
Crowd-sourced data collection to support automatic classification of building footprint data
NASA Astrophysics Data System (ADS)
Hecht, Robert; Kalla, Matthias; Krüger, Tobias
2018-05-01
Human settlements are mainly formed by buildings with their different characteristics and usage. Despite the importance of buildings for the economy and society, complete regional or even national figures of the entire building stock and its spatial distribution are still hardly available. Available digital topographic data sets created by National Mapping Agencies or mapped voluntarily through a crowd via Volunteered Geographic Information (VGI) platforms (e.g. OpenStreetMap) contain building footprint information but often lack additional information on building type, usage, age or number of floors. For this reason, predictive modeling is becoming increasingly important in this context. The capabilities of machine learning allow for the prediction of building types and other building characteristics and thus, the efficient classification and description of the entire building stock of cities and regions. However, such data-driven approaches always require a sufficient amount of ground truth (reference) information for training and validation. The collection of reference data is usually cost-intensive and time-consuming. Experiences from other disciplines have shown that crowdsourcing offers the possibility to support the process of obtaining ground truth data. Therefore, this paper presents the results of an experimental study aiming at assessing the accuracy of non-expert annotations on street view images collected from an internet crowd. The findings provide the basis for a future integration of a crowdsourcing component into the process of land use mapping, particularly the automatic building classification.
NASA Technical Reports Server (NTRS)
Joyce, A. T.
1978-01-01
Procedures for gathering ground truth information for a supervised approach to a computer-implemented land cover classification of LANDSAT acquired multispectral scanner data are provided in a step by step manner. Criteria for determining size, number, uniformity, and predominant land cover of training sample sites are established. Suggestions are made for the organization and orientation of field team personnel, the procedures used in the field, and the format of the forms to be used. Estimates are made of the probable expenditures in time and costs. Examples of ground truth forms and definitions and criteria of major land cover categories are provided in appendixes.
Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A.
2018-01-01
The choice of reference for the electroencephalogram (EEG) is a long-standing unsolved issue resulting in inconsistent usage and endless debate. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity and (b) determination of the reference as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes a prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated within this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Artificial EEGs generated with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. They also reveal that realistic volume conductor models improve the performance of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that selecting the regularization parameter with Generalized Cross-Validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting-state EEGs, rREST consistently yields the lowest GCV.
This study provides a novel perspective to the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance. PMID:29780302
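Of the estimators compared, the plain average reference is simple enough to sketch; the regularized variants (rAR, rREST) additionally require the noise-to-signal variance ratio and a head model, which are not reproduced here. The recording below is a hypothetical 3-channel, 2-sample example:

```python
import numpy as np

def average_reference(eeg):
    """Re-reference EEG data (channels x samples) to the average reference (AR):
    subtract the instantaneous mean across channels from every channel."""
    return eeg - eeg.mean(axis=0, keepdims=True)

# Hypothetical 3-channel, 2-sample recording:
eeg = np.array([[1.0, 2.0],
                [3.0, 4.0],
                [5.0, 6.0]])
reref = average_reference(eeg)  # channel mean is now zero at every sample
```

In the paper's framework this corresponds to the biophysically non-informative prior; REST would instead project the data through a lead-field-based generative model.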
NASA Technical Reports Server (NTRS)
1974-01-01
Varied small scale imagery was used for detecting and assessing damage by the southern pine beetle. The usefulness of ERTS scanner imagery for vegetation classification and pine beetle damage detection and assessment is evaluated. Ground truth acquisition for forest identification using multispectral aerial photographs is reviewed.
NASA Astrophysics Data System (ADS)
Salamuniccar, G.; Loncaric, S.
2008-03-01
The Catalogue from our previous work was merged with the data of Barlow, Rodionova, Boyce, and Kuzmin. The resulting ground-truth catalogue of 57,633 craters was registered, using MOLA data, to the THEMIS-DIR, MDIM, and MOC data sets.
Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.
2017-01-01
Binocular stereopsis is the ability of a visual system, belonging to a living being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO (GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity). The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382
Attractor learning in synchronized chaotic systems in the presence of unresolved scales
NASA Astrophysics Data System (ADS)
Wiegerinck, W.; Selten, F. M.
2017-12-01
Recently, supermodels consisting of an ensemble of interacting models, synchronizing on a common solution, have been proposed as an alternative to the common non-interactive multi-model ensembles in order to improve climate predictions. The connection terms in the interacting ensemble are to be optimized based on the data. The supermodel approach has been successfully demonstrated in a number of simulation experiments with an assumed ground truth and a set of good, but imperfect models. The supermodels were optimized with respect to their short-term prediction error. Nevertheless, they produced long-term climatological behavior that was close to the long-term behavior of the assumed ground truth, even in cases where the long-term behavior of the imperfect models was very different. In these supermodel experiments, however, a perfect model class scenario was assumed, in which the ground truth and imperfect models belong to the same model class and only differ in parameter setting. In this paper, we consider the imperfect model class scenario, in which the ground truth model class is more complex than the model class of imperfect models due to unresolved scales. We perform two supermodel experiments in two toy problems. The first one consists of a chaotically driven Lorenz 63 oscillator ground truth and two Lorenz 63 oscillators with constant forcings as imperfect models. The second one is more realistic and consists of a global atmosphere model as ground truth and imperfect models that have perturbed parameters and reduced spatial resolution. In both problems, we find that supermodel optimization with respect to short-term prediction error can lead to a long-term climatological behavior that is worse than that of the imperfect models. However, we also show that attractor learning can remedy this problem, leading to supermodels with long-term behavior superior to the imperfect models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. BEGNAUD; ET AL
2000-09-01
Obtaining accurate seismic event locations is one of the most important goals for monitoring detonations of underground nuclear tests. This is a particular challenge at small magnitudes, where the number of recording stations may be less than 20. Although many different procedures are being developed to improve seismic location, most suffer from inadequate testing against accurate information about a seismic event. Events with well-defined attributes, such as latitude, longitude, depth, and origin time, are commonly referred to as ground truth (GT). Ground truth comes in many forms and with many different levels of accuracy. Interferometric Synthetic Aperture Radar (InSAR) can provide independent and accurate information (ground truth) regarding ground surface deformation and/or rupture. Relating surface deformation to seismic events is trivial when events are large and create a significant surface rupture, such as for the Mw = 7.5 event that occurred in the remote northern region of the Tibetan plateau in 1997. The event, a vertical strike-slip event, appeared anomalous due to the lack of large aftershocks, and had an associated surface rupture of over 180 km that was identified and modeled using InSAR. The east-west orientation of the fault rupture provides excellent ground truth for latitude, but is of limited use for longitude. However, a secondary rupture occurred 50 km south of the main-shock rupture trace that can provide ground truth with accuracy within 5 km. The smaller, 5-km-long secondary rupture presents a challenge for relating the deformation to a seismic event. The rupture is believed to have a thrust mechanism; the dip of the fault allows for some separation between the secondary rupture trace and its associated event epicenter, although not as much as is currently observed from catalog locations. Few events within the time period of the InSAR analysis are candidates for the secondary rupture.
Of these, we have identified six possible secondary rupture events (mb range = 3.7-4.8, with two magnitudes not reported), based on synthetic tests and residual analysis. All of the candidate events are scattered about the main and secondary ruptures. A Joint Hypocenter Determination (JHD) approach applied to the aftershocks using global picks was not able to identify the secondary event. We added regional data and used propagation-path corrections to reduce scatter and remove the 20-km bias seen in the main-shock location. After preliminary analysis using several different velocity models, none of the candidate events relocated on the surface trace of the secondary rupture. However, one event (mb not reported) moved from a starting distance of ~106 km to a relocated distance of ~28 km from the secondary rupture, the only candidate event to relocate in relative proximity to the secondary rupture.
The Need for Careful Data Collection for Pattern Recognition in Digital Pathology.
Marée, Raphaël
2017-01-01
Effective pattern recognition requires carefully designed ground-truth datasets. In this technical note, we first summarize potential data collection issues in digital pathology and then propose guidelines to build more realistic ground-truth datasets and to control their quality. We hope our comments will foster the effective application of pattern recognition approaches in digital pathology.
USDA-ARS?s Scientific Manuscript database
Successful development of approaches to quantify impacts of diverse landuse and associated agricultural management practices on ecosystem services is frequently limited by lack of historical and contemporary landuse data. We hypothesized that recent ground truth data could be used to extrapolate pre...
Leibniz on the metaphysical foundation of physics
NASA Astrophysics Data System (ADS)
Temple, Daniel R.
This thesis examines how and why Leibniz felt that physics must be grounded in metaphysics. I argue that one of the strongest motivations Leibniz had for attempting to ground physics in metaphysics was his concern over the problem of induction. Even in his early writings, Leibniz was well aware of the problem of induction and how it threatened the very possibility of physics. Both his early and later theories of truth are geared toward solving this deep problem in the philosophy of science. In his early theory of truth, all truths are ultimately grounded in (but not necessarily reducible to) an identity. Hence, all truths are ultimately based in logic. Consequently, the problem of induction is seemingly solved, since everything that happens, happens with the force of logical necessity. Unfortunately, this theory is incompatible with Leibniz's theory of possible worlds and hence jeopardizes the liberty of God. In his later theory of truth, Leibniz tries to overcome this weakness by acknowledging truths that are grounded in the free but morally necessary character of God's actions. Since God's benevolence is responsible for the actualization of this world, this world must possess rational laws. Furthermore, since God's rationality ensures that everything obeys the principle of sufficient reason, we can use this principle to determine the fundamental laws of the universe. Leibniz himself attempts to derive these laws using this principle. Kant attempted to continue this work of securing the possibility of science, and the problems he encountered helped to shape his critical philosophy. I conclude with a comparative analysis of Leibniz and Kant on the foundations of physics.
Unmanned aerial vehicle-based structure from motion biomass inventory estimates
NASA Astrophysics Data System (ADS)
Bedell, Emily; Leslie, Monique; Fankhauser, Katie; Burnett, Jonathan; Wing, Michael G.; Thomas, Evan A.
2017-04-01
Riparian vegetation restoration efforts require cost-effective, accurate, and replicable impact assessments. We present a method that uses an unmanned aerial vehicle (UAV) equipped with a GoPro digital camera to collect photogrammetric data over a 0.8-ha riparian restoration. A three-dimensional point cloud was created from the photos using "structure from motion" techniques, then analyzed and compared to traditional, ground-based monitoring techniques. Ground-truth data were collected on 6.3% of the study site and averaged across the entire site to report stem densities (stems/ha) in three height classes. The project site was divided into four analysis sections, one used to derive parameters for the UAV data analysis and the remaining three reserved for method validation. Comparing the ground-truth data to the UAV-generated data produced an overall error of 21.6% and indicated an R² value of 0.98. A Bland-Altman analysis indicated a 95% probability that the UAV stems/section result will be within 61 stems/section of the ground-truth data. The ground-truth data are reported with an 80% confidence interval of ±1032 stems/ha; thus, the UAV was able to estimate stems well within this confidence interval.
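The Bland-Altman analysis used above can be sketched as follows; the per-section stem counts are invented for illustration, not the study's data:

```python
from statistics import mean, stdev

def bland_altman_limits(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical stem counts per section (UAV estimate vs. ground truth):
uav    = [410, 385, 402, 390]
ground = [400, 390, 395, 400]
bias, (lo, hi) = bland_altman_limits(uav, ground)
```

If the limits of agreement are narrower than the acceptable measurement error (here, the ground-truth confidence interval), the two methods can be considered interchangeable for the application.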
Aeolian dunes as ground truth for atmospheric modeling on Mars
Hayward, R.K.; Titus, T.N.; Michaels, T.I.; Fenton, L.K.; Colaprete, A.; Christensen, P.R.
2009-01-01
Martian aeolian dunes preserve a record of atmosphere/surface interaction on a variety of scales, serving as ground truth for both Global Climate Models (GCMs) and mesoscale climate models, such as the Mars Regional Atmospheric Modeling System (MRAMS). We hypothesize that the location of dune fields, expressed globally by geographic distribution and locally by dune centroid azimuth (DCA), may record the long-term integration of atmospheric activity across a broad area, preserving GCM-scale atmospheric trends. In contrast, individual dune morphology, as expressed in slipface orientation (SF), may be more sensitive to localized variations in circulation, preserving topographically controlled mesoscale trends. We test this hypothesis by comparing the geographic distribution, DCA, and SF of dunes with output from the Ames Mars GCM and, at a local study site, with output from MRAMS. When compared to the GCM: 1) dunes generally lie adjacent to areas with strongest winds, 2) DCA agrees fairly well with GCM modeled wind directions in smooth-floored craters, and 3) SF does not agree well with GCM modeled wind directions. When compared to MRAMS modeled winds at our study site: 1) DCA generally coincides with the part of the crater where modeled mean winds are weak, and 2) SFs are consistent with some weak, topographically influenced modeled winds. We conclude that: 1) geographic distribution may be valuable as ground truth for GCMs, 2) DCA may be useful as ground truth for both GCM and mesoscale models, and 3) SF may be useful as ground truth for mesoscale models. Copyright 2009 by the American Geophysical Union.
A data set for evaluating the performance of multi-class multi-object video tracking
NASA Astrophysics Data System (ADS)
Chakraborty, Avishek; Stamatescu, Victor; Wong, Sebastien C.; Wigley, Grant; Kearney, David
2017-05-01
One of the challenges in evaluating multi-object video detection, tracking, and classification systems is having publicly available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically have only ground-truth track IDs, while classification video data sets have only ground-truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground-truth meta-data for the DARPA Neovision2 Tower data set that allows the evaluation of both tracking and classification. The ground-truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground-truth data also contain the original bounding-box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This provides: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking-system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publicly.
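Within the standard MOT framework mentioned above, the headline tracking-accuracy score can be sketched as follows; the totals are hypothetical, not Neovision2 results:

```python
def mota(num_gt, false_negatives, false_positives, id_switches):
    """Multiple Object Tracking Accuracy (CLEAR MOT): 1 minus the normalized
    sum of misses, false positives, and identity switches, where num_gt is
    the total number of ground-truth object instances over all frames."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt

# Hypothetical totals accumulated over one video sequence:
score = mota(num_gt=1000, false_negatives=40, false_positives=25, id_switches=5)
```

Because identity switches enter the sum, this score rewards exactly the property the enhanced ground truth enables checking: that a unique object ID persists through occlusions and field-of-view re-entries.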
Soil moisture ground truth: Steamboat Springs, Colorado, site and Walden, Colorado, site
NASA Technical Reports Server (NTRS)
Jones, E. B.
1976-01-01
Ground-truth data taken at Steamboat Springs and Walden, Colorado, in support of the NASA missions in these areas during the period March 8, 1976 through March 11, 1976 are presented. These include the following information: snow course data for Steamboat Springs and Walden, snow pit and snow quality data for Steamboat Springs, and a soil moisture report.
Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.
André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2011-01-01
Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground truth. When the available ground truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study we first present a tool to generate perceived-similarity ground truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate against the generated ground truth a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived-similarity ground truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual-word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.
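A minimal sketch of margin-based weight learning over signature differences, in the spirit of the adjusted metric described; the update rule, margin, and toy data are simplified assumptions, not the authors' exact cost function:

```python
def weighted_l1(w, x, y):
    """Weighted L1 distance between two visual-word signatures."""
    return sum(wi * abs(xi - yi) for wi, xi, yi in zip(w, x, y))

def learn_weights(pairs, dim, margin=1.0, lr=0.01, epochs=50):
    """Learn non-negative feature weights from (sig_a, sig_b, similar) pairs:
    pull similar pairs inside the margin, push dissimilar pairs outside it."""
    w = [1.0] * dim
    for _ in range(epochs):
        for x, y, similar in pairs:
            feat = [abs(xi - yi) for xi, yi in zip(x, y)]
            d = weighted_l1(w, x, y)
            if similar and d > margin:        # similar pair judged too far apart
                w = [max(0.0, wi - lr * fi) for wi, fi in zip(w, feat)]
            elif not similar and d < margin:  # dissimilar pair judged too close
                w = [wi + lr * fi for wi, fi in zip(w, feat)]
    return w

# Invented toy pairs: feature 0 varies between similar videos (noise),
# feature 1 separates dissimilar ones, so its weight should grow.
pairs = [((2.0, 0.0), (0.0, 0.0), True),
         ((0.0, 0.0), (0.0, 0.5), False)]
w = learn_weights(pairs, dim=2)
```

After training, the learned distance down-weights signature dimensions that fluctuate between videos experts judged similar, which is the intended effect of fitting to the perceived-similarity ground truth.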
NASA Technical Reports Server (NTRS)
Smith, R.
1975-01-01
Wallops Station accepted the tasks of providing ground truth to several ERTS investigators, operating a DCP repair depot, designing and building an airborne DCP Data Acquisition System, and providing aircraft underflight support for several other investigators. Additionally, the data bank is generally available for use by ERTS and other investigators that have a scientific interest in data pertaining to the Chesapeake Bay area. Working with DCS has provided a means of evaluating the system as a data collection device possibly applicable to ongoing Earth Resources Program activities in the Chesapeake Bay area as well as providing useful data and services to other ERTS investigators. The two areas of technical support provided by Wallops, ground truth stations and repair for DCPs, are briefly discussed.
Truth-Telling, Ritual Culture, and Latino College Graduates in the Anthropocene
ERIC Educational Resources Information Center
Gildersleeve, Ryan Evely
2017-01-01
This article seeks to trace the cartography of truth-telling through a posthumanist predicament of ritual culture in higher education and critical inquiry. Ritual culture in higher education such as graduation ceremony produces and reflects the realities of becoming subjects. These spaces are proliferating grounds for truth-telling and practical…
Coarse Scale In Situ Albedo Observations over Heterogeneous Land Surfaces and Validation Strategy
NASA Astrophysics Data System (ADS)
Xiao, Q.; Wu, X.; Wen, J.; BAI, J., Sr.
2017-12-01
To evaluate and improve the quality of coarse-pixel land surface albedo products, validation against ground measurements of albedo is crucial over spatially and temporally heterogeneous land surfaces. The performance of albedo validation depends on the quality of ground-based albedo measurements at a corresponding coarse-pixel scale, which can be conceptualized as the "truth" value of albedo at that scale. Wireless sensor network (WSN) technology provides access to continuous observations at the coarse-pixel scale. Taking albedo products as an example, this paper is dedicated to the validation of coarse-scale albedo products over heterogeneous surfaces based on WSN-observed data, aiming to narrow the uncertainty of results caused by the spatial-scaling mismatch between satellite and ground measurements over heterogeneous surfaces. The reference value of albedo at the coarse-pixel scale can be obtained through an upscaling transform function based on all of the observations for that pixel. We will work to further improve and develop new methods that are better able to account for the spatio-temporal characteristics of surface albedo in the future. Additionally, how to use the widely distributed single-site measurements over heterogeneous surfaces is also a question to be answered. Keywords: Remote sensing; Albedo; Validation; Wireless sensor network (WSN); Upscaling; Heterogeneous land surface; Albedo truth at coarse-pixel scale
Taxonomy of USA east coast fishing communities in terms of social vulnerability and resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollnac, Richard B., E-mail: pollnac3@gmail.com; Seara, Tarsila, E-mail: tarsila.seara@noaa.gov; Colburn, Lisa L., E-mail: lisa.l.colburn@noaa.gov
Increased concern with the impacts that changing coastal environments can have on coastal fishing communities led to a recent effort by NOAA Fisheries social scientists to develop a set of indicators of social vulnerability and resilience for the U.S. Southeast and Northeast coastal communities. A goal of the NOAA Fisheries social vulnerability and resilience indicator program is to support time- and cost-effective use of readily available data in furtherance of both social impact assessments of proposed changes to fishery management regulations and climate change adaptation planning. The use of the indicators to predict the response to change in coastal communities would be enhanced if community-level analyses could be grouped effectively. This study examines the usefulness of combining 1130 communities into 35 relevant subgroups by comparing results of a numerical taxonomy with data collected by interview methods, a process herein referred to as “ground-truthing.” The validation of the taxonomic method by ground-truthing indicates that the clusters are adequate to be used to select communities for in-depth research. - Highlights: • We develop a taxonomy of fishing communities based on vulnerability indicators. • We validate the community clusters through the use of surveys (“ground-truthing”). • Clusters differ along important aspects of fishing community vulnerability. • Clustering communities allows for accurate and timely social impact assessments.
NASA Technical Reports Server (NTRS)
Edwardo, H. A.; Moulis, F. R.; Merry, C. J.; Mckim, H. L.; Kerber, A. G.; Miller, M. A.
1985-01-01
The Pittsburgh District, Corps of Engineers, has conducted feasibility analyses of various procedures for performing flood damage assessments along the main stem of the Ohio River. Procedures using traditional, although highly automated, techniques and those based on geographic information systems have been evaluated at a test site, the City of New Martinsville, Wetzel County, WV. The flood damage assessments of the test site developed from an automated, conventional structure-by-structure appraisal served as the ground truth data set. A geographic information system was developed for the test site which includes data on hydraulic reach, ground and reference flood elevations, and land use/cover. Damage assessments were made using land use mapping developed from an exhaustive field inspection of each tax parcel. This ground truth condition was considered to provide the best comparison of flood damages to the conventional approach. Also, four land use/cover data sets were developed from Thematic Mapper Simulator (TMS) and Landsat-4 Thematic Mapper (TM) data. One of these was also used to develop a damage assessment of the test site. This paper presents the comparative absolute and relative accuracies of land use/cover mapping and flood damage assessments, and the recommended role of geographic information systems aided by remote sensing for conducting flood damage assessments and updates along the main stem of the Ohio River.
Evaluation criteria for software classification inventories, accuracies, and maps
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.
1976-01-01
Statistical criteria are presented for modifying the contingency table used to evaluate tabular classification results obtained from remote sensing and ground truth maps. This classification technique contains information on the spatial complexity of the test site, on the relative location of classification errors, on agreement of the classification maps with ground truth maps, and reduces back to the original information normally found in a contingency table.
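For illustration, a plain (unmodified) contingency table already yields the standard accuracy measures; the function and the example matrix below are assumptions for the sketch, not the paper's modified criteria.

```python
# Illustrative sketch: basic accuracy measures from a remote-sensing
# contingency table, with rows = classified map and columns = ground truth.
def contingency_accuracies(table):
    n = sum(sum(row) for row in table)
    k = len(table)
    diag = sum(table[i][i] for i in range(k))
    overall = diag / n
    # user's accuracy: correct / row total; producer's accuracy: correct / column total
    users = [table[i][i] / sum(table[i]) for i in range(k)]
    producers = [table[i][i] / sum(row[i] for row in table) for i in range(k)]
    return overall, users, producers

# Two classes, 100 test pixels.
overall, users, producers = contingency_accuracies([[50, 5], [10, 35]])  # overall = 0.85
```

The paper's contribution is to augment exactly this kind of table with spatial information (site complexity, error location); the sketch shows only the baseline quantities the table provides.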
Severe Thunderstorm and Tornado Warnings at Raleigh, North Carolina.
NASA Astrophysics Data System (ADS)
Hoium, Debra K.; Riordan, Allen J.; Monahan, John; Keeter, Kermit K.
1997-11-01
The National Weather Service issues public warnings for severe thunderstorms and tornadoes when these storms appear imminent. A study of the warning process was conducted at the National Weather Service Forecast Office at Raleigh, North Carolina, from 1994 through 1996. The purpose of the study was to examine the decision process by documenting the types of information leading to decisions to warn or not to warn and by describing the sequence and timing of events in the development of warnings. It was found that the evolution of warnings followed a logical sequence beginning with storm monitoring and proceeding with increasingly focused activity. For simplicity, information input to the process was categorized as one of three types: ground truth, radar reflectivity, or radar velocity. Reflectivity, velocity, and ground truth were all equally likely to initiate the investigation process. This investigation took an average of 7 min, after which either a decision was made not to warn or new information triggered the warning. Decisions not to issue warnings were based more on ground truth and reflectivity than radar velocity products. Warnings with investigations of more than 2 min were more likely to be triggered by radar reflectivity than by velocity or ground truth. Warnings with a shorter investigation time, defined here as "immediate trigger warnings," were less frequently based on velocity products and more on ground truth information. Once the decision was made to warn, it took an average of 2.1 min to prepare the warning text. In 85% of cases when warnings were issued, at least one contact was made to emergency management officials or storm spotters in the warned county. Reports of severe weather were usually received soon after the warning was transmitted; almost half of these arrived within 30 min of issue.
A total of 68% were received during the severe weather episode, but some of these storm reports later proved false according to Storm Data. Even though the WSR-88D is a sophisticated tool, ground truth information was found to be a vital part of the warning process. However, the data did not indicate that population density was statistically correlated either with the number of warnings issued or with the verification rate.
NASA Astrophysics Data System (ADS)
Matsumoto, M.; Yoshimura, M.; Naoki, K.; Cho, K.; Wakabayashi, H.
2018-04-01
Observation of sea ice thickness is one of the key issues in understanding the regional effects of global warming. One approach to monitoring sea ice over large areas is microwave remote sensing data analysis; however, ground truth is necessary to assess the effectiveness of this kind of approach. The conventional method of acquiring ground truth for ice thickness is to drill through the ice layer and measure the thickness directly with a ruler, but this method is destructive, time-consuming, and limited in spatial resolution. Although there are several methods for acquiring ice thickness non-destructively, ground penetrating radar (GPR) is an effective solution because it can discriminate the snow-ice and ice-seawater interfaces. In this paper, we carried out GPR measurements in Lake Saroma over a relatively large area (approximately 200 m by 300 m), aiming to obtain ground truth for remote sensing data. The GPR survey was conducted at 5 locations in the area. Direct measurement was also conducted simultaneously in order to calibrate the GPR data for thickness estimation and to validate the result. Although the GPR B-scan image obtained at 600 MHz contains a reflection that may come from a structure under the snow, the origin of the reflection is not obvious. Therefore, further analysis and interpretation of the GPR image, such as numerical simulation, additional signal processing, and use of the 200 MHz antenna, are required before moving on to thickness estimation.
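The basic physics linking a GPR echo to ice thickness can be sketched as follows; the permittivity value is a typical textbook figure for fresh ice, not a number from this study, and real surveys calibrate the wave speed against drill-hole measurements as described above.

```python
import math

# Back-of-envelope sketch: a GPR two-way travel time maps to ice thickness
# through the electromagnetic wave speed in ice.
C = 299_792_458.0  # speed of light in vacuum, m/s

def ice_thickness(two_way_time_ns, rel_permittivity=3.15):
    """Thickness = (wave speed in ice) * (one-way travel time)."""
    v = C / math.sqrt(rel_permittivity)        # wave speed in ice, m/s
    return v * (two_way_time_ns * 1e-9) / 2.0  # metres

# A 6 ns two-way echo corresponds to roughly half a metre of ice.
t = ice_thickness(6.0)  # ~0.51 m
```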
Blecha, Kevin A.; Alldredge, Mat W.
2015-01-01
Animal space use studies using GPS collar technology are increasingly incorporating behavior-based analysis of spatio-temporal data in order to expand inferences of resource use. GPS location cluster analysis is one such technique applied to large carnivores to identify the timing and location of feeding events. For logistical and financial reasons, researchers often implement predictive models for identifying these events. We present two separate improvements for predictive models that future practitioners can implement. Thus far, feeding prediction models have incorporated a small range of covariates, usually limited to spatio-temporal characteristics of the GPS data. Using GPS-collared cougars (Puma concolor), we include activity sensor data as an additional covariate to increase prediction performance of feeding presence/absence. Integral to the predictive modeling of feeding events is a ground-truthing component, in which GPS location clusters are visited by human observers to confirm the presence or absence of feeding remains. Failing to account for sources of ground-truthing false-absences can bias the number of predicted feeding events to be low. Thus we account for some ground-truthing error sources directly in the model with covariates and when applying model predictions. Accounting for these errors resulted in a 10% increase in the number of clusters predicted to be feeding events. Using a double-observer design, we show that the ground-truthing false-absence rate is relatively low (4%) using a search delay of 2–60 days. Overall, we provide two separate improvements to the GPS cluster analysis techniques that can be expanded upon and implemented in future studies interested in identifying feeding behaviors of large carnivores. PMID:26398546
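A hypothetical sketch of why unaccounted false-absences bias the count low: if ground-truthing misses true feeding events at some rate, a simple proportional correction scales the confirmed count upward. The authors instead handle error sources through model covariates; this closed-form rule is illustrative only.

```python
# Hypothetical sketch: ground-truthing that misses feeding remains at a
# known false-absence rate makes the naive count of confirmed feeding
# clusters an underestimate; dividing by (1 - rate) corrects it upward.
def corrected_feeding_events(confirmed_count, false_absence_rate):
    if not 0.0 <= false_absence_rate < 1.0:
        raise ValueError("rate must be in [0, 1)")
    return confirmed_count / (1.0 - false_absence_rate)

# 96 confirmed clusters at a 4% false-absence rate -> ~100 estimated events.
estimate = corrected_feeding_events(96, 0.04)
```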
Lunar Reference Suite to Support Instrument Development and Testing
NASA Technical Reports Server (NTRS)
Allen, Carlton; Sellar, Glenn; Nunez, Jorge I.; Winterhalter, Daniel; Farmer, Jack
2010-01-01
Astronauts on long-duration lunar missions will need the capability to "high-grade" their samples - to select the highest value samples for transport to Earth - and to leave others on the Moon. Instruments that may be useful for such high-grading are under development. Instruments are also being developed for possible use on future lunar robotic landers, for lunar field work, and for more sophisticated analyses at a lunar outpost. The Johnson Space Center Astromaterials Acquisition and Curation Office (JSC Curation) will support such instrument testing by providing lunar sample "ground truth".
Rossen, Lauren M; Pollack, Keshia M; Curriero, Frank C
2012-09-01
Obtaining valid and accurate data on community food environments is critical for research evaluating associations between the food environment and health outcomes. This study utilized ground-truthing and remote-sensing technology to validate a food outlet retail list obtained from an urban local health department in Baltimore, Maryland in 2009. Ten percent of outlets (n=169) were assessed, and differences in accuracy were explored by neighborhood characteristics (96 census tracts) to determine if discrepancies were differential or non-differential. Inaccuracies were largely unrelated to a variety of neighborhood-level variables, with the exception of number of vacant housing units. Although remote-sensing technologies are a promising low-cost alternative to direct observation, this study demonstrated only moderate levels of agreement with ground-truthing. Published by Elsevier Ltd.
The conflict between science and religion: a discussion on the possibilities for settlement
NASA Astrophysics Data System (ADS)
Falcão, Eliane Brigida Morais
2010-03-01
In his article Skepticism, truth as coherence, and constructivist epistemology: grounds for resolving the discord between science and religion?, John Staver identifies what he considers to be the source of the conflicts between science and religion: the establishment of the relationship between truth and knowledge, from the perspective of those who see a correspondence between reality and knowledge, assumed in the realm of both contending fields. In the present work, although agreeing with the general principles of constructivism, I discuss Staver's option of viewing truth as coherence and his proposal of renouncing the objective of knowing reality from both fields' perspectives. Three aspects of Staver's article are commented on and discussed here: the one referring to views of reality or of nature as shared by scientists; the one concerning the different forms of religious beliefs among scientists; and the one accounting for the impossibility, from the standpoint of constructivism, of determining limits to the objectives of science and religion. Also emphasized in this discussion is the importance of combining theoretical and methodological approaches in tune with the complexity of the theme under discussion, accounting for the need to preserve the freedom of thinking and of doing research.
The importance of ground truth data in remote sensing
NASA Technical Reports Server (NTRS)
Hoffer, R. M.
1972-01-01
Surface observation data is discussed as an essential part of remote sensing research. One of the most important aspects of ground truth is the collection of measurements and observations about the type, size, condition and other physical or chemical properties of importance concerning the materials on the earth's surface that are being sensed remotely. The use of a variety of sensor systems in combination at different altitudes is emphasized.
Looney, Pádraig; Stevenson, Gordon N; Nicolaides, Kypros H; Plasencia, Walter; Molloholli, Malid; Natsis, Stavros; Collins, Sally L
2018-06-07
We present a new technique to fully automate the segmentation of an organ from 3D ultrasound (3D-US) volumes, using the placenta as the target organ. Image analysis tools to estimate organ volume do exist but are too time consuming and operator-dependent. Fully automating the segmentation process would potentially allow the use of placental volume to screen for increased risk of pregnancy complications. The placenta was segmented from 2,393 first trimester 3D-US volumes using a semiautomated technique. This was quality controlled by three operators to produce the "ground-truth" data set. A fully convolutional neural network (OxNNet) was trained using this ground-truth data set to automatically segment the placenta. OxNNet delivered state-of-the-art automatic segmentation. The effect of training set size on the performance of OxNNet demonstrated the need for large data sets. The clinical utility of placental volume was tested by looking at predictions of small-for-gestational-age (SGA) babies at term. The receiver operating characteristic curves demonstrated almost identical results between OxNNet and the ground-truth. Our results demonstrated good similarity to the ground-truth and almost identical clinical results for the prediction of SGA.
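Similarity between an automatic segmentation and a ground-truth mask is conventionally summarized with the Dice coefficient; the minimal sketch below uses flat 0/1 lists and is an assumption about the kind of overlap measure involved, not the paper's exact evaluation code.

```python
# Illustrative sketch: Dice similarity coefficient between a predicted
# binary mask and the ground-truth mask, both given as flat 0/1 lists.
def dice(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|); returns 1.0 for two empty masks."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

predicted = [1, 1, 1, 0, 0, 0]
truth     = [1, 1, 0, 0, 0, 1]
score = dice(predicted, truth)  # 2*2 / (3+3) ≈ 0.667
```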
Software Suite to Support In-Flight Characterization of Remote Sensing Systems
NASA Technical Reports Server (NTRS)
Stanley, Thomas; Holekamp, Kara; Gasser, Gerald; Tabor, Wes; Vaughan, Ronald; Ryan, Robert; Pagnutti, Mary; Blonski, Slawomir; Kenton, Ross
2014-01-01
A characterization software suite was developed to facilitate NASA's in-flight characterization of commercial remote sensing systems. Characterization of aerial and satellite systems requires knowledge of ground characteristics, or ground truth. This information is typically obtained with instruments taking measurements prior to or during a remote sensing system overpass. Acquired ground-truth data, which can consist of hundreds of measurements with different data formats, must be processed before it can be used in the characterization. Accurate in-flight characterization of remote sensing systems relies on multiple field data acquisitions that are efficiently processed, with minimal error. To address the need for timely, reproducible ground-truth data, a characterization software suite was developed to automate the data processing methods. The characterization software suite is engineering code, requiring some prior knowledge and expertise to run. The suite consists of component scripts for each of the three main in-flight characterization types: radiometric, geometric, and spatial. The component scripts for the radiometric characterization operate primarily by reading the raw data acquired by the field instruments, combining it with other applicable information, and then reducing it to a format that is appropriate for input into MODTRAN (MODerate resolution atmospheric TRANsmission), an Air Force Research Laboratory-developed radiative transport code used to predict at-sensor measurements. The geometric scripts operate by comparing identified target locations from the remote sensing image to known target locations, producing circular error statistics defined by the Federal Geographic Data Committee Standards. The spatial scripts analyze a target edge within the image, and produce estimates of Relative Edge Response and the value of the Modulation Transfer Function at the Nyquist frequency. 
The software suite enables rapid, efficient, automated processing of ground truth data, which has been used to provide reproducible characterizations on a number of commercial remote sensing systems. Overall, this characterization software suite improves the reliability of ground-truth data processing techniques that are required for remote sensing system in-flight characterizations.
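As a rough illustration of the circular error statistics produced by the geometric scripts, one common formulation is CE90, the radius containing 90% of the radial offsets between image-derived and surveyed target locations; the nearest-rank percentile rule and the coordinates below are assumptions for the sketch, not the suite's actual FGDC-standard computation.

```python
import math

# Illustrative sketch: CE90 from paired image-derived and surveyed target
# coordinates, using a simple nearest-rank 90th percentile of radial errors.
def ce90(measured, surveyed):
    radial = sorted(
        math.hypot(mx - sx, my - sy)
        for (mx, my), (sx, sy) in zip(measured, surveyed)
    )
    k = max(0, math.ceil(0.9 * len(radial)) - 1)  # nearest-rank index
    return radial[k]

measured = [(10.2, 5.1), (3.9, 8.0), (7.0, 7.4), (1.1, 2.2), (6.5, 0.3)]
surveyed = [(10.0, 5.0), (4.0, 8.0), (7.0, 7.0), (1.0, 2.0), (6.0, 0.0)]
err = ce90(measured, surveyed)  # largest radial offset here, ~0.583
```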
ERIC Educational Resources Information Center
Sadd, James; Morello-Frosch, Rachel; Pastor, Manuel; Matsuoka, Martha; Prichard, Michele; Carter, Vanessa
2014-01-01
Environmental justice advocates often argue that environmental hazards and their health effects vary by neighborhood, income, and race. To assess these patterns and advance preventive policy, their colleagues in the research world often use complex and methodologically sophisticated statistical and geospatial techniques. One way to bridge the gap…
Gold standards and expert panels: a pulmonary nodule case study with challenges and solutions
NASA Astrophysics Data System (ADS)
Miller, Dave P.; O'Shaughnessy, Kathryn F.; Wood, Susan A.; Castellino, Ronald A.
2004-05-01
Comparative evaluations of reader performance using different modalities, e.g. CT with computer-aided detection (CAD) vs. CT without CAD, generally require a "truth" definition based on a gold standard. There are many situations in which a true invariant gold standard is impractical or impossible to obtain. For instance, small pulmonary nodules are generally not assessed by biopsy or resection. In such cases, it is common to use a unanimous consensus or majority agreement from an expert panel as a reference standard for actionability in lieu of the unknown gold standard for disease. Nonetheless, there are three major concerns about expert panel reference standards: (1) actionability is not synonymous with disease; (2) it may be possible to obtain different conclusions about which modality is better using different rules (e.g. majority vs. unanimous consensus); and (3) the variability associated with the panelists is not formally captured in the p-values or confidence intervals that are generally produced for estimating the extent to which one modality is superior to the other. A multi-reader-multi-case (MRMC) receiver operating characteristic (ROC) study was performed using 90 cases, 15 readers, and a reference truth based on 3 experienced panelists. The primary analyses were conducted using a reference truth of unanimous consensus regarding actionability (3 out of 3 panelists). To assess the three concerns noted above: (1) additional data from the original radiology reports were compared to the panel; (2) the complete analysis was repeated using different definitions of truth; and (3) bootstrap analyses were conducted in which new truth panels were constructed by picking 1, 2, or 3 panelists at random. The definition of the reference truth affected the results for each modality (CT with CAD and CT without CAD) considered by itself, but the effects were similar, so the primary analysis comparing the modalities was robust to the choice of the reference truth.
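The bootstrap over truth panels described in point (3) can be sketched as follows; the voting rules, panel data, and function names are illustrative assumptions, not the study's actual panel data or analysis code.

```python
import random

# Hypothetical sketch: redraw "truth panels" of 1-3 panelists with
# replacement and re-derive the reference truth under a given rule, to
# expose the variability hidden by a single fixed panel.
def panel_truth(votes, rule="unanimous"):
    """votes: 0/1 actionability calls from the sampled panelists."""
    if rule == "unanimous":
        return int(all(votes))
    return int(sum(votes) > len(votes) / 2)  # simple majority

def bootstrap_truths(panel_votes, panel_size, n_boot, rule, seed=0):
    rng = random.Random(seed)
    truths = []
    for _ in range(n_boot):
        idx = [rng.randrange(len(panel_votes)) for _ in range(panel_size)]
        truths.append(panel_truth([panel_votes[i] for i in idx], rule))
    return truths

# Three panelists disagree on one case (votes 1, 1, 0): the unanimous-rule
# truth then varies across bootstrap panels, which is the variability of concern.
samples = bootstrap_truths([1, 1, 0], panel_size=3, n_boot=1000, rule="unanimous")
```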
Truth in Testing Legislation and Private Property Concepts.
ERIC Educational Resources Information Center
Burns, Daniel J.
1981-01-01
Truth in testing laws are subject to challenge on the grounds that they invade federally protected rights and interests of the test-makers through the due process clauses of the Constitution and federal copyright protections. (Author/MLF)
GEOS-3 phase B ground truth summary
NASA Technical Reports Server (NTRS)
Parsons, C. L.; Goodman, L. R.
1975-01-01
Ground truth data collected during the experiment systems calibration and evaluation phase of the Geodynamics experimental Ocean Satellite (GEOS-3) experiment are summarized. Both National Weather Service analyses and aircraft sensor data are included. The data are structured to facilitate the use of the various data products in calibrating the GEOS-3 radar altimeter and in assessing the altimeter's sensitivity to geophysical phenomena. Brief statements are made concerning the quality and completeness of the included data.
The challenge of mapping the human connectome based on diffusion tractography.
Maier-Hein, Klaus H; Neher, Peter F; Houde, Jean-Christophe; Côté, Marc-Alexandre; Garyfallidis, Eleftherios; Zhong, Jidan; Chamberland, Maxime; Yeh, Fang-Cheng; Lin, Ying-Chia; Ji, Qing; Reddick, Wilburn E; Glass, John O; Chen, David Qixiang; Feng, Yuanjing; Gao, Chengfeng; Wu, Ye; Ma, Jieyan; Renjie, H; Li, Qiang; Westin, Carl-Fredrik; Deslauriers-Gauthier, Samuel; González, J Omar Ocegueda; Paquette, Michael; St-Jean, Samuel; Girard, Gabriel; Rheault, François; Sidhu, Jasmeen; Tax, Chantal M W; Guo, Fenghua; Mesri, Hamed Y; Dávid, Szabolcs; Froeling, Martijn; Heemskerk, Anneriet M; Leemans, Alexander; Boré, Arnaud; Pinsard, Basile; Bedetti, Christophe; Desrosiers, Matthieu; Brambati, Simona; Doyon, Julien; Sarica, Alessia; Vasta, Roberta; Cerasa, Antonio; Quattrone, Aldo; Yeatman, Jason; Khan, Ali R; Hodges, Wes; Alexander, Simon; Romascano, David; Barakovic, Muhamed; Auría, Anna; Esteban, Oscar; Lemkaddem, Alia; Thiran, Jean-Philippe; Cetingul, H Ertan; Odry, Benjamin L; Mailhe, Boris; Nadar, Mariappan S; Pizzagalli, Fabrizio; Prasad, Gautam; Villalon-Reina, Julio E; Galvis, Justin; Thompson, Paul M; Requejo, Francisco De Santiago; Laguna, Pedro Luque; Lacerda, Luis Miguel; Barrett, Rachel; Dell'Acqua, Flavio; Catani, Marco; Petit, Laurent; Caruyer, Emmanuel; Daducci, Alessandro; Dyrby, Tim B; Holland-Letz, Tim; Hilgetag, Claus C; Stieltjes, Bram; Descoteaux, Maxime
2017-11-07
Tractography based on non-invasive diffusion imaging is central to the study of human brain connectivity. To date, the approach has not been systematically validated in ground truth studies. Based on a simulated human brain data set with ground truth tracts, we organized an open international tractography challenge, which resulted in 96 distinct submissions from 20 research groups. Here, we report the encouraging finding that most state-of-the-art algorithms produce tractograms containing 90% of the ground truth bundles (to at least some extent). However, the same tractograms contain many more invalid than valid bundles, and half of these invalid bundles occur systematically across research groups. Taken together, our results demonstrate and confirm fundamental ambiguities inherent in tract reconstruction based on orientation information alone, which need to be considered when interpreting tractography and connectivity results. Our approach provides a novel framework for estimating reliability of tractography and encourages innovation to address its current limitations.
Radar modeling of a boreal forest
NASA Technical Reports Server (NTRS)
Chauhan, Narinder S.; Lang, Roger H.; Ranson, K. J.
1991-01-01
Microwave modeling, ground truth, and SAR data are used to investigate the characteristics of forest stands. A mixed coniferous forest stand has been modeled at P, L, and C bands. Extensive measurements of ground truth and canopy geometry parameters were performed in a 200-m-square hemlock-dominated forest plot. About 10 percent of the trees were sampled to determine a distribution of diameter at breast height (DBH). Hemlock trees in the forest are modeled by characterizing tree trunks, branches, and needles as randomly oriented lossy dielectric cylinders whose area and orientation distributions are prescribed. The distorted Born approximation is used to compute the backscatter at P, L, and C bands. The theoretical results are found to be lower than the calibrated ground-truth data. The experiment and model results agree quite closely, however, when the ratios of VV to HH and HV to HH are compared.
Interactive degraded document enhancement and ground truth generation
NASA Astrophysics Data System (ADS)
Bal, G.; Agam, G.; Frieder, O.; Frieder, G.
2008-01-01
Degraded documents are frequently obtained in various situations. Examples of degraded document collections include historical document depositories, documents obtained in legal and security investigations, and legal and medical archives. Degraded document images are hard to read and are hard to analyze using computerized techniques. There is hence a need for systems that are capable of enhancing such images. We describe a language-independent semi-automated system for enhancing degraded document images that is capable of exploiting inter- and intra-document coherence. The system is capable of processing document images with high levels of degradation and can be used for ground truthing of degraded document images. Ground truthing of degraded document images is extremely important in several respects: it enables quantitative performance measurements of enhancement systems and facilitates model estimation that can be used to improve performance. Performance evaluation is provided using the historical Frieder diaries collection.
Using Ground-Based Measurements and Retrievals to Validate Satellite Data
NASA Technical Reports Server (NTRS)
Dong, Xiquan
2002-01-01
The proposed research uses the DOE ARM ground-based measurements and retrievals as ground-truth references for validating satellite cloud results and retrieval algorithms. This validation effort proceeds in four different ways: (1) cloud properties from different satellites, and therefore different sensors (TRMM VIRS and TERRA MODIS); (2) cloud properties at different climatic regions, such as the DOE ARM SGP, NSA, and TWP sites; (3) different cloud types, i.e., low- and high-level cloud properties; and (4) day and night retrieval algorithms. Validation of satellite-retrieved cloud properties is very difficult and a long-term effort because of significant spatial and temporal differences between the surface and satellite observing platforms. The ground-based measurements and retrievals, once carefully analyzed and validated, can provide a baseline for estimating errors in the satellite products. Even though the validation effort is difficult, significant progress was made during the proposed study period; the major accomplishments are summarized in the following.
The gene normalization task in BioCreative III
2011-01-01
Background We report the Gene Normalization (GN) challenge in BioCreative III where participating teams were asked to return a ranked list of identifiers of the genes detected in full-text articles. For training, 32 fully and 500 partially annotated articles were prepared. A total of 507 articles were selected as the test set. Due to the high annotation cost, it was not feasible to obtain gold-standard human annotations for all test articles. Instead, we developed an Expectation Maximization (EM) algorithm approach for choosing a small number of test articles for manual annotation that were most capable of differentiating team performance. Moreover, the same algorithm was subsequently used for inferring ground truth based solely on team submissions. We report team performance on both gold standard and inferred ground truth using a newly proposed metric called Threshold Average Precision (TAP-k). Results We received a total of 37 runs from 14 different teams for the task. When evaluated using the gold-standard annotations of the 50 articles, the highest TAP-k scores were 0.3297 (k=5), 0.3538 (k=10), and 0.3535 (k=20), respectively. Higher TAP-k scores of 0.4916 (k=5, 10, 20) were observed when evaluated using the inferred ground truth over the full test set. When combining team results using machine learning, the best composite system achieved TAP-k scores of 0.3707 (k=5), 0.4311 (k=10), and 0.4477 (k=20) on the gold standard, representing improvements of 12.4%, 21.8%, and 26.6% over the best team results, respectively. Conclusions By using full text and being species non-specific, the GN task in BioCreative III has moved closer to a real literature curation task than similar tasks in the past and presents additional challenges for the text mining community, as revealed in the overall team results. 
By evaluating teams using the gold standard, we show that the EM algorithm allows team submissions to be differentiated while keeping the manual annotation effort feasible. Using the inferred ground truth we show measures of comparative performance between teams. Finally, by comparing team rankings on gold standard vs. inferred ground truth, we further demonstrate that the inferred ground truth is as effective as the gold standard for detecting good team performance. PMID:22151901
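The quoted improvements of the composite system over the best single-team results follow directly from the TAP-k scores reported above:

```python
# Verify the reported gains of the composite system over the best team runs
# on the gold standard, from the TAP-k scores quoted in the abstract.
pairs = {5: (0.3297, 0.3707), 10: (0.3538, 0.4311), 20: (0.3535, 0.4477)}
gains = {k: round(100 * (comp / best - 1), 1) for k, (best, comp) in pairs.items()}
# gains == {5: 12.4, 10: 21.8, 20: 26.6}, matching the reported 12.4%, 21.8%, 26.6%
```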
The gene normalization task in BioCreative III.
Lu, Zhiyong; Kao, Hung-Yu; Wei, Chih-Hsuan; Huang, Minlie; Liu, Jingchen; Kuo, Cheng-Ju; Hsu, Chun-Nan; Tsai, Richard Tzong-Han; Dai, Hong-Jie; Okazaki, Naoaki; Cho, Han-Cheol; Gerner, Martin; Solt, Illes; Agarwal, Shashank; Liu, Feifan; Vishnyakova, Dina; Ruch, Patrick; Romacker, Martin; Rinaldi, Fabio; Bhattacharya, Sanmitra; Srinivasan, Padmini; Liu, Hongfang; Torii, Manabu; Matos, Sergio; Campos, David; Verspoor, Karin; Livingston, Kevin M; Wilbur, W John
2011-10-03
We report the Gene Normalization (GN) challenge in BioCreative III where participating teams were asked to return a ranked list of identifiers of the genes detected in full-text articles. For training, 32 fully and 500 partially annotated articles were prepared. A total of 507 articles were selected as the test set. Due to the high annotation cost, it was not feasible to obtain gold-standard human annotations for all test articles. Instead, we developed an Expectation Maximization (EM) algorithm approach for choosing a small number of test articles for manual annotation that were most capable of differentiating team performance. Moreover, the same algorithm was subsequently used for inferring ground truth based solely on team submissions. We report team performance on both gold standard and inferred ground truth using a newly proposed metric called Threshold Average Precision (TAP-k). We received a total of 37 runs from 14 different teams for the task. When evaluated using the gold-standard annotations of the 50 articles, the highest TAP-k scores were 0.3297 (k=5), 0.3538 (k=10), and 0.3535 (k=20), respectively. Higher TAP-k scores of 0.4916 (k=5, 10, 20) were observed when evaluated using the inferred ground truth over the full test set. When combining team results using machine learning, the best composite system achieved TAP-k scores of 0.3707 (k=5), 0.4311 (k=10), and 0.4477 (k=20) on the gold standard, representing improvements of 12.4%, 21.8%, and 26.6% over the best team results, respectively. By using full text and being species non-specific, the GN task in BioCreative III has moved closer to a real literature curation task than similar tasks in the past and presents additional challenges for the text mining community, as revealed in the overall team results. By evaluating teams using the gold standard, we show that the EM algorithm allows team submissions to be differentiated while keeping the manual annotation effort feasible. 
Using the inferred ground truth we show measures of comparative performance between teams. Finally, by comparing team rankings on gold standard vs. inferred ground truth, we further demonstrate that the inferred ground truth is as effective as the gold standard for detecting good team performance.
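The submission-only inference described above can be illustrated with a minimal one-coin EM model in the spirit of Dawid and Skene. This is a hedged sketch with hypothetical names and a binary-label simplification, not the BioCreative III implementation:

```python
import numpy as np

def em_infer_truth(votes, n_iter=50):
    """Infer binary ground truth from annotator votes via EM.

    votes: (n_items, n_annotators) array of 0/1 labels.
    Returns the posterior P(label = 1) per item and a per-annotator
    accuracy estimate. Simplified one-coin model, illustrative only.
    """
    n_items, n_ann = votes.shape
    acc = np.full(n_ann, 0.7)  # init: assume annotators are better than chance
    for _ in range(n_iter):
        # E-step: posterior over the true label given votes and accuracies
        log_p1 = np.zeros(n_items)
        log_p0 = np.zeros(n_items)
        for j in range(n_ann):
            v = votes[:, j]
            log_p1 += np.where(v == 1, np.log(acc[j]), np.log(1 - acc[j]))
            log_p0 += np.where(v == 0, np.log(acc[j]), np.log(1 - acc[j]))
        post = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
        # M-step: accuracy = expected fraction of votes matching the truth
        for j in range(n_ann):
            v = votes[:, j]
            acc[j] = np.mean(post * (v == 1) + (1 - post) * (v == 0))
        acc = np.clip(acc, 1e-3, 1 - 1e-3)
    return post, acc
```

Reliable annotators are up-weighted automatically: the E-step re-scores each item given the current accuracies, and the M-step re-estimates the accuracies from the posteriors.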
Is STAPLE algorithm confident to assess segmentation methods in PET imaging?
NASA Astrophysics Data System (ADS)
Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien
2015-12-01
Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians' manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm could estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone for estimating the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.
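The binary form of STAPLE reduces to a short EM loop over per-rater sensitivity and specificity and a per-voxel consensus weight. The sketch below is illustrative only; the function name, initialization, and flat prior are our assumptions, not the CRL implementation used in the paper:

```python
import numpy as np

def staple_binary(segs, prior=0.5, n_iter=30):
    """Simplified binary STAPLE: estimate a consensus segmentation and
    per-rater sensitivity/specificity via EM.

    segs: (n_raters, n_voxels) array of 0/1 segmentations.
    Returns (consensus probability per voxel, sensitivities, specificities).
    """
    n_raters, n_vox = segs.shape
    p = np.full(n_raters, 0.9)  # init sensitivities
    q = np.full(n_raters, 0.9)  # init specificities
    for _ in range(n_iter):
        # E-step: P(true label = 1 | rater decisions, p, q) at each voxel
        a = prior * np.prod(np.where(segs == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(segs == 0, q[:, None], 1 - q[:, None]), axis=0)
        w = a / (a + b)
        # M-step: re-estimate each rater's sensitivity and specificity
        p = np.clip((segs * w).sum(axis=1) / w.sum(), 1e-3, 1 - 1e-3)
        q = np.clip(((1 - segs) * (1 - w)).sum(axis=1) / (1 - w).sum(), 1e-3, 1 - 1e-3)
    return w, p, q
```

Raters who disagree systematically with the emerging consensus receive low sensitivity/specificity estimates and contribute less to the final probability map.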
Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung
2016-08-31
Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers may also use ground truth information, but they do not make it publicly available. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience, and interest.
Digital image analysis in pathology: benefits and obligation.
Laurinavicius, Arvydas; Laurinaviciene, Aida; Dasevicius, Darius; Elie, Nicolas; Plancoulaine, Benoît; Bor, Catherine; Herlin, Paulette
2012-01-01
Pathology has recently entered the era of personalized medicine. This brings new expectations for the accuracy and precision of tissue-based diagnosis, in particular, when quantification of histologic features and biomarker expression is required. While for many years traditional pathologic diagnosis has been regarded as ground truth, this concept is no longer sufficient in contemporary tissue-based biomarker research and clinical use. Another major change in pathology is brought by the advancement of virtual microscopy technology enabling digitization of microscopy slides and presenting new opportunities for digital image analysis. Computerized vision provides an immediate benefit of increased capacity (automation) and precision (reproducibility), but not necessarily the accuracy of the analysis. To achieve the benefit of accuracy, pathologists will have to assume an obligation of validation and quality assurance of the image analysis algorithms. Reference values are needed to measure and control the accuracy. Although pathologists' consensus values are commonly used to validate these tools, we argue that the ground truth can be best achieved by stereology methods, estimating the same variable as an algorithm is intended to do. Proper adoption of the new technology will require a new quantitative mentality in pathology. In order to see a complete and sharp picture of a disease, pathologists will need to learn to use both their analogue and digital eyes.
Temporally consistent probabilistic detection of new multiple sclerosis lesions in brain MRI.
Elliott, Colm; Arnold, Douglas L; Collins, D Louis; Arbel, Tal
2013-08-01
Detection of new Multiple Sclerosis (MS) lesions on magnetic resonance imaging (MRI) is important as a marker of disease activity and as a potential surrogate for relapses. We propose an approach where sequential scans are jointly segmented, to provide a temporally consistent tissue segmentation while remaining sensitive to newly appearing lesions. The method uses a two-stage classification process: 1) a Bayesian classifier provides a probabilistic brain tissue classification at each voxel of reference and follow-up scans, and 2) a random-forest based lesion-level classification provides a final identification of new lesions. Generative models are learned based on 364 scans from 95 subjects from a multi-center clinical trial. The method is evaluated on sequential brain MRI of 160 subjects from a separate multi-center clinical trial, and is compared to 1) semi-automatically generated ground truth segmentations and 2) fully manual identification of new lesions generated independently by nine expert raters on a subset of 60 subjects. For new lesions greater than 0.15 cc in size, the classifier has near perfect performance (99% sensitivity, 2% false detection rate), as compared to ground truth. The proposed method was also shown to exceed the performance of any one of the nine expert manual identifications.
An algorithm for calculi segmentation on ureteroscopic images.
Rosa, Benoît; Mozer, Pierre; Szewczyk, Jérôme
2011-03-01
The purpose of the study is to develop an algorithm for the segmentation of renal calculi on ureteroscopic images. Renal calculi are a common source of urological obstruction, and laser lithotripsy during ureteroscopy is a possible therapy. A laser-based system to sweep the calculus surface and vaporize it was developed to automate a very tedious manual task. The distal tip of the ureteroscope is directed using image guidance, and this operation is not possible without an efficient segmentation of renal calculi on the ureteroscopic images. We proposed and developed a region growing algorithm to segment renal calculi on ureteroscopic images. Using real video images to establish ground truth and compare our segmentation with a reference segmentation, we computed statistics on different image metrics, such as Precision, Recall, and the Yasnoff Measure, for comparison with ground truth. The algorithm and its parameters were established for the most likely clinical scenarios. The segmentation results are encouraging: the developed algorithm was able to correctly detect more than 90% of the surface of the calculi, according to an expert observer. Implementation of an algorithm for the segmentation of calculi on ureteroscopic images is feasible. The next step is the integration of our algorithm into the command scheme of a motorized system to build a complete operating prototype.
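A region growing segmentation of the kind described can be sketched in a few lines; the intensity-tolerance criterion and 4-connectivity used here are simplifying assumptions, not the paper's actual homogeneity test:

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=20):
    """Grow a connected region around `seed` whose pixel intensities stay
    within `tol` of the seed intensity (4-connectivity). Returns a boolean
    mask. Minimal illustrative sketch."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

In practice the seed would come from the image-guidance step, and the tolerance would be tuned to the expected contrast between calculus and surrounding tissue.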
2012-05-07
AFRL-RV-PS-TP-2012-0017: MULTIPLE-ARRAY DETECTION, ASSOCIATION AND LOCATION OF INFRASOUND AND SEISMO-ACOUSTIC EVENTS - UTILIZATION OF GROUND-TRUTH... (Contract FA8718-08-C-0008) ...infrasound signals from both correlated and uncorrelated noise. Approaches to this problem are implementation of the F-detector, which employs the F...
Johnson, Cordell; Swarzenski, Peter W.; Richardson, Christina M.; Smith, Christopher G.; Kroeger, Kevin D.; Ganguli, Priya M.
2015-01-01
Rigorous ground-truthing at each field site showed that multi-channel electrical resistivity techniques can reproduce the scales and dynamics of a seepage field when such data are correctly collected, and when the model inversions are tuned to field site characteristics. Such information can provide a unique perspective on the scales and dynamics of exchange processes within a coastal aquifer, information essential to scientists and resource managers alike.
Reanimating patients: cardio-respiratory CT and MR motion phantoms based on clinical CT patient data
NASA Astrophysics Data System (ADS)
Mayer, Johannes; Sauppe, Sebastian; Rank, Christopher M.; Sawall, Stefan; Kachelrieß, Marc
2017-03-01
To date, several algorithms have been developed that reduce or avoid artifacts caused by cardiac and respiratory motion in computed tomography (CT). The motion information is converted into so-called motion vector fields (MVFs) and used for motion compensation (MoCo) during image reconstruction. To analyze these algorithms quantitatively, there is a need for ground truth patient data displaying realistic motion. We developed a method to generate a digital ground truth displaying realistic cardiac and respiratory motion that can be used as a tool to assess MoCo algorithms. Using available MoCo methods, we measured the motion in CT scans with high spatial and temporal resolution and transferred the motion information onto patient data with a different anatomy or imaging modality, thereby virtually reanimating the patient. In addition to these images, the ground truth motion information in the form of MVFs is available and can be used to benchmark the MVF estimation of MoCo algorithms. We here applied the method to generate 20 CT volumes displaying detailed cardiac motion that can be used for cone-beam CT (CBCT) simulations and a set of 8 MR volumes displaying respiratory motion. Our method is able to virtually reanimate patient data. In combination with the MVFs, it serves as a digital ground truth and provides an improved framework to assess MoCo algorithms.
Hertanto, Agung; Zhang, Qinghui; Hu, Yu-Chi; Dzyubak, Oleksandr; Rimner, Andreas; Mageras, Gig S
2012-06-01
Respiration-correlated CT (RCCT) images produced with commonly used phase-based sorting of CT slices often exhibit discontinuity artifacts between CT slices, caused by cycle-to-cycle amplitude variations in respiration. Sorting based on the displacement of the respiratory signal yields slices at more consistent respiratory motion states and hence reduces artifacts, but missing image data (gaps) may occur. The authors report on the application of a respiratory motion model to produce an RCCT image set with reduced artifacts and without missing data. Input data consist of CT slices from a cine CT scan acquired while recording respiration by monitoring abdominal displacement. The model-based generation of RCCT images consists of four processing steps: (1) displacement-based sorting of CT slices to form volume images at 10 motion states over the cycle; (2) selection of a reference image without gaps and deformable registration between the reference image and each of the remaining images; (3) generation of the motion model by applying a principal component analysis to establish a relationship between displacement field and respiration signal at each motion state; (4) application of the motion model to deform the reference image into images at the 9 other motion states. Deformable image registration uses a modified fast free-form algorithm that excludes zero-intensity voxels, caused by missing data, from the image similarity term in the minimization function. In each iteration of the minimization, the displacement field in the gap regions is linearly interpolated from nearest neighbor nonzero intensity slices. Evaluation of the model-based RCCT examines three types of image sets: cine scans of a physical phantom programmed to move according to a patient respiratory signal, NURBS-based cardiac torso (NCAT) software phantom, and patient thoracic scans. 
Comparison in physical motion phantom shows that object distortion caused by variable motion amplitude in phase-based sorting is visibly reduced with model-based RCCT. Comparison of model-based RCCT to original NCAT images as ground truth shows best agreement at motion states whose displacement-sorted images have no missing slices, with mean and maximum discrepancies in lung of 1 and 3 mm, respectively. Larger discrepancies correlate with motion states having a larger number of missing slices in the displacement-sorted images. Artifacts in patient images at different motion states are also reduced. Comparison with displacement-sorted patient images as a ground truth shows that the model-based images closely reproduce the ground truth geometry at different motion states. Results in phantom and patient images indicate that the proposed method can produce RCCT image sets with reduced artifacts relative to phase-sorted images, without the gaps inherent in displacement-sorted images. The method requires a reference image at one motion state that has no missing data. Highly irregular breathing patterns can affect the method's performance, by introducing artifacts in the reference image (although reduced relative to phase-sorted images), or in decreased accuracy in the image prediction of motion states containing large regions of missing data. © 2012 American Association of Physicists in Medicine.
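Step (3) of the pipeline above, linking displacement fields to the respiratory signal through a principal component basis, can be sketched as a toy linear model. The single-feature signal, the function names, and the least-squares link are our assumptions, not the authors' implementation:

```python
import numpy as np

def fit_motion_model(fields, signals, n_comp=2):
    """Fit a PCA-based respiratory motion model (illustrative sketch).

    fields:  (n_states, n_params) displacement fields, one row per motion state.
    signals: (n_states, n_feat) respiratory-signal features per state.
    Returns (mean field, PCA basis, regression matrix)."""
    mean = fields.mean(axis=0)
    X = fields - mean
    # PCA basis of the displacement fields via SVD
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:n_comp]                      # (n_comp, n_params)
    scores = X @ basis.T                     # (n_states, n_comp)
    # Linear map from respiratory signal (plus offset) to PCA scores
    A = np.hstack([signals, np.ones((len(signals), 1))])
    coef, *_ = np.linalg.lstsq(A, scores, rcond=None)
    return mean, basis, coef

def predict_field(model, signal):
    """Predict a displacement field for a new respiratory-signal value."""
    mean, basis, coef = model
    a = np.append(signal, 1.0)
    return mean + (a @ coef) @ basis
```

The predicted field would then deform the reference image into the requested motion state, exactly the role the model plays in step (4).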
A Critical Review of Automated Photogrammetric Processing of Large Datasets
NASA Astrophysics Data System (ADS)
Remondino, F.; Nocerino, E.; Toschi, I.; Menna, F.
2017-08-01
The paper reports some comparisons between commercial software packages able to automatically process image datasets for 3D reconstruction purposes. The main aspects investigated in the work are the capability to correctly orient large sets of images of complex environments, the metric quality of the results, replicability, and redundancy. Different datasets are employed, each one featuring a diverse number of images, GSDs at cm and mm resolutions, and ground truth information to perform statistical analyses of the 3D results. A summary of (photogrammetric) terminology is also provided, in order to establish rigorous terms of reference for comparisons and critical analyses.
NASA Astrophysics Data System (ADS)
Staver, John R.
2010-03-01
Science and religion exhibit multiple relationships as ways of knowing. These connections have been characterized as cousinly, mutually respectful, non-overlapping, competitive, proximate-ultimate, dominant-subordinate, and opposing-conflicting. Some of these ties create stress, and tension between science and religion represents a significant chapter in humans' cultural heritage before and since the Enlightenment. Truth, knowledge, and their relation are central to science and religion as ways of knowing, as social institutions, and to their interaction. In religion, truth is revealed through God's word. In science, truth is sought after via empirical methods. Discord can be viewed as a competition for social legitimization between two social institutions whose goals are explaining the world and how it works. Under this view, the root of the discord is truth as correspondence. In this concept of truth, knowledge corresponds to the facts of reality, and conflict is inevitable for many because humans want to ask which one—science or religion—gets the facts correct. But, the root paradox, also known as the problem of the criterion, suggests that seeking to know nature as it is represents a fruitless endeavor. The discord can be set on new ground and resolved by taking a moderately skeptical line of thought, one which employs truth as coherence and a moderate form of constructivist epistemology. Quantum mechanics and evolution as scientific theories and scientific research on human consciousness and vision provide support for this line of argument. Within a constructivist perspective, scientists would relinquish only the pursuit of knowing reality as it is. Scientists would retain everything else. Believers who hold that religion explains reality would come to understand that God never revealed His truth of nature; rather, He revealed His truth in how we are to conduct our lives.
2000-09-01
...and the Porphyry Copper District (PCD) of east-central Arizona and southwest New Mexico were used in gathering ground truth ranging from mine records... previous studies of large coal cast blasting operations in Wyoming that trigger the IMS (Hedlin et al. 2000), the porphyry copper region of Arizona and... local mines producing the sources. Close cooperation has been developed with the Phelps Dodge mines in Morenci, Arizona and Tyrone, New Mexico, where in...
Radiometric characterization of hyperspectral imagers using multispectral sensors
NASA Astrophysics Data System (ADS)
McCorkel, Joel; Thome, Kurt; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff
2009-08-01
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite-based sensors. However, ground-truth measurements at these test sites are not always successful, owing to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal of the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands, as well as similar agreement between results that employ the different MODIS sensors as a reference.
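The band-averaging step, weighting the hyperspectral sensor's fine spectral samples by a reference band's relative spectral response, can be sketched as follows. Uniform spectral sampling is assumed and all names are illustrative:

```python
import numpy as np

def band_average(wl_hyp, radiance_hyp, wl_rsr, rsr):
    """Band-average a high-spectral-resolution radiance spectrum with a
    reference band's relative spectral response (RSR).

    wl_hyp, radiance_hyp: hyperspectral wavelength grid and radiances.
    wl_rsr, rsr:          wavelengths and values of the reference RSR.
    The RSR is interpolated onto the hyperspectral grid (zero outside its
    support); a uniform wavelength spacing is assumed."""
    w = np.interp(wl_hyp, wl_rsr, rsr, left=0.0, right=0.0)
    return float(np.sum(w * radiance_hyp) / np.sum(w))
```

A flat spectrum band-averages to its own value regardless of the RSR shape, which makes a convenient sanity check.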
New SHARE 2010 HSI-LiDAR dataset: re-calibration, detection assessment and delivery
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.
2016-09-01
This paper revisits hyperspectral data collected from the SpecTIR hyperspectral airborne Rochester Experiment (SHARE) in 2010. It has been determined that there were calibration issues in the SWIR portion of the data. This calibration issue is discussed and has been rectified. Approaches for calibration to radiance and compensation to reflectance are discussed based on in-scene information and radiative transfer codes. In addition to the entire flight line, a much larger target detection test and evaluation chip has been created, which includes an abundance of potential false alarms. New truth masks are created along with results from target detection algorithms. Co-registered LiDAR data are also presented. Finally, all ground truth information (ground photos, metadata, MODTRAN tape5, ASD ground spectral measurements, target truth masks, etc.), in addition to the HSI flight lines and co-registered LiDAR data, has been organized, packaged and uploaded to the Center for Imaging Science / Digital Imaging and Remote Sensing Lab web server for public use.
Canny edge-based deformable image registration
NASA Astrophysics Data System (ADS)
Kearney, Vasant; Huang, Yihui; Mao, Weihua; Yuan, Baohong; Tang, Liping
2017-02-01
This work focuses on developing a 2D Canny edge-based deformable image registration (Canny DIR) algorithm to register in vivo white light images taken at various time points. This method uses a sparse interpolation deformation algorithm to sparsely register regions of the image with strong edge information. A stability criterion is enforced which removes regions of edges that do not deform in a smooth uniform manner. Using a synthetic mouse surface ground truth model, the accuracy of the Canny DIR algorithm was evaluated under axial rotation in the presence of deformation. The accuracy was also tested using fluorescent dye injections, which were then used for gamma analysis to establish a second ground truth. The results indicate that the Canny DIR algorithm performs better than rigid registration, intensity-corrected Demons, and distinctive features for all evaluation metrics and ground truth scenarios. In conclusion, Canny DIR performs well in the presence of the unique lighting and shading variations associated with white-light-based image registration.
The ground truth about metadata and community detection in networks.
Peel, Leto; Larremore, Daniel B; Clauset, Aaron
2017-05-01
Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system's components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks' links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures.
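One simple way to quantify how well node metadata align with detected communities, before drawing any ground-truth conclusions, is normalized mutual information between the two partitions. This is a generic sketch, not the model-based statistical tests the paper introduces:

```python
from math import log
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information between two partitions, e.g. node
    metadata vs. detected communities. Returns 1 for identical partitions
    and approximately 0 for unrelated ones. Illustrative sketch."""
    n = len(labels_a)
    pa = Counter(labels_a)
    pb = Counter(labels_b)
    pab = Counter(zip(labels_a, labels_b))
    # Mutual information of the joint label distribution
    mi = sum(c / n * log((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in pab.items())
    # Entropies of the two partitions, for normalization
    ha = -sum(c / n * log(c / n) for c in pa.values())
    hb = -sum(c / n * log(c / n) for c in pb.values())
    if ha == 0 or hb == 0:
        return 1.0 if ha == hb else 0.0
    return mi / ((ha * hb) ** 0.5)
```

A low score here does not mean the community detection failed; as the abstract argues, it may simply mean the metadata capture a different, equally valid division of the network.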
Ground truth and benchmarks for performance evaluation
NASA Astrophysics Data System (ADS)
Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.
2003-09-01
Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The range of fundamental problems includes a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Position System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.
Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K
2018-03-01
The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set size (x) levels, in order to provide guidance for the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different x and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different from the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Ground truth statistics of VF sensitivities could be approximated using set sizes that are significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
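The resampling procedure can be sketched as follows. The statistics returned (mean, 5th/95th percentile limits, SD per resampled set, averaged over resamples) follow the abstract, while the function name and defaults are our assumptions:

```python
import numpy as np

def bootstrap_limits(sensitivities, set_size, n_resamples=1000, seed=0):
    """Bootstrap estimate of normative limits from a cohort of visual-field
    sensitivities. Draws `n_resamples` sets of `set_size` subjects with
    replacement and returns (mean, 5th percentile, 95th percentile, SD)
    averaged over the resamples. Illustrative sketch."""
    rng = np.random.default_rng(seed)
    sens = np.asarray(sensitivities, dtype=float)
    stats = []
    for _ in range(n_resamples):
        sample = rng.choice(sens, size=set_size, replace=True)
        stats.append((sample.mean(),
                      np.percentile(sample, 5),
                      np.percentile(sample, 95),
                      sample.std(ddof=1)))
    return np.mean(stats, axis=0)
```

Repeating this over a range of set sizes shows where the bootstrapped limits stop differing from the full-cohort values, which is the criterion the study uses to pick x.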
Assessing the validity of commercial and municipal food environment data sets in Vancouver, Canada.
Daepp, Madeleine Ig; Black, Jennifer
2017-10-01
The present study assessed systematic bias and the effects of data set error on the validity of food environment measures in two municipal and two commercial secondary data sets. Sensitivity, positive predictive value (PPV) and concordance were calculated by comparing two municipal and two commercial secondary data sets with ground-truthed data collected within 800 m buffers surrounding twenty-six schools. Logistic regression examined associations of sensitivity and PPV with commercial density and neighbourhood socio-economic deprivation. Kendall's τ estimated correlations between density and proximity of food outlets near schools constructed with secondary data sets v. ground-truthed data. Setting: Vancouver, Canada. Subjects: Food retailers located within 800 m of twenty-six schools. Results: All data sets scored relatively poorly across validity measures, although, overall, municipal data sets had higher levels of validity than did commercial data sets. Food outlets were more likely to be missing from municipal health inspection lists and commercial data sets in neighbourhoods with higher commercial density. Still, both proximity and density measures constructed from all secondary data sets were highly correlated (Kendall's τ>0·70) with measures constructed from ground-truthed data. Despite relatively low levels of validity in all secondary data sets examined, food environment measures constructed from secondary data sets remained highly correlated with ground-truthed data. Findings suggest that secondary data sets can be used to measure the food environment, although estimates should be treated with caution in areas with high commercial density.
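The sensitivity and PPV comparisons against ground-truthed outlet lists reduce to set arithmetic over outlet identifiers; the identifiers below are hypothetical:

```python
def validity_measures(secondary, ground_truth):
    """Sensitivity and positive predictive value of a secondary food-outlet
    list against ground-truthed field observations, compared as sets of
    outlet identifiers. Illustrative sketch."""
    secondary, ground_truth = set(secondary), set(ground_truth)
    tp = len(secondary & ground_truth)  # listed and actually present
    fp = len(secondary - ground_truth)  # listed but not found on the ground
    fn = len(ground_truth - secondary)  # present but missing from the list
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, ppv
```

Sensitivity penalizes outlets missing from the secondary list, while PPV penalizes listed outlets that no longer exist, the two error types the study examines separately.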
NASA Technical Reports Server (NTRS)
Jones, E. B.
1975-01-01
The soil moisture ground-truth measurements and ground-cover descriptions taken at three soil moisture survey sites located near Lafayette, Indiana; St. Charles, Missouri; and Centralia, Missouri are given. The data were taken on November 10, 1975, in connection with airborne remote sensing missions being flown by the Environmental Research Institute of Michigan under the auspices of the National Aeronautics and Space Administration. Emphasis was placed on the soil moisture in bare fields. Soil moisture was sampled in the top 0 to 1 in. and 0 to 6 in. by means of a soil sampling push tube. The samples were then sealed in plastic bags for later gravimetric analysis.
Snowpack ground truth: Radar test site, Steamboat Springs, Colorado, 8-16 April 1976
NASA Technical Reports Server (NTRS)
Howell, S.; Jones, E. B.; Leaf, C. F.
1976-01-01
Ground-truth data taken at Steamboat Springs, Colorado is presented. Data taken during the period April 8, 1976 - April 16, 1976 included the following: (1) snow depths and densities at selected locations (using a Mount Rose snow tube); (2) snow pits for temperature, density, and liquid water determinations using the freezing calorimetry technique and vertical layer classification; (3) snow walls were also constructed of various cross sections and documented with respect to sizes and snow characteristics; (4) soil moisture at selected locations; and (5) appropriate air temperature and weather data.
Standardized UXO Technology Demonstration Site Moguls Scoring Record Number 912 (Sky Research, Inc.)
2008-09-01
south from the northern end point. 2) A metallic pin-flag is placed over the midpoint. 3) The operator logs data along the same path...buried UXO or other metallic debris. A 5-meter-length of line is walked in eight cardinal directions (N-S, S-N, E-W, W-E, SE-NW, NW-SE, SW-NE, NE-SW...points have been rounded to protect the ground truth. The overall ground truth is composed of ferrous and nonferrous anomalies. Due to limitations
Accurate label-free 3-part leukocyte recognition with single cell lens-free imaging flow cytometry.
Li, Yuqian; Cornelis, Bruno; Dusa, Alexandra; Vanmeerbeeck, Geert; Vercruysse, Dries; Sohn, Erik; Blaszkiewicz, Kamil; Prodanov, Dimiter; Schelkens, Peter; Lagae, Liesbet
2018-05-01
Three-part white blood cell differentials, which are key to routine blood workups, are typically performed in centralized laboratories on conventional hematology analyzers operated by highly trained staff. With the trend toward miniaturized blood analysis tools for point-of-need testing, which accelerate turnaround times and move routine blood testing away from centralized facilities, our group has developed a highly miniaturized holographic imaging system for generating lens-free images of white blood cells in suspension. Analysis and classification of its output data constitute the final crucial step in ensuring appropriate accuracy of the system. In this work, we use reference holographic images of single white blood cells in suspension to establish an accurate ground truth and increase classification accuracy. We also automate the entire workflow for analyzing the output and demonstrate a clear improvement in the accuracy of the 3-part classification. High-dimensional optical and morphological features are extracted from reconstructed digital holograms of single cells using the ground-truth images, and advanced machine learning algorithms are investigated and implemented to obtain 99% classification accuracy. Representative features of the three white blood cell subtypes are selected and give comparable results, with a focus on rapid cell recognition and decreased computational cost. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Circular tomosynthesis for neuro perfusion imaging on an interventional C-arm
NASA Astrophysics Data System (ADS)
Claus, Bernhard E.; Langan, David A.; Al Assad, Omar; Wang, Xin
2015-03-01
There is a clinical need to improve cerebral perfusion assessment during the treatment of ischemic stroke in the interventional suite. The clinician is able to determine whether the arterial blockage was successfully opened but is unable to sufficiently assess blood flow through the parenchyma. C-arm spin acquisitions can image the cerebral blood volume (CBV) but are challenged to capture the temporal dynamics of the iodinated contrast bolus, which is required to derive, e.g., cerebral blood flow (CBF) and mean transit time (MTT). Here we propose to utilize a circular tomosynthesis acquisition on the C-arm to achieve the necessary temporal sampling of the volume at the cost of incomplete data. We address the incomplete data problem by using tools from compressed sensing and incorporate temporal interpolation to improve our temporal resolution. A CT neuro perfusion data set is utilized for generating a dynamic (4D) volumetric model from which simulated tomo projections are generated. The 4D model is also used as a ground truth reference for performance evaluation. The performance that may be achieved with the tomo acquisition and 4D reconstruction (under simulation conditions, i.e., without considering data fidelity limitations due to imaging physics and imaging chain) is evaluated. In the considered scenario, good agreement between the ground truth and the tomo reconstruction in the parenchyma was achieved.
Automatic parameter selection for feature-based multi-sensor image registration
NASA Astrophysics Data System (ADS)
DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan
2006-05-01
Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
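Selecting the parameter combination from its ROC point can be done, for example, by minimizing the Euclidean distance to the ideal corner (false-positive rate 0, true-positive rate 1). The ROC points below are hypothetical, and the distance-to-ideal criterion is one common choice rather than necessarily the exact rule used in this work:

```python
import math

# Hypothetical (FPR, TPR) ROC points, one per parameter combination from the
# sweep; in the paper these come from comparing each feature-detected image
# against the estimated ground truth.
roc_points = {
    "params_a": (0.05, 0.70),
    "params_b": (0.20, 0.95),
    "params_c": (0.40, 0.98),
}

def distance_to_ideal(point):
    """Euclidean distance from an ROC point to the perfect corner (0, 1)."""
    fpr, tpr = point
    return math.hypot(fpr - 0.0, tpr - 1.0)

# Choose the parameter combination whose ROC point lies closest to ideal.
best = min(roc_points, key=lambda k: distance_to_ideal(roc_points[k]))
```

Here `params_b` wins: its moderate false-positive rate is outweighed by its high detection rate relative to the other two combinations.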
WE-AB-202-09: Feasibility and Quantitative Analysis of 4DCT-Based High Precision Lung Elastography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasse, K; Neylon, J; Low, D
2016-06-15
Purpose: The purpose of this project is to derive high precision elastography measurements from 4DCT lung scans to facilitate the implementation of elastography in a radiotherapy context. Methods: 4DCT scans of the lungs were acquired, and breathing stages were subsequently registered to each other using an optical flow DIR algorithm. The displacement of each voxel gleaned from the registration was taken to be the ground-truth deformation. These vectors, along with the 4DCT source datasets, were used to generate a GPU-based biomechanical simulation that acted as a forward model to solve the inverse elasticity problem. The lung surface displacements were applied as boundary constraints for the model-guided lung tissue elastography, while the inner voxels were allowed to deform according to the linear elastic forces within the model. A biomechanically-based anisotropic convergence magnification technique was applied to the inner voxels in order to amplify the subtleties of the interior deformation. Solving the inverse elasticity problem was accomplished by modifying the tissue elasticity and iteratively deforming the biomechanical model. Convergence occurred when each voxel was within 0.5 mm of the ground-truth deformation and 1 kPa of the ground-truth elasticity distribution. To analyze the feasibility of the model-guided approach, we present the results for regions of low ventilation, specifically, the apex. Results: The maximum apical boundary expansion was observed to be between 2 and 6 mm. Simulating this expansion within an apical lung model, it was observed that 100% of voxels converged within 0.5 mm of ground-truth deformation, while 91.8% converged within 1 kPa of the ground-truth elasticity distribution. A mean elasticity error of 0.6 kPa illustrates the high precision of our technique.
Conclusion: By utilizing 4DCT lung data coupled with a biomechanical model, high precision lung elastography can be accurately performed, even in low ventilation regions of the lungs. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144087.
The ground truth about metadata and community detection in networks
Peel, Leto; Larremore, Daniel B.; Clauset, Aaron
2017-01-01
Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system’s components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks’ links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures. PMID:28508065
Comparing the accuracy of food outlet datasets in an urban environment.
Wong, Michelle S; Peyton, Jennifer M; Shields, Timothy M; Curriero, Frank C; Gudzune, Kimberly A
2017-05-11
Studies that investigate the relationship between the retail food environment and health outcomes often use geospatial datasets. Prior studies have identified challenges of using the most common data sources. Retail food environment datasets created through academic-government partnership present an alternative, but their validity (retail existence, type, location) has not been assessed yet. In our study, we used ground-truth data to compare the validity of two datasets, a 2015 commercial dataset (InfoUSA) and data collected from 2012 to 2014 through the Maryland Food Systems Mapping Project (MFSMP), an academic-government partnership, on the retail food environment in two low-income, inner city neighbourhoods in Baltimore City. We compared sensitivity and positive predictive value (PPV) of the commercial and academic-government partnership data to ground-truth data for two broad categories of unhealthy food retailers: small food retailers and quick-service restaurants. Ground-truth data was collected in 2015 and analysed in 2016. Compared to the ground-truth data, MFSMP and InfoUSA generally had similar sensitivity that was greater than 85%. MFSMP had higher PPV compared to InfoUSA for both small food retailers (MFSMP: 56.3% vs InfoUSA: 40.7%) and quick-service restaurants (MFSMP: 58.6% vs InfoUSA: 36.4%). We conclude that data from academic-government partnerships like MFSMP might be an attractive alternative option and improvement to relying only on commercial data. Other research institutes or cities might consider efforts to create and maintain such an environmental dataset. Even if these datasets cannot be updated on an annual basis, they are likely more accurate than commercial data.
NASA Astrophysics Data System (ADS)
Hutchison, Keith D.; Etherton, Brian J.; Topping, Phillip C.
1996-12-01
Quantitative assessments of the performance of automated cloud analysis algorithms require the creation of highly accurate, manual cloud/no-cloud (CNC) images from multispectral meteorological satellite data. In general, the methodology to create ground truth analyses for the evaluation of cloud detection algorithms is relatively straightforward. However, when focus shifts toward quantifying the performance of automated cloud classification algorithms, the task of creating ground truth images becomes much more complicated, since these CNC analyses must differentiate between water and ice cloud tops while ensuring that inaccuracies in automated cloud detection are not propagated into the results of the cloud classification algorithm. The process of creating these ground truth CNC analyses may become particularly difficult when little or no spectral signature is evident between a cloud and its background, as appears to be the case when thin cirrus is present over snow-covered surfaces. In this paper, procedures are described that enhance the researcher's ability to manually interpret and differentiate between thin cirrus clouds and snow-covered surfaces in daytime AVHRR imagery. The methodology uses data in up to six AVHRR spectral bands, including an additional band derived from the daytime 3.7 micron channel, which has proven invaluable for the manual discrimination between thin cirrus clouds and snow. It is concluded that the 1.6 micron channel remains essential to differentiate between thin ice clouds and snow; however, this capability may be lost if the 3.7 micron data switch to a nighttime-only transmission with the launch of future NOAA satellites.
The Conflict between Science and Religion: A Discussion on the Possibilities for Settlement
ERIC Educational Resources Information Center
Falcao, Eliane Brigida Morais
2010-01-01
In his article "Skepticism, truth as coherence, and constructivist epistemology: grounds for resolving the discord between science and religion?", John Staver identifies what he considers to be the source of the conflicts between science and religion: the establishment of the relationship between truth and knowledge, from the perspective of those…
Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.
Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart
2014-10-01
Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. 
The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our algorithm and relates them to Action Units that have been associated with pain expression. We conclude the paper by demonstrating that MS-MIL yields a significant improvement on another spontaneous facial expression dataset, the FEEDTUM dataset.
First stereo video dataset with ground truth for remote car pose estimation using satellite markers
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Pierini, Marco
2018-04-01
Leading causes of PTW (Powered Two-Wheeler) crashes and near misses in urban areas are failures or delays in predicting the changing trajectories of other vehicles. Regrettably, misperception on the part of both car drivers and motorcycle riders results in fatal or serious consequences for riders. Intelligent vehicles could provide early warning about possible collisions, helping to avoid the crash. There is evidence that stereo cameras can be used for estimating the heading angle of other vehicles, which is key to anticipating their imminent location, but there is limited heading ground truth data available in the public domain. Consequently, we employed a marker-based technique for creating ground truth of car pose and created a dataset for computer vision benchmarking purposes. This dataset of a moving vehicle collected from a statically mounted stereo camera is a simplification of a complex and dynamic reality, which serves as a test bed for car pose estimation algorithms. The dataset contains the accurate pose of the moving obstacle, and realistic imagery including texture-less and non-lambertian surfaces (e.g. reflectance and transparency).
Simonsen, Daniel; Nielsen, Ida F; Spaich, Erika G; Andersen, Ole K
2017-05-02
The present paper describes the design and evaluation of an automated version of the Modified Jebsen Test of Hand Function (MJT) based on the Microsoft Kinect sensor. The MJT was administered twice to 11 chronic stroke subjects with varying degrees of hand function deficits. The test times of the MJT were evaluated manually by a therapist using a stopwatch, and automatically using the Microsoft Kinect sensor. The ground truth times were assessed based on inspection of the video-recordings. The agreement between the methods was evaluated along with the test-retest performance. The results from Bland-Altman analysis showed better agreement between the ground truth times and the automatic MJT time evaluations compared to the agreement between the ground truth times and the times estimated by the therapist. The results from the test-retest performance showed that the subjects significantly improved their performance in several subtests of the MJT, indicating a practice effect. The results from the test showed that the Kinect can be used for automating the MJT.
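The Bland-Altman analysis used above reduces to the mean difference between two measurement methods (the bias) and its 95% limits of agreement. A minimal sketch with invented paired test times standing in for the real trials:

```python
import numpy as np

# Hypothetical paired test times (seconds): ground-truth video timings vs
# automatic Kinect estimates for the same trials.
ground = np.array([12.4, 15.1, 9.8, 20.3, 11.0, 14.2])
kinect = np.array([12.6, 14.9, 10.1, 20.0, 11.3, 14.5])

diff = kinect - ground
bias = diff.mean()                        # mean difference between methods
sd   = diff.std(ddof=1)                   # SD of the differences
loa  = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
```

A small bias with narrow limits of agreement indicates that the automatic timings can substitute for the manual ones; plotting `diff` against the pairwise means completes the standard Bland-Altman picture.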
New Ground Truth Capability from InSAR Time Series Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, S; Vincent, P; Yang, D
2005-07-13
We demonstrate that next-generation interferometric synthetic aperture radar (InSAR) processing techniques applied to existing data provide rich InSAR ground truth content for exploitation in seismic source identification. InSAR time series analyses utilize tens of interferograms and can be implemented in different ways. In one such approach, conventional InSAR displacement maps are inverted in a final post-processing step. Alternatively, computationally intensive data reduction can be performed with specialized InSAR processing algorithms. The typical final result of these approaches is a synthesized set of cumulative displacement maps. Examples from our recent work demonstrate that these InSAR processing techniques can provide appealing new ground truth capabilities. We construct movies showing the areal and temporal evolution of deformation associated with previous nuclear tests. In other analyses, we extract time histories of centimeter-scale surface displacement associated with tunneling. The potential exists to identify millimeter-per-year surface movements when sufficient data exist for InSAR techniques to isolate and remove phase signatures associated with digital elevation model errors and the atmosphere.
AMS Ground Truth Measurements: Calibrations and Test Lines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wasiolek, Piotr T.
2015-12-01
Airborne gamma spectrometry is one of the primary techniques used to define the extent of ground contamination after a radiological incident. Its usefulness was demonstrated extensively during the response to the Fukushima NPP accident in March-May 2011. To map ground contamination, a set of scintillation detectors is mounted on an airborne platform (airplane or helicopter) and flown over contaminated areas. The acquisition system collects spectral information together with the aircraft position and altitude every second. To provide useful information to decision makers, the count data, expressed in counts per second (cps), need to be converted to a terrestrial component of the exposure rate at 1 meter (m) above ground, or surface activity of the isotopes of concern. This is done using conversion coefficients derived from calibration flights. During a large-scale radiological event, multiple flights may be necessary and may require use of assets from different agencies. However, because production of a single, consistent map product depicting the ground contamination is the primary goal, it is critical to establish a common calibration line very early into the event. Such a line should be flown periodically in order to normalize data collected from different aerial acquisition systems that are potentially flown at different flight altitudes and speeds. In order to verify and validate individual aerial systems, the calibration line needs to be characterized in terms of ground truth measurements. This is especially important if the contamination is due to short-lived radionuclides. The process of establishing such a line, as well as the necessary ground truth measurements, is described in this document.
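The cps-to-exposure-rate conversion described above amounts to background subtraction, an altitude correction, and a calibration coefficient. All numeric values below (background, attenuation coefficient, conversion coefficient) are invented placeholders; real values come from the calibration-line flights, and the exponential altitude correction is a common simplification, not necessarily the exact AMS procedure:

```python
import math

# Invented placeholder values for illustration only.
gross_cps     = 1850.0   # measured gross counts per second on the survey line
background    = 350.0    # aircraft + cosmic + radon background (cps), assumed
altitude_m    = 100.0    # radar altitude during the survey line (m)
nominal_alt_m = 150.0    # altitude at which the conversion coefficient applies
attenuation   = 0.007    # assumed effective air attenuation coefficient (1/m)
coeff         = 0.012    # assumed uR/h per cps at the nominal altitude

# Subtract background, correct net counts to the nominal altitude
# (count rate falls off roughly exponentially with height), then
# apply the calibration-derived conversion coefficient.
net_cps = gross_cps - background
net_at_nominal = net_cps * math.exp(attenuation * (altitude_m - nominal_alt_m))
exposure_rate = net_at_nominal * coeff   # terrestrial exposure rate, uR/h
```

Normalizing every system's data to the same nominal altitude before applying its coefficient is what lets maps from different aircraft be merged into one consistent product.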
AMS Ground Truth Measurements: Calibration and Test Lines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wasiolek, P.
2013-11-01
Airborne gamma spectrometry is one of the primary techniques used to define the extent of ground contamination after a radiological incident. Its usefulness was demonstrated extensively during the response to the Fukushima nuclear power plant (NPP) accident in March-May 2011. To map ground contamination, a set of scintillation detectors is mounted on an airborne platform (airplane or helicopter) and flown over contaminated areas. The acquisition system collects spectral information together with the aircraft position and altitude every second. To provide useful information to decision makers, the count rate data expressed in counts per second (cps) need to be converted to the terrestrial component of the exposure rate 1 m above ground, or surface activity of isotopes of concern. This is done using conversion coefficients derived from calibration flights. During a large-scale radiological event, multiple flights may be necessary and may require use of assets from different agencies. However, as the production of a single, consistent map product depicting the ground contamination is the primary goal, it is critical to establish a common calibration line very early into the event. Such a line should be flown periodically in order to normalize data collected from different aerial acquisition systems, potentially flown at different flight altitudes and speeds. In order to verify and validate individual aerial systems, the calibration line needs to be characterized in terms of ground truth measurements. This is especially important if the contamination is due to short-lived radionuclides. The process of establishing such a line, as well as the necessary ground truth measurements, is described in this document.
The GSFC Mark-2 three band hand-held radiometer. [thematic mapper for ground truth data collection
NASA Technical Reports Server (NTRS)
Tucker, C. J.; Jones, W. H.; Kley, W. A.; Sundstrom, G. J.
1980-01-01
A self-contained, portable, hand-held radiometer designed for field usage was constructed and tested. The device, consisting of a hand-held probe containing three sensors and a strap-supported electronic module, weighs 4 1/2 kilograms. It is powered by flashlight and transistor-radio batteries, utilizes two silicon detectors and one lead sulfide detector, has three liquid crystal displays and sample-and-hold radiometric sampling, and its spectral configuration corresponds to LANDSAT-D's thematic mapper bands. The device was designed to support thematic mapper ground-truth data collection efforts and to facilitate in situ ground-based remote sensing studies of natural materials. Prototype instruments were extensively tested under laboratory and field conditions with excellent results.
Ground Truth Sampling and LANDSAT Accuracy Assessment
NASA Technical Reports Server (NTRS)
Robinson, J. W.; Gunther, F. J.; Campbell, W. J.
1982-01-01
It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.
NASA Technical Reports Server (NTRS)
Botkin, Daniel B.
1987-01-01
The analysis of ground-truth data from the boreal forest plots in the Superior National Forest, Minnesota, was completed. Development of statistical methods was completed for dimension analysis (equations to estimate the biomass of trees from measurements of diameter and height). The dimension-analysis equations were applied to the data obtained from ground-truth plots, to estimate the biomass. Classification and analyses of remote sensing images of the Superior National Forest were done as a test of the technique to determine forest biomass and ecological state by remote sensing. Data was archived on diskette and tape and transferred to UCSB to be used in subsequent research.
Machine processing of ERTS and ground truth data
NASA Technical Reports Server (NTRS)
Rogers, R. H. (Principal Investigator); Peacock, K.
1973-01-01
The author has identified the following significant results. Results achieved by ERTS-Atmospheric Experiment PR303, whose objective is to establish a radiometric calibration technique, are reported. This technique, which determines and removes solar and atmospheric parameters that degrade the radiometric fidelity of ERTS-1 data, transforms the ERTS-1 sensor radiance measurements to absolute target reflectance signatures. A radiant power measuring instrument and its use in determining atmospheric parameters needed for ground truth are discussed. The procedures used and results achieved in machine processing ERTS-1 computer-compatible tapes and atmospheric parameters to obtain target reflectance are reviewed.
1997-09-05
explosions be used as sources of ground truth information? Can these sources be used as surrogates for single-fired explosions in regions where no such... sources exist? (Stump) 3. Is there a single regional discriminant that will work for all mining explosions or will it be necessary to apply a suite of...the Treaty be used to take advantage of mining sources as Ground Truth information? Is it possible to use such information to "finger print" mines
Recommended data sets, corn segments and spring wheat segments, for use in program development
NASA Technical Reports Server (NTRS)
Austin, W. W. (Principal Investigator)
1981-01-01
The sets of Large Area Crop Inventory Experiment sites, crop year 1978, which are recommended for use in the development and evaluation of classification techniques based on LANDSAT spectral data are presented. For each site, the following exists: (1) accuracy assessment digitized ground truth; (2) a minimum of 5 percent of the scene ground truth identified as corn or spring wheat; and (3) at least four acquisitions of acceptable data quality during the growing season of the crop of interest. The recommended data sets consist of 41 corn/soybean sites and 17 spring wheat sites.
Explosion Source Location Study Using Collocated Acoustic and Seismic Networks in Israel
NASA Astrophysics Data System (ADS)
Pinsky, V.; Gitterman, Y.; Arrowsmith, S.; Ben-Horin, Y.
2013-12-01
We explore a joint analysis of seismic and infrasonic signals to improve automatic monitoring of small local/regional events, such as construction and quarry blasts, military chemical explosions, and sonic booms, using collocated seismic and infrasonic networks recently built in Israel (ISIN) within the framework of a project sponsored by the Bi-national USA-Israel Science Foundation (BSF). The general target is to create an automatic system which will provide detection, location and identification of explosions in a real-time or close-to-real-time manner. At the moment the network comprises 15 stations hosting a microphone and seismometer (or accelerometer), operated by the Geophysical Institute of Israel (GII), plus two infrasonic arrays operated by the National Data Center, Soreq: IOB in the South (Negev desert) and IMA in the North of Israel (Upper Galilee), collocated with the IMS seismic array MMAI. The study utilizes a ground-truth database of numerous Rotem phosphate quarry blasts, a number of controlled explosions for demolition of outdated ammunition, and experimental surface explosions for a structure-protection research project at the Sayarim Military Range. A special event, comprising four military explosions in a neighboring country, that provided both strong seismic (up to 400 km) and infrasound waves (up to 300 km), is also analyzed. For all of these events the ground-truth coordinates and/or the results of seismic location by the Israel Seismic Network (ISN) have been provided. For automatic event detection and phase picking we tested a new recursive picker based on a statistically optimal detector. The results were compared to the manual picks.
Several location techniques have been tested on the ground-truth event recordings, and the preliminary results have been compared to the ground-truth locations: 1) a number of events were located by intersecting azimuths estimated with the wide-band F-K analysis technique applied to the infrasonic phases at the two distant arrays; 2) a standard robust grid-search location procedure, based on phase picks and a constant celerity per phase (tropospheric or stratospheric), was applied; 3) a joint coordinate grid-search procedure using array waveforms and phase picks was tested; and 4) the Bayesian Infrasonic Source Localization (BISL) method, which incorporates semi-empirical model-based prior information, was modified for an array+network configuration and applied to the ground-truth events. For this purpose we accumulated previous observations of air-to-ground infrasonic phases to compute station-specific ground-truth Celerity-Range Histograms (ssgtCRH) and/or model-based CRH (mbCRH), which substantially improve the location results. The mbCRH were built from local meteorological data and ray-tracing modeling in the three available azimuth ranges (quadrants North: 315-45, South: 135-225, East: 45-135), accounting for seasonal variations in wind directivity.
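The constant-celerity grid-search procedure in item 2 can be sketched as follows; the station layout, celerity value, and grid bounds are illustrative assumptions, not the study's actual configuration:

```python
import numpy as np

def grid_search_locate(stations, arrivals, celerity=0.30, grid=None):
    """Locate a source by minimizing RMS arrival-time residuals.

    stations : (n, 2) array of station coordinates in km
    arrivals : (n,) observed arrival times in s
    celerity : assumed constant celerity in km/s (~0.30 km/s is a
               typical tropospheric infrasound value)
    grid     : (m, 2) array of candidate source positions in km
    """
    if grid is None:
        xs = np.linspace(-50.0, 50.0, 101)
        grid = np.array([(x, y) for x in xs for y in xs])
    best = None
    for gx, gy in grid:
        d = np.hypot(stations[:, 0] - gx, stations[:, 1] - gy)
        tt = d / celerity                    # predicted travel times
        t0 = np.mean(arrivals - tt)          # best origin time for this node
        rms = np.sqrt(np.mean((arrivals - (t0 + tt)) ** 2))
        if best is None or rms < best[0]:
            best = (rms, gx, gy, t0)
    return best  # (rms, x, y, origin_time)
```

The robustness of the actual procedure (e.g. L1 misfits, phase-dependent celerities) is omitted here for brevity.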
Science, Religion, and the Quest for Knowledge and Truth: An Islamic Perspective
ERIC Educational Resources Information Center
Guessoum, Nidhal
2010-01-01
This article consists of two parts. The first one is to a large extent a commentary on John R. Staver's "Skepticism, truth as coherence, and constructivist epistemology: grounds for resolving the discord between science and religion?" The second part is a related overview of Islam's philosophy of knowledge and, to a certain degree, science. In…
NASA Astrophysics Data System (ADS)
Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart
2015-02-01
This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT-based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases, providing auto-segmented GTVs and motion-encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis, yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92) to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients, ranging from 90.87 to 98.57% volumetric overlap of the ground-truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of the selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
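The STAPLE consensus used for the multi-expert ground truth can be illustrated with a minimal EM implementation for binary segmentations; this is a sketch of the general algorithm (Warfield et al.), not the clinical tool used in the study, and the fixed global prior is a simplification:

```python
import numpy as np

def staple(decisions, prior=None, iters=50, eps=1e-6):
    """Simplified STAPLE: EM estimate of a consensus binary segmentation.

    decisions : (n_raters, n_voxels) binary array of expert segmentations
    Returns (w, p, q): voxel posteriors P(T=1), per-rater sensitivities p,
    and specificities q.
    """
    D = np.asarray(decisions, dtype=float)
    n_raters, n_vox = D.shape
    if prior is None:
        prior = D.mean()                    # global foreground prior (fixed)
    p = np.full(n_raters, 0.9)              # initial sensitivities
    q = np.full(n_raters, 0.9)              # initial specificities
    for _ in range(iters):
        # E-step: posterior that each voxel is truly foreground
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        w = a / np.clip(a + b, eps, None)
        # M-step: update rater performance parameters
        p = np.clip((D @ w) / np.clip(w.sum(), eps, None), eps, 1 - eps)
        q = np.clip(((1 - D) @ (1 - w)) / np.clip((1 - w).sum(), eps, None), eps, 1 - eps)
    return w, p, q
```

Thresholding `w` at 0.5 yields the consensus GTV mask.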
Ground truth seismic events and location capability at Degelen mountain, Kazakhstan
NASA Astrophysics Data System (ADS)
Trabant, Chad; Thurber, Clifford; Leith, William
2002-07-01
We utilized nuclear explosions from the Degelen Mountain sub-region of the Semipalatinsk Test Site (STS), Kazakhstan, to assess seismic location capability directly. Excellent ground-truth information for these events was either known or was estimated from maps of the Degelen Mountain adit complex. Origin times were refined for events lacking absolute origin-time information using catalog arrival times, our ground-truth location estimates, and a time baseline provided by fixing known origin times during a joint hypocenter determination (JHD). Precise arrival-time picks were determined using a waveform cross-correlation process applied to the available digital data, and these data were used in a JHD analysis. We found that very accurate locations were possible when high-precision, waveform cross-correlation arrival times were combined with JHD. Relocation with our full digital data set resulted in a mean mislocation of 2 km and a mean 95% confidence ellipse (CE) area of 6.6 km² (90% CE: 5.1 km²); however, only 5 of the 18 computed error ellipses actually covered the associated ground-truth location estimate. To test a more realistic nuclear-test-monitoring scenario, we applied our JHD analysis to a set of seven events (one fixed) using data only from seismic stations within 40° epicentral distance. Relocation with these data resulted in a mean mislocation of 7.4 km, with four of the 95% error ellipses covering less than 570 km² (90% CE: 438 km²) and the other two covering 1730 and 8869 km² (90% CE: 1331 and 6822 km²). Location uncertainties calculated using JHD often underestimated the true error, but a circular region with a radius equal to the mislocation covered less than 1000 km² for all events having more than three observations.
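The cross-correlation step that produces precise relative arrival times can be sketched as follows; the parabolic sub-sample refinement is a common convention and an assumption here, not necessarily the authors' exact pipeline:

```python
import numpy as np

def xcorr_delay(ref, sig, dt=1.0):
    """Relative arrival-time shift of `sig` with respect to `ref`,
    estimated by cross-correlation with parabolic sub-sample refinement.
    Positive result means `sig` is delayed relative to `ref`.
    """
    n = len(ref)
    cc = np.correlate(sig, ref, mode="full")   # lags -(n-1) .. (n-1)
    k = int(np.argmax(cc))
    # parabolic interpolation around the peak for sub-sample precision
    if 0 < k < len(cc) - 1:
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        denom = y0 - 2 * y1 + y2
        frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    return ((k - (n - 1)) + frac) * dt
```

Applying this to station pairs over a common reference event yields the high-precision differential picks that feed the JHD inversion.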
Consensus-Based Sorting of Neuronal Spike Waveforms.
Fournier, Julien; Mueller, Christian M; Shein-Idelson, Mark; Hemberger, Mike; Laurent, Gilles
2016-01-01
Optimizing spike-sorting algorithms is difficult because sorted clusters can rarely be checked against independently obtained "ground truth" data. In most spike-sorting algorithms in use today, the optimality of a clustering solution is assessed relative to some assumption on the distribution of the spike shapes associated with a particular single unit (e.g., Gaussianity) and by visual inspection of the clustering solution followed by manual validation. When the spatiotemporal waveforms of spikes from different cells overlap, the decision as to whether two spikes should be assigned to the same source can be quite subjective, if it is not based on reliable quantitative measures. We propose a new approach, whereby spike clusters are identified from the most consensual partition across an ensemble of clustering solutions. Using the variability of the clustering solutions across successive iterations of the same clustering algorithm (template matching based on K-means clusters), we estimate the probability of spikes being clustered together and identify groups of spikes that are not statistically distinguishable from one another. Thus, we identify spikes that are most likely to be clustered together and therefore correspond to consistent spike clusters. This method has the potential advantage that it does not rely on any model of the spike shapes. It also provides estimates of the proportion of misclassified spikes for each of the identified clusters. We tested our algorithm on several datasets for which there exists a ground truth (simultaneous intracellular data), and show that it performs close to the optimum reached by a support vector machine trained on the ground truth. We also show that the estimated rate of misclassification matches the proportion of misclassified spikes measured from the ground truth data.
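The core idea, estimating how often pairs of spikes land in the same cluster across repeated runs, can be sketched with a minimal co-association matrix; the toy k-means and all parameters are illustrative, not the paper's template-matching implementation:

```python
import numpy as np

def kmeans(X, k, rng, iters=20):
    """Minimal k-means with random initial centroids drawn from the data."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(lab == j):
                C[j] = X[lab == j].mean(0)
    return lab

def coassociation(X, k, runs=20, seed=0):
    """Fraction of clustering runs in which each pair of spikes is
    assigned to the same cluster -- the consensus the method builds on."""
    rng = np.random.default_rng(seed)
    n = len(X)
    M = np.zeros((n, n))
    for _ in range(runs):
        lab = kmeans(X, k, rng)
        M += (lab[:, None] == lab[None, :])
    return M / runs
```

Groups of spikes whose pairwise co-association stays near 1 across runs form the consensual clusters; off-diagonal values estimate misclassification rates.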
NASA Astrophysics Data System (ADS)
Lorsakul, Auranuch; Andersson, Emilia; Vega Harring, Suzana; Sade, Hadassah; Grimm, Oliver; Bredno, Joerg
2017-03-01
Multiplex-brightfield immunohistochemistry (IHC) staining and quantitative measurement of multiple biomarkers can support therapeutic targeting of carcinoma-associated fibroblasts (CAF). This paper presents an automated digital-pathology solution to simultaneously analyze multiple biomarker expressions within a single tissue section stained with an IHC duplex assay. Our method was verified against ground truth provided by expert pathologists. In the first stage, the automated method quantified epithelial-carcinoma cells expressing cytokeratin (CK) using robust nucleus detection and supervised cell-by-cell classification algorithms with a combination of nucleus and contextual features. Using fibroblast activation protein (FAP) as a biomarker for CAFs, the algorithm was trained, based on ground truth obtained from pathologists, to automatically identify tumor-associated stroma using a supervised-generation rule. The algorithm reported the distance to the nearest neighbor in the populations of tumor cells and activated stromal fibroblasts as a whole-slide measure of spatial relationships. A total of 45 slides from six indications (breast, pancreatic, colorectal, lung, ovarian, and head-and-neck cancers) were included for training and verification. CK-positive cells detected by the algorithm were verified by a pathologist with good agreement (R² = 0.98) with the ground-truth count. For the area occupied by FAP-positive cells, the inter-observer agreement between two sets of ground-truth measurements was R² = 0.93, whereas the algorithm reproduced the pathologists' areas with R² = 0.96. The proposed methodology enables automated image analysis to measure spatial relationships of cells stained in an IHC-multiplex assay. Our proof-of-concept results show that an automated algorithm can be trained to reproduce the expert assessment and provide quantitative readouts that potentially support a cutoff determination in hypothesis testing related to CAF-targeting-therapy decisions.
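The distance-to-nearest-neighbor readout between the two cell populations reduces to a nearest-neighbour search over two point sets; a brute-force sketch (coordinates hypothetical, and a production system would presumably use a spatial index for whole-slide scale):

```python
import numpy as np

def nn_distances(tumor_xy, caf_xy):
    """For each tumour cell, the distance to the nearest FAP-positive
    fibroblast.  Inputs are (n, 2) and (m, 2) arrays of cell centroids."""
    d = np.linalg.norm(tumor_xy[:, None, :] - caf_xy[None, :, :], axis=-1)
    return d.min(axis=1)
```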
Measuring Soil Moisture in Skeletal Soils Using a COSMOS Rover
NASA Astrophysics Data System (ADS)
Medina, C.; Neely, H.; Desilets, D.; Mohanty, B.; Moore, G. W.
2017-12-01
The presence of coarse fragments directly influences the volumetric water content of the soil. Current surface soil moisture sensors often do not account for the presence of coarse fragments, and little research has been done to calibrate these sensors under such conditions. The cosmic-ray soil moisture observation system (COSMOS) rover is a passive, non-invasive surface soil moisture sensor with a footprint greater than 100 m. Despite its potential, the COSMOS rover has yet to be validated in skeletal soils. The goal of this study was to validate measurements of surface soil moisture taken by a COSMOS rover on a Texas skeletal soil. Data were collected for two soils, a Marfla clay loam and a Chinati-Boracho-Berrend association, in West Texas. Three levels of data were collected: 1) COSMOS surveys at three different soil moistures, 2) electrical conductivity surveys within those COSMOS surveys, and 3) ground-truth measurements. Surveys with the COSMOS rover covered an 8000-ha area and were taken both after large rain events (>2") and after a long dry period. Within the COSMOS surveys, the EM38-MK2 was used to estimate the spatial distribution of coarse fragments in the soil around two COSMOS points. Ground-truth measurements included coarse fragment mass and volume, bulk density, and water content at 3 locations within each EM38 survey. Ground-truth measurements were weighted using EM38 data, and COSMOS measurements were validated by their distance from the samples. There was a decrease in water content as the percent volume of coarse fragments increased. COSMOS estimations responded to both changes in coarse fragment percent volume and the ground-truth volumetric water content. Further research will focus on creating digital soil maps using landform data and water content estimations from the COSMOS rover.
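The reported decrease in whole-soil water content with coarse-fragment volume follows from a standard first-order correction, sketched below under the assumption that stones hold negligible water (real coarse fragments can retain some):

```python
def bulk_water_content(theta_fine, rock_vol_frac):
    """Whole-soil volumetric water content when a fraction of the soil
    volume is coarse fragments assumed to hold no water.

    theta_fine    : volumetric water content of the fine-earth fraction
    rock_vol_frac : volume fraction occupied by coarse fragments (0..1)
    """
    return theta_fine * (1.0 - rock_vol_frac)
```

For example, a fine-earth water content of 0.30 in a soil that is 25% rock by volume corresponds to a whole-soil value of 0.225, which is the quantity a large-footprint sensor such as the COSMOS rover senses.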
NASA Astrophysics Data System (ADS)
Robinson, D. Q.
2001-05-01
Hampton University, a historically black university, is leading the Education and Public Outreach (EPO) portion of the PICASSO-CENA satellite-based research mission. Currently scheduled for launch in 2004, PICASSO-CENA will use LIDAR (LIght Detection And Ranging) to study Earth's atmosphere. The PICASSO-CENA Outreach program works with scientists, teachers, and students to better understand the effects of clouds and aerosols on Earth's atmosphere. This program actively involves students nationwide in NASA research by having them obtain sun photometer measurements from their schools and homes for comparison with data collected by the PICASSO-CENA mission. Students collect data from their classroom ground observations and report the data via the Internet. Scientists will use the data from the PICASSO-CENA research and the student ground-truthing observations to improve predictions about climate change. The two-band passive remote sensing sun photometer is designed for student use as a stand-alone instrument to study atmospheric turbidity or in conjunction with satellite data to provide ground-truthing. The instrument will collect measurements of column optical depth from ground level. These measurements will not only give the students an appreciation for atmospheric turbidity, but will also provide quantitative correlative information to the PICASSO-CENA mission on ground-level optical depth. Student data obtained in this manner will be sufficiently accurate for scientists to use as ground truthing. Thus, students will have the opportunity to be involved with a NASA satellite-based research mission.
Beyond illusion: Psychoanalysis and the question of religious truth.
Blass, Rachel B
2004-06-01
In this paper the author critically examines the nature of the positive, reconciliatory attitude towards religion that has become increasingly prevalent within psychoanalytic thinking and writing over the past 20 years. She shows how this positive attitude rests on a change in the nature of the prototype of religion and its reassignment to the realm of illusion, thus making irrelevant an issue most central both to psychoanalysis and to traditional Judeo-Christian belief--the passionate search for truth. The author demonstrates how the concern with truth, and specifically with the truth of religious claims, lies at the basis of the opposition between psychoanalysis and religion but, paradoxically, also provides the common ground for dialogue between the two. She argues that, as Freud developed his ideas regarding the origin of conviction in religious claims in his Moses and monotheism (1939), the nature of this common ground was expanded and the dialogue became potentially more meaningful. The author concludes that meaningful dialogue emerges through recognition of fundamental differences rather than through harmonisation within a realm of illusion. In this light, the present study may also be seen as an attempt to recognise fundamental differences that have been evolving within psychoanalysis itself.
Inferring tie strength from online directed behavior.
Jones, Jason J; Settle, Jaime E; Bond, Robert M; Fariss, Christopher J; Marlow, Cameron; Fowler, James H
2013-01-01
Some social connections are stronger than others. People have not only friends, but also best friends. Social scientists have long recognized this characteristic of social connections and researchers frequently use the term tie strength to refer to this concept. We used online interaction data (specifically, Facebook interactions) to successfully identify real-world strong ties. Ground truth was established by asking users themselves to name their closest friends in real life. We found the frequency of online interaction was diagnostic of strong ties, and interaction frequency was much more useful diagnostically than were attributes of the user or the user's friends. More private communications (messages) were not necessarily more informative than public communications (comments, wall posts, and other interactions).
Ground-Based Remote Sensing of Water-Stressed Crops: Thermal and Multispectral Imaging
USDA-ARS?s Scientific Manuscript database
Ground-based methods of remote sensing can be used as ground-truthing for satellite-based remote sensing, and in some cases may be a more affordable means of obtaining such data. Plant canopy temperature has been used to indicate and quantify plant water stress. A field research study was conducted ...
Ground-based thermal and multispectral imaging of limited irrigation crops
USDA-ARS?s Scientific Manuscript database
Ground-based methods of remote sensing can be used as ground-truth for satellite-based remote sensing, and in some cases may be a more affordable means of obtaining such data. Plant canopy temperature has been used to indicate and quantify plant water stress. A field research study was conducted in ...
NASA Astrophysics Data System (ADS)
Fonseca, Pablo; Mendoza, Julio; Wainer, Jacques; Ferrer, Jose; Pinto, Joseph; Guerrero, Jorge; Castaneda, Benjamin
2015-03-01
Breast parenchymal density is considered a strong indicator of breast cancer risk and therefore useful for preventive tasks. Measurement of breast density is often qualitative and requires the subjective judgment of radiologists. Here we explore an automatic breast composition classification workflow based on convolutional neural networks for feature extraction in combination with a support vector machines classifier. This is compared to the assessments of seven experienced radiologists. The experiments yielded an average kappa value of 0.58 when using the mode of the radiologists' classifications as ground truth. Individual radiologist performance against this ground truth yielded kappa values between 0.56 and 0.79.
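Agreement with the radiologists was scored with Cohen's kappa, which can be computed directly from a normalized confusion matrix; this is a generic implementation, not the study's evaluation code:

```python
import numpy as np

def cohens_kappa(a, b, n_classes):
    """Cohen's kappa between two raters' labels: chance-corrected
    agreement, (p_o - p_e) / (1 - p_e)."""
    a, b = np.asarray(a), np.asarray(b)
    cm = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        cm[i, j] += 1
    cm /= cm.sum()
    po = np.trace(cm)                  # observed agreement
    pe = cm.sum(1) @ cm.sum(0)         # agreement expected by chance
    return (po - pe) / (1 - pe)
```

Scoring the automatic classifier against the mode of the radiologists' labels, as the paper does, means `a` is the predicted class and `b` the per-image majority vote.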
NASA Technical Reports Server (NTRS)
Dixon, C. M.
1981-01-01
Land cover information derived from LANDSAT is being utilized by the Piedmont Planning District Commission located in the State of Virginia. Progress to date is reported on a level-one land cover classification map being produced with nine categories. The nine categories of classification are defined. The computer-compatible tape selection is presented. Two unsupervised classifications were done, with 50 and 70 classes respectively. Twenty-eight spectral classes were developed using the supervised technique, employing actual ground-truth training sites. The accuracy of the unsupervised classifications is estimated through comparison with local county statistics and with an actual pixel count of LANDSAT information compared to ground truth.
ASM Based Synthesis of Handwritten Arabic Text Pages
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif; Ghoneim, Ahmed
2015-01-01
Document analysis tasks such as text recognition, word spotting, or segmentation are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods that have individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-Spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient naturally ground-truthed data is available. PMID:26295059
Limits of Risk Predictability in a Cascading Alternating Renewal Process Model.
Lin, Xin; Moussawi, Alaa; Korniss, Gyorgy; Bakdash, Jonathan Z; Szymanski, Boleslaw K
2017-07-27
Most risk analysis models systematically underestimate the probability and impact of catastrophic events (e.g., economic crises, natural disasters, and terrorism) by not taking into account interconnectivity and interdependence of risks. To address this weakness, we propose the Cascading Alternating Renewal Process (CARP) to forecast interconnected global risks. However, assessments of the model's prediction precision are limited by lack of sufficient ground truth data. Here, we establish prediction precision as a function of input data size by using alternative long ground truth data generated by simulations of the CARP model with known parameters. We illustrate the approach on a model of fires in artificial cities assembled from basic city blocks with diverse housing. The results confirm that parameter recovery variance exhibits power law decay as a function of the length of available ground truth data. Using CARP, we also demonstrate estimation using a disparate dataset that also has dependencies: real-world prediction precision for the global risk model based on the World Economic Forum Global Risk Report. We conclude that the CARP model is an efficient method for predicting catastrophic cascading events with potential applications to emerging local and global interconnected risks.
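The parameter-recovery experiment can be sketched with a single alternating renewal process with exponential up/down durations; this is a simplification, since CARP couples many such processes through cascades, and all rates below are illustrative:

```python
import numpy as np

def simulate_arp(rate_up, rate_down, t_max, rng):
    """One alternating renewal process: exponential failure-free ('up')
    and failure ('down') durations until time t_max.  Returns the
    down-time fraction and the observed mean up-duration."""
    t, up_durs, down_time = 0.0, [], 0.0
    while t < t_max:
        u = rng.exponential(1.0 / rate_up)     # time until next failure
        d = rng.exponential(1.0 / rate_down)   # repair (down) time
        up_durs.append(u)
        down_time += d
        t += u + d
    return down_time / t, np.mean(up_durs)

def recovery_variance(rate_up, rate_down, t_max, trials=200, seed=0):
    """Variance of the recovered mean up-duration as a function of the
    length of the synthetic ground-truth record -- the quantity whose
    power-law decay the paper examines."""
    rng = np.random.default_rng(seed)
    est = [simulate_arp(rate_up, rate_down, t_max, rng)[1] for _ in range(trials)]
    return np.var(est)
```

Longer synthetic records shrink the estimator variance, mirroring the paper's finding that recovery variance decays with available ground-truth length.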
Sweet-spot training for early esophageal cancer detection
NASA Astrophysics Data System (ADS)
van der Sommen, Fons; Zinger, Svitlana; Schoon, Erik J.; de With, Peter H. N.
2016-03-01
Over the past decade, the imaging tools available to endoscopists have improved drastically, enabling physicians to visually inspect the intestinal tissue for early signs of malignant lesions. Besides this, recent studies show the feasibility of supportive image analysis for endoscopists, but the analysis problem is typically approached as a segmentation task in which binary ground truth is employed. In this study, we show that the detection of early cancerous tissue in the gastrointestinal tract cannot be approached as a binary segmentation problem and that it is crucial and clinically relevant to involve multiple experts in annotating early lesions. By employing the so-called sweet spot as a training metric, a much better detection performance can be achieved. Furthermore, a multi-expert-based ground truth, i.e. a golden standard, enables an improved validation of the resulting delineations. For this purpose, besides the sweet spot we also propose another novel metric, the Jaccard Golden Standard (JIGS), that can handle multiple ground-truth annotations. Our experiments involving these new metrics and based on the golden standard show that the performance of a detection algorithm for early neoplastic lesions in Barrett's esophagus can be increased significantly, demonstrating a 10-percentage-point increase in the resulting F1 detection score.
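A multi-ground-truth overlap metric in the spirit of this approach can be sketched as below; the majority-consensus definition is one plausible reading, not necessarily the paper's exact JIGS formula:

```python
import numpy as np

def multi_expert_jaccard(detection, annotations, min_experts=None):
    """Jaccard index of a binary detection against a multi-expert
    consensus mask (pixels marked by at least `min_experts` annotators;
    majority by default)."""
    A = np.asarray(annotations, dtype=bool)    # (n_experts, ...) masks
    if min_experts is None:
        min_experts = (len(A) // 2) + 1
    consensus = A.sum(axis=0) >= min_experts
    det = np.asarray(detection, dtype=bool)
    inter = np.logical_and(det, consensus).sum()
    union = np.logical_or(det, consensus).sum()
    return inter / union if union else 1.0
```

Varying `min_experts` from 1 (union) to the number of experts (intersection) spans the spectrum between lenient and strict ground truths that a single binary mask cannot express.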
Development of a Scalable Testbed for Mobile Olfaction Verification.
Zakaria, Syed Muhammad Mamduh Syed; Visvanathan, Retnam; Kamarudin, Kamarulzaman; Yeon, Ahmad Shakaff Ali; Md Shakaff, Ali Yeon; Zakaria, Ammar; Kamarudin, Latifah Munirah
2015-12-09
The lack of information on ground-truth gas dispersion and experiment verification has impeded the development of mobile olfaction systems, especially for real-world conditions. In this paper, an integrated testbed for mobile gas sensing experiments is presented. The integrated 3 m × 6 m testbed was built to provide real-time ground-truth information for mobile olfaction system development. The testbed consists of a 72-gas-sensor array, namely the Large Gas Sensor Array (LGSA), a camera-based localization system and a wireless communication backbone for robot communication and integration into the testbed system. Furthermore, the data collected from the testbed may be streamed into a simulation environment to expedite development. Calibration results using ethanol have shown that using a large number of gas sensors in the LGSA is feasible and can produce coherent signals when the sensors are exposed to the same concentrations. The results have shown that the testbed was able to capture the time-varying characteristics and the variability of a gas plume in a 2-h experiment, thus providing time-dependent ground-truth concentration maps. The authors have demonstrated the ability of the mobile olfaction testbed to monitor, verify and thus provide insight into gas distribution mapping experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, W; Yin, F; Cai, J
Purpose: To develop a technique to generate on-board VC-MRI using patient prior 4D-MRI, motion modeling and on-board 2D-cine MRI for real-time 3D target verification of liver and lung radiotherapy. Methods: The end-expiration phase images of a 4D-MRI acquired during patient simulation are used as the patient prior images. Principal component analysis (PCA) is used to extract 3 major respiratory deformation patterns from the Deformation Field Maps (DFMs) generated between the end-expiration phase and all other phases. On-board 2D-cine MRI images are acquired in the axial view. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI at the end-expiration phase. The DFM is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by matching the corresponding 2D slice of the estimated VC-MRI with the acquired single 2D-cine MRI. The method was evaluated using both XCAT (a computerized patient model) simulations of lung cancer patients and MRI data from a real liver cancer patient. The 3D-MRI at every phase except end-expiration was used to simulate the ground-truth on-board VC-MRI at different instances, and the center-tumor slice was selected to simulate the on-board 2D-cine images. Results: Image subtraction of the ground truth with the estimated on-board VC-MRI shows fewer differences than image subtraction of the ground truth with the prior image. Excellent agreement between profiles was achieved. The normalized cross-correlation coefficients between the estimated and ground-truth images in the axial, coronal and sagittal views for each time step were >= 0.982, 0.905, 0.961 for XCAT data and >= 0.998, 0.911, 0.9541 for patient data. For the XCAT data, the maximum volume-percent-difference between ground-truth and estimated tumor volumes was 1.6% and the maximum center-of-mass shift was 0.9 mm.
Conclusion: Preliminary studies demonstrated the feasibility of estimating real-time VC-MRI for on-board target localization before or during radiotherapy treatments. National Institutes of Health Grant No. R01-CA184173; Varian Medical System.
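Solving for the coefficients of the principal deformation patterns by matching a single 2D slice can be illustrated with a linearized least-squares toy; the real method deforms the prior volume through DFMs rather than adding modes to slice intensities, so this is only a sketch of the fitting step:

```python
import numpy as np

def solve_coeffs(modes_slice, mean_slice, observed_slice):
    """Least-squares weights of the principal modes so that the modeled
    slice matches the acquired on-board 2D-cine slice.

    modes_slice    : (k, n_pix) effect of each mode on the chosen slice
    mean_slice     : (n_pix,)   slice of the prior (end-expiration) image
    observed_slice : (n_pix,)   acquired 2D-cine slice
    """
    A = modes_slice.T                          # n_pix x k design matrix
    b = observed_slice - mean_slice
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

With the coefficients in hand, the full 3D DFM is reconstructed as the weighted sum of the modes and applied to the prior volume to obtain the VC-MRI estimate.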
Collaboration and Conflict Resolution in Education.
ERIC Educational Resources Information Center
Melamed, James C.; Reiman, John W.
2000-01-01
Presents guidelines for resolving conflicts between educators and parents. Participants should seek different perspectives, not "truths," consider the common ground, define an effective problem-solving procedure, adopt ground rules for discussion, address issues, identify interests and positive intentions, develop options, select arrangements, and…
NASA Astrophysics Data System (ADS)
Park, Junghyun; Hayward, Chris; Stump, Brian W.
2018-06-01
Ground truth sources in Utah during 2003-2013 are used to assess the contribution of temporal atmospheric conditions to infrasound detection and the predictive capabilities of atmospheric models. Ground truth sources consist of 28 long duration static rocket motor burn tests and 28 impulsive rocket body demolitions. Automated infrasound detections from a hybrid of regional seismometers and infrasound arrays use a combination of short-term time average/long-term time average ratios and spectral analyses. These detections are grouped into station triads using a Delaunay triangulation network and then associated to estimate phase velocity and azimuth to filter signals associated with a particular source location. The resulting range and azimuth distribution from sources to detecting stations varies seasonally and is consistent with predictions based on seasonal atmospheric models. Impulsive signals from rocket body detonations are observed at greater distances (>700 km) than the extended duration signals generated by the rocket burn test (up to 600 km). Infrasound energy attenuation associated with the two source types is quantified as a function of range and azimuth from infrasound amplitude measurements. Ray-tracing results using Ground-to-Space atmospheric specifications are compared to these observations and illustrate the degree to which the time variations in characteristics of the observations can be predicted over a multiple year time period.
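The short-term/long-term time average ratio used in the automated detections can be illustrated with a minimal generic STA/LTA detector (not the study's exact implementation; window lengths here are arbitrary):

```python
import numpy as np

def sta_lta(signal, n_sta, n_lta):
    """Classic short-term/long-term average power ratio used to flag
    transient infrasound (or seismic) arrivals in a noisy trace."""
    power = np.asarray(signal, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(power)))
    ratio = np.zeros(len(power))
    for i in range(n_lta, len(power)):
        sta = (csum[i + 1] - csum[i + 1 - n_sta]) / n_sta  # short window
        lta = (csum[i + 1] - csum[i + 1 - n_lta]) / n_lta  # long window
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio
```

An impulsive arrival drives the short-window average well above the long-window background, so a simple threshold on the ratio triggers a detection.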
Field Ground Truthing Data Collector - a Mobile Toolkit for Image Analysis and Processing
NASA Astrophysics Data System (ADS)
Meng, X.
2012-07-01
Field Ground Truthing Data Collector is one of the four key components of the NASA funded ICCaRS project, being developed in Southeast Michigan. The ICCaRS ground truthing toolkit provides comprehensive functions: 1) Field functions, including determining locations through GPS, gathering and geo-referencing visual data, laying out ground control points for AEROKAT flights, measuring the flight distance and height, and entering observations of land cover (and use) and health conditions of ecosystems and environments in the vicinity of the flight field; 2) Server synchronization functions, such as downloading study-area maps, aerial photos and satellite images, uploading and synchronizing field-collected data with the distributed databases, calling the geospatial web services on the server side to conduct spatial querying, image analysis and processing, and receiving the processed results in the field for near-real-time validation; and 3) Social network communication functions for direct technical assistance and pedagogical support, e.g., having video-conference calls in the field with the supporting educators, scientists, and technologists, participating in Webinars, or engaging in discussions with other e-learning portals. This customized software package is being built on Apple iPhone/iPad and Google Maps/Earth. The technical infrastructures, data models, coupling methods between distributed geospatial data processing and field data collector tools, remote communication interfaces, coding schema, and functional flow charts will be illustrated and explained in the presentation. A pilot case study will also be demonstrated.
Pest measurement and management
USDA-ARS?s Scientific Manuscript database
Pest scouting, whether it is done only with ground scouting methods or using remote sensing with some ground-truthing, is an important tool to aid site-specific crop management. Different pests may be monitored at different times and using different methods. Remote sensing has the potential to provi...
Land use and land cover mapping: City of Palm Bay, Florida
NASA Technical Reports Server (NTRS)
Barile, D. D.; Pierce, R.
1977-01-01
Two different computer systems were compared for use in making land use and land cover maps. The Honeywell 635 with the LANDSAT signature development program (LSDP) produced a map depicting general patterns, but themes were difficult to classify as specific land use. Urban areas were unclassified. The General Electric Image 100 produced a map depicting eight land cover categories, classifying 68 percent of the total area. Ground truth, LSDP, and Image 100 maps were all made to the same scale for comparison. LSDP agreed with the ground truth for 60 percent and 64 percent of the two test areas compared, while Image 100 was in agreement for 70 percent and 80 percent.
Erosional and depositional history of central Chryse Planitia
NASA Technical Reports Server (NTRS)
Crumpler, L. S.
1992-01-01
This map uses high resolution image data to assess the detailed depositional and erosional history of part of Chryse Planitia. This area is significant to the study of the global geology of Mars because it represents one of only two areas on the martian surface where planetary geologic mapping is assisted with 'ground truth.' In this case the ground truth was provided by Viking Lander 1. Additional questions addressed in this study are concerned with the following: the geologic context of the regional plains surface and the local surface of the Viking Lander 1 site; and the relative influence of volcanic, sedimentary, impact, aeolian, and tectonic processes at the regional and local scales.
NASA Astrophysics Data System (ADS)
Arai, Hiroyuki; Miyagawa, Isao; Koike, Hideki; Haseyama, Miki
We propose a novel technique for estimating the number of people in a video sequence; it has the advantages of being stable even in crowded situations and needing no ground-truth data. By quantitatively analyzing the geometric relationships between image pixels and the real-world volumes they intersect, the method lets a foreground image directly indicate the number of people. Because foreground detection is possible even in crowded situations, the proposed method can be applied in such situations. Moreover, it can estimate the number of people in an a priori manner, so it needs no ground-truth data, unlike existing feature-based estimation techniques. Experiments show the validity of the proposed method.
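The core idea, each foreground pixel contributing a geometry-derived fraction of a person, can be sketched as follows. The `pixels_per_person` lookup stands in for the paper's quantitative pixel-to-volume analysis and would in practice come from camera calibration; the per-row weighting is a deliberate simplification.

```python
import numpy as np

def estimate_count(foreground_mask, pixels_per_person):
    # pixels_per_person[row]: expected foreground area (in pixels) of
    # one person imaged at that row, derived from camera geometry.
    # Each foreground pixel then contributes 1/area person-equivalents,
    # so summing the weighted mask yields a people count directly.
    weights = 1.0 / pixels_per_person[:, None]
    return float((foreground_mask * weights).sum())
```

No labeled training data enters this computation, which is the sense in which the method is "a priori": the count follows from geometry rather than from a regressor fitted to ground-truth counts.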
The Kenya rangeland ecological monitoring unit
NASA Technical Reports Server (NTRS)
Stevens, W. E. (Principal Investigator)
1978-01-01
The author has identified the following significant results. Methodology for aerial surveys and ground truth studies was developed, tested, and revised several times to produce reasonably firm methods of procedure. Computer programs were adapted or developed to analyze, store, and recall data from the ground and air monitoring surveys.
NASA Astrophysics Data System (ADS)
Davies, Jaime S.; Howell, Kerry L.; Stewart, Heather A.; Guinan, Janine; Golding, Neil
2014-06-01
In 2007, the upper part of a submarine canyon system located in water depths between 138 and 1165 m in the South West (SW) Approaches (North East Atlantic Ocean) was surveyed over a 2 week period. High-resolution multibeam echosounder data covering 1106 km2, and 44 ground-truthing video and image transects were acquired to characterise the biological assemblages of the canyons. The SW Approaches is an area of complex terrain, and intensive ground-truthing revealed the canyons to be dominated by soft sediment assemblages. A combination of multivariate analysis of seabed photographs (184-1059 m) and visual assessment of video ground-truthing identified 12 megabenthic assemblages (biotopes) at an appropriate scale to act as mapping units. Of these biotopes, 5 adhered to current definitions of habitats of conservation concern, 4 of which were classed as Vulnerable Marine Ecosystems. Some of the biotopes correspond to descriptions of communities from other megahabitat features (for example the continental shelf and seamounts), although it appears that the canyons host modified versions, possibly due to the inferred high rates of sedimentation in the canyons. Other biotopes described appear to be unique to canyon features, particularly the sea pen biotope consisting of Kophobelemnon stelliferum and cerianthids.
Spatio-temporal evaluation of plant height in corn via unmanned aerial systems
NASA Astrophysics Data System (ADS)
Varela, Sebastian; Assefa, Yared; Vara Prasad, P. V.; Peralta, Nahuel R.; Griffin, Terry W.; Sharda, Ajay; Ferguson, Allison; Ciampitti, Ignacio A.
2017-07-01
Detailed spatial and temporal data on plant growth are critical to guide crop management. Conventional methods to determine field plant traits are intensive, time-consuming, expensive, and limited to small areas. The objective of this study was to examine the integration of data collected via unmanned aerial systems (UAS) at critical corn (Zea mays L.) developmental stages for plant height and its relation to plant biomass. The main steps followed in this research were (1) workflow development for an ultrahigh resolution crop surface model (CSM) with the goal of determining plant height (CSM-estimated plant height) using data gathered from the UAS missions; (2) validation of CSM-estimated plant height with ground-truthing plant height (measured plant height); and (3) final estimation of plant biomass via integration of CSM-estimated plant height with ground-truthing stem diameter data. Results indicated a correlation between CSM-estimated plant height and ground-truthing plant height data at two weeks prior to flowering and at flowering stage, with higher predictability at the later growth stage. Log-log analysis on the temporal data confirmed that these relationships are stable, presenting equal slopes for both crop stages evaluated. In conclusion, data collected from low altitude with a low-cost sensor could be useful in estimating plant height.
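Step (1), deriving plant height from a crop surface model, reduces to differencing two co-registered rasters. A minimal sketch under assumed inputs (a surface model of the canopy and a bare-soil terrain model on the same grid; the function names and the percentile choice are ours):

```python
import numpy as np

def csm_plant_height(dsm, dtm):
    """Per-pixel canopy height: crop surface model minus bare-soil
    terrain model, clipped at zero to suppress negative noise."""
    return np.clip(dsm - dtm, 0.0, None)

def plot_height(height_map, plot_mask, percentile=95):
    # A high percentile of within-plot heights is a common proxy for
    # measured plant height (less noisy than the per-plot maximum).
    return float(np.percentile(height_map[plot_mask], percentile))
```

Validation against ground-truthing heights then amounts to regressing `plot_height` outputs on field measurements, plot by plot.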
NASA Technical Reports Server (NTRS)
Wright, F. F. (Principal Investigator); Sharma, G. D.; Burns, J. J.
1973-01-01
The author has identified the following significant results. Even though nonsynchronous, the ERTS-1 imagery of November 4, 1972, showed a striking similarity to the ground truth data obtained in late August and September, 1972. The comparison of the images with ground truth data revealed that the general water circulation pattern in Lower Cook Inlet is consistent through the Fall season and that ERTS-1 images in MSS bands 4 and 5 are capable of delineating water masses with a suspended load as low as 1 mg/liter. The ERTS-1 data and the ground truth data demonstrate clearly that the Coriolis effect dominates circulation in Lower Cook Inlet. The configuration of plumes in Nushagak and Kuskokwim bays further indicates the influence of the Coriolis effect on the movement of sea water at high latitudes. Comparison of MSS bands 4, 5, 6, and 7 suggests MSS-1 penetration of several meters into the water column. Sea ice analysis of available imagery was exceptionally rewarding. The imagery provided a rapid method to delineate and describe the ice types apparent in the photos. The ice types ranged from newly formed grease ice to heavy floes of disintegrating shore-fast ice. Sea ice maps showing the extent of different ice zones in the Chukchi Sea are being compiled.
Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research.
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif
2016-03-11
Document analysis tasks such as pattern recognition, word spotting or segmentation require comprehensive databases for training and validation. Not only variations in writing style but also the list of words used is important when training samples should reflect the input of a specific area of application. However, generating training samples is expensive in terms of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. However, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods. Each requires particular ground truth or samples to enable optimal training and validation, which are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation-based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifiers that we proposed earlier improves the word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction.
Predicted seafloor facies of Central Santa Monica Bay, California
Dartnell, Peter; Gardner, James V.
2004-01-01
Summary -- Mapping surficial seafloor facies (sand, silt, muddy sand, rock, etc.) should be the first step in marine geological studies and is crucial when modeling sediment processes, pollution transport, deciphering tectonics, and defining benthic habitats. This report outlines an empirical technique that predicts the distribution of seafloor facies for a large area offshore Los Angeles, CA using high-resolution bathymetry and co-registered, calibrated backscatter from multibeam echosounders (MBES) correlated to ground-truth sediment samples. The technique uses a series of procedures that involve supervised classification and a hierarchical decision tree classification that are now available in advanced image-analysis software packages. Derivative variance images of both bathymetry and acoustic backscatter are calculated from the MBES data and then used in a hierarchical decision-tree framework to classify the MBES data into areas of rock, gravelly muddy sand, muddy sand, and mud. A quantitative accuracy assessment on the classification results is performed using ground-truth sediment samples. The predicted facies map is also ground-truthed using seafloor photographs and high-resolution sub-bottom seismic-reflection profiles. This Open-File Report contains the predicted seafloor facies map as a georeferenced TIFF image along with the multibeam bathymetry and acoustic backscatter data used in the study as well as an explanation of the empirical classification process.
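A hierarchical decision tree of the kind described can be sketched as nested threshold rules on the MBES derivatives. The thresholds below are invented for illustration only; the report derives its rules from ground-truth sediment samples via supervised classification.

```python
import numpy as np

def classify_facies(backscatter, bathy_var):
    """Toy hierarchical decision tree over backscatter (dB) and
    bathymetric variance. Threshold values are illustrative, not
    the report's calibrated rules."""
    rock = bathy_var > 5.0                    # rugged relief -> rock
    gravelly = ~rock & (backscatter > -20.0)  # strong returns
    muddy_sand = ~rock & ~gravelly & (backscatter > -30.0)
    out = np.full(backscatter.shape, "mud", dtype=object)
    out[muddy_sand] = "muddy sand"
    out[gravelly] = "gravelly muddy sand"
    out[rock] = "rock"
    return out
```

Accuracy assessment then compares these predicted labels against grab-sample ground truth at the sample locations.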
Metric Evaluation Pipeline for 3D Modeling of Urban Scenes
NASA Astrophysics Data System (ADS)
Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.
2017-05-01
Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state of the art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline developed as publicly available open source software to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
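The volumetric completeness and correctness metrics can be sketched as voxel-overlap ratios (our reading of the terms; the pipeline's actual definitions may differ in detail, e.g. in distance tolerances):

```python
import numpy as np

def completeness_correctness(truth, model):
    """Voxelized comparison of a reconstructed model against lidar
    ground truth: completeness = fraction of truth voxels recalled,
    correctness = fraction of model voxels that are real."""
    truth = np.asarray(truth, dtype=bool)
    model = np.asarray(model, dtype=bool)
    inter = np.logical_and(truth, model).sum()
    return inter / truth.sum(), inter / model.sum()
```

These are simply recall and precision over occupied voxels, which is why they trade off: an over-filled model is complete but not correct, and vice versa.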
Inferring Tie Strength from Online Directed Behavior
Jones, Jason J.; Settle, Jaime E.; Bond, Robert M.; Fariss, Christopher J.; Marlow, Cameron; Fowler, James H.
2013-01-01
Some social connections are stronger than others. People have not only friends, but also best friends. Social scientists have long recognized this characteristic of social connections and researchers frequently use the term tie strength to refer to this concept. We used online interaction data (specifically, Facebook interactions) to successfully identify real-world strong ties. Ground truth was established by asking users themselves to name their closest friends in real life. We found the frequency of online interaction was diagnostic of strong ties, and interaction frequency was much more useful diagnostically than were attributes of the user or the user’s friends. More private communications (messages) were not necessarily more informative than public communications (comments, wall posts, and other interactions). PMID:23300964
Calibration of Smartphone-Based Weather Measurements Using Pairwise Gossip.
Zamora, Jane Louie Fresco; Kashihara, Shigeru; Yamaguchi, Suguru
2015-01-01
Accurate and reliable daily global weather reports are necessary for weather forecasting and climate analysis. However, the availability of these reports continues to decline due to the lack of economic support and policies in maintaining ground weather measurement systems from where these reports are obtained. Thus, to mitigate data scarcity, it is required to utilize weather information from existing sensors and built-in smartphone sensors. However, as smartphone usage often varies according to human activity, it is difficult to obtain accurate measurement data. In this paper, we present a heuristic-based pairwise gossip algorithm that will calibrate smartphone-based pressure sensors with respect to fixed weather stations as our referential ground truth. Based on actual measurements, we have verified that smartphone-based readings are unstable when observed during movement. Using our calibration algorithm on actual smartphone-based pressure readings, the updated values were significantly closer to the ground truth values.
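A minimal pairwise-gossip sketch of the calibration idea, with a fixed weather station acting as the referential ground truth that anchors the network. This illustrates plain gossip averaging toward a fixed reference, not the paper's exact heuristic:

```python
import random

def gossip_calibrate(phone_vals, station_val, rounds=400, seed=0):
    """Node 0 is a fixed weather station reporting the ground-truth
    pressure; smartphone nodes repeatedly average their estimate with
    a random partner, so the station's value diffuses through the
    network and all estimates converge to it."""
    rng = random.Random(seed)
    vals = [station_val] + list(phone_vals)  # node 0 = fixed station
    for _ in range(rounds):
        i, j = rng.sample(range(len(vals)), 2)
        avg = (vals[i] + vals[j]) / 2
        if i != 0:                           # station never updates
            vals[i] = avg
        if j != 0:
            vals[j] = avg
    return vals[1:]
```

Because the station never updates, it acts as a stubborn agent in the consensus dynamics, and the phones' calibrated readings are pulled arbitrarily close to the ground truth given enough rounds.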
Ground truth seismic events and location capability at Degelen mountain, Kazakhstan
Trabant, C.; Thurber, C.; Leith, W.
2002-01-01
We utilized nuclear explosions from the Degelen Mountain sub-region of the Semipalatinsk Test Site (STS), Kazakhstan, to assess seismic location capability directly. Excellent ground truth information for these events was either known or was estimated from maps of the Degelen Mountain adit complex. Origin times were refined for events for which absolute origin time information was unknown using catalog arrival times, our ground truth location estimates, and a time baseline provided by fixing known origin times during a joint hypocenter determination (JHD). Precise arrival time picks were determined using a waveform cross-correlation process applied to the available digital data. These data were used in a JHD analysis. We found that very accurate locations were possible when high precision, waveform cross-correlation arrival times were combined with JHD. Relocation with our full digital data set resulted in a mean mislocation of 2 km and a mean 95% confidence ellipse (CE) area of 6.6 km2 (90% CE: 5.1 km2); however, only 5 of the 18 computed error ellipses actually covered the associated ground truth location estimate. To test a more realistic nuclear test monitoring scenario, we applied our JHD analysis to a set of seven events (one fixed) using data only from seismic stations within 40° epicentral distance. Relocation with these data resulted in a mean mislocation of 7.4 km, with four of the 95% error ellipses covering less than 570 km2 (90% CE: 438 km2), and the other two covering 1730 and 8869 km2 (90% CE: 1331 and 6822 km2). Location uncertainties calculated using JHD often underestimated the true error, but a circular region with a radius equal to the mislocation covered less than 1000 km2 for all events having more than three observations. © 2002 Elsevier Science B.V. All rights reserved.
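The mislocation and ellipse-coverage statistics reported above can be computed with simple geometry. A sketch under a flat-earth approximation, which is adequate for few-km mislocations (function names and the 111.19 km/degree constant are our illustrative choices):

```python
import math

def mislocation_km(lat1, lon1, lat2, lon2):
    """Flat-earth distance between an estimated epicenter and its
    ground-truth location (fine at these small separations)."""
    dy = (lat2 - lat1) * 111.19
    dx = (lon2 - lon1) * 111.19 * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot(dx, dy)

def ellipse_covers(dx_km, dy_km, semi_major, semi_minor, azimuth_deg):
    """Does a confidence ellipse centered on the estimate cover an
    offset (dx east, dy north) to the true location?"""
    t = math.radians(azimuth_deg)                  # major-axis azimuth from north
    u = dx_km * math.sin(t) + dy_km * math.cos(t)  # component along major axis
    v = dx_km * math.cos(t) - dy_km * math.sin(t)  # component along minor axis
    return (u / semi_major) ** 2 + (v / semi_minor) ** 2 <= 1.0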
NASA Astrophysics Data System (ADS)
Biondo, Manuela; Bartholomä, Alexander
2017-04-01
One of the burning issues on the topic of acoustic seabed classification is the lack of solid, repeatable, statistical procedures that can support the verification of acoustic variability in relation to seabed properties. Acoustic sediment classification schemes often lead to biased and subjective interpretation, as they ultimately aim at an oversimplified categorization of the seabed based on conventionally defined sediment types. However, grain size variability alone cannot be accounted for acoustic diversity, which will be ultimately affected by multiple physical processes, scale of heterogeneity, instrument settings, data quality, image processing and segmentation performances. Understanding and assessing the weight of all of these factors on backscatter is a difficult task, due to the spatially limited and fragmentary knowledge of the seabed from of direct observations (e.g. grab samples, cores, videos). In particular, large-scale mapping requires an enormous availability of ground-truthing data that is often obtained from heterogeneous and multidisciplinary sources, resulting into a further chance of misclassification. Independently from all of these limitations, acoustic segments still contain signals for seabed changes that, if appropriate procedures are established, can be translated into meaningful knowledge. In this study we design a simple, repeatable method, based on multivariate procedures, with the scope to classify a 100 km2, high-frequency (450 kHz) sidescan sonar mosaic acquired in the year 2012 in the shallow upper-mesotidal inlet of the Jade Bay (German North Sea coast). The tool used for the automated classification of the backscatter mosaic is the QTC SWATHVIEWTMsoftware. The ground-truthing database included grab sample data from multiple sources (2009-2011). The method was designed to extrapolate quantitative descriptors for acoustic backscatter and model their spatial changes in relation to grain size distribution and morphology. 
The modelled relationships were used to: 1) asses the automated segmentation performance, 2) obtain a ranking of most discriminant seabed attributes responsible for acoustic diversity, 3) select the best-fit ground-truthing information to characterize each acoustic class. Using a supervised Linear Discriminant Analysis (LDA), relationships between seabed parameters and acoustic classes discrimination were modelled, and acoustic classes for each data point were predicted. The model predicted a success rate of 63.5%. An unsupervised LDA was used to model relationships between acoustic variables and clustered seabed categories with the scope of identifying misrepresentative ground-truthing data points. The model prediction scored a success rate of 50.8%. Misclassified data points were disregarded for final classification. Analyses led to clearer, more accurate appreciation of relationship patterns and improved understanding of site-specific processes affecting the acoustic signal. Value to the qualitative classification output was added by comparing the latter with a more recent set of acoustic and ground-truthing information (2014). Classification resulted in the first acoustic sediment map ever produced in the area and offered valuable knowledge for detailed sediment variability. The method proved to be a simple, repeatable strategy that may be applied to similar work and environments.
ALP FOPEN Site Description and Ground Truth Summary
1990-02-01
equations describing the destribution of above ground biomass for the various tree species; and (6) dielectric measurements of the two major’ tree...does not physically alter the tree layer being sampled by pressing too hard with the dielectric probe. In design of an experiment to collect dielectric
Fast and accurate reference-free alignment of subtomograms.
Chen, Yuxiang; Pfeffer, Stefan; Hrabe, Thomas; Schuller, Jan Michael; Förster, Friedrich
2013-06-01
In cryoelectron tomography alignment and averaging of subtomograms, each dnepicting the same macromolecule, improves the resolution compared to the individual subtomogram. Major challenges of subtomogram alignment are noise enhancement due to overfitting, the bias of an initial reference in the iterative alignment process, and the computational cost of processing increasingly large amounts of data. Here, we propose an efficient and accurate alignment algorithm via a generalized convolution theorem, which allows computation of a constrained correlation function using spherical harmonics. This formulation increases computational speed of rotational matching dramatically compared to rotation search in Cartesian space without sacrificing accuracy in contrast to other spherical harmonic based approaches. Using this sampling method, a reference-free alignment procedure is proposed to tackle reference bias and overfitting, which also includes contrast transfer function correction by Wiener filtering. Application of the method to simulated data allowed us to obtain resolutions near the ground truth. For two experimental datasets, ribosomes from yeast lysate and purified 20S proteasomes, we achieved reconstructions of approximately 20Å and 16Å, respectively. The software is ready-to-use and made public to the community. Copyright © 2013 Elsevier Inc. All rights reserved.
Rice crop growth monitoring using ENVISAT-1/ASAR AP mode
NASA Astrophysics Data System (ADS)
Konishi, Tomohisa; Suga, Yuzo; Omatu, Shigeru; Takeuchi, Shoji; Asonuma, Kazuyoshi
2007-10-01
Hiroshima Institute of Technology (HIT) is operating the direct down-links of microwave and optical earth observation satellite data in Japan. This study focuses on the validation for rice crop monitoring using microwave remotely sensed image data acquired by ENIVISAT-1 referring to ground truth data such as height of rice crop, vegetation cover rate and leaf area index in the test sites of Hiroshima district, the western part of Japan. ENVISAT-1/ASAR data has the capabilities for the monitoring of the rice crop growing cycle by using alternating cross polarization mode images. However, ASAR data is influenced by several parameters such as land cover structure, direction and alignment of rice crop fields in the test sites. In this study, the validation was carried out to be combined with microwave image data and ground truth data regarding rice crop fields to investigate the above parameters. Multi-temporal, multi-direction (descending and ascending) and multi-angle ASAR alternating cross polarization mode images were used to investigate during the rice crop growing cycle. On the other hand, LANDSAT-7/ETM+ data were used to detect land cover structure, direction and alignment of rice crop fields corresponding to the backscatter of ASAR. Finally, the extraction of rice planted area was attempted by using multi-temporal ASAR AP mode data such as VV/VH and HH/HV. As the result of this study, it is clear that the estimated rice planted area coincides with the existing statistical data for area of the rice crop field. In addition, HH/HV is more effective than VV/VH in the rice planted area extraction.
Children's Reasoning about Lie-Telling and Truth-Telling in Politeness Contexts
ERIC Educational Resources Information Center
Heyman, Gail D.; Sweet, Monica A.; Lee, Kang
2009-01-01
Children's reasoning about lying and truth-telling was examined among participants ages 7-11 (total N = 181) with reference to conflicts between being honest and protecting the feelings of others. In Study 1, participants showed different patterns of evaluation and motivational inference in politeness contexts vs. transgression contexts: in…
Guo-Qiang, Zhang; Yan, Huang; Licong, Cui
2017-01-01
We introduce RGT, Retrospective Ground-Truthing, as a surrogate reference standard for evaluating the performance of automated Ontology Quality Assurance (OQA) methods. The key idea of RGT is to use cumulative SNOMED CT changes derived from its regular longitudinal distributions by the official SNOMED CT editorial board as a partial, surrogate reference standard. The contributions of this paper are twofold: (1) to construct an RGT reference set for SNOMED CT relational changes; and (2) to perform a comparative evaluation of the performances of lattice, non-lattice, and randomized relational error detection methods using the standard precision, recall, and geometric measures. An RGT relational-change reference set of 32,241 IS-A changes were constructed from 5 U.S. editions of SNOMED CT from September 2014 to September 2016, with reversals and changes due to deletion or addition of new concepts excluded. 68,849 independent non-lattice fragments, 118,587 independent lattice fragments, and 446,603 relations were extracted from the SNOMED CT March 2014 distribution. Comparative performance analysis of smaller (less than 15) lattice vs. non-lattice fragments was also given to approach the more realistic setting in which such methods may be applied. Among the 32,241 IS-A changes, independent non-lattice fragments covered 52.8% changes with 26.4% precision with a G-score of 0.373. Even though this G-score is significantly lower in comparison to those in information retrieval, it breaks new ground in that such evaluations have never performed before in the highly discovery-oriented setting of OQA. PMID:29854262
Evaluating segmentation error without ground truth.
Kohlberger, Timo; Singh, Vivek; Alvino, Chris; Bahlmann, Claus; Grady, Leo
2012-01-01
The automatic delineation of the boundaries of organs and other anatomical structures is a key component of many medical image processing systems. In this paper we present a generic learning approach based on a novel space of segmentation features, which can be trained to predict the overlap error and Dice coefficient of an arbitrary organ segmentation without knowing the ground truth delineation. We show the regressor to be a much stronger predictor of these error metrics than the responses of probabilistic boosting classifiers trained on the segmentation boundary. The presented approach not only allows us to build reliable confidence measures and fidelity checks, but also to rank several segmentation hypotheses against each other during online usage of the segmentation algorithm in clinical practice.
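The two quantities the regressor predicts, the overlap (Jaccard-style) error and the Dice coefficient, have standard definitions on binary masks; a hedged sketch of those definitions (not the paper's learning pipeline):

```python
import numpy as np

def dice_coefficient(seg: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

def overlap_error(seg: np.ndarray, ref: np.ndarray) -> float:
    """Overlap error: 1 - |A∩B| / |A∪B|."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    union = np.logical_or(seg, ref).sum()
    return 1.0 - np.logical_and(seg, ref).sum() / union if union else 0.0

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(a, b))  # 2*2/(3+3) ≈ 0.667
```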
Generating high precision ionospheric ground-truth measurements
NASA Technical Reports Server (NTRS)
Komjathy, Attila (Inventor); Sparks, Lawrence (Inventor); Mannucci, Anthony J. (Inventor)
2007-01-01
A method, apparatus and article of manufacture provide ionospheric ground-truth measurements for use in a wide-area augmentation system (WAAS). Ionospheric pseudorange/code and carrier phase data as primary observables are received by a WAAS receiver. A polynomial fit is performed on the phase data, which is examined to identify any cycle slips in the phase data. The phase data is then leveled. Satellite and receiver biases are obtained and applied to the leveled phase data to obtain unbiased phase-leveled ionospheric measurements that are used in a WAAS system. In addition, one of several measurements may be selected and data is output that provides information on the quality of the measurements that are used to determine corrective messages as part of the WAAS system.
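A minimal sketch of the leveling idea the abstract describes, under the simplifying assumption of a single cycle-slip-free arc; the polynomial degree and residual threshold below are illustrative choices, not values from the patent:

```python
import numpy as np

def level_phase(code_iono: np.ndarray, phase_iono: np.ndarray) -> np.ndarray:
    """Shift the precise-but-ambiguous phase observable by the mean
    code-minus-phase offset over a cycle-slip-free arc ("leveling")."""
    return phase_iono + np.mean(code_iono - phase_iono)

def find_cycle_slips(phase_iono: np.ndarray, poly_deg: int = 3,
                     thresh: float = 1.0) -> np.ndarray:
    """Flag epochs whose residual from a polynomial fit exceeds a threshold,
    a rough stand-in for the cycle-slip screening step."""
    t = np.arange(len(phase_iono), dtype=float)
    fit = np.polyval(np.polyfit(t, phase_iono, poly_deg), t)
    return np.flatnonzero(np.abs(phase_iono - fit) > thresh)

phase = np.array([10.0, 10.5, 11.0, 11.5, 12.0])  # slip-free synthetic arc
code = phase + 5.0                                 # code offset from phase
leveled = level_phase(code, phase)                 # recovers the code level
```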
NASA Astrophysics Data System (ADS)
Tang, Ronglin; Li, Zhao-Liang; Sun, Xiaomin; Bi, Yuyun
2017-01-01
Surface evapotranspiration (ET) is an important component of water and energy in land and atmospheric systems. This paper investigated whether using variable surface resistances in the reference ET estimates from the full-form Penman-Monteith (PM) equation could improve the upscaled daily ET estimates in the constant reference evaporative fraction (EFr, the ratio of actual to reference grass/alfalfa ET) method on clear-sky days using ground-based measurements. Half-hourly near-surface meteorological variables and eddy covariance (EC) system-measured latent heat flux data on clear-sky days were collected at two sites with different climatic conditions, namely, the subhumid Yucheng station in northern China and the arid Yingke site in northwestern China, and were used as the model input and ground-truth, respectively. The results showed that using the Food and Agriculture Organization (FAO)-PM equation, the American Society of Civil Engineers-PM equation, and the full-form PM equation to estimate the reference ET in the constant EFr method produced progressively smaller upscaled daily ET at a given time from midmorning to midafternoon. Using all three PM equations produced the best results at noon at both sites regardless of whether the energy imbalance of the EC measurements was closed. When the EC measurements were not corrected for energy imbalance, using variable surface resistance in the full-form PM equation could improve the ET upscaling in the midafternoon, but worse results may occur from midmorning to noon. Site-to-site and time-to-time variations were found in the performances of a given PM equation (with fixed or variable surface resistances) before and after the energy imbalance was closed.
RESOLVE Mission Architecture for Lunar Resource Prospecting and Utilization
NASA Technical Reports Server (NTRS)
George, J. A.; Mattes, G. W.; Rogers, K. N.; Magruder, D. F.; Paz, A. J.; Vaccaro, H. M.; Baird, R. S.; Sanders, G. B.; Smith, J. T.; Quinn, J. W.;
2012-01-01
Design Reference Mission (DRM) evaluations were performed for The Regolith & Environment Science, and Oxygen & Lunar Volatile Extraction (RESOLVE) project to determine future flight mission feasibility and understand potential mission environment impacts on hardware requirements, science/resource assessment objectives, and mission planning. DRM version 2.2 (DRM 2.2) is presented for a notional flight of the RESOLVE payload for lunar resource ground truth and utilization (Figure 1) [1]. The rover/payload deploys on a 10-day surface mission to the Cabeus crater near the lunar south pole in May of 2016. A drill, four primary science instruments, and a high temperature chemical reactor will acquire and characterize water and other volatiles in the near sub-surface, and perform demonstrations of In-Situ Resource Utilization (ISRU). DRM 2.2 is a reference point, and will be periodically revised to accommodate and incorporate changes to project approach or implementation, and to explore mission alternatives such as landing site or opportunity.
Kim, Oh Seok; Newell, Joshua P
2015-10-01
This paper proposes a new land-change model, the Geographic Emission Benchmark (GEB), as an approach to quantify land-cover changes associated with deforestation and forest degradation. The GEB is designed to determine 'baseline' activity data for reference levels. Unlike other models that forecast business-as-usual future deforestation, the GEB internally (1) characterizes 'forest' and 'deforestation' with minimal processing and ground-truthing and (2) identifies 'deforestation hotspots' using open-source spatial methods to estimate regional rates of deforestation. The GEB also characterizes forest degradation and identifies leakage belts. This paper compares the accuracy of GEB with GEOMOD, a popular land-change model used in the UN-REDD (Reducing Emissions from Deforestation and Forest Degradation) Program. Using a case study of the Chinese tropics for comparison, GEB's projection is more accurate than GEOMOD's, as measured by Figure of Merit. Thus, the GEB produces baseline activity data that are moderately accurate for the setting of reference levels.
Learning to Rank the Severity of Unrepaired Cleft Lip Nasal Deformity on 3D Mesh Data.
Wu, Jia; Tse, Raymond; Shapiro, Linda G
2014-08-01
Cleft lip is a birth defect that results in deformity of the upper lip and nose. Its severity is widely variable and the results of treatment are influenced by the initial deformity. Objective assessment of severity would help to guide prognosis and treatment. However, most assessments are subjective. The purpose of this study is to develop and test quantitative computer-based methods of measuring cleft lip severity. In this paper, a grid-patch based measurement of symmetry is introduced, with which a computer program learns to rank the severity of cleft lip on 3D meshes of human infant faces. Three computer-based methods to define the midfacial reference plane were compared to two manual methods. Four different symmetry features were calculated based upon these reference planes, and evaluated. The result shows that the rankings predicted by the proposed features were highly correlated with the ranking orders provided by experts that were used as the ground truth.
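The grid-patch features are specific to the paper, but the underlying idea of scoring symmetry about a midfacial reference plane can be sketched on a raw 3D point set. The helper below is hypothetical (brute-force nearest neighbors, fine only for small meshes) and is not the authors' method:

```python
import numpy as np

def asymmetry_score(points: np.ndarray, n: np.ndarray, d: float) -> float:
    """Reflect a 3D point set across the plane n·x = d and report the mean
    nearest-neighbor distance between the original and mirrored sets;
    0 means the set is perfectly symmetric about the plane."""
    n = n / np.linalg.norm(n)
    mirrored = points - 2.0 * (points @ n - d)[:, None] * n
    # brute-force nearest neighbors between the two sets
    dists = np.linalg.norm(points[:, None, :] - mirrored[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

# A point set symmetric about the x = 0 plane scores 0:
pts = np.array([[1.0, 0, 0], [-1.0, 0, 0], [0, 2.0, 1.0]])
print(asymmetry_score(pts, np.array([1.0, 0, 0]), 0.0))  # → 0.0
```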
On authenticity: the question of truth in construction and autobiography.
Collins, Sara
2011-12-01
Freud was occupied with the question of truth and its verification throughout his work. He looked to archaeology for an evidence model to support his ideas on reconstruction. He also referred to literature regarding truth in reconstruction, where he saw shifts between historical fact and invention, and detected such swings in his own case histories. In his late work Freud pondered over the impossibility of truth in reconstruction by juxtaposing truth with 'probability'. Developments on the role of fantasy and myth in reconstruction and contemporary debates over objectivity have increasingly highlighted the question of 'truth' in psychoanalysis. I will argue that 'authenticity' is a helpful concept in furthering the discussion over truth in reconstruction. Authenticity denotes that which is genuine, trustworthy and emotionally accurate in a reconstruction, as observed within the immediacy of the analyst/patient interaction. As authenticity signifies genuineness in a contemporary context its origins are verifiable through the analyst's own observations of the analytic process itself. Therefore, authenticity is about the likelihood and approximation of historical truth rather than its certainty. In that respect it links with Freud's musings over 'probability'. Developments on writing 'truths' in autobiography mirror those in reconstruction, and lend corroborative support from another source. Copyright © 2011 Institute of Psychoanalysis.
NASA Technical Reports Server (NTRS)
Rust, W. D.; Macgorman, D. R.; Taylor, W.; Arnold, R. T.
1984-01-01
Severe storms and lightning were measured with a NASA U2 and ground-based facilities, both fixed base and mobile. Aspects of this program are reported. The following results are presented: (1) ground truth measurements of lightning for comparison with those obtained by the U2. These measurements include flash type identification, electric field changes, optical waveforms, and ground strike location; (2) simultaneous extremely low frequency (ELF) waveforms for cloud to ground (CG) flashes; (3) the CG strike location system (LLP) using a combination of mobile laboratory and television video data is assessed; (4) continued development of analog-to-digital conversion techniques for processing lightning data from the U2, mobile laboratory, and NSSL sensors; (5) completion of an all azimuth TV system for CG ground truth; (6) a preliminary analysis of both IC and CG lightning in a mesocyclone; and (7) the finding of a bimodal peak in the altitude of lightning activity in some storms in the Great Plains and on the east coast. In the storms on the Great Plains, there was a distinct class of flash that forms the upper mode of the distribution. These flashes are of smaller horizontal extent, but occur more frequently than flashes in the lower mode of the distribution.
On Known Unknowns: Fluency and the Neural Mechanisms of Illusory Truth
Wang, Wei-Chun; Brashier, Nadia M.; Wing, Erik A.; Marsh, Elizabeth J.; Cabeza, Roberto
2016-01-01
The “illusory truth” effect refers to the phenomenon whereby repetition of a statement increases its likelihood of being judged true. This phenomenon has important implications for how we come to believe oft-repeated information that may be misleading or unknown. Behavioral evidence indicates that fluency or the subjective ease experienced while processing a statement underlies this effect. This suggests that illusory truth should be mediated by brain regions previously linked to fluency, such as the perirhinal cortex (PRC). To investigate this possibility, we scanned participants with fMRI while they rated the truth of unknown statements, half of which were presented earlier (i.e., repeated). The only brain region that showed an interaction between repetition and ratings of perceived truth was PRC, where activity increased with truth ratings for repeated, but not for new, statements. This finding supports the hypothesis that illusory truth is mediated by a fluency mechanism and further strengthens the link between PRC and fluency. PMID:26765947
Towards policy relevant environmental modeling: contextual validity and pragmatic models
Miles, Scott B.
2000-01-01
"What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. 
From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead of promoting passive or self-righteous decisions.
NASA Astrophysics Data System (ADS)
Coppersmith, R.; Schultz-Fellenz, E. S.; Sussman, A. J.; Vigil, S.; Dzur, R.; Norskog, K.; Kelley, R.; Miller, L.
2015-12-01
While long-term objectives of monitoring and verification regimes include remote characterization and discrimination of surficial geologic and topographic features at sites of interest, ground truth data is required to advance development of remote sensing techniques. Increasingly, it is desirable for these ground-based or ground-proximal characterization methodologies to be as nimble, efficient, non-invasive, and non-destructive as their higher-altitude airborne counterparts while ideally providing superior resolution. For this study, the area of interest is an alluvial site at the Nevada National Security Site intended for use in the Source Physics Experiment's (Snelson et al., 2013) second phase. Ground-truth surface topographic characterization was performed using a DJI Inspire 1 unmanned aerial system (UAS), at very low altitude (< 5-30m AGL). 2D photographs captured by the standard UAS camera payload were imported into Agisoft Photoscan to create three-dimensional point clouds. Within the area of interest, careful installation of surveyed ground control fiducial markers supplied necessary targets for field collection, and information for model georectification. The resulting model includes a Digital Elevation Model derived from 2D imagery. It is anticipated that this flexible and versatile characterization process will provide point cloud data resolution equivalent to a purely ground-based LiDAR scanning deployment (e.g., 1-2cm horizontal and vertical resolution; e.g., Sussman et al., 2012; Schultz-Fellenz et al., 2013). In addition to drastically increasing time efficiency in the field, the UAS method also allows for more complete coverage of the study area when compared to ground-based LiDAR. 
Comparison and integration of these data with conventionally-acquired airborne LiDAR data from a higher-altitude (~ 450m) platform will aid significantly in the refinement of technologies and detection capabilities of remote optical systems to identify and detect surface geologic and topographic signatures of interest. This work includes a preliminary comparison of surface signatures detected from varying standoff distances to assess current sensor performance and benefits.
NASA Astrophysics Data System (ADS)
Brelsford, Christa; Shepherd, Doug
2013-09-01
In desert cities, securing sufficient water supply to meet the needs of both existing population and future growth is a complex problem with few easy solutions. Grass lawns are a major driver of water consumption and accurate measurements of vegetation area are necessary to understand drivers of changes in household water consumption. Measuring vegetation change in a heterogeneous urban environment requires sub-pixel estimation of vegetation area. Mixture Tuned Match Filtering (MTMF) has been successfully applied to target detection for materials that only cover small portions of a satellite image pixel. There have been few successful applications of MTMF to fractional area estimation, despite theory that suggests feasibility. We use a ground truth dataset over ten times larger than that available for any previous MTMF application to estimate the bias between ground truth data and matched filter results. We find that the MTMF algorithm underestimates the fractional area of vegetation by 5-10%, and calculate that averaging over 20 to 30 pixels is necessary to correct this bias. We conclude that with a large ground truth dataset, using MTMF for fractional area estimation is possible when results can be estimated at a lower spatial resolution than the base image. When this method is applied to estimating vegetation area in Las Vegas, NV, spatial and temporal trends are consistent with expectations from known population growth and policy goals.
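The lower-resolution aggregation step the authors describe can be sketched as simple block averaging of a per-pixel fractional-area map; the 5 x 5 block below (25 pixels) is one choice inside the 20-30 pixel range the study reports, and the function is an illustration, not the study's code:

```python
import numpy as np

def block_average(frac_map: np.ndarray, block: int = 5) -> np.ndarray:
    """Average a per-pixel fractional-area map over block x block windows,
    trading spatial resolution for reduced per-pixel noise."""
    h, w = frac_map.shape
    h2, w2 = h - h % block, w - w % block          # crop to block multiples
    m = frac_map[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return m.mean(axis=(1, 3))

coarse = block_average(np.ones((10, 10)), block=5)  # shape (2, 2), all 1.0
```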
Evaluating Continuous-Time SLAM Using a Predefined Trajectory Provided by a Robotic Arm
NASA Astrophysics Data System (ADS)
Koch, B.; Leblebici, R.; Martell, A.; Jörissen, S.; Schilling, K.; Nüchter, A.
2017-09-01
Recently published approaches to SLAM algorithms process laser sensor measurements and output a map as a point cloud of the environment. Often the actual precision of the map remains unclear, since SLAM algorithms apply local improvements to the resulting map. Unfortunately, it is not trivial to compare the performance of SLAM algorithms objectively, especially without an accurate ground truth. This paper presents a novel benchmarking technique that allows a precise map generated with an accurate ground truth trajectory to be compared to a map with a manipulated trajectory which was distorted by different forms of noise. The accurate ground truth is acquired by mounting a laser scanner on an industrial robotic arm. The robotic arm is moved on a predefined path while the position and orientation of the end-effector tool are monitored. During this process the 2D profile measurements of the laser scanner are recorded in six degrees of freedom and afterwards used to generate a precise point cloud of the test environment. For benchmarking, an offline continuous-time SLAM algorithm is subsequently applied to remove the inserted distortions. Finally, it is shown that the manipulated point cloud is reversible to its previous state and is slightly improved compared to the original version, since small errors introduced by imprecise assumptions, sensor noise, and calibration errors are removed as well.
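A simple stand-in for the kind of comparison such a benchmark enables is the RMS positional error of an estimated trajectory against the robotic-arm ground truth; this sketch is illustrative and is not the authors' pipeline:

```python
import numpy as np

def trajectory_rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    """Root-mean-square positional error between two time-aligned N x 3
    trajectories."""
    err = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

gt = np.zeros((4, 3))                    # ground-truth positions (meters)
noisy = gt + np.array([0.1, 0.0, 0.0])   # constant 10 cm offset
print(trajectory_rmse(noisy, gt))        # → 0.1
```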
Wu, Jian; Murphy, Martin J
2010-06-01
To assess the precision and robustness of patient setup corrections computed from 3D/3D rigid registration methods using image intensity, when no ground truth validation is possible. Fifteen pairs of male pelvic CTs were rigidly registered using four different in-house registration methods. Registration results were compared for different resolutions and image content by varying the image down-sampling ratio and by thresholding out soft tissue to isolate bony landmarks. Intrinsic registration precision was investigated by comparing the different methods and by reversing the source and the target roles of the two images being registered. The translational reversibility errors for successful registrations ranged from 0.0 to 1.69 mm. Rotations were less than 1 degree. Mutual information failed in most registrations that used only bony landmarks. The magnitude of the reversibility error was strongly correlated with the success/failure of each algorithm to find the global minimum. Rigid image registrations have an intrinsic uncertainty and robustness that depends on the imaging modality, the registration algorithm, the image resolution, and the image content. In the absence of an absolute ground truth, the variation in the shifts calculated by several different methods provides a useful estimate of that uncertainty. The difference observed by reversing the source and target images can be used as an indication of robust convergence.
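The source-target reversal check can be sketched with homogeneous transforms: compose the forward and reverse registrations and measure the residual translation of the product, which is the identity for a perfectly consistent pair. The 4x4 matrix convention below is an assumption for illustration:

```python
import numpy as np

def reversibility_error(T_ab: np.ndarray, T_ba: np.ndarray) -> float:
    """Compose forward (A→B) and reverse (B→A) rigid registrations given as
    4x4 homogeneous matrices; the norm of the product's translation part is
    the translational reversibility error."""
    residual = T_ab @ T_ba
    return float(np.linalg.norm(residual[:3, 3]))

T_ab = np.eye(4); T_ab[:3, 3] = [2.0, -1.0, 0.5]   # pure translation A→B
T_ba = np.eye(4); T_ba[:3, 3] = [-2.0, 1.0, -0.5]  # its exact inverse
print(reversibility_error(T_ab, T_ba))  # → 0.0
```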
SAGE to examine Earth's stratosphere
NASA Technical Reports Server (NTRS)
1979-01-01
The SAGE mission is discussed along with the role of the Nimbus 7 experiment. Other topics discussed include: ground truth measurements, data collection and processing, SAGE instrumentation, and launch sequence.
Ground-Truthing of Airborne LiDAR Using RTK-GPS Surveyed Data in Coastal Louisiana's Wetlands
NASA Astrophysics Data System (ADS)
Lauve, R. M.; Alizad, K.; Hagen, S. C.
2017-12-01
Airborne LiDAR (Light Detection and Ranging) data are used by engineers and scientists to create bare earth digital elevation models (DEM), which are essential to modeling complex coastal, ecological, and hydrological systems. However, acquiring accurate bare earth elevations in coastal wetlands is difficult due to the density of marsh grasses that prevents the sensor's reflection off the true ground surface. Previous work by Medeiros et al. [2015] developed a technique to assess LiDAR error and adjust elevations according to marsh vegetation density and index. The aim of this study is the collection of ground truth points and the investigation of the range of potential errors found in existing LiDAR datasets within coastal Louisiana's wetlands. Survey grids were mapped out in an area dominated by Spartina alterniflora, and a survey-grade Trimble Real Time Kinematic (RTK) GPS device was employed to measure bare earth ground elevations in the marsh system adjacent to Terrebonne Bay, LA. Elevations were obtained for 20 meter-spaced surveyed grid points and were used to generate a DEM. The comparison between LiDAR-derived and surveyed-data DEMs yields an average difference of 23 cm with a maximum difference of 68 cm. Considering the local tidal range of 45 cm, these differences can introduce substantial error when the DEM is used for ecological modeling [Alizad et al., 2016]. Results from this study will be further analyzed and implemented in order to adjust LiDAR-derived DEMs closer to their true elevation across Louisiana's coastal wetlands.
References
Alizad, K., S. C. Hagen, J. T. Morris, S. C. Medeiros, M. V. Bilskie, and J. F. Weishampel (2016), Coastal wetland response to sea-level rise in a fluvial estuarine system, Earth's Future, 4(11), 483-497, 10.1002/2016EF000385.
Medeiros, S., S. Hagen, J. Weishampel, and J. Angelo (2015), Adjusting Lidar-Derived Digital Terrain Models in Coastal Marshes Based on Estimated Aboveground Biomass Density, Remote Sensing, 7(4), 3507-3525, 10.3390/rs70403507.
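The DEM comparison reduces to elevation differencing on matching grids; a hedged sketch with illustrative numbers (not the study's data), ignoring no-data cells marked as NaN:

```python
import numpy as np

def dem_error_stats(lidar_dem: np.ndarray, survey_dem: np.ndarray):
    """Mean and maximum absolute elevation difference between a LiDAR DEM
    and a DEM gridded from RTK-GPS points, skipping NaN (no-data) cells."""
    diff = np.abs(lidar_dem - survey_dem)
    return float(np.nanmean(diff)), float(np.nanmax(diff))

# Illustrative 2x2 grids in meters:
lidar = np.array([[0.50, 0.60], [0.55, np.nan]])
survey = np.array([[0.30, 0.40], [0.50, np.nan]])
mean_d, max_d = dem_error_stats(lidar, survey)  # mean ≈ 0.15 m, max ≈ 0.20 m
```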
Identification, definition and mapping of terrestrial ecosystems in interior Alaska
NASA Technical Reports Server (NTRS)
Anderson, J. H. (Principal Investigator)
1973-01-01
The author has identified the following significant results. A transect of the Tanana River Flats to Murphy Dome, Alaska was accomplished. The transect includes an experimental forest and information on the range of vegetation-land form types. Multispectral black and white prints of the Eagle Summit Research Area, Alaska, were studied in conjunction with aerial photography and field notes to determine the characteristics of the vegetation. Black and white MSS prints were compared with aerial photographs of the village of Wiseman, Alaska. No positive identifications could be made without reference to aerial photographs or ground truth data. Color coded density slice scenes of the Eagle Summit Research Area were produced from black and white NASA aerial photographs. Infestations of the spruce beetle in the Cook Inlet, Alaska, were studied using aerial photographs.
Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS)
Fox, N.; Aiken, J.; Barnett, J.J.; Briottet, X.; Carvell, R.; Frohlich, C.; Groom, S.B.; Hagolle, O.; Haigh, J.D.; Kieffer, H.H.; Lean, J.; Pollock, D.B.; Quinn, T.; Sandford, M.C.W.; Schaepman, M.; Shine, K.P.; Schmutz, W.K.; Teillet, P.M.; Thome, K.J.; Verstraete, M.M.; Zalewski, E.; ,
2002-01-01
The Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) mission offers a novel approach to the provision of key scientific data with unprecedented radiometric accuracy for Earth Observation (EO) and solar studies, which will also establish well-calibrated reference targets/standards to support other EO missions. This paper will present the TRUTHS mission and its objectives. TRUTHS will be the first satellite mission to calibrate its instrumentation directly to SI in orbit, overcoming the usual uncertainties associated with drifts of sensor gain and spectral shape by using an electrical rather than an optical standard as the basis of its calibration. The range of instruments flown as part of the payload will also provide accurate input data to improve atmospheric radiative transfer codes by anchoring boundary conditions, through simultaneous measurements of aerosols, particulates and radiances at various heights. Therefore, TRUTHS will significantly improve the performance and accuracy of Earth observation missions with broad global or operational aims, as well as more dedicated missions. The provision of reference standards will also improve synergy between missions by reducing errors due to different calibration biases and offer cost reductions for future missions by reducing the demands for on-board calibration systems. Such improvements are important for the future success of strategies such as Global Monitoring for Environment and Security (GMES) and the implementation and monitoring of international treaties such as the Kyoto Protocol. TRUTHS will achieve these aims by measuring the geophysical variables of solar and lunar irradiance, together with both polarised and un-polarised spectral radiance of the Moon, and the Earth and its atmosphere.
Traceable Radiometry Underpinning Terrestrial - and Helio- Studies (TRUTHS)
Fox, N.; Aiken, J.; Barnett, J.J.; Briottet, X.; Carvell, R.; Frohlich, C.; Groom, S.B.; Hagolle, O.; Haigh, J.D.; Kieffer, H.H.; Lean, J.; Pollock, D.B.; Quinn, T.; Sandford, M.C.W.; Schaepman, M.; Shine, K.P.; Schmutz, W.K.; Teillet, P.M.; Thome, K.J.; Verstraete, M.M.; Zalewski, E.
2003-01-01
The Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) mission offers a novel approach to the provision of key scientific data with unprecedented radiometric accuracy for Earth Observation (EO) and solar studies, which will also establish well-calibrated reference targets/standards to support other EO missions. This paper presents the TRUTHS mission and its objectives. TRUTHS will be the first satellite mission to calibrate its EO instrumentation directly to SI in orbit, overcoming the usual uncertainties associated with drifts of sensor gain and spectral shape by using an electrical rather than an optical standard as the basis of its calibration. The range of instruments flown as part of the payload will also provide accurate input data to improve atmospheric radiative transfer codes by anchoring boundary conditions, through simultaneous measurements of aerosols, particulates and radiances at various heights. Therefore, TRUTHS will significantly improve the performance and accuracy of EO missions with broad global or operational aims, as well as more dedicated missions. The provision of reference standards will also improve synergy between missions by reducing errors due to different calibration biases and offer cost reductions for future missions by reducing the demands for on-board calibration systems. Such improvements are important for the future success of strategies such as Global Monitoring for Environment and Security (GMES) and the implementation and monitoring of international treaties such as the Kyoto Protocol. TRUTHS will achieve these aims by measuring the geophysical variables of solar and lunar irradiance, together with both polarised and unpolarised spectral radiance of the Moon, Earth and its atmosphere. Published by Elsevier Ltd on behalf of COSPAR.
Classification of microscopy images of Langerhans islets
NASA Astrophysics Data System (ADS)
Švihlík, Jan; Kybic, Jan; Habart, David; Berková, Zuzana; Girman, Peter; Kříž, Jan; Zacharovová, Klára
2014-03-01
Evaluation of images of Langerhans islets is a crucial procedure for planning an islet transplantation, which is a promising diabetes treatment. This paper deals with segmentation of microscopy images of Langerhans islets and evaluation of islet parameters such as area, diameter, or volume (IE). For all the available images, the ground truth and the islet parameters were independently evaluated by four medical experts. We use a pixelwise linear classifier (perceptron algorithm) and SVM (support vector machine) for image segmentation. The volume is estimated based on circle or ellipse fitting to individual islets. The segmentations were compared with the corresponding ground truth. Quantitative islet parameters were also evaluated and compared with parameters given by medical experts. We can conclude that accuracy of the presented fully automatic algorithm is fully comparable with medical experts.
NASA Technical Reports Server (NTRS)
Donovan, T. J.; Termain, P. A.; Henry, M. E. (Principal Investigator)
1979-01-01
The author has identified the following significant results. The Cement oil field, Oklahoma, was a test site for an experiment designed to evaluate LANDSAT's capability to detect an alteration zone in surface rocks caused by hydrocarbon microseepage. Loss of iron and impregnation of sandstone by carbonate cements and replacement of gypsum by calcite were the major alteration phenomena at Cement. The bedrock alterations were partially masked by unaltered overlying beds, thick soils, and dense natural and cultivated vegetation. Interpreters, biased by detailed ground truth, were able to map the alteration zone subjectively using a magnified, filtered, and sinusoidally stretched LANDSAT composite image; other interpreters, unbiased by ground truth data, could not duplicate that interpretation.
PSNet: prostate segmentation on MRI based on a convolutional neural network.
Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Fei, Baowei
2018-04-01
Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.
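The Dice similarity coefficient reported above can be computed directly from a predicted mask and the manually labeled ground truth. A minimal sketch (the toy masks below are hypothetical, not the paper's data):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy masks: a 4x4 predicted square offset by one row from the truth square.
truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=int);  pred[3:7, 2:6] = 1
print(dice(pred, truth))  # 12 overlapping pixels of 16 + 16 -> 0.75
```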
Design of the primary pre-TRMM and TRMM ground truth site
NASA Technical Reports Server (NTRS)
Garstang, Michael
1988-01-01
The primary objectives of the Tropical Rainfall Measuring Mission (TRMM) were to: integrate the rain gage measurements with radar measurements of rainfall using the KSFC/Patrick digitized radar and associated rainfall network; delineate the major rain-bearing systems over Florida using the Weather Service reported radar/rainfall distributions; combine the integrated measurements with the delineated rain-bearing systems; use the results of the combined measurements and delineated rain-bearing systems, which represent patterns of rainfall that actually exist and contribute significantly to the rainfall, to test sampling strategies and, based on the results of these analyses, decide upon the ground truth network; and complete the design begun in Phase 1 of a multi-scale (space and time) surface observing precipitation network centered upon KSFC. Work accomplished and in progress is discussed.
Assessment of atmospheric models for tele-infrasonic propagation
NASA Astrophysics Data System (ADS)
McKenna, Mihan; Hayek, Sylvia
2005-04-01
Iron mines in Minnesota are ideally located to assess the accuracy of available atmospheric profiles used in infrasound modeling. These mines are located approximately 400 km to the southeast (azimuth 142°) of the Lac-Du-Bonnet infrasound station, IS-10. Infrasound data from June 1999 to March 2004 were analyzed to assess the effects of explosion size and atmospheric conditions on observations. IS-10 recorded a suite of events from this time period, resulting in well-constrained ground truth. This ground truth allows for the comparison of ray-trace and PE (Parabolic Equation) modeling to the observed arrivals. The tele-infrasonic distance (greater than 250 km) produces ray paths that turn in the upper atmosphere, the thermosphere, at approximately 120 km to 140 km. Modeling based upon MSIS/HWM (Mass Spectrometer Incoherent Scatter/Horizontal Wind Model) and the NOGAPS (Navy Operational Global Atmospheric Prediction System) and NRL-G2S (Naval Research Laboratory Ground to Space) augmented profiles is used to interpret the observed arrivals.
NASA Technical Reports Server (NTRS)
Downs, S. W., Jr.; Sharma, G. C.; Bagwell, C.
1977-01-01
A land use map of a five county area in North Alabama was generated from LANDSAT data using a supervised classification algorithm. There was good overall agreement between the land use designated and known conditions, but there were also obvious discrepancies. In ground checking the map, two types of errors were encountered - shift and misclassification - and a method was developed to eliminate or greatly reduce the errors. Randomly selected study areas containing 2,525 pixels were analyzed. Overall, 76.3 percent of the pixels were correctly classified. A contingency coefficient of correlation was calculated to be 0.7 which is significant at the alpha = 0.01 level. The land use maps generated by computers from LANDSAT data are useful for overall land use by regional agencies. However, care must be used when making detailed analysis of small areas. The procedure used for conducting the ground truth study together with data from representative study areas is presented.
The Traceable Radiometry Underpinning Terrestrial and Helio Studies (TRUTHS) mission
NASA Astrophysics Data System (ADS)
Green, Paul D.; Fox, Nigel P.; Lobb, Daniel; Friend, Jonathan
2015-10-01
TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio-Studies) is a proposed small satellite mission to enable a space-based climate observing system capable of delivering data of the quality needed by policy makers to make robust mitigation and adaptation decisions. This is achieved by embedding trust and confidence in the data and derived information (tied to international standards), both from its own measurements and by upgrading the performance and interoperability of other EO platforms, such as the Sentinels, through in-flight reference calibration. TRUTHS would provide measurements of incoming (total and spectrally resolved) solar radiation and of global reflected solar radiation, resolved spectrally and spatially (50 m), at the 0.3% uncertainty level. These fundamental climate data products can be convolved into the building blocks for many ECVs and EO applications envisaged by the 2015 ESA science strategy, in a cost-effective manner. We describe the scientific drivers for the TRUTHS mission and how the requirements for the climate benchmarking and cross-calibration reference sensor are both complementary and simply implemented, with only a small additional complexity on top of heritage calibration schemes. The calibration scheme components and the route to SI-traceable Earth-reflected solar spectral radiance and solar spectral irradiance are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porras-Chaverri, M; University of Costa Rica, San Jose; Galavis, P
Purpose: Evaluate mammographic mean glandular dose (MGD) coefficients for particular known tissue distributions using a novel formalism that incorporates the effect of the heterogeneous glandular tissue distribution, by comparing them with MGD coefficients derived from the corresponding anthropomorphic computer breast phantom. Methods: MGD coefficients were obtained using MCNP5 simulations with the currently used homogeneous assumption and the heterogeneously-layered breast (HLB) geometry and compared against those from the computer phantom (ground truth). The tissue distribution for the HLB geometry was estimated using glandularity map image pairs corrected for the presence of non-glandular fibrous tissue. Heterogeneity of tissue distribution was quantified using the glandular tissue distribution index, Idist. The phantom had 5 cm compressed breast thickness (MLO and CC views) and 29% whole breast glandular percentage. Results: Differences as high as 116% were found between the MGD coefficients with the homogeneous breast core assumption and those from the corresponding ground truth. Higher differences were found for cases with more heterogeneous distribution of glandular tissue. The Idist for all cases was in the [−0.8, +0.3] range. The use of the methods presented in this work results in better agreement with ground truth, with an improvement as high as 105 percentage points (pp). The decrease in difference across all phantom cases was in the [9, 105] pp range, dependent on the distribution of glandular tissue, and was larger for the cases with the highest Idist values. Conclusion: Our results suggest that the use of corrected glandularity image pairs, as well as the HLB geometry, improves the estimates of MGD conversion coefficients by accounting for the distribution of glandular tissue within the breast. The accuracy of this approach with respect to ground truth is highly dependent on the particular glandular tissue distribution studied.
Predrag Bakic discloses current funding from NIH, NSF, and DoD, former funding from Real Time Tomography, LLC, and a current research collaboration with Barco and Hologic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markel, D; Levesque, I R.; Larkin, J
Purpose: To produce multi-modality compatible, realistic datasets for the joint evaluation of segmentation and registration with a reliable ground truth using a 4D biomechanical lung phantom. The further development of a computer controlled air flow system for recreation of real patient breathing patterns is incorporated for additional evaluation of motion prediction algorithms. Methods: A pair of preserved porcine lungs was pneumatically manipulated using an in-house computer controlled respirator. The respirator consisted of a set of bellows actuated by a 186 W computer controlled industrial motor. Patient breathing traces were recorded using a respiratory bellows belt during CT simulation and input into a control program incorporating a proportional-integral-derivative (PID) feedback controller in LabVIEW. Mock tumors were created using dual compartment vacuum sealed sea sponges. 65% iohexol, a gadolinium-based contrast agent, and 18F-FDG were used to produce contrast and thus determine a segmentation ground truth. The intensity distributions of the compartments were then digitally matched for the final dataset. A bifurcation tracking pipeline provided a registration ground truth using the bronchi of the lung. The lungs were scanned using a GE Discovery-ST PET/CT scanner and a Phillips Panorama 0.23T MRI using a T1 weighted 3D fast field echo (FFE) protocol. Results: The standard deviation of the error between the patient breathing trace and the encoder feedback from the respirator was found to be ±4.2%. Bifurcation tracking error using CT (0.97×0.97×3.27 mm³ resolution) was found to be sub-voxel up to 7.8 cm displacement for human lungs and less than 1.32 voxel widths in any axis up to 2.3 cm for the porcine lungs. Conclusion: An MRI/PET/CT compatible anatomically and temporally realistic swine lung phantom was developed for the evaluation of simultaneous registration and segmentation algorithms.
With the addition of custom software and mock tumors, the entire package offers ground truths for benchmarking performance with high fidelity.
Detection of surface temperature from LANDSAT-7/ETM+
NASA Astrophysics Data System (ADS)
Suga, Y.; Ogawa, H.; Ohno, K.; Yamada, K.
2003-12-01
Hiroshima Institute of Technology (HIT) in Japan established a LANDSAT-7 Ground Station in cooperation with NASDA for receiving and processing the ETM+ data on March 15th, 2000. The authors performed a verification study on the surface temperature derived from thermal infrared band image data of LANDSAT-7/Enhanced Thematic Mapper Plus (ETM+) for the estimation of temperatures around Hiroshima city and bay area in the western part of Japan as a test site. For the thermal infrared band, approximate functions for converting the spectral radiance into the surface temperature are estimated by considering both typical surface temperatures measured by the field survey conducted simultaneously with the satellite observation and the spectral radiance observed by ETM+ band 6 (10.40-12.50 μm); the estimation of the surface temperature distribution around the test site was then examined. In this study, the authors estimated the surface temperature distribution equivalent to the land cover categories around the test site to establish a guideline for surface temperature detection by LANDSAT-7/ETM+ data. Comparing the truth data and the estimated surface temperature, the correlation coefficients of the approximate functions referred to the truth data range from 0.9821 to 0.9994, and the differences are from +0.7 to -1.5°C in summer, from +0.4 to -0.9°C in autumn, from -1.6 to -3.4°C in winter, and from +0.5 to -0.5°C in spring, respectively. It is clearly found that the estimation of surface temperature based on the approximate functions for converting the spectral radiance into the surface temperature referred to the truth data is improved over the surface temperature estimated directly from satellite data.
Finally, the successive seasonal change of the surface temperature distribution pattern of the test site is precisely detected with a temperature legend of 0 to 80°C derived from LANDSAT-7/ETM+ band 6 image data for thermal environment monitoring. © 2003 COSPAR. Published by Elsevier Ltd.
The use of the truth and deception in dementia care amongst general hospital staff.
Turner, Alex; Eccles, Fiona; Keady, John; Simpson, Jane; Elvish, Ruth
2017-08-01
Deceptive practice has been shown to be endemic in long-term care settings. However, little is known about the use of deception in dementia care within general hospitals and staff attitudes towards this practice. This study aimed to develop understanding of the experiences of general hospital staff and explore their decision-making processes when choosing whether to tell the truth or deceive a patient with dementia. This qualitative study drew upon a constructivist grounded theory approach to analyse data gathered from semi-structured interviews with a range of hospital staff. A model, grounded in participant experiences, was developed to describe their decision-making processes. Participants identified particular triggers that set in motion the need for a response. Various mediating factors influenced how staff chose to respond to these triggers. Overall, hospital staff were reluctant to either tell the truth or to lie to patients. Instead, 'distracting' or 'passing the buck' to another member of staff were preferred strategies. The issue of how truth and deception are defined was identified. The study adds to the growing research regarding the use of lies in dementia care by considering the decision-making processes for staff in general hospitals. Various factors influence how staff choose to respond to patients with dementia and whether deception is used. Similarities and differences with long-term dementia care settings are discussed. Clinical and research implications include: opening up the topic for further debate, implementing staff training about communication and evaluating the impact of these processes.
Izzaty Horsali, Nurul Amira; Mat Zauki, Nurul Ashikin; Otero, Viviana; Nadzri, Muhammad Izuan; Ibrahim, Sulong; Husain, Mohd-Lokman; Dahdouh-Guebas, Farid
2018-01-01
Brunei Bay, which receives freshwater discharge from four major rivers, namely Limbang, Sundar, Weston and Menumbok, hosts a luxuriant mangrove cover in East Malaysia. However, this relatively undisturbed mangrove forest has been little explored scientifically, especially in terms of vegetation structure, ecosystem services and functioning, and land-use/cover changes. In the present study, mangrove areal extent together with species composition and distribution at the four notified estuaries was evaluated through remote sensing (Advanced Land Observation Satellite—ALOS) and ground-truth (Point-Centred Quarter Method—PCQM) observations. As of 2010, the total mangrove cover was found to be ca. 35,183.74 ha, of which Weston and Menumbok together accounted for more than half (58%), followed by Sundar (27%) and Limbang (15%). The medium resolution ALOS data were efficient for mapping dominant mangrove species such as Nypa fruticans, Rhizophora apiculata, Sonneratia caseolaris, S. alba and Xylocarpus granatum in the vicinity (accuracy: 80%). The PCQM estimates found a higher basal area at Limbang and Menumbok—suggestive of more mature vegetation, compared to Sundar and Weston. Mangrove stand structural complexity (derived from the complexity index) was also high in the order of Limbang > Menumbok > Sundar > Weston, supporting the perspective of less disturbed/undisturbed vegetation at the two former locations. Both remote sensing and ground-truth observations have complementarily represented the distribution of Sonneratia spp. as pioneer vegetation at shallow river mouths, N. fruticans in the areas of strong freshwater discharge, R. apiculata in the areas of strong neritic incursion and X. granatum at interior/elevated grounds. The results from this study would be able to serve as strong baseline data for future mangrove investigations at Brunei Bay, including for monitoring and management purposes locally at present. PMID:29479500
Consensus-Based Sorting of Neuronal Spike Waveforms
Fournier, Julien; Mueller, Christian M.; Shein-Idelson, Mark; Hemberger, Mike
2016-01-01
Optimizing spike-sorting algorithms is difficult because sorted clusters can rarely be checked against independently obtained “ground truth” data. In most spike-sorting algorithms in use today, the optimality of a clustering solution is assessed relative to some assumption on the distribution of the spike shapes associated with a particular single unit (e.g., Gaussianity) and by visual inspection of the clustering solution followed by manual validation. When the spatiotemporal waveforms of spikes from different cells overlap, the decision as to whether two spikes should be assigned to the same source can be quite subjective, if it is not based on reliable quantitative measures. We propose a new approach, whereby spike clusters are identified from the most consensual partition across an ensemble of clustering solutions. Using the variability of the clustering solutions across successive iterations of the same clustering algorithm (template matching based on K-means clusters), we estimate the probability of spikes being clustered together and identify groups of spikes that are not statistically distinguishable from one another. Thus, we identify spikes that are most likely to be clustered together and therefore correspond to consistent spike clusters. This method has the potential advantage that it does not rely on any model of the spike shapes. It also provides estimates of the proportion of misclassified spikes for each of the identified clusters. We tested our algorithm on several datasets for which there exists a ground truth (simultaneous intracellular data), and show that it performs close to the optimum reached by a support vector machine trained on the ground truth. We also show that the estimated rate of misclassification matches the proportion of misclassified spikes measured from the ground truth data. PMID:27536990
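The co-association idea described above (estimating how often pairs of spikes are clustered together across an ensemble of clustering runs) can be sketched as follows. The greedy grouping step is a deliberate simplification of the paper's consensus-partition procedure, and all labelings are toy values, not recorded spike data:

```python
import numpy as np

def coassociation(labelings):
    """Pairwise co-clustering frequency across an ensemble of runs.

    labelings: (R, N) array, one row of cluster labels per run.
    Returns an (N, N) matrix of probabilities that spikes i and j
    were assigned to the same cluster."""
    R, N = labelings.shape
    C = np.zeros((N, N))
    for labels in labelings:
        C += (labels[:, None] == labels[None, :])
    return C / R

def consensus_clusters(C, threshold=0.5):
    """Greedy grouping: unassigned spikes whose co-association with a
    seed spike exceeds the threshold are merged into the seed's cluster."""
    N = C.shape[0]
    assigned = np.full(N, -1)
    k = 0
    for i in range(N):
        if assigned[i] == -1:
            members = np.where((C[i] > threshold) & (assigned == -1))[0]
            assigned[members] = k
            k += 1
    return assigned

# Three runs agree that spikes {0,1,2} and {3,4} form two units,
# with one run splitting spike 2 off from the first group.
runs = np.array([[0, 0, 0, 1, 1],
                 [1, 1, 1, 0, 0],
                 [0, 0, 2, 1, 1]])
C = coassociation(runs)
print(consensus_clusters(C))  # -> [0 0 0 1 1]
```

The off-diagonal entries of `C` also give the misclassification estimate mentioned in the abstract: a spike whose co-association with its own cluster is well below 1 is a candidate for having been inconsistently sorted.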
As-built design specification for MISMAP
NASA Technical Reports Server (NTRS)
Brown, P. M.; Cheng, D. E.; Tompkins, M. A. (Principal Investigator)
1981-01-01
The MISMAP program, which is part of the CLASFYT package, is described. The program is designed to compare classification values with ground truth values for a segment and produce a comparison map and summary table.
Ground truth report 1975 Phoenix microwave experiment. [Joint Soil Moisture Experiment
NASA Technical Reports Server (NTRS)
Blanchard, B. J.
1975-01-01
Direct measurements of soil moisture obtained in conjunction with aircraft data flights near Phoenix, Arizona in March, 1975 are summarized. The data were collected for the Joint Soil Moisture Experiment.
NASA Technical Reports Server (NTRS)
1984-01-01
Three mesoscale sounding data sets from the VISSR Atmospheric Sounder (VAS), produced using different retrieval techniques, were evaluated against corresponding ground truth rawinsonde data for 6-7 March 1982. Means, standard deviations, and RMS differences between the satellite and rawinsonde parameters were calculated over gridded fields in central Texas and Oklahoma. Large differences exist between each satellite data set and the ground truth data. Biases in the satellite temperature and moisture profiles seem extremely dependent upon the three-dimensional structure of the atmosphere and range from 1 to 3 °C for temperature and 3 to 6 °C for dewpoint temperature. Atmospheric gradients of basic and derived parameters determined from the VAS data sets produced an adequate representation of the mesoscale environment, but their magnitudes were often reduced by 30 to 50%.
A three-part geometric model to predict the radar backscatter from wheat, corn, and sorghum
NASA Technical Reports Server (NTRS)
Ulaby, F. T. (Principal Investigator); Eger, G. W., III; Kanemasu, E. T.
1982-01-01
A model to predict the radar backscattering coefficient from crops must include the geometry of the canopy. Radar and ground-truth data taken on wheat in 1979 indicate that the model must include contributions from the leaves, from the wheat head, and from the soil moisture. For sorghum and corn, radar and ground-truth data obtained in 1979 and 1980 support the necessity of a soil moisture term and a leaf water term. The Leaf Area Index (LAI) is an appropriate input for the leaf contribution to the radar response for wheat and sorghum; however, the LAI generates less accurate values for the backscattering coefficient for corn. Also, the data for corn and sorghum illustrate the importance of the water contained in the stalks in estimating the radar response.
Using virtual environment for autonomous vehicle algorithm validation
NASA Astrophysics Data System (ADS)
Levinskis, Aleksandrs
2018-04-01
This paper describes the possible use of a modern game engine for validating and proving the concept of an algorithm design. As a demonstration, a simple visual odometry algorithm is provided to show the concept and walk through all workflow stages. Some of the stages involve a Kalman filter, used in such a way that it estimates the optical flow velocity as well as the position of a moving camera located on the vehicle body. In particular, the Unreal Engine 4 game engine is used for generating optical flow patterns and the ground truth path. For optical flow determination, the Horn and Schunck method is applied. As a result, it is shown that such a method can estimate the position of the camera attached to the vehicle with a certain displacement error with respect to ground truth, depending on the optical flow pattern. The displacement RMS error is calculated between the estimated and actual positions.
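A minimal Horn and Schunck solver of the kind the abstract relies on can be sketched in a few lines. This is an assumption-laden illustration (periodic-boundary neighbor averaging, fixed iteration count, hand-picked smoothness weight), not the paper's implementation:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, iters=100):
    """Minimal Horn-Schunck optical flow between two grayscale frames.
    Returns per-pixel flow components (u, v) in pixels/frame."""
    I1 = I1.astype(float); I2 = I2.astype(float)
    Ix = np.gradient(I1, axis=1)   # spatial derivatives of the first frame
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal difference
    u = np.zeros_like(I1); v = np.zeros_like(I1)

    def neighbor_avg(f):
        # 4-neighbor average with wrap-around boundaries (a simplification)
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    for _ in range(iters):
        u_avg, v_avg = neighbor_avg(u), neighbor_avg(v)
        t = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * t
        v = v_avg - Iy * t
    return u, v

# A Gaussian blob shifted one pixel to the right between frames should
# yield a positive (rightward) horizontal flow at the blob centre.
x = np.arange(32)
frame1 = np.exp(-((x[None, :] - 15)**2 + (x[:, None] - 15)**2) / 20.0)
frame2 = np.exp(-((x[None, :] - 16)**2 + (x[:, None] - 15)**2) / 20.0)
u, v = horn_schunck(frame1, frame2)
print(float(u[15, 15]))  # positive: flow points rightward at the centre
```

In the paper's pipeline, flow fields like `u, v` would feed a Kalman filter as velocity observations; here only the flow step is sketched.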
Using Seismic and Infrasonic Data to Identify Persistent Sources
NASA Astrophysics Data System (ADS)
Nava, S.; Brogan, R.
2014-12-01
Data from seismic and infrasound sensors were combined to aid in the identification of persistent sources such as mining-related explosions. It is of interest to operators of seismic networks to identify these signals in their event catalogs. Acoustic signals below the threshold of human hearing, in the frequency range of ~0.01 to 20 Hz are classified as infrasound. Persistent signal sources are useful as ground truth data for the study of atmospheric infrasound signal propagation, identification of manmade versus naturally occurring seismic sources, and other studies. By using signals emanating from the same location, propagation studies, for example, can be conducted using a variety of atmospheric conditions, leading to improvements to the modeling process for eventual use where the source is not known. We present results from several studies to identify ground truth sources using both seismic and infrasound data.
NASA Technical Reports Server (NTRS)
Russell, P. B. (Editor); Cunnold, D. M.; Grams, G. W.; Laver, J.; Mccormick, M. P.; Mcmaster, L. R.; Murcray, D. G.; Pepin, T. J.; Perry, T. W.; Planet, W. G.
1979-01-01
The ground truth plan is outlined for correlative measurements to validate the Stratospheric Aerosol and Gas Experiment (SAGE) sensor data. SAGE will fly aboard the Applications Explorer Mission-B satellite scheduled for launch in early 1979 and measure stratospheric vertical profiles of aerosol, ozone, nitrogen dioxide, and molecular extinction between 79 N and 79 S. latitude. The plan gives details of the location and times for the simultaneous satellite/correlative measurements for the nominal launch time, the rationale and choice of the correlative sensors, their characteristics and expected accuracies, and the conversion of their data to extinction profiles. In addition, an overview of the SAGE expected instrument performance and data inversion results are presented. Various atmospheric models representative of stratospheric aerosols and ozone are used in the SAGE and correlative sensor analyses.
Comparing Eyewitness-Derived Trajectories of Bright Meteors to Ground Truth Data
NASA Technical Reports Server (NTRS)
Moser, D. E.
2016-01-01
The NASA Meteoroid Environment Office (MEO) is the US government organization tasked with analyzing meteors of public interest. When queried about a meteor observed over the United States, the MEO must respond with a characterization of the trajectory, orbit, and size within a few hours. If the event is outside meteor network coverage and there is no imagery recorded by the public, a timely assessment can be difficult if not impossible. In this situation, visual reports made by eyewitnesses may be the only resource available. This has led to the development of a tool to quickly calculate crude meteor trajectories from eyewitness reports made to the American Meteor Society. A description of the tool, example case studies, and a comparison to ground truth data observed by the NASA All Sky Fireball Network are presented.
A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.
Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang
2009-01-01
This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.
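The three-dimensional reconstruction error used to quantify calibration accuracy reduces to a mean point-to-point Euclidean distance against the known ground truth. A sketch with hypothetical point sets (units assumed to be millimetres, as in the abstract):

```python
import numpy as np

def reconstruction_error(reconstructed, truth):
    """Mean Euclidean distance between reconstructed 3-D points
    and their known ground-truth positions."""
    diff = np.asarray(reconstructed) - np.asarray(truth)
    return float(np.mean(np.linalg.norm(diff, axis=1)))

truth = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
recon = np.array([[0.3, 0.4, 0.0], [10.0, 0.0, 1.2]])
print(reconstruction_error(recon, truth))  # (0.5 + 1.2) / 2 = 0.85
```

A feedback loop of the kind described would keep acquiring frames until this error falls below a submillimetre threshold.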
NASA Technical Reports Server (NTRS)
Graves, D. H.
1975-01-01
Research efforts are presented for the use of remote sensing in environmental surveys in Kentucky. Ground truth parameters were established that represent the vegetative cover of disturbed and undisturbed watersheds in the Cumberland Plateau of eastern Kentucky. Several water quality parameters were monitored for the watersheds utilized in the establishment of ground truth data. The capabilities of multistage-multispectral aerial photography and satellite imagery were evaluated in detecting various land use practices. The use of photographic signatures of known land use areas utilizing manually-operated spot densitometers was studied. The correlation of imagery signature data to water quality data was examined. Potential water quality predictions were developed for forested and nonforested watersheds based upon the above correlations. The cost effectiveness of predicting water quality values was evaluated using multistage and satellite imagery sampling techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markel, D; Levesque, I R; Research Institute of the McGill University Health Centre, Montreal, QC
Segmentation and registration of medical imaging data are two processes that can be integrated (a process termed regmentation) to iteratively reinforce each other, potentially improving efficiency and overall accuracy. A significant challenge is presented when attempting to validate the joint process, particularly with regards to minimizing geometric uncertainties associated with the ground truth while maintaining anatomical realism. This work demonstrates a 4D MRI, PET, and CT compatible tissue phantom with a known ground truth for evaluating registration and segmentation accuracy. The phantom consists of a preserved swine lung connected to an air pump via a PVC tube for inflation. Mock tumors were constructed from sea sponges contained within two vacuum-sealed compartments with catheters running into each one for injection of radiotracer solution. The phantom was scanned using a GE Discovery-ST PET/CT scanner and a 0.23T Phillips MRI, and resulted in anatomically realistic images. A bifurcation tracking algorithm was implemented to provide a ground truth for evaluating registration accuracy. This algorithm was validated using known deformations of up to 7.8 cm using a separate CT scan of a human thorax. Using the known deformation vectors to compare against, 76 bifurcation points were selected. The tracking accuracy was found to have maximum mean errors of −0.94, 0.79 and −0.57 voxels in the left-right, anterior-posterior and inferior-superior directions, respectively. A pneumatic control system is under development to match the respiratory profile of the lungs to a breathing trace from an individual patient.
The verification of lightning location accuracy in Finland deduced from lightning strikes to trees
NASA Astrophysics Data System (ADS)
Mäkelä, Antti; Mäkelä, Jakke; Haapalainen, Jussi; Porjo, Niko
2016-05-01
We present a new method to determine the ground truth and accuracy of lightning location systems (LLS), using natural lightning strikes to trees. Observations of strikes to trees are being collected with a Web-based survey tool at the Finnish Meteorological Institute. Since the Finnish thunderstorms tend to have on average a low flash rate, it is often possible to identify from the LLS data unambiguously the stroke that caused damage to a given tree. The coordinates of the tree are then the ground truth for that stroke. The technique has clear advantages over other methods used to determine the ground truth. Instrumented towers and rocket launches measure upward-propagating lightning. Video and audio records, even with triangulation, are rarely capable of high accuracy. We present data for 36 quality-controlled tree strikes in the years 2007-2008. We show that the average inaccuracy of the lightning location network for that period was 600 m. In addition, we show that the 50% confidence ellipse calculated by the lightning location network and used operationally for describing the location accuracy is physically meaningful: half of all the strikes were located within the uncertainty ellipse of the nearest recorded stroke. Using tree strike data thus allows not only the accuracy of the LLS to be estimated but also the reliability of the uncertainty ellipse. To our knowledge, this method has not been attempted before for natural lightning.
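Location accuracy of this kind reduces to the distance between each LLS stroke fix and the surveyed tree coordinates, which over hundreds of metres is well served by the haversine great-circle formula. A sketch (the coordinates below are hypothetical, not from the study):

```python
import math

def strike_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in metres between an LLS
    stroke fix and the surveyed coordinates of a struck tree."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * R * math.asin(math.sqrt(a))

# Hypothetical stroke fix ~600 m north of a struck tree near Helsinki.
d = strike_distance_m(60.1699, 24.9384, 60.1753, 24.9384)
print(round(d))  # roughly 600 m, matching the study's mean inaccuracy scale
```

Checking whether the tree falls inside the reported 50% confidence ellipse then only requires transforming this offset into the ellipse's local axes.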
Evaluation metrics for bone segmentation in ultrasound
NASA Astrophysics Data System (ADS)
Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas
2015-03-01
Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging, as ultrasound has no specific intensity characteristic of bones. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that would aid in the development and comparison of such algorithms by quantitatively measuring segmentation performance in the ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and standard deviation are considered. The metrics provide a means of evaluating the accuracy of frames along the length of a volume. This would aid in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frequency of frames). The framework was implemented as an open-source module of the 3D Slicer platform. The ground truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit by monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
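The true-positive and false-negative metrics described can be sketched for a single frame as follows. The masks are toy values and the framework itself is a 3D Slicer module, not this Python; the sketch only illustrates the arithmetic behind the metrics:

```python
import numpy as np

def bone_metrics(auto_mask, truth_mask):
    """True-positive rate (correctly found bone) and false-negative
    rate (missed bone) for one ultrasound frame, relative to the
    number of ground-truth bone pixels."""
    auto = auto_mask.astype(bool)
    truth = truth_mask.astype(bool)
    bone = truth.sum()
    tp = np.logical_and(auto, truth).sum() / bone
    fn = np.logical_and(~auto, truth).sum() / bone
    return tp, fn

truth = np.zeros((4, 4), dtype=int); truth[1:3, :] = 1   # 8 bone pixels
auto = np.zeros((4, 4), dtype=int);  auto[1:3, 0:3] = 1  # 6 of them found
tp, fn = bone_metrics(auto, truth)
print(tp, fn)  # 0.75 0.25
```

Averaging these per-frame values along a swept volume, and taking their standard deviation, gives the per-volume summaries the framework reports.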
Simulation of bright-field microscopy images depicting pap-smear specimen
Malm, Patrik; Brun, Anders; Bengtsson, Ewert
2015-01-01
As digital imaging is becoming a fundamental part of medical and biomedical research, the demand for computer-based evaluation using advanced image analysis is becoming an integral part of many research projects. A common problem when developing new image analysis algorithms is the need of large datasets with ground truth on which the algorithms can be tested and optimized. Generating such datasets is often tedious and introduces subjectivity and interindividual and intraindividual variations. An alternative to manually created ground-truth data is to generate synthetic images where the ground truth is known. The challenge then is to make the images sufficiently similar to the real ones to be useful in algorithm development. One of the first and most widely studied medical image analysis tasks is to automate screening for cervical cancer through Pap-smear analysis. As part of an effort to develop a new generation cervical cancer screening system, we have developed a framework for the creation of realistic synthetic bright-field microscopy images that can be used for algorithm development and benchmarking. The resulting framework has been assessed through a visual evaluation by experts with extensive experience of Pap-smear images. The results show that images produced using our described methods are realistic enough to be mistaken for real microscopy images. The developed simulation framework is very flexible and can be modified to mimic many other types of bright-field microscopy images. © 2015 The Authors. Published by Wiley Periodicals, Inc. on behalf of ISAC PMID:25573002
Yu, Qingbao; Du, Yuhui; Chen, Jiayu; He, Hao; Sui, Jing; Pearlson, Godfrey; Calhoun, Vince D
2017-11-01
A key challenge in building a brain graph using fMRI data is how to define the nodes. Spatial brain components estimated by independent component analysis (ICA) and regions of interest (ROIs) determined by a brain atlas are two popular methods to define nodes in brain graphs. It is difficult to evaluate which method is better in real fMRI data. Here we perform a simulation study and evaluate the accuracies of a few graph metrics in graphs with nodes from ICA components, ROIs, or modified ROIs in four simulation scenarios. Graph measures with ICA nodes are more accurate than graphs with ROI nodes in all cases. Graph measures with modified ROI nodes are modulated by artifacts. The correlations of graph metrics across subjects between graphs with ICA nodes and ground truth are higher than the correlations between graphs with ROI nodes and ground truth in scenarios with largely overlapping spatial sources. Moreover, moving the location of ROIs largely decreases the correlations in all scenarios. Evaluating graphs with different nodes is more promising in simulated data than in real data because different scenarios can be simulated and measures of different graphs can be compared with a known ground truth. Since ROIs defined using a brain atlas may not correspond well to real functional boundaries, the overall findings of this work suggest that it is more appropriate to define nodes using data-driven ICA than ROI approaches in real fMRI data. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chang, Chia-Hao; Chu, Tzu-How
2017-04-01
To control rice production and farmland use in Taiwan, the Agriculture and Food Agency (AFA) has, since 1983, published a series of policies subsidizing farmers to plant different crops or to practice fallowing. Because there was no efficient, examinable mechanism to verify the fallow fields surveyed by township offices, illegal fallow fields recurred each year. In this research, we used remote sensing images, GIS field data, and application records of fallow fields to establish a method for detecting illegal fallow fields in Yulin County in central Taiwan. The method comprised: 1. collecting multi-temporal images from FS-2 or the SPOT series over 4 time periods; 2. combining the application records and GIS field data to verify the locations of fallow fields; 3. conducting ground truth surveys and classifying images with ISODATA and Maximum Likelihood Classification (MLC); 4. defining the land cover type of fallow fields by zonal statistics; 5. verifying accuracy against ground truth; 6. developing a survey method for potential illegal fallow fields and estimating its benefit. Using 190 fallow fields (127 legal and 63 illegal) as ground truth, the producer and user accuracies of illegal fallow field interpretation are 71.43% and 38.46%, respectively. If a township office surveyed the 117 classified illegal fallow fields, 45 of the 63 illegal fallow fields would be detected. With our method, township offices can save 38.42% of the manpower needed to detect illegal fallow fields while achieving an examinable 71.43% producer accuracy.
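The 71.43% and 38.46% figures reported above are the standard producer and user accuracies from a classification confusion matrix. A small sketch (the function name is hypothetical) shows how they follow from the counts in the abstract: 45 detected out of 63 truly illegal fields, with 117 fields classified as illegal overall:

```python
def producer_user_accuracy(true_positive, false_negative, false_positive):
    """Producer accuracy: fraction of actual positives (illegal fallow
    fields) that the classification detects, TP / (TP + FN).
    User accuracy: fraction of classified positives that are truly
    positive, TP / (TP + FP)."""
    producer = true_positive / (true_positive + false_negative)
    user = true_positive / (true_positive + false_positive)
    return producer, user

# 45 detected of 63 illegal fields; 117 - 45 = 72 legal fields
# incorrectly classified as illegal.
producer, user = producer_user_accuracy(45, 63 - 45, 117 - 45)
```

With these counts, `producer` is about 0.7143 and `user` about 0.3846, matching the abstract.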
Isolated effect of geometry on mitral valve function for in silico model development.
Siefert, Andrew William; Rabbah, Jean-Pierre Michel; Saikrishnan, Neelakantan; Kunzelman, Karyn Susanne; Yoganathan, Ajit Prithivaraj
2015-01-01
Computational models for the heart's mitral valve (MV) exhibit several uncertainties that may be reduced by further developing these models using ground-truth data-sets. This study generated a ground-truth data-set by quantifying the effects of isolated mitral annular flattening, symmetric annular dilatation, symmetric papillary muscle (PM) displacement and asymmetric PM displacement on leaflet coaptation, mitral regurgitation (MR) and anterior leaflet strain. MVs were mounted in an in vitro left heart simulator and tested under pulsatile haemodynamics. Mitral leaflet coaptation length, coaptation depth, tenting area, MR volume, MR jet direction and anterior leaflet strain in the radial and circumferential directions were successfully quantified at increasing levels of geometric distortion. From these data, increase in the levels of isolated PM displacement resulted in the greatest mean change in coaptation depth (70% increase), tenting area (150% increase) and radial leaflet strain (37% increase) while annular dilatation resulted in the largest mean change in coaptation length (50% decrease) and regurgitation volume (134% increase). Regurgitant jets were centrally located for symmetric annular dilatation and symmetric PM displacement. Asymmetric PM displacement resulted in asymmetrically directed jets. Peak changes in anterior leaflet strain in the circumferential direction were smaller and exhibited non-significant differences across the tested conditions. When used together, this ground-truth data-set may be used to parametrically evaluate and develop modelling assumptions for both the MV leaflets and subvalvular apparatus. This novel data may improve MV computational models and provide a platform for the development of future surgical planning tools.
Local involvement in measuring and governing carbon stocks in China, Vietnam, Indonesia and Laos
Michael Køie Poulsen
2013-01-01
An important element of MRV is to ensure accurate measurements of carbon stocks. Measuring trees on the ground may be needed for ground truthing of remote sensing results. It can also provide more accurate carbon stock monitoring than remote sensing alone. Local involvement in measuring trees for monitoring of carbon stocks may be advantageous in several ways....
The Differences in Preference for Truth-telling of Patients With Cancer of Different Genders.
Chen, Shih-Ying; Wang, Hung-Ming; Tang, Woung-Ru
Patients' characteristics, especially age, gender, and cancer stage, tend to affect doctors' truth-telling methods. However, there is a lack of studies investigating the influence of patients' gender on truth-telling, especially in Asian cultures. The aims of this study were to qualitatively investigate the differences in preferences for truth-telling among patients with cancer of different genders and to explore patients' preferences for decision making. For this descriptive qualitative study, in-depth interviews were conducted with 20 patients with cancer (10 men and 10 women) using a semistructured interview guide. All interviews were audiotaped and transcribed verbatim. Data collection and analysis occurred concurrently; content analysis was used to develop categories and themes. Data analysis revealed 2 themes: (1) similar gender preferences for truth-telling and decision making: knowledge of their medical condition, direct and frank truthfulness, and assistance in decision making for subsequent treatment programs; and (2) preferences in truth-telling that differed by gender: women wanted family members present for confirmation of the diagnosis, whereas men did not; men preferred truth-telling for only the key points of their cancer, whereas women wanted detailed information; and men did not want to know their survival period, whereas women wanted this information. Our study revealed similar gender preferences for truth-telling regarding knowledge and decision making; however, preferences differed for family support, scope of information, and survival time. These findings can serve as a reference for nurses and other healthcare personnel when implementing truth-telling for patients given a diagnosis of cancer. Strategies can be targeted to the specific preferences of men and women.
NASA Astrophysics Data System (ADS)
Näthe, Paul; Becker, Rolf
2014-05-01
Soil moisture and plant-available water are important environmental parameters that affect plant growth and crop yield. Hence, they are significant parameters for vegetation monitoring and precision agriculture. However, ground-based soil moisture measurements are necessary to validate the retrieval of soil moisture, plant canopy temperature, soil temperature and soil roughness with airborne hyperspectral imaging systems in a corresponding hyperspectral imaging campaign, part of the INTERREG IV A project SMART INSPECTORS. To this end, commercially available sensors for matric potential, plant-available water and volumetric water content are used for automated measurements with smart sensor nodes, developed on the basis of open-source 868 MHz radio modules and featuring a full-scale microcontroller unit that allows autonomous, battery-powered operation of the sensor nodes in the field. The data generated by each of these sensor nodes are transferred wirelessly with an open-source protocol to a central node, the so-called "gateway". This gateway collects, interprets and buffers the sensor readings and, eventually, pushes the data time series into a server-based database. The entire data processing chain, from the sensor reading to the final storage of data time series on a server, is realized with open-source hardware and software in such a way that the recorded data can be accessed from anywhere through the internet. We will present how this open-source wireless sensor network was developed and specified for the application of ground truthing. In addition, the system's perspectives and potentials with respect to usability and applicability for vegetation monitoring and precision agriculture will be pointed out. Regarding the corresponding hyperspectral imaging campaign, results from ground measurements will be discussed in terms of their contribution to the remote sensing system. Finally, the significance of the wireless sensor network for the application of ground truthing will be determined.
Deeley, M A; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, E; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Yei, F; Koyama, T; Ding, G X; Dawant, B M
2011-01-01
The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes, of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation (STAPLE) algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8–0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4–0.5. Similarly low DSC have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (−4.3, +5.4) mm for the automatic system to (−3.9, +7.5) mm for the experts considered as a group. Over all the structures in a rank of true positive rates at a 2 mm threshold from the simulated ground truth, the automatic system ranked second of the nine raters. This work underscores the need for large scale studies utilizing statistically robust numbers of patients and experts in evaluating quality of automatic algorithms. PMID:21725140
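The Dice similarity coefficient used throughout this study has a compact definition, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch over voxel-coordinate sets (illustrative, not the authors' code):

```python
def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary segmentations
    given as sets of voxel coordinates: DSC = 2|A & B| / (|A| + |B|).
    1.0 means perfect overlap, 0.0 means none."""
    if not a and not b:
        return 1.0  # two empty segmentations agree trivially
    return 2.0 * len(a & b) / (len(a) + len(b))
```

For thin tubular structures such as the optic nerves and chiasm, even small boundary disagreements remove a large fraction of the intersection, which is consistent with the low 0.4-0.5 DSC values the study reports for those organs.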
NASA Astrophysics Data System (ADS)
Dalezios, Nicolas; Spyropoulos, Nicos V.; Tarquis, Ana M.
2015-04-01
This research work stems from the hypothesis that the seasonal water needs of olive tree farms under drought can be estimated by cross-correlating high spatial, spectral and temporal resolution (~monthly) satellite data, acquired at well-defined intervals of the crops' phenological cycle, with ground-truth information collected simultaneously with the image acquisitions. The present research demonstrates, for the first time, the coordinated efforts of space engineers, satellite mission control planners, remote sensing scientists and ground teams to record, at specific intervals of the trees' phenological cycle, the status of the plants from ground "zero" and from 770 km above the Earth's surface, for subsequent cross-correlation and analysis to estimate seasonal evapotranspiration in a vulnerable agricultural environment. ETo and ETc derived from the Penman-Monteith equation and reference Kc tables were compared with a new ETd using Kc values extracted from the time-series satellite data. Several vegetation indices were also used, especially the RedEdge and chlorophyll indices based on the WorldView-2 RedEdge and second NIR bands, to relate tree status to water and nutrition needs. Keywords: evapotranspiration, Very High Spatial Resolution (VHSR), time series, remote sensing, vulnerability, agriculture, vegetation indices.
Finding Common Ground Between Earth Scientists and Evangelical Christians
NASA Astrophysics Data System (ADS)
Grant Ludwig, L.
2015-12-01
In recent decades there has been some tension between earth scientists and evangelical Christians in the U.S., and this tension has spilled over into the political arena and policymaking on important issues such as climate change. From my personal and professional experience engaging with both groups, I find there is much common ground for increasing understanding and communicating the societal relevance of earth science. Fruitful discussions can arise from shared values and principles, and common approaches to understanding the world. For example, scientists and Christians are engaged in the pursuit of truth, and they value moral/ethical decision-making based on established principles. Scientists emphasize the benefits of research "for the common good" while Christians emphasize the value of doing "good works". Both groups maintain a long-term perspective: Christians talk about "the eternal" and geologists discuss "deep time". Both groups understand the importance of placing new observations in context of prior understanding: scientists diligently reference "the literature" while Christians quote "chapter and verse". And members of each group engage with each other in "fellowship" or "meetings" to create a sense of community and reinforce shared values. From my perspective, earth scientists can learn to communicate the importance and relevance of science more effectively by engaging with Christians in areas of common ground, rather than by trying to win arguments or debates.
Using Social Media to Identify Sources of Healthy Food in Urban Neighborhoods.
Gomez-Lopez, Iris N; Clarke, Philippa; Hill, Alex B; Romero, Daniel M; Goodspeed, Robert; Berrocal, Veronica J; Vinod Vydiswaran, V G; Veinot, Tiffany C
2017-06-01
An established body of research has used secondary data sources (such as proprietary business databases) to demonstrate the importance of the neighborhood food environment for multiple health outcomes. However, documenting food availability using secondary sources in low-income urban neighborhoods can be particularly challenging since small businesses play a crucial role in food availability. These small businesses are typically underrepresented in national databases, which rely on secondary sources to develop data for marketing purposes. Using social media and other crowdsourced data to account for these smaller businesses holds promise, but the quality of these data remains unknown. This paper compares the quality of full-line grocery store information from Yelp, a crowdsourced content service, to a "ground truth" data set (Detroit Food Map) and a commercially-available dataset (Reference USA) for the greater Detroit area. Results suggest that Yelp is more accurate than Reference USA in identifying healthy food stores in urban areas. Researchers investigating the relationship between the nutrition environment and health may consider Yelp as a reliable and valid source for identifying sources of healthy food in urban environments.
Zito, Felicia; De Bernardi, Elisabetta; Soffientini, Chiara; Canzi, Cristina; Casati, Rosangela; Gerundini, Paolo; Baselli, Giuseppe
2012-09-01
In recent years, segmentation algorithms and activity quantification methods have been proposed for oncological (18)F-fluorodeoxyglucose (FDG) PET. A full assessment of these algorithms, necessary for a clinical transfer, requires a validation on data sets provided with a reliable ground truth as to the imaged activity distribution, which must be as realistic as possible. The aim of this work is to propose a strategy to simulate lesions of uniform uptake and irregular shape in an anthropomorphic phantom, with the possibility to easily obtain a ground truth as to lesion activity and borders. Lesions were simulated with samples of clinoptilolite, a family of natural zeolites of irregular shape, able to absorb aqueous solutions of (18)F-FDG, available in a wide size range, and nontoxic. Zeolites were soaked in solutions of (18)F-FDG for increasing times up to 120 min and their absorptive properties were characterized as a function of soaking duration, solution concentration, and zeolite dry weight. Saturated zeolites were wrapped in Parafilm, positioned inside an Alderson thorax-abdomen phantom and imaged with a PET-CT scanner. The ground truth for the activity distribution of each zeolite was obtained by segmenting high-resolution, finely aligned CT images on the basis of independently obtained volume measurements. The fine alignment between CT and PET was validated by comparing the CT-derived ground truth to a set of PET threshold segmentations of the zeolites in terms of Dice index and volume error. The soaking time necessary to achieve saturation increases with zeolite dry weight, with a maximum of about 90 min for the largest sample. At saturation, a linear dependence of the uptake normalized to the solution concentration on zeolite dry weight (R(2) = 0.988), as well as a uniform distribution of the activity over the entire zeolite volume from PET imaging, were demonstrated. These findings indicate that the (18)F-FDG solution is able to saturate the zeolite pores and that the concentration does not influence the distribution uniformity of either solution or solute, at least at the trace concentrations used for zeolite activation. An additional proof of uniformity of zeolite saturation was obtained by observing a correspondence between uptake and adsorbed volume of solution, corresponding to about 27.8% of zeolite volume. As to the ground truth for zeolites positioned inside the phantom, the segmentation of finely aligned CT images provided reliable borders, as demonstrated by a mean absolute volume error of 2.8% with respect to the PET threshold segmentation corresponding to the maximum Dice index. The proposed methodology made it possible to obtain an experimental phantom data set that can be used to test and validate quantification and segmentation algorithms for PET in oncology. The phantom is currently under consideration for inclusion in a benchmark designed by AAPM TG211, which will be available to the community to evaluate PET automatic segmentation methods.
As-built design specification for PARCLS
NASA Technical Reports Server (NTRS)
Tompkins, M. A. (Principal Investigator)
1981-01-01
The PARCLS program, part of the CLASFYG package, reads a parameter file created by the CLASFYG program and a pure-pixel ground truth file in order to create a classification file of three separate crop categories in universal format.
NASA Technical Reports Server (NTRS)
Hart, W. G.; Ingle, S. J.; Davis, M. R.
1975-01-01
The detection of insect infestations and the density and distribution of host plants were studied using Skylab data, aerial photography and ground truth acquired simultaneously. Additional ground truth and aerial photography were acquired between Skylab passes. Three test areas were selected: area 1, of high-density citrus, was located northwest of Mission, Texas; area 2, 20 miles north of Weslaco, Texas, comprised irrigated pastures and brush-covered land; and area 3 covered the entire Lower Rio Grande Valley and adjacent areas of Mexico. A color composite picture of S-190A data showed patterns of vegetation on both sides of the Rio Grande River, clearly delineating the possible avenues of entry of pest insects from Mexico into the United States or from the United States into Mexico. Vegetation that could be identified with conventional color and color IR film included citrus, brush, sugarcane, alfalfa, and irrigated and unimproved pastures.
Modeling Being "Lost": Imperfect Situation Awareness
NASA Technical Reports Server (NTRS)
Middleton, Victor E.
2011-01-01
Being "lost" is an exemplar of imperfect Situation Awareness/Situation Understanding (SA/SU) -- information/knowledge that is uncertain, incomplete, and/or just wrong. Being "lost" may be a geo-spatial condition - not knowing/being wrong about where to go or how to get there. More broadly, being "lost" can serve as a metaphor for uncertainty and/or inaccuracy - not knowing/being wrong about how one fits into a larger world view, what one wants to do, or how to do it. This paper discusses using agent based modeling (ABM) to explore imperfect SA/SU, simulating geo-spatially "lost" intelligent agents trying to navigate in a virtual world. Each agent has a unique "mental map" -- its idiosyncratic view of its geo-spatial environment. Its decisions are based on this idiosyncratic view, but behavior outcomes are based on ground truth. Consequently, the rate and degree to which an agent's expectations diverge from ground truth provide measures of that agent's SA/SU.
NASA Technical Reports Server (NTRS)
Norikane, L.; Freeman, A.; Way, J.; Okonek, S.; Casey, R.
1992-01-01
Recent updates to a geographical information system (GIS) called VICAR (Video Image Communication and Retrieval)/IBIS are described. The system is designed to handle data from many different formats (vector, raster, tabular) and many different sources (models, radar images, ground truth surveys, optical images). All the data are referenced to a single georeference plane, and average or typical values for parameters defined within a polygonal region are stored in a tabular file, called an info file. The info file format allows tracking of data in time, maintenance of links between component data sets and the georeference image, conversion of pixel values to `actual' values (e.g., radar cross-section, luminance, temperature), graph plotting, data manipulation, generation of training vectors for classification algorithms, and comparison between actual measurements and model predictions (with ground truth data as input).
Estimation of vegetation cover at subpixel resolution using LANDSAT data
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Eagleson, Peter S.
1986-01-01
The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.
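The two-band idea of estimating percent cover from the limits of the data space can be illustrated with a one-dimensional, two-endmember linear mixture sketch: take bare-soil and full-canopy endmembers from robust extremes of the observed values and linearly invert each pixel. This is a simplified stand-in for the report's method (a single index rather than two bands; the function name and percentile choices are assumptions):

```python
def fractional_cover(values, lo_pct=2.0, hi_pct=98.0):
    """Estimate per-pixel fractional canopy cover from a single
    vegetation-sensitive band or index using a two-endmember linear
    mixture model. The bare-soil and full-canopy endmembers are taken
    from the limits of the data space (robust percentiles), so no
    ground truth is required. `values` is a flat list of pixel values."""
    ranked = sorted(values)
    n = len(ranked)
    soil = ranked[int(lo_pct / 100.0 * (n - 1))]  # bare-soil endmember
    veg = ranked[int(hi_pct / 100.0 * (n - 1))]   # full-canopy endmember
    span = veg - soil
    # Linear unmixing, clipped to the physical range [0, 1].
    return [min(1.0, max(0.0, (v - soil) / span)) for v in values]
```

Using percentiles rather than the literal minimum and maximum makes the endmember estimate less sensitive to outlier pixels such as shadows, which the report identifies as a confounding factor.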
NASA Technical Reports Server (NTRS)
Goldstein, H. W.; Bortner, M. H.; Grenda, R. N.; Dick, R.; Lebel, P. J.; Lamontagne, R. A.
1976-01-01
Two types of experiments were performed with a correlation interferometer on board a Bell Jet Ranger 206 helicopter. The first consisted of simultaneous ground- and air-truth measurements as the instrumented helicopter passed over the Cheverly site. The second consisted of several measurement flights in and around the national capital air quality control region (Washington, D.C.). The correlation interferometer data, the infrared Fourier spectrometer data, and the integrated altitude sampling data showed agreement within the errors of the individual measurements. High CO values in the D.C. flight data were found to be reproducible and concentrated in areas of stop-and-go traffic. It is concluded that pollutants at low altitudes are detectable from an airborne platform by remote correlation interferometry and that the correlation interferometer measurements agree with ground- and air-truth data.
Operation of agricultural test fields for study of stressed crops by remote sensing
NASA Technical Reports Server (NTRS)
Toler, R. W.
1974-01-01
A test site for the study of winter wheat development and the collection of ERTS data was established in September 1973. The test site is a 10-mile-square area located 12.5 miles west of Amarillo, Texas on Interstate Hwy. 40, in Randall and Potter counties. The center of the area is the Southwestern Great Plains Research Center at Bushland, Texas. Within the test area, all wheat fields were identified by ground truth and designated as irrigated or dryland. Fields in the test area other than wheat were identified as pasture or by the crop that was grown. A ground truth area of hard red winter wheat was established west of Hale Center, Texas. Maps showing the location of winter wheat fields in excess of 40 acres within a 10-mile radius were supplied to NASA. Satellite data were collected for this test site (ERTS-1).
Spectral reflectance measurements of plant soil combinations
NASA Technical Reports Server (NTRS)
Macleod, N. H.
1972-01-01
Field and laboratory observations of plant and soil reflectance spectra were made to develop an understanding of the reflectance of solar energy by plants and soils. A related objective is the isolation of factors contributing to the image formed by multispectral scanners and return beam vidicons carried by ERTS, or by film-filter combinations used in the field or on aircraft. A set of objective criteria is to be developed for identifying plant and soil types and their changing condition through the seasons, for the application of space imagery to resource management. This is because the global scale of Earth observation satellites requires objective rather than subjective techniques, particularly where ground truth is either not available or too costly to acquire. As acquiring ground truth for training sets may be impractical in many cases, attempts have been made to objectively identify standard responses that could be used for image interpretation.
Using remote sensing imagery to monitoring sea surface pollution cause by abandoned gold-copper mine
NASA Astrophysics Data System (ADS)
Kao, H. M.; Ren, H.; Lee, Y. T.
2010-08-01
The Chinkuashih Benshen mine was the largest gold-copper mine in Taiwan before it was abandoned in 1987. However, even though the mine has been closed, minerals still interact with rain and groundwater and flow into the sea. The polluted sea surface has appeared yellow, green and even white, and the pollutants have been carried by the coastal current. In this study, we used optical satellite images to monitor the sea surface. Several image processing algorithms are employed, especially the subpixel technique and a linear mixture model, to estimate the concentration of pollutants. A change detection approach is also applied to track the pollutants. We also conducted chemical analysis of the polluted water to provide ground-truth validation. Through correlation analysis between the satellite observations and the ground-truth chemical analysis, an effective approach to monitoring water pollution could be established.
Vehicle detection and orientation estimation using the radon transform
NASA Astrophysics Data System (ADS)
Pelapur, Rengarajan; Bunyak, Filiz; Palaniappan, Kannappan; Seetharaman, Gunasekaran
2013-05-01
Determining the location and orientation of vehicles in satellite and airborne imagery is a challenging task given the density of cars and other vehicles and complexity of the environment in urban scenes almost anywhere in the world. We have developed a robust and accurate method for detecting vehicles using a template-based directional chamfer matching, combined with vehicle orientation estimation based on a refined segmentation, followed by a Radon transform based profile variance peak analysis approach. The same algorithm was applied to both high resolution satellite imagery and wide area aerial imagery and initial results show robustness to illumination changes and geometric appearance distortions. Nearly 80% of the orientation angle estimates for 1585 vehicles across both satellite and aerial imagery were accurate to within 15° of the ground truth. In the case of satellite imagery alone, nearly 90% of the objects have an estimated error within +/-1.0° of the ground truth.
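The profile-variance-peak idea behind the orientation step can be illustrated with a numpy-only sketch: project the pixels of a segmented blob onto a range of directions (a discrete Radon-style projection) and pick the angle whose projection profile has peak variance. This is a toy stand-in for the paper's refined pipeline; the bin count and angular step are assumptions:

```python
import numpy as np

def long_axis_angle(mask, angles=np.arange(0.0, 180.0, 1.0)):
    """Estimate a blob's long-axis orientation from Radon-style projections:
    project pixel coordinates onto each direction, histogram the projection,
    and find the angle whose profile variance peaks.  The profile is sharpest
    when projecting across the long axis, so the long axis is 90 deg away."""
    ys, xs = np.nonzero(mask)
    variances = []
    for theta in np.deg2rad(angles):
        s = xs * np.cos(theta) + ys * np.sin(theta)   # 1-D projection
        hist, _ = np.histogram(s, bins=32)
        variances.append(hist.var())
    peak = angles[int(np.argmax(variances))]
    return (peak - 90.0) % 180.0

img = np.zeros((64, 64))
img[28:36, 10:54] = 1     # synthetic "vehicle": long axis at 0 degrees
print(long_axis_angle(img))
```

On this synthetic rectangle the estimate lands at or very near 0 degrees, the true long-axis orientation.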
NASA Astrophysics Data System (ADS)
Marzahn, P.; Ludwig, R.
2016-06-01
In this paper the potential of multi-parametric polarimetric SAR (PolSAR) data for soil surface roughness estimation is investigated and its potential for hydrological modeling is evaluated. The study utilizes microwave backscatter collected over the Demmin test site in north-east Germany during the AgriSAR 2006 campaign using fully polarimetric L-band airborne SAR data. For ground truthing, extensive soil surface roughness measurements, in addition to various other soil physical properties, were carried out using photogrammetric image matching techniques. The correlation between ground truth roughness indices and three well-established polarimetric roughness estimators showed good results only for Re[ρRRLL] and the RMS height s. Multitemporal roughness maps showed only partially satisfying results, because the presence and development of particular plants affected the derivation. However, roughness derivation for bare soil surfaces showed promising results.
Application of selected methods of remote sensing for detecting carbonaceous water pollution
NASA Technical Reports Server (NTRS)
Davis, E. M.; Fosbury, W. J.
1973-01-01
A reach of the Houston Ship Channel was investigated during three separate overflights correlated with ground truth sampling on the Channel. Samples were analyzed for such conventional parameters as biochemical oxygen demand, chemical oxygen demand, total organic carbon, total inorganic carbon, turbidity, chlorophyll, pH, temperature, dissolved oxygen, and light penetration. Infrared analyses conducted on each sample included reflectance ATR analysis, carbon tetrachloride extraction of organics and subsequent scanning, and KBr evaporate analysis of CCl4 extract concentrate. Imagery which was correlated with field and laboratory data developed from ground truth sampling included that obtained from aerial KA62 hardware, RC-8 metric camera systems, and the RS-14 infrared scanner. The images were subjected to analysis by three film density gradient interpretation units. Data were then analyzed for correlations between imagery interpretation as derived from the three instruments and laboratory infrared signatures and other pertinent field and laboratory analyses.
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Richardson, W.; Pentland, A. P.
1976-01-01
The author has identified the following significant results. Fourteen different classification algorithms were tested for their ability to estimate the proportion of wheat in an area. For some algorithms, accuracy of classification in field centers was observed. The data base consisted of ground truth and LANDSAT data from 55 sections (1 x 1 mile) from five LACIE intensive test sites in Kansas and Texas. Signatures obtained from training fields selected at random from the ground truth were generally representative of the data distribution patterns. LIMMIX, an algorithm that chooses a pure signature when the data point is close enough to a signature mean and otherwise chooses the best mixture of a pair of signatures, reduced the average absolute error to 6.1% and the bias to 1.0%. QRULE run with a null test achieved a similar reduction.
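The LIMMIX decision rule described above (choose a pure signature when a pixel is close enough to a class mean, otherwise choose the best two-class mixture) can be sketched as follows. The signatures, threshold, and distance measure are illustrative, not the LACIE implementation:

```python
import numpy as np

def limmix(x, means, threshold):
    """Toy LIMMIX: if pixel x is within `threshold` of a pure class mean,
    label it pure; otherwise find the mixture alpha*m_i + (1-alpha)*m_j
    that best explains it and report the class proportions."""
    d = np.linalg.norm(means - x, axis=1)
    k = int(np.argmin(d))
    if d[k] <= threshold:                       # pure-pixel branch
        props = np.zeros(len(means)); props[k] = 1.0
        return props
    best_resid, best_props = np.inf, None
    for i in range(len(means)):                 # mixture branch: best pair
        for j in range(i + 1, len(means)):
            diff = means[i] - means[j]
            alpha = np.clip(np.dot(x - means[j], diff) / np.dot(diff, diff), 0, 1)
            resid = np.linalg.norm(alpha * means[i] + (1 - alpha) * means[j] - x)
            if resid < best_resid:
                props = np.zeros(len(means)); props[i] = alpha; props[j] = 1 - alpha
                best_resid, best_props = resid, props
    return best_props

means = np.array([[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])  # e.g. wheat, other, soil
x = 0.6 * means[0] + 0.4 * means[1]                      # a mixed pixel
print(limmix(x, means, threshold=0.1))
```

Summing the per-pixel proportions over an area gives the area's wheat-proportion estimate.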
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Paramagul, Chintana
2004-05-01
Automated registration of multiple mammograms for CAD depends on accurate nipple identification. We developed two new image analysis techniques based on geometric and texture convergence analyses to improve the performance of our previously developed nipple identification method. A gradient-based algorithm is used to automatically track the breast boundary. The nipple search region along the boundary is then defined by geometric convergence analysis of the breast shape. Three nipple candidates are identified by detecting the changes along the gray level profiles inside and outside the boundary and the changes in the boundary direction. A texture orientation-field analysis method is developed to estimate the fourth nipple candidate based on the convergence of the tissue texture pattern towards the nipple. The final nipple location is determined from the four nipple candidates by a confidence analysis. Our training and test data sets consisted of 419 and 368 randomly selected mammograms, respectively. The nipple location identified on each image by an experienced radiologist was used as the ground truth. For 118 of the training and 70 of the test images, the radiologist could not positively identify the nipple, but provided an estimate of its location. These were referred to as invisible nipple images. In the training data set, 89.37% (269/301) of the visible nipples and 81.36% (96/118) of the invisible nipples could be detected within 1 cm of the truth. In the test data set, 92.28% (275/298) of the visible nipples and 67.14% (47/70) of the invisible nipples were identified within 1 cm of the truth. In comparison, our previous nipple identification method without using the two convergence analysis techniques detected 82.39% (248/301), 77.12% (91/118), 89.93% (268/298) and 54.29% (38/70) of the nipples within 1 cm of the truth for the visible and invisible nipples in the training and test sets, respectively. 
The results indicate that the nipple on mammograms can be detected accurately. This will be an important step towards automatic multiple image analysis for CAD techniques.
Adopting a constructivist approach to grounded theory: implications for research design.
Mills, Jane; Bonner, Ann; Francis, Karen
2006-02-01
Grounded theory is a popular research methodology that is evolving to account for a range of ontological and epistemological underpinnings. Constructivist grounded theory has its foundations in relativism and an appreciation of the multiple truths and realities of subjectivism. Undertaking a constructivist enquiry requires the adoption of a position of mutuality between researcher and participant in the research process, which necessitates a rethinking of the grounded theorist's traditional role of objective observer. Key issues for constructivist grounded theorists to consider in designing their research studies are discussed in relation to developing a partnership with participants that enables a mutual construction of meaning during interviews and a meaningful reconstruction of their stories into a grounded theory model.
Ground-Truthing a Next Generation Snow Radar
NASA Astrophysics Data System (ADS)
Yan, S.; Brozena, J. M.; Gogineni, P. S.; Abelev, A.; Gardner, J. M.; Ball, D.; Liang, R.; Newman, T.
2016-12-01
During the early spring of 2016 the Naval Research Laboratory (NRL) performed a test of a next generation airborne snow radar over ground truth data collected on several areas of fast ice near Barrow, AK. The radar was developed by the Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas, and includes several improvements compared to their previous snow radar. The new unit combines the earlier Ku-band and snow radars into a single unit with an operating frequency spanning the entire 2-18 GHz range, an enormous bandwidth which provides the possibility of snow depth measurements with 1.5 cm range resolution. Additionally, the radar transmits on dual polarizations (H and V), and receives the signal through two orthogonally polarized Vivaldi arrays, each with 128 phase centers. The 8 sets of along-track phase centers are combined in hardware to improve SNR and narrow the beamwidth in the along-track, resulting in 8 cross-track effective phase centers which are separately digitized to allow for beam sharpening and forming in post-processing. Tilting the receive arrays 30 degrees from the horizontal also allows the formation of SAR images and the potential for estimating snow-water equivalent (SWE). Ground truth data (snow depth, density, salinity and SWE) were collected over several 60 m wide swaths that were subsequently overflown with the snow radar mounted on a Twin Otter. The radar could be operated in nadir (by beam steering the receive antennas to point beneath the aircraft) or side-looking modes. Results from the comparisons will be shown.
Automated microwave ablation therapy planning with single and multiple entry points
NASA Astrophysics Data System (ADS)
Liu, Sheena X.; Dalal, Sandeep; Kruecker, Jochen
2012-02-01
Microwave ablation (MWA) has become a recommended treatment modality for interventional cancer treatment. Compared with radiofrequency ablation (RFA), MWA provides more rapid and larger-volume tissue heating. It allows simultaneous ablation from different entry points and allows users to change the ablation size by controlling the power/time parameters. Ablation planning systems have been proposed in the past, mainly addressing the needs for RFA procedures. Thus a planning system addressing MWA-specific parameters and workflows is highly desirable to help physicians achieve better microwave ablation results. In this paper, we design and implement an automated MWA planning system that provides precise probe locations for complete coverage of tumor and margin. We model the thermal ablation lesion as an ellipsoidal object with three known radii varying with the duration of the ablation and the power supplied to the probe. The search for the best ablation coverage can be seen as an iterative optimization problem. The ablation centers are steered toward the location which minimizes both un-ablated tumor tissue and the collateral damage caused to the healthy tissue. We assess the performance of our algorithm using simulated lesions with known "ground truth" optimal coverage. The Mean Localization Error (MLE) between the computed ablation center in 3D and the ground truth ablation center achieves 1.75 mm (Standard deviation of the mean (STD): 0.69 mm). The Mean Radial Error (MRE) which is estimated by comparing the computed ablation radii with the ground truth radii reaches 0.64 mm (STD: 0.43 mm). These preliminary results demonstrate the accuracy and robustness of the described planning algorithm.
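The coverage objective described above (minimize un-ablated tumor and collateral damage to healthy tissue) can be illustrated with a toy voxel cost function for one candidate ellipsoid. The weight, grid model, and parameter names are assumptions; a real planner would iterate the center, power, and time to minimize this cost:

```python
import numpy as np

def ablation_objective(tumor_mask, center, radii, spacing=1.0, w_healthy=0.25):
    """Illustrative cost for one ellipsoidal ablation: weighted sum of
    un-ablated tumor voxels and ablated healthy voxels."""
    zz, yy, xx = np.indices(tumor_mask.shape)
    coords = np.stack([zz, yy, xx], axis=-1) * spacing
    # Inside-ellipsoid test: sum((p - c)^2 / r^2) <= 1
    q = (((coords - center) / radii) ** 2).sum(axis=-1)
    ablated = q <= 1.0
    missed_tumor = np.logical_and(tumor_mask, ~ablated).sum()
    collateral = np.logical_and(~tumor_mask, ablated).sum()
    return missed_tumor + w_healthy * collateral

# Spherical "tumor" of radius 5 centered in a 32^3 voxel grid
zz, yy, xx = np.indices((32, 32, 32))
tumor = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 25
on_target = ablation_objective(tumor, np.array([16.0, 16, 16]), np.array([6.0, 6, 6]))
off_target = ablation_objective(tumor, np.array([16.0, 16, 10]), np.array([6.0, 6, 6]))
print(on_target, off_target)
```

Steering the center to reduce this cost reproduces the paper's "steered toward the location which minimizes" behavior in miniature: the on-target placement scores strictly better than the shifted one.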
Isolated Effect of Geometry on Mitral Valve Function for In-Silico Model Development
Siefert, Andrew William; Rabbah, Jean-Pierre Michel; Saikrishnan, Neelakantan; Kunzelman, Karyn Susanne; Yoganathan, Ajit Prithivaraj
2013-01-01
Computational models for the heart’s mitral valve (MV) exhibit several uncertainties which may be reduced by further developing these models using ground-truth data sets. The present study generated a ground-truth data set by quantifying the effects of isolated mitral annular flattening, symmetric annular dilatation, symmetric papillary muscle displacement, and asymmetric papillary muscle displacement on leaflet coaptation, mitral regurgitation (MR), and anterior leaflet strain. MVs were mounted in an in vitro left heart simulator and tested under pulsatile hemodynamics. Mitral leaflet coaptation length, coaptation depth, tenting area, MR volume, MR jet direction, and anterior leaflet strain in the radial and circumferential directions were successfully quantified for increasing levels of geometric distortion. From these data, increasing levels of isolated papillary muscle displacement resulted in the greatest mean change in coaptation depth (70% increase), tenting area (150% increase), and radial leaflet strain (37% increase) while annular dilatation resulted in the largest mean change in coaptation length (50% decrease) and regurgitation volume (134% increase). Regurgitant jets were centrally located for symmetric annular dilatation and symmetric papillary muscle displacement. Asymmetric papillary muscle displacement resulted in asymmetrically directed jets. Peak changes in anterior leaflet strain in the circumferential direction were smaller and exhibited non-significant differences across the tested conditions. When used together, this ground-truth data may be used to parametrically evaluate and develop modeling assumptions for both the MV leaflets and subvalvular apparatus. This novel data may improve MV computational models and provide a platform for the development of future surgical planning tools. PMID:24059354
Parks, Connie L; Monson, Keith L
2018-05-01
This study employed an automated facial recognition system as a means of objectively evaluating biometric correspondence between a ReFace facial approximation and the computed tomography (CT) derived ground truth skin surface of the same individual. High rates of biometric correspondence were observed, irrespective of rank class (Rk) or demographic cohort examined. Overall, 48% of the test subjects' ReFace approximation probes (n=96) were matched to his or her corresponding ground truth skin surface image at R1, a rank indicating a high degree of biometric correspondence and a potential positive identification. Identification rates improved with each successively broader rank class (R10=85%, R25=96%, and R50=99%), with 100% identification by R57. A sharp increase (39% mean increase) in identification rates was observed between R1 and R10 across most rank classes and demographic cohorts. In contrast, significantly lower (p<0.01) increases in identification rates were observed between R10 and R25 (8% mean increase) and R25 and R50 (3% mean increase). No significant (p>0.05) performance differences were observed across demographic cohorts or CT scan protocols. Performance measures observed in this research suggest that ReFace approximations are biometrically similar to the actual faces of the approximated individuals and, therefore, may have potential operational utility in contexts in which computerized approximations are utilized as probes in automated facial recognition systems. Copyright © 2018. Published by Elsevier B.V.
Building confidence and credibility into CAD with belief decision trees
NASA Astrophysics Data System (ADS)
Affenit, Rachael N.; Barns, Erik R.; Furst, Jacob D.; Rasin, Alexander; Raicu, Daniela S.
2017-03-01
Creating classifiers for computer-aided diagnosis in the absence of ground truth is a challenging problem. Using experts' opinions as reference truth is difficult because the variability in the experts' interpretations introduces uncertainty in the labeled diagnostic data. This uncertainty translates into noise, which can significantly affect the performance of any classifier on test data. To address this problem, we propose a new label set weighting approach to combine the experts' interpretations and their variability, as well as a selective iterative classification (SIC) approach that is based on conformal prediction. Using the NIH/NCI Lung Image Database Consortium (LIDC) dataset in which four radiologists interpreted the lung nodule characteristics, including the degree of malignancy, we illustrate the benefits of the proposed approach. Our results show that the proposed 2-label-weighted approach significantly outperforms the accuracy of the original 5- label and 2-label-unweighted classification approaches by 39.9% and 7.6%, respectively. We also found that the weighted 2-label models produce higher skewness values by 1.05 and 0.61 for non-SIC and SIC respectively on root mean square error (RMSE) distributions. When each approach was combined with selective iterative classification, this further improved the accuracy of classification for the 2-weighted-label by 7.5% over the original, and improved the skewness of the 5-label and 2-unweighted-label by 0.22 and 0.44 respectively.
Evaluation of dual-loop data accuracy using video ground truth data
DOT National Transportation Integrated Search
2002-01-01
Washington State Department of Transportation (WSDOT) initiated a research project entitled Monitoring Freight on Puget Sound Freeways in September 1999. Dual-loop data from the Seattle area freeway system were selected as the main data s...
NASA Astrophysics Data System (ADS)
Ben-Zikri, Yehuda Kfir; Linte, Cristian A.
2016-03-01
Region of interest detection is a precursor to many medical image processing and analysis applications, including segmentation, registration and other image manipulation techniques. The optimal region of interest is often selected manually, based on empirical knowledge and features of the image dataset. However, if inconsistently identified, the selected region of interest may greatly affect the subsequent image analysis or interpretation steps, in turn leading to incomplete assessment during computer-aided diagnosis or incomplete visualization or identification of the surgical targets, if employed in the context of pre-procedural planning or image-guided interventions. Therefore, the need for robust, accurate and computationally efficient region of interest localization techniques is prevalent in many modern computer-assisted diagnosis and therapy applications. Here we propose a fully automated, robust, a priori learning-based approach that provides reliable estimates of the left and right ventricle features from cine cardiac MR images. The proposed approach leverages the temporal frame-to-frame motion extracted across a range of short axis left ventricle slice images, with a small training set generated from less than 10% of the population. The approach is based on histogram of oriented gradients features, weighted by local intensities, to first identify an initial region of interest depicting the left and right ventricles that exhibits the greatest extent of cardiac motion. This region is correlated with the homologous region of the training dataset that best matches the test image, using feature vector correlation techniques. Lastly, the optimal left ventricle region of interest of the test image is identified based on the correlation of known ground truth segmentations associated with the training dataset deemed closest to the test image.
The proposed approach was tested on a population of 100 patient datasets and was validated against the ground truth region of interest of the test images manually annotated by experts. The tool successfully identified a mask around the LV and RV and, furthermore, the minimal region of interest around the LV that fully enclosed the left ventricle in all testing datasets, yielding a 98% overlap with the corresponding ground truth. The mean absolute distance error between the two contours, normalized by the radius of the ground truth, was 0.20 +/- 0.09.
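The contour-distance validation can be sketched with a symmetric mean absolute distance between point-sampled contours, normalized by the ground-truth radius. This is a common definition; the paper's exact formula is an assumption here:

```python
import numpy as np

def normalized_mad(contour_a, contour_b, radius):
    """Symmetric mean absolute distance between two closed contours
    (each an (N, 2) point array), normalized by a reference radius."""
    d = np.linalg.norm(contour_a[:, None, :] - contour_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1).mean()   # mean nearest-point distance, a -> b
    b_to_a = d.min(axis=0).mean()   # and b -> a
    return 0.5 * (a_to_b + b_to_a) / radius

t = np.linspace(0, 2 * np.pi, 180, endpoint=False)
gt = np.stack([10 * np.cos(t), 10 * np.sin(t)], axis=1)    # ground truth, radius 10
est = np.stack([11 * np.cos(t), 11 * np.sin(t)], axis=1)   # estimate, radius 11
print(normalized_mad(est, gt, radius=10.0))   # 0.1: a 1-unit error on a radius of 10
```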
A Technique for Generating Volumetric Cine-Magnetic Resonance Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Wendy; Ren, Lei, E-mail: lei.ren@duke.edu; Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina
Purpose: The purpose of this study was to develop a technique to generate on-board volumetric cine-magnetic resonance imaging (VC-MRI) using patient prior images, motion modeling, and on-board 2-dimensional cine MRI. Methods and Materials: One phase of a 4-dimensional MRI acquired during patient simulation is used as patient prior images. Three major respiratory deformation patterns of the patient are extracted from 4-dimensional MRI based on principal-component analysis. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI. The deformation field is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by the data fidelity constraint using the acquired on-board single 2-dimensional cine MRI. The method was evaluated using both digital extended-cardiac torso (XCAT) simulation of lung cancer patients and MRI data from 4 real liver cancer patients. The accuracy of the estimated VC-MRI was quantitatively evaluated using volume-percent-difference (VPD), center-of-mass-shift (COMS), and target tracking errors. Effects of acquisition orientation, region-of-interest (ROI) selection, patient breathing pattern change, and noise on the estimation accuracy were also evaluated. Results: Image subtraction of ground-truth with estimated on-board VC-MRI shows fewer differences than image subtraction of ground-truth with prior image. Agreement between normalized profiles in the estimated and ground-truth VC-MRI was achieved with less than 6% error for both XCAT and patient data. Among all XCAT scenarios, the VPD between ground-truth and estimated lesion volumes was, on average, 8.43 ± 1.52% and the COMS was, on average, 0.93 ± 0.58 mm across all time steps for estimation based on the ROI region in the sagittal cine images. Matching to the ROI in the sagittal view achieved better accuracy when there was substantial breathing pattern change.
The technique was robust against noise levels up to SNR = 20. For patient data, average tracking errors were less than 2 mm in all directions for all patients. Conclusions: Preliminary studies demonstrated the feasibility of generating real-time VC-MRI for on-board localization of moving targets in radiation therapy.
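The VPD and COMS metrics used in this evaluation can be sketched for binary lesion masks. These are the standard definitions; the paper's implementation details (e.g. interpolation, voxel spacing handling) are assumptions:

```python
import numpy as np

def vpd_and_coms(gt_mask, est_mask, spacing=(1.0, 1.0, 1.0)):
    """Volume-percent difference and center-of-mass shift between a
    ground-truth lesion mask and an estimated one."""
    v_gt, v_est = gt_mask.sum(), est_mask.sum()
    vpd = 100.0 * abs(int(v_est) - int(v_gt)) / v_gt
    sp = np.asarray(spacing)
    com_gt = np.mean(np.argwhere(gt_mask), axis=0) * sp
    com_est = np.mean(np.argwhere(est_mask), axis=0) * sp
    coms = np.linalg.norm(com_est - com_gt)
    return vpd, coms

gt = np.zeros((20, 20, 20), bool); gt[5:15, 5:15, 5:15] = True    # 1000 voxels
est = np.zeros((20, 20, 20), bool); est[6:16, 5:15, 5:15] = True  # shifted 1 voxel
vpd, coms = vpd_and_coms(gt, est)
print(vpd, coms)   # same volume, so VPD = 0; COMS = 1 voxel
```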
VAS demonstration: (VISSR Atmospheric Sounder) description
NASA Technical Reports Server (NTRS)
Montgomery, H. E.; Uccellini, L. W.
1985-01-01
The VAS Demonstration (VISSR Atmospheric Sounder) is a project designed to evaluate the VAS instrument as a remote sensor of the Earth's atmosphere and surface. This report describes the instrument and ground processing system, the instrument performance, and the validation as a temperature and moisture profiler compared with ground truth and other satellites, and assesses its performance as a valuable meteorological tool. The report also addresses the availability of data for scientific research.
Comparing distinct ground-based lightning location networks covering the Netherlands
NASA Astrophysics Data System (ADS)
de Vos, Lotte; Leijnse, Hidde; Schmeits, Maurice; Beekhuis, Hans; Poelman, Dieter; Evers, Läslo; Smets, Pieter
2015-04-01
Lightning can be detected using a ground-based sensor network. The Royal Netherlands Meteorological Institute (KNMI) monitors lightning activity in the Netherlands with the so-called FLITS-system; a network combining SAFIR-type sensors. This makes use of Very High Frequency (VHF) as well as Low Frequency (LF) sensors. KNMI has recently decided to replace FLITS by data from a sub-continental network operated by Météorage which makes use of LF sensors only (KNMI Lightning Detection Network, or KLDN). KLDN is compared to the FLITS system, as well as Met Office's long-range Arrival Time Difference (ATDnet), which measures Very Low Frequency (VLF). Special focus lies on the ability to detect Cloud to Ground (CG) and Cloud to Cloud (CC) lightning in the Netherlands. Relative detection efficiency of individual flashes and lightning activity in a more general sense are calculated over a period of almost 5 years. Additionally, the detection efficiency of each system is compared to a ground-truth that is constructed from flashes that are detected by both of the other datasets. Finally, infrasound data is used as a fourth lightning data source for several case studies. Relative performance is found to vary strongly with location and time. As expected, it is found that FLITS detects significantly more CC lightning (because of the strong aptitude of VHF antennas to detect CC), though KLDN and ATDnet detect more CG lightning. We analyze statistics computed over the entire 5-year period, where we look at CG as well as total lightning (CC and CG combined). Statistics that are considered are the Probability of Detection (POD) and the so-called Lightning Activity Detection (LAD). POD is defined as the percentage of reference flashes the system detects compared to the total detections in the reference. 
LAD is defined as the fraction of predefined area boxes and time periods in which the system records one or more flashes, given that the reference detects at least one flash there, compared to the total recordings in the reference dataset. The reference for these statistics is taken to be either another dataset, or a dataset consisting of flashes detected by two datasets. Evaluation of extreme thunderstorm cases shows that the weather alert criterion for severe thunderstorms is reached by FLITS when this is not the case for KLDN and ATDnet, suggesting the need for KNMI to modify that weather alert criterion when using KLDN.
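The POD statistic described above can be sketched for flash lists with a simple space-time matching gate. The gate thresholds and the matching rule are assumptions, not the study's criteria:

```python
def probability_of_detection(system_flashes, reference_flashes, max_dt, max_dist):
    """POD: fraction of reference flashes that the system under test also
    detects, matched here by a time window and a distance radius.
    Flashes are (t, x, y) tuples (seconds, km, km in this toy example)."""
    hits = 0
    for t_r, x_r, y_r in reference_flashes:
        if any(abs(t_s - t_r) <= max_dt and
               ((x_s - x_r) ** 2 + (y_s - y_r) ** 2) ** 0.5 <= max_dist
               for t_s, x_s, y_s in system_flashes):
            hits += 1
    return hits / len(reference_flashes)

ref = [(0.0, 0, 0), (1.0, 5, 5), (2.0, 9, 9)]
sys_ = [(0.01, 0.2, 0.1), (2.05, 9.3, 8.8)]   # misses the second reference flash
print(probability_of_detection(sys_, ref, max_dt=0.1, max_dist=1.0))
```

Building the ground truth from flashes detected by both of the other two networks, as the abstract describes, simply means the reference list is the matched intersection of two such datasets.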
Evaluation of LANDSAT-4 TM and MSS ground geometry performance without ground control
NASA Technical Reports Server (NTRS)
Bryant, N. A.; Zobrist, A.
1983-01-01
LANDSAT thematic mapper P-data of Washington, D.C., Harrisburg, PA, and Salton Sea, CA were analyzed to determine magnitudes and causes of error in the geometric conformity of the data to known earth-surface geometry. Several tests of data geometry were performed. Intra-band and inter-band correlation and registration were investigated, exclusive of map-based ground truth. Specifically, the magnitudes and statistical trends of pixel offsets between a single band's mirror scans (due to processing procedures) were computed, and the inter-band integrity of registration was analyzed.
A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.
Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F
2012-09-01
Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.
Remote sensing of chlorophyll and temperature in marine and fresh waters.
NASA Technical Reports Server (NTRS)
Arvesen, J. C.; Millard, J. P.; Weaver, E. C.
1973-01-01
An airborne differential radiometer was demonstrated to be a sensitive, real-time detector of surface chlorophyll content in water bodies. The instrument continuously measures the difference in radiance between two wavelength bands, one centered near the maximum of the blue chlorophyll a absorption region and the other at a reference wavelength outside this region. Flights were made over fresh water lakes, marine waters, and an estuary, and the results were compared with 'ground truth' measurements of chlorophyll concentration. A correlation between output signal of the differential radiometer and the chlorophyll concentration was obtained. Examples of flight data are illustrated. Simultaneous airborne measurements of chlorophyll content and water temperature revealed that variations in chlorophyll are often associated with changes in temperature. Thus, simultaneous sensing of chlorophyll and temperature provides useful information for studies of marine food production, water pollution, and physical processes such as upwelling.
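The differential-radiometer principle can be sketched numerically: subtract the radiance in the band near the blue chlorophyll-a absorption maximum from the reference band, then correlate that signal with ground-truth chlorophyll. All values below are synthetic and illustrative:

```python
import numpy as np

def differential_signal(radiance_blue, radiance_ref):
    """The instrument's output: radiance difference between a band near the
    blue chlorophyll-a absorption maximum and a reference band outside it.
    Absorption lowers the blue band, so the difference grows with chlorophyll."""
    return radiance_ref - radiance_blue

# Synthetic flight transect: higher chlorophyll -> stronger blue absorption
chlorophyll = np.array([0.5, 1.0, 2.0, 4.0, 8.0])               # mg/m^3 (illustrative)
blue = 1.0 - 0.05 * chlorophyll + np.array([0.01, -0.01, 0.0, 0.01, -0.01])
ref = np.ones(5)
signal = differential_signal(blue, ref)
r = np.corrcoef(signal, chlorophyll)[0, 1]
print(r)   # strong positive correlation with the ground-truth measurements
```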
NASA Astrophysics Data System (ADS)
Kamangir, H.; Momeni, M.; Satari, M.
2017-09-01
This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. The present paper addresses the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles, and the shadows of buildings or trees. In order to achieve a precise road extraction, this method implements three stages: classification of images based on a maximum likelihood algorithm to categorize images into the classes of interest; modification of the classified images by connected-component and morphological operators to extract the pixels of desired objects by removing undesirable pixels of each class; and finally line extraction based on the RANSAC algorithm. In order to evaluate the performance of the proposed method, the generated results are compared with a ground truth road map as a reference. The evaluation of the proposed method on representative test images shows completeness values ranging between 77% and 93%.
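The completeness measure used in this kind of evaluation can be sketched for rasterized road masks: the fraction of reference road pixels lying within a small buffer of the extracted centerlines. The buffer width and the rasterized formulation are assumptions:

```python
import numpy as np

def completeness(reference_mask, extracted_mask, buffer_px=2):
    """Road-extraction completeness: fraction of reference road pixels that
    lie within `buffer_px` of an extracted centerline pixel."""
    ry, rx = np.nonzero(reference_mask)
    ey, ex = np.nonzero(extracted_mask)
    if len(ry) == 0:
        return 1.0
    # Squared distance from every reference pixel to every extracted pixel
    d2 = (ry[:, None] - ey[None, :]) ** 2 + (rx[:, None] - ex[None, :]) ** 2
    matched = (d2.min(axis=1) <= buffer_px ** 2).sum()
    return matched / len(ry)

ref = np.zeros((32, 32), bool); ref[16, :] = True        # ground-truth centerline
ext = np.zeros((32, 32), bool); ext[17, :24] = True      # offset, partial extraction
print(completeness(ref, ext))
```

A correctness measure would be the mirror image: the fraction of extracted pixels within the buffer of the reference.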
ConfocalGN: A minimalistic confocal image generator
NASA Astrophysics Data System (ADS)
Dmitrieff, Serge; Nédélec, François
Validating image analysis pipelines and training machine-learning segmentation algorithms require images with known features. Synthetic images can be used for this purpose, with the advantage that large reference sets can be produced easily. It is however essential to obtain images that are as realistic as possible in terms of noise and resolution, which is challenging in the field of microscopy. We describe ConfocalGN, a user-friendly software that can generate synthetic microscopy stacks from a ground truth (i.e. the observed object) specified as a 3D bitmap or a list of fluorophore coordinates. This software can analyze a real microscope image stack to set the noise parameters and directly generate new images of the object with noise characteristics similar to those of the sample image. With a minimal input from the user and a modular architecture, ConfocalGN is easily integrated with existing image analysis solutions.
The clinical thinking of Bion and the art of the Zen garden (Ryoan-ji).
Bucca, Maurizio
2007-01-01
It is my intention to suggest that there are several affinities between Bion's thought, in particular with such concepts as truth, absolute truth (O), the contact with O (at-one-ment), and the transformation into O, the act of faith, hallucinosis and illumination, and certain formulations referring to Zen Buddhism. These concepts can find, in turn, correspondences in the salient characteristics of the rock garden at Kyoto's Ryoan-ji temple.
Composition and assembly of a spectral and agronomic data base for 1980 spring small grain segments
NASA Technical Reports Server (NTRS)
Helmer, D.; Krantz, J.; Kinsler, M.; Tomkins, M.
1983-01-01
A data set was assembled which consolidates the LANDSAT spectral data, ground truth observation data, and analyst cloud screening data for 28 spring small grain segments collected during the 1980 crop year.
Improved Gridded Aerosol Data for India
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gueymard, C.; Sengupta, M.
Using point data from ground sites in and around India equipped with multiwavelength sunphotometers, as well as gridded data from space measurements or from existing aerosol climatologies, an improved gridded database providing the monthly aerosol optical depth at 550 nm (AOD550) and Angstrom exponent (AE) over India is produced. Data from 83 sunphotometer sites are used here as ground truth to calibrate, optimally combine, and validate monthly gridded data during the period from 2000 to 2012.
Wang, Zhipeng; Wang, Shujing; Zhu, Yanbo; Xin, Pumin
2017-10-11
Ionospheric delay is one of the largest and most variable sources of error for Ground-Based Augmentation System (GBAS) users because ionospheric activity is unpredictable. Under normal conditions, GBAS eliminates ionospheric delays, but during extreme ionospheric storms, GBAS users and GBAS ground facilities may experience different ionospheric delays, leading to considerable differential errors and threatening the safety of users. Therefore, ionospheric monitoring and assessment are important parts of GBAS integrity monitoring. To study the effects of the ionosphere on the GBAS of Guangdong Province, China, GPS data collected from 65 reference stations were processed using the improved "Simple Truth" algorithm. In addition, the ionospheric characteristics of Guangdong Province were calculated and an ionospheric threat model was established. Finally, we evaluated the influence of the standard deviation and maximum value of the ionospheric gradient on GBAS. The results show that, under normal ionospheric conditions, the vertical protection level of GBAS was increased by 0.8 m for the largest overbound σ_vig (the standard deviation of the vertical ionospheric gradient), and under maximum ionospheric gradient conditions, the differential correction error may reach 5 m. From an airworthiness perspective, when the satellite is at a low elevation, this interference does not cause airworthiness risks, but when the satellite is at a high elevation, this interference can cause airworthiness risks.
Modeling and Simulation in Support of Testing and Evaluation
1997-03-01
contains standardized automated test methodology, synthetic stimuli and environments based on TECOM Ground Truth data and physics. The VPG is a distributed... Systems Acquisition Management (FSAM) coursebook, Defense Systems Management College, January 1994. Crocker, Charles M. "Application of the Simulation
Ground Truthing the 'Conventional Wisdom' of Lead Corrosion Control Using Mineralogical Analysis
For drinking water distribution systems (DWDS) with lead-bearing plumbing materials some form of corrosion control is typically necessary, with the goal of mitigating lead release by forming adherent, stable corrosion scales composed of low-solubility mineral phases. Conventional...
A hybrid approach to estimate the complex motions of clouds in sky images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Zhenzhou; Yu, Dantong; Huang, Dong
Tracking the motion of clouds is essential to forecasting the weather and to predicting short-term solar energy generation. Existing techniques mainly fall into two categories: variational optical flow and block matching. In this article, we summarize recent advances in estimating cloud motion using ground-based sky imagers and quantitatively evaluate state-of-the-art approaches. Then we propose a hybrid tracking framework that incorporates the strengths of both block matching and optical flow models. To validate the accuracy of the proposed approach, we introduce a series of synthetic images to simulate cloud movement and deformation, and thereafter comprehensively compare our hybrid approach with several representative tracking algorithms over both simulated and real images collected from various sites/imagers. The results show that our hybrid approach outperforms state-of-the-art models, reducing motion estimation errors relative to the ground-truth motions by at least 30% in most of the simulated image sequences. Furthermore, our hybrid model demonstrates its superior efficiency on several real cloud image datasets, lowering the Mean Absolute Error (MAE) between predicted and ground-truth images by at least 15%.
U.S. Army School of the Americas: Background and Congressional Concerns
2001-04-16
report referred to was El Terrorismo de Estado en Colombia. Brussels, Ediciones NCOS, 1992. The report cited 247 military personnel alleged to have some... adequacy of the selection and screening... The U.N. Truth Commission Report on El Salvador and the U.S. Army School of the Americas. Washington Office... alumni included 48 out of 69 Salvadoran military members cited in the U.N. Truth Commission's report on El Salvador for involvement in human rights
NASA Astrophysics Data System (ADS)
Wismüller, Axel; DSouza, Adora M.; Abidin, Anas Z.; Wang, Xixi; Hobbs, Susan K.; Nagarajan, Mahesh B.
2015-03-01
Echo state networks (ESN) are recurrent neural networks in which the hidden layer is replaced with a fixed reservoir of neurons. Unlike in feed-forward networks, training in an ESN is restricted to the output neurons alone, providing a computational advantage. We demonstrate the use of such ESNs in our mutual connectivity analysis (MCA) framework for recovering the primary motor cortex network associated with hand movement from resting state functional MRI (fMRI) data. The framework consists of two steps: (1) defining a pair-wise affinity matrix between different pixel time series within the brain to characterize network activity, and (2) recovering network components from the affinity matrix with non-metric clustering. Here, ESNs are used to evaluate pair-wise cross-estimation performance between pixel time series to create the affinity matrix, which is subsequently subjected to non-metric clustering with the Louvain method. For comparison, the ground truth of the motor cortex network structure is established with a task-based fMRI sequence. Overlap between the primary motor cortex network recovered with our model-free MCA approach and the ground truth was measured with the Dice coefficient. Our results show that network recovery with the proposed MCA approach is in close agreement with the ground truth. Such network recovery is achieved without requiring low-pass filtering of the time series ensembles prior to analysis, an fMRI preprocessing step that has courted controversy in recent years. We conclude that our MCA framework allows recovery and visualization of the underlying functionally connected networks of the brain in resting state fMRI.
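The Dice coefficient used above to compare the recovered network with the task-based ground truth has a one-line definition for binary masks; a minimal sketch (the masks below are illustrative):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks: 2|A intersect B| / (|A| + |B|)."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

recovered = np.zeros((8, 8), bool)
recovered[2:6, 2:6] = True
truth = np.zeros((8, 8), bool)
truth[3:7, 3:7] = True
score = dice_coefficient(recovered, truth)   # 2*9 / (16 + 16) = 0.5625
```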
The Parallel Implementation of Algorithms for Finding the Reflection Symmetry of the Binary Images
NASA Astrophysics Data System (ADS)
Fedotova, S.; Seredin, O.; Kushnir, O.
2017-05-01
In this paper, we investigate an exact method of finding the symmetry axis of a binary image, based on a brute-force search over all potential symmetry axes. As a measure of symmetry, we use the set-theoretic Jaccard similarity applied to the two subsets of pixels into which an axis divides the image. The brute-force search reliably finds the axis of approximate symmetry, which can be considered ground truth, but it requires considerable time to process each image. As the first step of our contribution, we develop a parallel version of the brute-force algorithm. It allows us to process large image databases and obtain the desired axis of approximate symmetry for each shape in a database. Experimental studies on the "Butterflies" and "Flavia" datasets have shown that the proposed algorithm takes several minutes per image to find a symmetry axis. However, real-world applications demand a computational efficiency that allows the symmetry axis to be found in real or quasi-real time. So, for fast shape symmetry calculation on a common multicore PC, we elaborated another parallel program, based on the procedure suggested previously in (Fedotova, 2016). That method takes as its initial axis the axis obtained by a superfast comparison of two skeleton primitive sub-chains. This process takes about 0.5 s on a common PC, considerably faster than any of the optimized brute-force methods, including those implemented on a supercomputer. In our experiments, the found axis coincides exactly with the ground-truth axis in 70 percent of cases, and in the remaining cases it is very close to the ground truth.
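A simplified version of the symmetry measure described above, restricted to vertical candidate axes (the original work searches arbitrary axes; the empty-strip convention below is an assumption added for this sketch):

```python
import numpy as np

def jaccard(a, b):
    """Set-theoretic Jaccard similarity |A intersect B| / |A union B| of two
    binary masks. Empty-vs-empty is scored 0 here so that degenerate axes
    near the image border, where both strips are blank, cannot win."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return 0.0 if union == 0 else np.logical_and(a, b).sum() / union

def vertical_symmetry(img, col):
    """Symmetry score of a binary image about the vertical line at `col`:
    Jaccard between the left strip and the mirrored right strip."""
    w = min(col, img.shape[1] - col)        # widest strip fitting both sides
    left = img[:, col - w:col]
    right = img[:, col:col + w][:, ::-1]    # mirror the right strip
    return jaccard(left, right)

img = np.zeros((6, 8), dtype=bool)
img[1:5, 2:6] = True                        # rectangle, mirror line at col 4
best = max(range(1, img.shape[1]), key=lambda c: vertical_symmetry(img, c))
```

A brute-force search like the one in the paper would additionally sweep axis angles, not just positions.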
NASA Astrophysics Data System (ADS)
Jones, K. R.; Arrowsmith, S.
2013-12-01
The Southwest U.S. Seismo-Acoustic Network (SUSSAN) is a collaborative project designed to produce infrasound event detection bulletins for the infrasound community for research purposes. We are aggregating a large, unique, near real-time data set with available ground truth information from seismo-acoustic arrays across New Mexico, Utah, Nevada, California, Texas and Hawaii. The data are processed in near real-time (~ every 20 minutes) with detections being made on individual arrays and locations determined for networks of arrays. The detection and location data are then combined with any available ground truth information and compiled into a bulletin that will be released to the general public directly and eventually through the IRIS infrasound event bulletin. We use the open source Earthworm seismic data aggregation software to acquire waveform data either directly from the station operator or via the Incorporated Research Institutions for Seismology Data Management Center (IRIS DMC), if available. The data are processed using InfraMonitor, a powerful infrasound event detection and localization software program developed by Stephen Arrowsmith at Los Alamos National Laboratory (LANL). Our goal with this program is to provide the infrasound community with an event database that can be used collaboratively to study various natural and man-made sources. We encourage participation in this program directly or by making infrasound array data available through the IRIS DMC or other means. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. R&A 5317326
Breed, Greg A; Severns, Paul M
2015-01-01
Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches. PMID:26312190
Rueckauer, Bodo; Delbruck, Tobi
2016-01-01
In this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240 × 180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground-truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera. PMID:27199639
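Among the accuracy improvements listed above, the Savitzky-Golay filter is a generic smoothing step: a local least-squares polynomial fit evaluated at the window centre. A from-scratch sketch (the window length and polynomial order below are illustrative, not the paper's settings):

```python
import numpy as np

def savgol_smooth(y, window=7, order=2):
    """Minimal Savitzky-Golay filter: least-squares polynomial fit in a
    sliding window, evaluated at the window centre (edges left unsmoothed)."""
    y = np.asarray(y, float)
    half = window // 2
    x = np.arange(-half, half + 1)
    # Centre-point convolution coefficients come from row 0 of the
    # pseudo-inverse of the Vandermonde matrix (value of the fit at x = 0).
    A = np.vander(x, order + 1, increasing=True)
    coeffs = np.linalg.pinv(A)[0]
    out = y.copy()
    for i in range(half, len(y) - half):
        out[i] = coeffs @ y[i - half:i + half + 1]
    return out

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
noisy = np.sin(2 * np.pi * t) + rng.normal(0, 0.2, t.size)
smooth = savgol_smooth(noisy)
```

Unlike a plain moving average, the polynomial fit preserves local curvature, which matters when the smoothed signal is subsequently differentiated, as in flow estimation.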
Karim, Rashed; Bhagirath, Pranav; Claus, Piet; James Housden, R; Chen, Zhong; Karimaghaloo, Zahra; Sohn, Hyon-Mok; Lara Rodríguez, Laura; Vera, Sergio; Albà, Xènia; Hennemuth, Anja; Peitgen, Heinz-Otto; Arbel, Tal; Gonzàlez Ballester, Miguel A; Frangi, Alejandro F; Götte, Marco; Razavi, Reza; Schaeffter, Tobias; Rhode, Kawal
2016-05-01
Studies have demonstrated the feasibility of late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae to myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates a reproducible and reliable segmentation of the infarcted regions. It is challenging to compare new algorithms for infarct segmentation in the left ventricle (LV) with existing algorithms. Benchmarking datasets with evaluation strategies are much needed to facilitate comparison. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs that were acquired from two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation. Six widely-used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method. Some of the pitfalls of fixed thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
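The fixed-thresholding baselines mentioned above admit one-line definitions; a sketch of common formulations of the n-SD and FWHM rules (exact definitions vary between implementations, and the intensity values below are invented for illustration):

```python
import numpy as np

def nsd_threshold(intensities, remote_mean, remote_sd, n=5):
    """n-SD rule: voxels more than n standard deviations above
    remote (healthy) myocardium are labelled infarct."""
    return intensities >= remote_mean + n * remote_sd

def fwhm_threshold(intensities, remote_mean):
    """FWHM rule: voxels brighter than half of the maximum
    enhancement above remote myocardium are labelled infarct."""
    thr = remote_mean + 0.5 * (intensities.max() - remote_mean)
    return intensities >= thr

intens = np.array([10.0, 12.0, 11.0, 40.0, 38.0, 20.0, 9.0])
scar_fwhm = fwhm_threshold(intens, remote_mean=10.0)               # threshold 25
scar_nsd = nsd_threshold(intens, remote_mean=10.0, remote_sd=1.0)  # threshold 15
```

The abstract's finding, that FWHM behaves closer to the consensus ground truth than most n-SD variants, reflects how sensitive the n-SD rule is to the choice of n and to the remote-region statistics.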
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panfil, J; Patel, R; Surucu, M
Purpose: To compare markerless template-based tracking of lung tumors using dual energy (DE) cone-beam computed tomography (CBCT) projections versus single energy (SE) CBCT projections. Methods: A RANDO chest phantom with a simulated tumor in the upper right lung was used to investigate the effectiveness of tumor tracking using DE and SE CBCT projections. Planar kV projections from CBCT acquisitions were captured at 60 kVp (4 mAs) and 120 kVp (1 mAs) using the Varian TrueBeam and non-commercial iTools Capture software. Projections were taken at approximately every 0.53° while the gantry rotated. Due to limitations of the phantom, angles for which the shoulders blocked the tumor were excluded from tracking analysis. DE images were constructed using a weighted logarithmic subtraction that removed bony anatomy while preserving soft tissue structures. The tumors were tracked separately on DE and SE (120 kVp) images using a template-based tracking algorithm. The tracking results were compared to ground truth coordinates designated by a physician. Matches with a distance of greater than 3 mm from ground truth were designated as failing to track. Results: 363 frames were analyzed. The algorithm successfully tracked the tumor on 89.8% (326/363) of DE frames compared to 54.3% (197/363) of SE frames (p<0.0001). Average distance between tracking and ground truth coordinates was 1.27 ± 0.67 mm for DE versus 1.83 ± 0.74 mm for SE (p<0.0001). Conclusion: This study demonstrates the effectiveness of markerless template-based tracking using DE CBCT. DE imaging resulted in better detectability with more accurate localization on average versus SE. Supported by a grant from Varian Medical Systems.
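The weighted logarithmic subtraction described above can be illustrated on toy pixel values: choosing the weight as the ratio of bone attenuation between the two energies cancels bone while soft tissue survives (the attenuation values below are invented for illustration and are not from this study):

```python
import numpy as np

def dual_energy_subtract(high_kvp, low_kvp, w, eps=1e-6):
    """Weighted logarithmic subtraction: DE = ln(I_high) - w * ln(I_low).
    With w equal to the bone attenuation ratio between energies, the bone
    signal cancels while soft-tissue contrast is preserved."""
    return np.log(high_kvp + eps) - w * np.log(low_kvp + eps)

# Toy transmitted intensities I = exp(-mu): bone attenuates much more
# strongly at low kVp than soft tissue does.
bone_low, bone_high = np.exp(-2.0), np.exp(-1.0)
soft_low, soft_high = np.exp(-0.6), np.exp(-0.5)
w = 1.0 / 2.0     # bone mu_high / mu_low -> cancels the bone signal
de_bone = dual_energy_subtract(np.array([bone_high]), np.array([bone_low]), w)
de_soft = dual_energy_subtract(np.array([soft_high]), np.array([soft_low]), w)
```

In practice the weight is tuned empirically on the acquired projection pair rather than derived from tabulated attenuation coefficients.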
NASA Astrophysics Data System (ADS)
Berglund, J.; Mattila, J.; Rönnberg, O.; Heikkilä, J.; Bonsdorff, E.
2003-04-01
Submerged rooted macrophytes and drift algae were studied in shallow (0-1 m) brackish soft-bottom bays in the Åland Islands, N Baltic Sea, in 1997-2000. The study was performed by aerial photography and ground-truth sampling, and the compatibility of the two methods was evaluated. The study provided quantitative results on seasonal and inter-annual variation in growth, distribution and biomass of submerged macrophytes and drift algae. On average, 18 submerged macrophyte species occurred in the studied bays. The most common species, by weight and occurrence, were Chara aspera, Cladophora glomerata, Pilayella littoralis and Potamogeton pectinatus. Filamentous green algae constituted 45-70% of the biomass, charophytes 25-40% and vascular plants 3-18%. A seasonal pattern with a peak in biomass in July-August was found, and the mean biomass was negatively correlated with exposure. There were statistically significant differences in coverage among years and among levels of exposure. The coverage was highest when exposure was low. Both sheltered and exposed bays were influenced by drift algae (30 and 60% occurrence in July-August) and there was a positive correlation between exposure and occurrence of algal accumulations. At exposed sites, most of the algae had drifted in from other areas, while at sheltered ones they were mainly of local origin. Data obtained by aerial photography and ground-truth sampling showed a high concordance, but aerial photography gave a 9% higher estimate than the ground-truth samples. The results can be applied in planning of monitoring and management strategies for shallow soft-bottom areas under potential threat of drift algae.
Simulation of brain tumors in MR images for evaluation of segmentation efficacy.
Prastawa, Marcel; Bullitt, Elizabeth; Gerig, Guido
2009-04-01
Obtaining validation data and comparison metrics for segmentation of magnetic resonance images (MRI) is a difficult task due to the lack of reliable ground truth. This problem is even more evident for images presenting pathology, which can both alter tissue appearance through infiltration and cause geometric distortions. Systems for generating synthetic images with user-defined degradation by noise and intensity inhomogeneity offer the possibility of testing and comparing segmentation methods. Such systems do not yet offer simulation of sufficiently realistic-looking pathology. This paper presents a system that combines physical and statistical modeling to generate synthetic multi-modal 3D brain MRI with tumor and edema, along with the underlying anatomical ground truth. Main emphasis is placed on simulation of the major effects known for tumor MRI, such as contrast enhancement, local distortion of healthy tissue, infiltrating edema adjacent to tumors, destruction and deformation of fiber tracts, and multi-modal MRI contrast of healthy tissue and pathology. The new method synthesizes pathology in multi-modal MRI and diffusion tensor imaging (DTI) by simulating mass effect, warping and destruction of white matter fibers, and infiltration of brain tissues by tumor cells. We generate synthetic contrast-enhanced MR images by simulating the accumulation of contrast agent within the brain. The appearance of the brain tissue and tumor in MRI is simulated by synthesizing texture images from real MR images. The proposed method is able to generate synthetic ground truth and synthesized MR images with tumor and edema that exhibit segmentation challenges comparable to real tumor MRI. Such image data sets will find use in segmentation reliability studies, comparison and validation of different segmentation methods, training and teaching, or even in evaluating standards for tumor size like the RECIST criteria (response evaluation criteria in solid tumors).
Huang, Yunzhi; Zhang, Junpeng; Cui, Yuan; Yang, Gang; Liu, Qi; Yin, Guangfu
2018-01-01
Sensor-level functional connectivity topography (sFCT) contributes significantly to our understanding of brain networks. sFCT can be constructed using either electroencephalography (EEG) or magnetoencephalography (MEG). Here, we compared sFCT within the EEG modality and between the EEG and MEG modalities. We first used simulations to examine how different EEG references, including the Reference Electrode Standardization Technique (REST), average reference (AR), linked mastoids (LM), and left mastoid reference (LR), affect EEG-based sFCT. The results showed that REST decreased the reference effects on scalp EEG recordings, making REST-based sFCT closer to the ground truth (sFCT based on ideal recordings). For the inter-modality simulation comparisons, we compared each type of EEG-sFCT with MEG-sFCT using three metrics to quantify the differences: Relative Error (RE), Overlap Rate (OR), and Hamming Distance (HD). When two sFCTs are similar, RE and HD are low, while OR is high. Results showed that among all reference schemes, EEG- and MEG-sFCT were most similar when the EEG was REST-based and the EEG and MEG were recorded simultaneously. Next, we analyzed simultaneously recorded MEG and EEG data from publicly available face-recognition experiments using a procedure similar to that of the simulations. The results showed (1) if MEG-sFCT is the standard, REST- and LM-based sFCT provided results closer to this standard in terms of HD; (2) REST-based sFCT and MEG-sFCT had the highest similarity in terms of RE; (3) REST-based sFCT had the most overlapping edges with MEG-sFCT in terms of OR. This study thus provides new insights into the effect of different reference schemes on sFCT and the similarity between MEG and EEG in terms of sFCT.
A new device for acquiring ground truth on the absorption of light by turbid waters
NASA Technical Reports Server (NTRS)
Klemas, V. (Principal Investigator); Srna, R.; Treasure, W.
1974-01-01
The author has identified the following significant results. A new device, called a Spectral Attenuation Board, has been designed and tested, which enables ERTS-1 sea truth collection teams to monitor the attenuation depths of three colors continuously as the board is towed behind a boat. The device consists of a 1.2 x 1.2 meter flat board held below the surface of the water at a fixed angle to the surface. A camera mounted above the water photographs the board. The resulting film image is analyzed by a micro-densitometer trace along the descending portion of the board. This yields information on the rate of attenuation of light penetrating the water column and the Secchi depth. Red and green stripes were painted on the white board to approximate bands 4 and 5 of the ERTS MSS, so that the rate of absorption of light in these regions of the visible spectrum by the water column could be concurrently measured. It was found that information from a red, green, and white stripe may serve to fingerprint the composition of the water mass. A number of these devices, when automated, could also be distributed over a large region to provide a cheap method of obtaining valuable satellite ground truth data at preset time intervals.
NASA Astrophysics Data System (ADS)
Tang, G.; Li, C.; Hong, Y.; Long, D.
2017-12-01
Proliferation of satellite and reanalysis precipitation products underscores the need to evaluate their reliability, particularly over ungauged or poorly gauged regions. However, it is challenging to perform such evaluations over regions lacking ground truth data. Here, using the triple collocation (TC) method, which is capable of evaluating relative uncertainties in different products without ground truth, we evaluate five satellite-based precipitation products and comparatively assess uncertainties in three types of independent precipitation products (satellite-based, ground-observed, and model reanalysis) over Mainland China, including a ground-based precipitation dataset (the gauge-based daily precipitation analysis, CGDPA), the latest version of the European Centre for Medium-Range Weather Forecasts reanalysis (ERA-Interim) product, and five satellite-based products (3B42V7, 3B42RT of TMPA, IMERG, CMORPH-CRT, PERSIANN-CDR) on a regular 0.25° grid at the daily timescale from 2013 to 2015. First, the effectiveness of the TC method is evaluated by comparison with traditional methods based on ground observations in a densely gauged region. Results show that the TC method is reliable because the correlation coefficient (CC) and root mean square error (RMSE) are close to those obtained with the traditional method, with maximum differences of only 0.08 and 0.71 mm/day for CC and RMSE, respectively. Then, the TC method is applied to Mainland China and the Tibetan Plateau (TP).
Results indicate that: (1) the overall performance of IMERG is better than that of the other satellite products over Mainland China; (2) over grid cells without rain gauges in the TP, IMERG and ERA show better performance than CGDPA, indicating the potential of remote sensing and reanalysis data over these regions and the inherent uncertainty of CGDPA due to interpolation from sparsely gauged data; (3) both TMPA-3B42 and CMORPH-CRT have some unexpected CC values over certain grid cells that contain water bodies, reaffirming the overestimation of precipitation over inland water bodies. Overall, the TC method provides not only reliable cross-validation results for precipitation estimates over Mainland China but also a new perspective from which to comprehensively assess multi-source precipitation products, particularly over poorly gauged regions.
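The covariance-based form of triple collocation is compact enough to sketch: given three collocated products with mutually independent errors, each product's error variance follows from the pairwise covariances alone, with no ground truth needed. A minimal illustration (the synthetic precipitation series below is invented, not the study's data):

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Covariance-based triple collocation: estimate each product's error
    standard deviation without ground truth, assuming the three error
    series are mutually independent."""
    q = np.cov(np.vstack([x, y, z]))
    var_x = q[0, 0] - q[0, 1] * q[0, 2] / q[1, 2]
    var_y = q[1, 1] - q[0, 1] * q[1, 2] / q[0, 2]
    var_z = q[2, 2] - q[0, 2] * q[1, 2] / q[0, 1]
    # Sampling noise can push an estimate slightly negative; clamp to zero.
    return np.sqrt(np.maximum([var_x, var_y, var_z], 0.0))

rng = np.random.default_rng(0)
truth = rng.gamma(2.0, 1.5, 50000)            # synthetic "true" daily rain
x = truth + rng.normal(0, 1.0, truth.size)    # product with error sigma = 1.0
y = truth + rng.normal(0, 2.0, truth.size)    # product with error sigma = 2.0
z = truth + rng.normal(0, 0.5, truth.size)    # product with error sigma = 0.5
est = triple_collocation_errors(x, y, z)
```

Real applications typically add a calibration step so the three products share a common scale before the covariances are formed.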
NASA Technical Reports Server (NTRS)
Atlas, D. (Editor); Thiele, O. W. (Editor)
1981-01-01
Global climate, agricultural uses for precipitation information, hydrological uses for precipitation, severe thunderstorms and local weather, global weather are addressed. Ground truth measurement, visible and infrared techniques, microwave radiometry and hybrid precipitation measurements, and spaceborne radar are discussed.
A Ranking-Theoretic Approach to Conditionals
ERIC Educational Resources Information Center
Spohn, Wolfgang
2013-01-01
Conditionals somehow express conditional beliefs. However, conditional belief is a bi-propositional attitude that is generally not truth-evaluable, in contrast to unconditional belief. Therefore, this article opts for an expressivistic semantics for conditionals, grounds this semantics in the arguably most adequate account of conditional belief,…
Ground Truth in Building Human Security
2012-11-01
Peace, Charles Call and Vanessa Wyeth, Editors, International Peace Institute, 2008, pp. 164-5. Sandra F. Joireman, Where There is No... that company. http://grm.thomsonreuters.com/news/july-2011/thomson-reuters-completes-acquisition-of-manatron/ P.J.M. van Oosterom, C.H.J
Comparisons of Ground Truth and Remote Spectral Measurements of the FORMOSAT and ANDE Spacecrafts
NASA Technical Reports Server (NTRS)
Jorgensen Abercromby, Kira; Hamada, Kris; Okada, Jennifer; Guyote, Michael; Barker, Edwin
2006-01-01
The material type of objects in space is determined by using laboratory spectral reflectance measurements of common spacecraft materials and comparing the results to remote spectra. This past year, two different ground-truth studies commenced. The first, FORMOSAT III, is a Taiwanese set of six satellites to be launched in March 2006. The second is ANDE (Atmospheric Neutral Density Experiment), a Naval Research Laboratory set of two satellites set to launch from the Space Shuttle in November 2006. Laboratory spectra were obtained of the spacecraft, and a model of the anticipated spectral response was created for each set of satellites. The model takes into account phase angle and orientation of the spacecraft relative to the observer. Once launched, the spacecraft are observed once a month to determine the space aging effects of materials as deduced from the remote spectra. Preliminary results comparing laboratory and remote data will be shown for FORMOSAT III, while only laboratory results will be shown for the ANDE spacecraft.
Object Segmentation and Ground Truth in 3D Embryonic Imaging.
Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C
2016-01-01
Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.
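The object-level error measurement described in the third step, comparing detected nuclei against a ground-truth set, can be sketched as centroid matching within a distance tolerance. This is a minimal illustration with hypothetical coordinates and tolerance, not the authors' algorithm:

```python
import math

def match_centroids(detected, truth, tol=2.0):
    """Greedily match detected centroids to ground-truth centroids
    within a distance tolerance; return (precision, recall)."""
    unmatched = list(truth)
    tp = 0
    for d in detected:
        for t in unmatched:
            if math.dist(d, t) <= tol:
                unmatched.remove(t)
                tp += 1
                break
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

# hypothetical 3D nuclei centroids
truth = [(0, 0, 0), (5, 5, 5), (10, 0, 3)]
detected = [(0.5, 0, 0), (5, 5.4, 5), (20, 20, 20)]  # one false positive, one miss
p, r = match_centroids(detected, truth)
```

Greedy first-match assignment is a simplification; an optimal assignment (e.g. Hungarian matching) would be preferable in densely packed tissue.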
The ground-truth problem for satellite estimates of rain rate
NASA Technical Reports Server (NTRS)
North, Gerald R.; Valdes, Juan B.; Ha, Eunho; Shen, Samuel S. P.
1994-01-01
In this paper a scheme is proposed to use a point raingage to compare contemporaneous measurements of rain rate from a single-field-of-view (FOV) estimate based on a satellite remote sensor such as a microwave radiometer. Even in the ideal case the measurements are different because one is at a point and the other is an area average over the field of view. Also the point gage will be located randomly inside the field of view on different overpasses. A space-time spectral formalism is combined with a simple stochastic rain field to find the mean-square deviations between the two systems. It is found that by combining about 60 visits of the satellite to the ground-truth site, the expected error can be reduced to about 10% of the standard deviation of the fluctuations of the systems alone. This seems to be a useful level of tolerance in terms of isolating and evaluating typical biases that might be contaminating retrieval algorithms.
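The quoted error reduction, roughly 10% of the single-system standard deviation after about 60 visits, is consistent with simple 1/sqrt(N) averaging of independent deviations. A hypothetical Monte Carlo sketch (not the paper's space-time spectral formalism):

```python
import random
import statistics

random.seed(1)
sigma = 1.0      # std of a single-visit gage-minus-satellite deviation
n_visits = 60

# Average the deviation over 60 independent overpasses, repeat many
# times, and look at the spread of that average.
trials = [statistics.mean(random.gauss(0.0, sigma) for _ in range(n_visits))
          for _ in range(20000)]
reduced = statistics.stdev(trials)

# Theory: sigma / sqrt(60) ~ 0.13 * sigma, i.e. near the quoted ~10% level
```

Real rain fields are spatially and temporally correlated, so the paper's spectral treatment gives a more faithful estimate than this independence assumption.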
Presentation video retrieval using automatically recovered slide and spoken text
NASA Astrophysics Data System (ADS)
Cooper, Matthew
2013-03-01
Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
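Retrieval utility against a manually created ground truth reduces to set comparisons. A minimal sketch with hypothetical video IDs, not the paper's evaluation code:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of a retrieved set against a relevant set."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    return len(hits) / len(retrieved), len(hits) / len(relevant)

slide_hits = ["v1", "v2", "v5"]          # videos returned via slide text
speech_hits = ["v1", "v3", "v4", "v7"]   # videos returned via spoken text
truth = ["v1", "v2", "v3"]               # ground-truth relevant videos

p_slide, r_slide = precision_recall(slide_hits, truth)
p_speech, r_speech = precision_recall(speech_hits, truth)
```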
Identification of marsh vegetation and coastal land use in ERTS-1 imagery
NASA Technical Reports Server (NTRS)
Klemas, V.; Daiber, F. C.; Bartlett, D. S.
1973-01-01
Coastal vegetation species appearing in the ERTS-1 images taken of Delaware Bay on August 16 and October 10, 1972 have been correlated with ground truth vegetation maps and imagery obtained from high altitude RB-57 and U-2 overflights. The vegetation maps of the entire Delaware Coast were prepared during the summer of 1972 and checked against ground truth data collected on foot, in small boats, and from low-altitude aircraft. Multispectral analysis of high altitude RB-57 and U-2 photographs indicated that five vegetation communities could be clearly discriminated from 60,000 feet altitude: (1) salt marsh cord grass, (2) salt marsh hay and spike grass, (3) reed grass, (4) high tide bush and sea myrtle, and (5) a group of fresh water species found in impoundments built to attract water fowl. All of these species are shown in fifteen overlay maps covering all of Delaware's wetlands, prepared to match the USGS topographic map scale of 1:24,000.
Toward building a comprehensive data mart
NASA Astrophysics Data System (ADS)
Boulware, Douglas; Salerno, John; Bleich, Richard; Hinman, Michael L.
2004-04-01
To uncover new relationships or patterns one must first build a corpus of data, or what some call a data mart. How can we make sure we have collected all the pertinent data and have maximized coverage? There are hundreds of search engines available for use on the Internet today. Which one is best? Is one better for one problem and a second better for another? Are meta-search engines better than individual search engines? In this paper we look at one possible approach to developing a methodology for comparing a number of search engines. Before we present this methodology, we first provide our motivation for increased coverage. We next investigate how we can obtain ground truth and what the ground truth can provide in the way of insight into the Internet and search engine capabilities. We then conclude by developing a methodology in which we compare a number of search engines and show how we can increase overall coverage and thus build a more comprehensive data mart.
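Given a ground-truth document set, per-engine and combined coverage are simple set ratios. A minimal sketch with hypothetical document IDs, not the paper's methodology:

```python
def coverage(found, truth):
    """Fraction of the ground-truth set retrieved."""
    return len(set(found) & set(truth)) / len(truth)

truth = set(range(100))        # hypothetical ground-truth document IDs
engine_a = set(range(0, 60))   # one engine's results
engine_b = set(range(40, 90))  # another engine's results

combined = engine_a | engine_b   # union across engines raises coverage
cov_a = coverage(engine_a, truth)
cov_b = coverage(engine_b, truth)
cov_union = coverage(combined, truth)
```

The union exceeding either individual engine's coverage is the basic argument for combining engines when building the data mart.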
NASA Astrophysics Data System (ADS)
Jaumann, Ralf; Bibring, Jean-Pierre; Glassmeier, Karl-Heinz; Grott, Matthias; Ho, Tra-Mi; Ulamec, Stephan; Schmitz, Nicole; Auster, Ulrich; Biele, Jens; Kuninaka, Hitoshi; Okada, Tatsuaki; Yoshikawa, Makoto; Watanabe, Sei-ichiro; Fujimoto, Masaki; Spohn, Tilman; Koncz, Alexander; Michaelis, Harald
2014-05-01
MASCOT, a Mobile Asteroid Surface Scout, will support JAXA's Hayabusa 2 mission to investigate the C-type asteroid 1999 JU3 (1). The German Aerospace Center (DLR) develops MASCOT with contributions from CNES (France) (2,3). The main objective is to map in situ the asteroid's geomorphology, the intimate structure, texture and composition of the regolith (dust, soil and rocks), and the thermal, mechanical, and magnetic properties of the surface in order to provide ground truth for the orbiter remote measurements, support the selection of sampling sites, and provide context information for the returned samples. MASCOT comprises a payload of four scientific instruments: camera, radiometer, magnetometer and hyperspectral microscope. C- and D-type asteroids hold clues to the origin of the solar system, the formation of planets, the origins of water and life on Earth, the protection of Earth from impacts, and resources for future human exploration. C- and D-types are dark and difficult to study from Earth, and have only been glimpsed by spacecraft. While results from recent missions (e.g., Hayabusa, NEAR (4, 5, 6)) have dramatically increased our understanding of asteroids, important questions remain. For example, characterizing the properties of asteroid regolith in situ would deliver important ground truth for further understanding telescopic and orbital observations and samples of such asteroids. MASCOT will descend and land on the asteroid and will change its position two times by hopping. This enables measurements during descent, at the landing and hopping positions #1-3, and during hopping. References: (1) Vilas, F., Astronomical J. 1101-1105, 2008; (2) Ulamec, S., et al., Acta Astronautica, Vol. 93, pp. 460-466; (3) Jaumann et al., 45th LPSC, Houston; (4) Special Issue, Science, Vol. 312 no. 5778, 2006; (5) Special Issue, Science, Vol. 333 no. 6046, 2011; (6) Bell, L., Mitton, J., Cambridge Univ. Press, 2002.
NASA Astrophysics Data System (ADS)
Jaumann, Ralf; Bibring, Jean-Pierre; Glassmeier, Karl-Heinz; Grott, Matthias; Ho, Tra-Mi; Ulamec, Stephan; Schmitz, Nicole; Auster, Hans-Ulrich; Biele, Jens; Kuninaka, Hitoshi; Okada, Tatsuaki; Yoshikawa, Makoto; Watanabe, Sei-ichiro; Fujimoto, Masaki; Spohn, Tilman
2013-04-01
Mascot, a Mobile Asteroid Surface Scout, will support JAXA's Hayabusa 2 mission to investigate the C-type asteroid 1999 JU3 (1). The German Aerospace Center (DLR) develops Mascot with contributions from CNES (France) (2). The main objective is to map in situ the asteroid's geomorphology, the intimate structure, texture and composition of the regolith (dust, soil and rocks), and the thermal, mechanical, and magnetic properties of the surface in order to provide ground truth for the orbiter remote measurements, support the selection of sampling sites, and provide context information for the returned samples. Mascot comprises a payload of four scientific instruments: camera, radiometer, magnetometer and hyperspectral microscope. C- and D-type asteroids hold clues to the origin of the solar system, the formation of planets, the origins of water and life on Earth, the protection of Earth from impacts, and resources for future human exploration. C- and D-types are dark and difficult to study from Earth, and have only been glimpsed by spacecraft. While results from recent missions (e.g., Hayabusa, NEAR (3, 4, 5)) have dramatically increased our understanding of asteroids, important questions remain. For example, characterizing the properties of asteroid regolith in situ would deliver important ground truth for further understanding telescopic and orbital observations and samples of such asteroids. Mascot will descend and land on the asteroid and will change its position two times by hopping. This enables measurements during descent, at the landing and hopping positions #1-3, and during hopping. References: (1) Vilas, F., Astronomical J. 1101-1105, 2008; (2) Ulamec, S., et al., COSPAR, General Assembly, Mysore/India, 2012; (3) Special Issue, Science, Vol. 312 no. 5778, 2006; (4) Special Issue, Science, Vol. 333 no. 6046, 2011; (5) Bell, L., Mitton, J., Cambridge Univ. Press, 2002.
Structure and Formation of Comets: Updates from Post-Rosetta Solid Fraction Analyses
NASA Astrophysics Data System (ADS)
Levasseur-Regourd, A. C.; Bentley, M. S.; Kofman, W. W.; Brouet, Y.; Ciarletti, V.; Hadamcik, E.; Herique, A.; Lasue, J.; Mannel, T.; Schmied, R.
2016-12-01
The combination of investigations of 67P/C-G by Rosetta, theoretical and experimental studies, and remote observations has allowed unprecedented insight into the structure and formation of comets. The Rosetta mission has provided ground truth for the low density and high porosity of the nucleus, without heterogeneities larger than a few meters in its small lobe [1,2]. Further studies related to the CONSERT experiment now suggest that the porosity increases inside the nucleus [3,4]. Rosetta has also provided ground truth for the aggregated structure of dust particles within a wide range of sizes in the inner coma [e.g. 5-7]. Such discoveries confirm previous interpretations of remote observations of solar light scattered by dust in cometary comae. Differences in structure between the two parts of the nucleus, strongly suspected from previous high-resolution images of the surface [8] and possibly suggested from some remote observations of fragmenting sub-nuclei [9], might be pointed out from data obtained shortly before Rosetta's controlled descent in September 2016. Further analyses by MIDAS of dust particle morphology at submicron sizes [7,10], as well as compilations of remote observations of solar light scattered by 67P/C-G [11], are presently taking place. We will discuss how such results could lead to a better understanding of dust growth processes during formation, specifically of 67P/C-G and, more generally, thanks to the link now provided between structural properties of dust and remote polarimetric observations, of cometary nuclei in the early Solar System. References. 1 Kofman et al. Science 2015. 2 Pätzold et al. Nature 2016. 3 Ciarletti et al. A&A 2015. 4 Brouet et al. MNRAS 2016 (under revision). 5 Rotundi et al. Science 2015. 6 Langevin et al. Icarus 2016. 7 Bentley et al. Nature 2016. 8 Massironi et al. Nature 2016. 9 Hadamcik et al. A&A 2016. 10 Mannel et al. Leiden symposium 2016. 11 Hadamcik et al. Leiden symposium 2016.
NASA Astrophysics Data System (ADS)
Ali, Abder-Rahman A.; Deserno, Thomas M.
2012-02-01
Malignant melanoma is the third most frequent type of skin cancer and one of the most malignant tumors, accounting for 79% of skin cancer deaths. Melanoma is highly curable if diagnosed early and treated properly, as survival rate varies between 15% and 65% from early to terminal stages, respectively. So far, melanoma diagnosis depends subjectively on the dermatologist's expertise. Computer-aided diagnosis (CAD) systems based on epiluminescence light microscopy can provide an objective second opinion on pigmented skin lesions (PSL). This work systematically analyzes the evidence for the effectiveness of automated melanoma detection in images from a dermatoscopic device. Automated CAD applications were analyzed to estimate their diagnostic outcome. Searching online databases for publication dates between 1985 and 2011, a total of 182 studies on dermatoscopic CAD were found. With respect to the systematic selection criteria, 9 studies were included, published between 2002 and 2011. Those studies formed databases of 14,421 dermatoscopic images including both malignant "melanoma" and benign "nevus", with 8,110 images being available, ranging in resolution from 150 x 150 to 1568 x 1045 pixels. Maximum and minimum of sensitivity and specificity are 100.0% and 80.0% as well as 98.14% and 61.6%, respectively. Area under the receiver operating characteristic curve (AUC) and pooled sensitivity, specificity, and diagnostic odds ratio (DOR) are 0.87, 0.90, 0.81, and 15.89, respectively. Although automated melanoma detection showed good accuracy in terms of sensitivity, specificity, and AUC, diagnostic performance in terms of DOR was found to be poor. This might be due to the lack of dermatoscopic image resources (ground truth) needed for a comprehensive assessment of diagnostic performance.
In future work, we aim to test this hypothesis by joining dermatoscopic images into a unified database that serves as a standard reference for dermatology-related research in PSL classification.
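The diagnostic odds ratio can be computed from sensitivity and specificity as DOR = (sens/(1-sens)) / ((1-spec)/spec). Note that a pooled DOR from a meta-analysis (15.89 above) need not equal the DOR implied by separately pooled sensitivity and specificity, since pooling happens per study. A minimal sketch using the review's pooled values:

```python
def diagnostic_odds_ratio(sens, spec):
    """DOR = odds of a positive test in the diseased group divided by
    the odds of a positive test in the non-diseased group."""
    return (sens / (1.0 - sens)) / ((1.0 - spec) / spec)

# pooled sensitivity 0.90 and specificity 0.81 from the review
dor = diagnostic_odds_ratio(0.90, 0.81)  # ~38, well above the pooled DOR of 15.89
```

The gap between ~38 and 15.89 illustrates why pooled DOR is reported separately rather than derived from pooled operating points.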
The use of remote sensing in solving Florida's geological and coastal engineering problems
NASA Technical Reports Server (NTRS)
Brooks, H. K.; Ruth, B. E.; Wang, Y. H.; Ferguson, R. L.
1977-01-01
LANDSAT imagery and NASA high altitude color infrared (CIR) photography were used to select suitable sites for sanitary landfill in Volusia County, Florida and to develop techniques for preventing sand deposits in the Clearwater inlet. Activities described include the acquisition of imagery, its analysis by the IMAGE 100 system, conventional photointerpretation, evaluation of existing data sources (vegetation, soil, and ground water maps), site investigations for ground truth, and preparation of displays for reports.
SLAPex Freeze/Thaw 2015: The First Dedicated Soil Freeze/Thaw Airborne Campaign
NASA Technical Reports Server (NTRS)
Kim, Edward; Wu, Albert; DeMarco, Eugenia; Powers, Jarrett; Berg, Aaron; Rowlandson, Tracy; Freeman, Jacqueline; Gottfried, Kurt; Toose, Peter; Roy, Alexandre;
2016-01-01
Soil freezing and thawing is an important process in the terrestrial water, energy, and carbon cycles, marking the change between two very different hydraulic, thermal, and biological regimes. NASA's Soil Moisture Active/Passive (SMAP) mission includes a binary freeze/thaw data product. While there have been ground-based remote sensing field measurements observing soil freeze/thaw at the point scale, and airborne campaigns that observed some frozen soil areas (e.g., BOREAS), the recently-completed SLAPex Freeze/Thaw (F/T) campaign is the first airborne campaign dedicated solely to observing frozen/thawed soil with both passive and active microwave sensors and dedicated ground truth, in order to enable detailed process-level exploration of the remote sensing signatures and in situ soil conditions. SLAPex F/T utilized the Scanning L-band Active/Passive (SLAP) instrument, an airborne simulator of SMAP developed at NASA's Goddard Space Flight Center, and was conducted near Winnipeg, Manitoba, Canada, in October/November, 2015. Future soil moisture missions are also expected to include soil freeze/thaw products, and the loss of the radar on SMAP means that airborne radar-radiometer observations like those that SLAP provides are unique assets for freeze/thaw algorithm development. This paper will present an overview of SLAPex F/T, including descriptions of the site, airborne and ground-based remote sensing, ground truth, as well as preliminary results.
Usability of calcium carbide gas pressure method in hydrological sciences
NASA Astrophysics Data System (ADS)
Arsoy, S.; Ozgur, M.; Keskin, E.; Yilmaz, C.
2013-10-01
Soil moisture is a key engineering variable with major influence on ecological and hydrological processes as well as on climate, weather, agricultural, civil and geotechnical applications. Methods for quantification of soil moisture fall into three main groups: (i) measurement with remote sensing, (ii) estimation via (soil water balance) simulation models, and (iii) measurement in the field (ground based). Remote sensing and simulation modeling require rapid ground truthing with one of the ground based methods. The calcium carbide gas pressure (CCGP) method is a rapid measurement procedure for obtaining soil moisture that relies on the chemical reaction of the calcium carbide reagent with the water in soil pores. However, the method has been overlooked in hydrological science applications. Therefore, the purpose of this study is to evaluate the usability of the CCGP method in comparison with standard oven-drying and dielectric methods in terms of accuracy, time efficiency, operational ease, cost effectiveness and safety for quantification of soil moisture over a wide range of soil types. The research involved over 250 tests carried out on 15 different soil types. It was found that the accuracy of the method is mostly within a ±1% soil-moisture deviation range compared to oven-drying, and that the CCGP method has significant advantages over dielectric methods in terms of accuracy, cost, operational ease and time efficiency for the purpose of ground truthing.
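The ±1% accuracy claim amounts to counting paired readings whose deviation from the oven-drying reference stays within one moisture point. A minimal sketch with hypothetical readings, not the study's data:

```python
def within_tolerance(paired, tol=1.0):
    """Fraction of (method, reference) moisture pairs, in percent water
    content, whose deviation is within +/- tol points."""
    return sum(abs(m - r) <= tol for m, r in paired) / len(paired)

# hypothetical paired readings: (CCGP %, oven-drying %)
pairs = [(12.3, 12.0), (8.1, 8.9), (15.0, 14.2), (21.4, 19.9), (5.0, 5.4)]
frac = within_tolerance(pairs)  # 4 of the 5 hypothetical pairs fall within 1 point
```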
Rep. Farr, Sam [D-CA-20]
2013-02-14
House - 04/08/2013 Referred to the Subcommittee on Crime, Terrorism, Homeland Security, and Investigations. Status: Introduced.
Enrichment of OpenStreetMap Data Completeness with Sidewalk Geometries Using Data Mining Techniques.
Mobasheri, Amin; Huang, Haosheng; Degrossi, Lívia Castro; Zipf, Alexander
2018-02-08
Tailored routing and navigation services utilized by wheelchair users require certain information about sidewalk geometries and their attributes to execute efficiently. Except some minor regions/cities, such detailed information is not present in current versions of crowdsourced mapping databases including OpenStreetMap. CAP4Access European project aimed to use (and enrich) OpenStreetMap for making it fit to the purpose of wheelchair routing. In this respect, this study presents a modified methodology based on data mining techniques for constructing sidewalk geometries using multiple GPS traces collected by wheelchair users during an urban travel experiment. The derived sidewalk geometries can be used to enrich OpenStreetMap to support wheelchair routing. The proposed method was applied to a case study in Heidelberg, Germany. The constructed sidewalk geometries were compared to an official reference dataset ("ground truth dataset"). The case study shows that the constructed sidewalk network overlays with 96% of the official reference dataset. Furthermore, in terms of positional accuracy, a low Root Mean Square Error (RMSE) value (0.93 m) is achieved. The article presents our discussion on the results as well as the conclusion and future research directions.
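The positional-accuracy figure is a standard RMSE over matched point pairs between the constructed and reference geometries. A minimal sketch with hypothetical 2D coordinates (the study itself evaluates full sidewalk networks against the official reference dataset):

```python
import math

def rmse(constructed, reference):
    """Root mean square error over matched 2D point pairs (metres)."""
    sq = [(cx - rx) ** 2 + (cy - ry) ** 2
          for (cx, cy), (rx, ry) in zip(constructed, reference)]
    return math.sqrt(sum(sq) / len(sq))

# hypothetical matched sidewalk vertices: constructed vs. reference
built = [(0.0, 0.0), (1.0, 1.2), (2.0, 2.1)]
truth = [(0.1, 0.0), (1.0, 1.0), (2.0, 2.0)]
err = rmse(built, truth)
```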
Truth in Accounting Act of 2009
Rep. Bachmann, Michele [R-MN-6]
2009-02-10
House - 05/04/2009 Referred to the Subcommittee on Government Management, Organization, and Procurement. Status: Introduced.
Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models
Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...
Keihaninejad, Shiva; Ryan, Natalie S; Malone, Ian B; Modat, Marc; Cash, David; Ridgway, Gerard R; Zhang, Hui; Fox, Nick C; Ourselin, Sebastien
2012-01-01
Tract-based spatial statistics (TBSS) is a popular method for the analysis of diffusion tensor imaging data. TBSS focuses on differences in white matter voxels with high fractional anisotropy (FA), representing the major fibre tracts, through registering all subjects to a common reference and the creation of a FA skeleton. This work considers the effect of choice of reference in the TBSS pipeline, which can be a standard template, an individual subject from the study, a study-specific template or a group-wise average. While TBSS attempts to overcome registration error by searching the neighbourhood perpendicular to the FA skeleton for the voxel with maximum FA, this projection step may not compensate for large registration errors that might occur in the presence of pathology such as atrophy in neurodegenerative diseases. This makes registration performance and choice of reference an important issue. Substantial work in the field of computational anatomy has shown the use of group-wise averages to reduce biases while avoiding the arbitrary selection of a single individual. Here, we demonstrate the impact of the choice of reference on: (a) specificity (b) sensitivity in a simulation study and (c) a real-world comparison of Alzheimer's disease patients to controls. In (a) and (b), simulated deformations and decreases in FA were applied to control subjects to simulate changes of shape and WM integrity similar to what would be seen in AD patients, in order to provide a "ground truth" for evaluating the various methods of TBSS reference. Using a group-wise average atlas as the reference outperformed other references in the TBSS pipeline in all evaluations.
NASA Astrophysics Data System (ADS)
Bowen, S. R.; Nyflot, M. J.; Herrmann, C.; Groh, C. M.; Meyer, J.; Wollenweber, S. D.; Stearns, C. W.; Kinahan, P. E.; Sandison, G. A.
2015-05-01
Effective positron emission tomography / computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [18F]FDG. The lung lesion insert was driven by six different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses, and 2%-2 mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10-20%, treatment planning errors were 5-10%, and treatment delivery errors were 5-30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5-10% in PET/CT imaging, <5% in treatment planning, and <2% in treatment delivery. 
We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning, and RT delivery under a dose painting paradigm is feasible within an integrated respiratory motion phantom workflow. For a limited set of cases, the magnitude of errors was comparable during PET/CT imaging and treatment delivery without motion compensation. Errors were moderately mitigated during PET/CT imaging and significantly mitigated during RT delivery with motion compensation. This dynamic motion phantom end-to-end workflow provides a method for quality assurance of 4D PET/CT-guided radiotherapy, including evaluation of respiratory motion compensation methods during imaging and treatment delivery.
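The reported error magnitudes are percentage deviations of each motion condition relative to the motionless ground-truth measurement. A minimal sketch with hypothetical target-to-background values, not the study's data:

```python
def percent_error(measured, ground_truth):
    """Error of a motion-affected quantity relative to the static
    ground-truth value, in percent."""
    return 100.0 * (measured - ground_truth) / ground_truth

# hypothetical T/B_mean ratios: static ground truth vs. 3D motion-averaged PET
static_tb = 4.0
motion_tb = 3.4
err = percent_error(motion_tb, static_tb)  # ~ -15%, within the quoted 10-20% range
```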
Rep. Murphy, Patrick J. [D-PA-8]
2010-01-27
House - 02/23/2010 Referred to the Subcommittee on Higher Education, Lifelong Learning, and Competitiveness. Tracker: This bill has the status Introduced.
Knieps, Melanie; Granhag, Pär A; Vrij, Aldert
2014-01-01
Prospection is thinking about possible future states of the world. Commitment to perform a future action, commonly referred to as intention, is a specific type of prospection. This knowledge is relevant when trying to assess whether a stated intention is a lie or the truth. An important observation is that thinking of, and committing to, future actions often evokes vivid and detailed mental images. One factor that affects how specifically a person experiences these simulations is location-familiarity. The purpose of this study was to examine to what extent location-familiarity moderates how liars and truth tellers describe a mental image in an investigative interview. Liars were instructed to plan a criminal act and truth tellers were instructed to plan a non-criminal act. Before they could carry out these acts, the participants were intercepted and interviewed about the mental images they may have experienced in this planning phase. Truth tellers told the truth whereas liars used a cover story to mask their criminal intentions. As predicted, the results showed that the truth tellers reported a mental image significantly more often than the liars. If a mental image was reported, the content of the descriptions did not differ between liars and truth tellers. In a post-interview questionnaire, the participants rated the vividness (i.e., content and clarity) of their mental images. The ratings revealed that the truth tellers had experienced their mental images more vividly during the planning phase than the liars. In conclusion, this study indicates that both prototypical and specific representations play a role in prospection. Although location-familiarity did not moderate how liars and truth tellers describe their mental images of the future, this study allows some interesting insights into human future thinking. How these findings can be helpful for distinguishing between true and false intentions will be discussed.
NASA Technical Reports Server (NTRS)
Henderson, R. G.; Thomas, G. S.; Nalepka, R. F.
1975-01-01
Methods of performing signature extension, using LANDSAT-1 data, are explored. The emphasis is on improving the performance and cost-effectiveness of large-area wheat surveys. Two methods, ASC and MASC, were developed. Two further methods, Ratio and RADIFF, previously used with aircraft data, were adapted to and tested on LANDSAT-1 data. An investigation into the sources and nature of between-scene data variations was included. Initial investigations into the selection of training fields without in situ ground truth were undertaken.
Global Ground Truth Data Set with Waveform and Improved Arrival Data
2006-09-29
local network. Figure 4. (a) RCA geometry for the Kilauea Volcano south flank, Hawaii ... Seismic Research Review: Ground-Based Nuclear Explosion Monitoring Technologies. Our next example (Figure 4) is from the south flank of Kilauea Volcano ... status all 56 events, including the two offshore events near the underwater volcano, Loihi, off the coast of Hawaii and more than 20 km outside the
Number, Infinity and Truth: Reflections on the Spiritual in Mathematics.
ERIC Educational Resources Information Center
Rauff, James V.
2000-01-01
Mathematics has had a spiritual aspect throughout its history. Discusses the nature of the interplay between mathematics and spirituality in some traditional and modern contexts. (Contains 29 references.) (ASK)
Equator and High-Latitude Ionosphere-to-Magnetosphere Research
2010-12-04
characterizing the plasma velocity profile in the heated region above HAARP has been clearly established. Specification of D region absorption from Digisonde ... Electron density profile, Ground truth, Cal/Val, Doppler skymap, HAARP, Plasma velocity profile, Ionogram autoscaling, D region absorption
Development of Mine Explosion Ground Truth Smart Sensors
2011-09-01
interest. The two candidates are the GS11-D by Oyo Geospace, used extensively in seismic monitoring of geothermal fields, and the Sensor Nederland SM-6. Figure 4. Our preferred sensors and processor for the GTMS. (a) Sensor Nederland SM-6 geophone with emplacement spike. (b
Toward a Methodology of Death: Deleuze's "Event" as Method for Critical Ethnography
ERIC Educational Resources Information Center
Rodriguez, Sophia
2016-01-01
This article examines how qualitative researchers, specifically ethnographers, might utilize complex philosophical concepts in order to disrupt the normative truth-telling practices embedded in social science research. Drawing on my own research experiences, I move toward a methodology of death (for researcher/researched alike) grounded in…
Global Ground Truth Data Set with Waveform and Arrival Data
2007-07-30
Redonda, Leeward Islands 15/13. Rowe, C.A., C.H. Thurber and R.A. White, Dome growth behavior at Soufriere Hills Volcano, Montserrat, revealed by relocation of volcanic event swarms, 1995-1996, J. Volc
An integrated approach to mapping forest conditions in the Southern Appalachians (North Carolina)
Weimin Xi; Lei Wang; Andrew G Birt; Maria D. Tchakerian; Robert N. Coulson; Kier D. Klepzig
2009-01-01
Accurate and continuous forest cover information is essential for forest management and restoration (SAMAB 1996, Xi et al. 2007). Ground-truthed, spatially explicit forest data, however, are often limited to federally managed land or large-scale commercial forestry operations where forest inventories are regularly collected. Moreover,...
The Role of Science in Behavioral Disorders.
ERIC Educational Resources Information Center
Kauffman, James M.
1999-01-01
A scientific, rule-governed approach to solving problems suggests the following assumptions: we need different rules for different purposes; rules are grounded in values; the origins and applications of rules are often misunderstood; personal experience and idea popularity are unreliable; and all truths are tentative. Each assumption is related to…
Spatially Explicit West Nile Virus Risk Modeling in Santa Clara County, CA
USDA-ARS?s Scientific Manuscript database
A geographic information systems model designed to identify regions of West Nile virus (WNV) transmission risk was tested and calibrated with data collected in Santa Clara County, California. American Crows that died from WNV infection in 2005, provided spatial and temporal ground truth. When the mo...
Spatially explicit West Nile virus risk modeling in Santa Clara County, California
USDA-ARS?s Scientific Manuscript database
A previously created Geographic Information Systems model designed to identify regions of West Nile virus (WNV) transmission risk is tested and calibrated in Santa Clara County, California. American Crows that died from WNV infection in 2005 provide the spatial and temporal ground truth. Model param...
Near infrared color aerial photography (-1:7200) of Yaquina Bay, Oregon, flown at minus tides during summer months of 1997 was used to produce digital stereo ortho-photographs covering tidally exposed eelgrass habitat. GIS analysis coupled with GPS positioning of ground-truth da...
76 FR 36934 - Endangered Species; Marine Mammals; Receipt of Applications for Permit
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-23
... quantitative information or studies; and (2) Those that include citations to, and analyses of, the applicable... bears (Ursus maritimus) by adjusting the video camera equipment and conducting aerial surveys using FLIR (forward looking infrared) and ground-truth surveys with snowmobiles near dens for the purpose of...
Besson, Florent L; Henry, Théophraste; Meyer, Céline; Chevance, Virgile; Roblot, Victoire; Blanchet, Elise; Arnould, Victor; Grimon, Gilles; Chekroun, Malika; Mabille, Laurence; Parent, Florence; Seferian, Andrei; Bulifon, Sophie; Montani, David; Humbert, Marc; Chaumet-Riffaud, Philippe; Lebon, Vincent; Durand, Emmanuel
2018-04-03
Purpose To assess the performance of the ITK-SNAP software for fluorodeoxyglucose (FDG) positron emission tomography (PET) segmentation of complex-shaped lung tumors compared with an optimized, expert-based manual reference standard. Materials and Methods Seventy-six FDG PET images of thoracic lesions were retrospectively segmented by using ITK-SNAP software. Each tumor was manually segmented by six raters to generate an optimized reference standard by using the simultaneous truth and performance level estimate algorithm. Four raters segmented 76 FDG PET images of lung tumors twice by using the ITK-SNAP active contour algorithm. Accuracy of the ITK-SNAP procedure was assessed by using the Dice coefficient and the Hausdorff metric. Interrater and intrarater reliability were estimated by using intraclass correlation coefficients of output volumes. Finally, the ITK-SNAP procedure was compared with currently recommended PET tumor delineation methods based on thresholding at 41% volume of interest (VOI41) and 50% VOI (VOI50) of the tumor's maximal metabolism intensity. Results Accuracy estimates for the ITK-SNAP procedure indicated a Dice coefficient of 0.83 (95% confidence interval: 0.77, 0.89) and a Hausdorff distance of 12.6 mm (95% confidence interval: 9.82, 15.32). Interrater reliability was an intraclass correlation coefficient of 0.94 (95% confidence interval: 0.91, 0.96). The intrarater reliabilities were intraclass correlation coefficients above 0.97. Finally, VOI41 and VOI50 accuracy metrics were as follows: Dice coefficient, 0.48 (95% confidence interval: 0.44, 0.51) and 0.34 (95% confidence interval: 0.30, 0.38), respectively, and Hausdorff distance, 25.6 mm (95% confidence interval: 21.7, 31.4) and 31.3 mm (95% confidence interval: 26.8, 38.4), respectively. Conclusion ITK-SNAP is accurate and reliable for active-contour-based segmentation of heterogeneous thoracic PET tumors.
ITK-SNAP surpassed the recommended PET methods compared with ground truth manual segmentation. © RSNA, 2018.
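The Dice overlap reported above is a standard segmentation accuracy measure and is simple to compute from two binary masks. A minimal sketch for illustration only (not the study's implementation):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two boolean segmentation masks:
    2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

# Toy 1-D "masks": each marks 4 voxels, 3 of them shared
seg = [1, 1, 1, 1, 0, 0]
ref = [0, 1, 1, 1, 1, 0]
print(dice_coefficient(seg, ref))  # 2*3/(4+4) = 0.75
```

The same function works unchanged on 3-D volumes, since NumPy's logical operations and sums are dimension-agnostic.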
BusyBee Web: metagenomic data analysis by bootstrapped supervised binning and annotation
Kiefer, Christina; Fehlmann, Tobias; Backes, Christina
2017-01-01
Abstract Metagenomics-based studies of mixed microbial communities are impacting biotechnology, life sciences and medicine. Computational binning of metagenomic data is a powerful approach for the culture-independent recovery of population-resolved genomic sequences, i.e. from individual or closely related, constituent microorganisms. Existing binning solutions often require a priori characterized reference genomes and/or dedicated compute resources. Extending currently available reference-independent binning tools, we developed the BusyBee Web server for the automated deconvolution of metagenomic data into population-level genomic bins using assembled contigs (Illumina) or long reads (Pacific Biosciences, Oxford Nanopore Technologies). A reversible compression step as well as bootstrapped supervised binning enable quick turnaround times. The binning results are represented in interactive 2D scatterplots. Moreover, bin quality estimates, taxonomic annotations and annotations of antibiotic resistance genes are computed and visualized. Ground truth-based benchmarks of BusyBee Web demonstrate comparably high performance to state-of-the-art binning solutions for assembled contigs and markedly improved performance for long reads (median F1 scores: 70.02–95.21%). Furthermore, the applicability to real-world metagenomic datasets is shown. In conclusion, our reference-independent approach automatically bins assembled contigs or long reads, exhibits high sensitivity and precision, enables intuitive inspection of the results, and only requires FASTA-formatted input. The web-based application is freely accessible at: https://ccb-microbe.cs.uni-saarland.de/busybee. PMID:28472498
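The F1 scores quoted for binning quality combine precision and recall. A minimal sketch of the metric itself (how the benchmark tallies correct, wrong, and missed assignments is not shown here):

```python
def f1_score(tp, fp, fn):
    """F1 as the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# E.g. 80 sequences correctly binned, 10 assigned to the wrong bin, 20 missed:
print(round(f1_score(80, 10, 20), 4))  # 0.8421
```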
Truth in Employment Act of 2009
Sen. DeMint, Jim [R-SC
2009-06-10
Senate - 06/10/2009 Read twice and referred to the Committee on Health, Education, Labor, and Pensions. Tracker: This bill has the status Introduced.
NASA Astrophysics Data System (ADS)
Trunk, Laura; Bernard, Alain
2008-12-01
A two-channel or split-window algorithm designed to correct for atmospheric conditions was applied to thermal images taken by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) of Lake Yugama on Kusatsu-Shirane volcano in Japan in order to measure the temperature of its crater lake. These temperature calculations were validated using lake water temperatures that were collected on the ground. Overall, the agreement between the temperatures calculated using the split-window method and ground truth is quite good, typically ± 1.5 °C for cloud-free images. Data from fieldwork undertaken in the summer of 2004 at Kusatsu-Shirane allow a comparison of ground-truth data with the radiant temperatures measured using ASTER imagery. Further images were analyzed of Ruapehu, Poás, Kawah Ijen, and Copahué volcanoes to acquire time-series of lake temperatures. A total of 64 images of these 4 volcanoes covering a wide range of geographical locations and climates were analyzed. Results of the split-window algorithm applied to ASTER images are reliable for monitoring thermal changes in active volcanic lakes. These temperature data, when considered in conjunction with traditional volcano monitoring techniques, lead to a better understanding of whether and how thermal changes in crater lakes aid in eruption forecasting.
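The two-channel (split-window) idea exploits differential atmospheric absorption between the roughly 11 um and 12 um thermal bands. A schematic form with purely illustrative coefficients (real coefficients are fitted to water vapor content, emissivity, and viewing angle, and are not taken from this study):

```python
def split_window_temp(t11_k, t12_k, a=2.1, b=0.5):
    """Schematic split-window surface temperature (kelvin):
    Ts = T11 + a * (T11 - T12) + b.
    t11_k, t12_k: brightness temperatures in the ~11 um and ~12 um bands.
    a, b: illustrative atmospheric-correction coefficients only."""
    return t11_k + a * (t11_k - t12_k) + b

# A moister atmosphere widens the 11-12 um split and raises the correction:
print(split_window_temp(295.0, 293.5))  # 295 + 2.1*1.5 + 0.5 = 298.65
```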
NASA Technical Reports Server (NTRS)
Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John
2016-01-01
In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to lined up. Because of the shape of a curve plotting input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size image processing of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
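The subpixel registration at issue can be illustrated with a generic 1-D sketch: locate the integer peak of the cross-correlation, then refine it by fitting a parabola through the peak and its two neighbours. This is a common refinement, not the GOES-R IVV implementation; the small systematic bias of such estimators is the kind of stair-step effect described above.

```python
import numpy as np

def subpixel_shift_1d(ref, img):
    """Estimate the shift of img relative to ref (same length) to
    subpixel precision via cross-correlation plus parabolic refinement."""
    n = len(ref)
    corr = np.correlate(img - img.mean(), ref - ref.mean(), mode="full")
    k = int(np.argmax(corr))
    frac = 0.0
    if 0 < k < len(corr) - 1:
        c_m, c_0, c_p = corr[k - 1], corr[k], corr[k + 1]
        denom = c_m - 2.0 * c_0 + c_p
        if denom != 0.0:
            frac = 0.5 * (c_m - c_p) / denom  # parabola vertex offset
    return (k - (n - 1)) + frac

# A smooth feature shifted by a non-integer amount (3.4 pixels):
x = np.arange(200, dtype=float)
ref = np.exp(-((x - 100.0) / 10.0) ** 2)
img = np.exp(-((x - 103.4) / 10.0) ** 2)
print(subpixel_shift_1d(ref, img))  # close to 3.4, with a small residual bias
```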
NASA Technical Reports Server (NTRS)
Rust, W. D.; Macgorman, D. R.
1985-01-01
During FY-85, researchers conducted a field program and analyzed data. The field program incorporated coordinated measurements made with a NASA U2. Results include the following: (1) ground truth measurements of lightning for comparison with those obtained by the U2; (2) analysis of dual-Doppler radar and dual-VHF lightning mapping data from a supercell storm; (3) analysis of synoptic conditions during three simultaneous storm systems on 13 May 1983, when unusually large numbers of positive cloud-to-ground (+CG) flashes occurred; (4) analysis of extremely low frequency (ELF) waveforms; and (5) an assessment of a cloud-to-ground strike location system using a combination of mobile laboratory and fixed-base TV video data.
NASA Astrophysics Data System (ADS)
Yeom, Jong-Min; Han, Kyung-Soo; Kim, Jae-Jin
2012-05-01
Solar surface insolation (SSI) represents how much solar radiance reaches the Earth's surface in a specified area and is an important parameter in various fields such as surface energy research, meteorology, and climate change. This study calculates insolation using Multi-functional Transport Satellite (MTSAT-1R) data with a simplified cloud factor over Northeast Asia. For SSI retrieval from the geostationary satellite data, the physical model of Kawamura is modified to improve insolation estimation by considering various atmospheric constituents, such as Rayleigh scattering, water vapor, ozone, aerosols, and clouds. For more accurate atmospheric parameterization, satellite-based atmospheric constituents are used instead of constant values when estimating insolation. Cloud effects are a key problem in insolation estimation because of their complicated optical characteristics and high temporal and spatial variation. The accuracy of insolation data from satellites depends on how well cloud attenuation as a function of geostationary channels and angle can be inferred. This study uses a simplified cloud factor that depends on the reflectance and solar zenith angle. Empirical criteria to select reference data for fitting to the ground station data are applied to suggest simplified cloud factor methods. Insolation estimated using the cloud factor is compared with results of the unmodified physical model and with observations by ground-based pyranometers located in the Korean peninsula. The modified model results show far better agreement with ground truth data compared to estimates using the conventional method under overcast conditions.
NASA Astrophysics Data System (ADS)
Marinelli, Valerio; Cremonese, Edoardo; Diémoz, Henri; Siani, Anna Maria
2017-04-01
The European Space Agency (ESA) is expending notable effort to put into operation a new generation of advanced Earth-observation satellites, the Sentinel constellation. In particular, Sentinel-2 hosts an instrument payload consisting mainly of a MultiSpectral Instrument (MSI) imaging sensor, capable of acquiring high-resolution imagery of Earth-surface and atmospheric reflectance at selected spectral bands, hence providing measurements complementary to ground-based radiometric stations. The latter can provide reference data for validating the estimates from spaceborne instruments such as Sentinel-2A (operating since October 2015), whose aerosol optical thickness (AOT) values can be obtained by correcting SWIR (2190 nm) reflectance with an improved dense dark vegetation (DDV) algorithm. In the northwestern European Alps (Saint-Christophe, 45.74°N, 7.36°E), a Prede POM-02 sun/sky aerosol photometer has been operated for several years within the EuroSkyRad network by the Environmental Protection Agency of Aosta Valley (ARPA Valle d'Aosta), gathering direct sun and diffuse sky radiance for retrieving columnar aerosol optical properties. This aerosol optical depth (AOD) dataset represents an optimal ground truth for the corresponding Sentinel-2 estimates obtained with the Sen2cor processor in the challenging environment of the Alps (complex topography, snow-covered surfaces). We show the deviations between the two measurement series and propose some corrections to enhance the overall accuracy of the satellite estimates.
ERTS-1 imagery and native plant distributions
NASA Technical Reports Server (NTRS)
Musick, H. B.; Mcginnies, W.; Haase, E.; Lepley, L. K.
1974-01-01
A method is developed for using ERTS spectral signature data to determine plant community distribution and phenology without resolving individual plants. An Exotech ERTS radiometer was used near ground level to obtain spectral signatures for a desert plant community, including two shrub species, ground covered with live annuals in April and dead ones in June, and bare ground. It is shown that comparisons of scene types can be made when spectral signatures are expressed as a ratio of red reflectivity to IR reflectivity or when they are plotted as red reflectivity vs. IR reflectivity, in which case the signature clusters of each component are more distinct. A method for correcting and converting the ERTS radiance values to reflectivity values for comparison with ground truth data is appended.
Automatic Generation of High Quality DSM Based on IRS-P5 Cartosat-1 Stereo Data
NASA Astrophysics Data System (ADS)
d'Angelo, Pablo; Uttenthaler, Andreas; Carl, Sebastian; Barner, Frithjof; Reinartz, Peter
2010-12-01
IRS-P5 Cartosat-1 high resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on IRS-P5 Cartosat-1 imagery is presented, with an emphasis on automated processing and product quality. The proposed system processes IRS-P5 level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The described method uses an RPC correction based on DSM alignment instead of using reference images with a lower lateral accuracy; this results in improved geolocation of the DSMs and orthoimages. Following RPC correction, highly detailed DSMs with 5 m grid spacing are derived using Semiglobal Matching. The proposed method is part of an operational Cartosat-1 processor for the generation of a high resolution DSM. Evaluation of 18 scenes against independent ground truth measurements indicates a mean lateral error (CE90) of 6.7 meters and a mean vertical accuracy (LE90) of 5.1 meters.
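CE90 and LE90 summarize horizontal and vertical errors at 90% confidence. Under one common empirical reading (the 90th percentile of the radial horizontal error and of the absolute vertical error against check points; the paper's exact estimator may differ), they can be sketched as:

```python
import numpy as np

def ce90_le90(dx, dy, dz):
    """90th percentiles of horizontal radial error and absolute
    vertical error from per-check-point residuals (metres)."""
    horiz = np.hypot(np.asarray(dx, float), np.asarray(dy, float))
    vert = np.abs(np.asarray(dz, float))
    return float(np.percentile(horiz, 90)), float(np.percentile(vert, 90))

# Toy check-point residuals against ground truth:
ce90, le90 = ce90_le90([3.0, 0.0, 4.0], [4.0, 0.0, 3.0], [1.0, -2.0, 5.0])
print(ce90, le90)  # CE90 = 5.0 m, LE90 = 4.4 m with NumPy's default interpolation
```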
Imaging complex objects using learning tomography
NASA Astrophysics Data System (ADS)
Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri
2018-02-01
Optical diffraction tomography (ODT) can be described using the scattering process through an inhomogeneous medium. An inherent nonlinearity relates the scattering medium to the scattered field due to multiple scattering. Multiple scattering is often assumed to be negligible in weakly scattering media. This assumption becomes invalid as the sample gets more complex, resulting in distorted image reconstructions; the issue becomes critical when imaging a complex sample. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT, combined with an iterative reconstruction scheme. The iterative error reduction scheme and the multi-layer structure of BPM are similar to neural networks. Therefore, we refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data of a biological cell.
Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M
2010-01-01
This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance.
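The stereo quantization error analyzed above follows, to first order, from differentiating the pinhole stereo relation Z = f·B/d: a disparity error δd maps to a depth error δZ ≈ Z²·δd/(f·B), so uncertainty grows quadratically with range. A sketch with illustrative rig parameters (not the paper's actual sensor setup):

```python
def stereo_depth_error(z_m, baseline_m, focal_px, disparity_err_px=1.0):
    """First-order depth uncertainty of a stereo rig:
    dZ = Z^2 * dd / (f * B), with Z in metres, baseline B in metres,
    focal length f in pixels, and disparity error dd in pixels."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

# Illustrative rig: 30 cm baseline, 800 px focal length, 1 px disparity error
print(stereo_depth_error(20.0, 0.30, 800.0))  # about 1.67 m of error at 20 m range
```

The quadratic growth is why pedestrian localization degrades quickly with distance, which in turn drives the choice of baseline and focal length for a given application range.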
Helicopter Approach Capability Using the Differential Global Positioning System
NASA Technical Reports Server (NTRS)
Kaufmann, David N.
1994-01-01
The results of flight tests to determine the feasibility of using the Global Positioning System (GPS) in the Differential mode (DGPS) to provide high accuracy, precision navigation and guidance for helicopter approaches to landing are presented. The airborne DGPS receiver and associated equipment is installed in a NASA UH-60 Black Hawk helicopter. The ground-based DGPS reference receiver is located at a surveyed test site and is equipped with a real-time VHF data link to transmit correction information to the airborne DGPS receiver. The corrected airborne DGPS information, together with the preset approach geometry, is used to calculate guidance commands which are sent to the aircraft's approach guidance instruments. The use of DGPS derived guidance for helicopter approaches to landing is evaluated by comparing the DGPS data with the laser tracker truth data. The errors indicate that the helicopter position based on DGPS guidance satisfies the International Civil Aviation Organization (ICAO) Category 1 (CAT 1) lateral and vertical navigational accuracy requirements.
The MicrOmega Investigation Onboard Hayabusa2
NASA Astrophysics Data System (ADS)
Bibring, J.-P.; Hamm, V.; Langevin, Y.; Pilorget, C.; Arondel, A.; Bouzit, M.; Chaigneau, M.; Crane, B.; Darié, A.; Evesque, C.; Hansotte, J.; Gardien, V.; Gonnod, L.; Leclech, J.-C.; Meslier, L.; Redon, T.; Tamiatto, C.; Tosti, S.; Thoores, N.
2017-07-01
MicrOmega is a near-IR hyperspectral microscope designed to characterize in situ the texture and composition of the surface materials of the Hayabusa2 target asteroid. MicrOmega is implemented within the MASCOT lander (Ho et al. in Space Sci. Rev., 2016, this issue, doi:10.1007/s11214-016-0251-6). The spectral range (0.99-3.65 μm) and the spectral sampling (20 cm⁻¹) of MicrOmega have been chosen to allow the identification of most potential constituent minerals, ices and organics, within each 25 μm pixel of the 3.2 × 3.2 mm² FOV. Such an unprecedented characterization will (1) enable the identification of most major and minor phases, including the potential organic phases, and ascribe their mineralogical context, as a critical set of clues to decipher the origin and evolution of this primitive body, and (2) provide the ground truth for the orbital measurements as well as a reference for the analyses later performed on returned samples.
Accuracy Analysis of a Low-Cost Platform for Positioning and Navigation
NASA Astrophysics Data System (ADS)
Hofmann, S.; Kuntzsch, C.; Schulze, M. J.; Eggert, D.; Sester, M.
2012-07-01
This paper presents an accuracy analysis of a platform based on low-cost components for landmark-based navigation, intended for research and teaching purposes. The proposed platform includes a LEGO MINDSTORMS NXT 2.0 kit, an Android-based smartphone, and a compact Hokuyo URG-04LX laser scanner. The robot is used in a small indoor environment where GNSS is not available; a landmark map was therefore produced in advance, with the landmark positions provided to the robot. All steps of the procedure to set up the platform are shown. The main focus of this paper is the reachable positioning accuracy, which was analyzed in this type of scenario depending on the accuracy of the reference landmarks and the directional and distance measuring accuracy of the laser scanner. Several experiments were carried out, demonstrating the practically achievable positioning accuracy. To evaluate the accuracy, ground truth was acquired using a total station. These results are compared to the theoretically achievable accuracies and the laser scanner's characteristics.
NASA Technical Reports Server (NTRS)
1973-01-01
Evaluation of low altitude oblique photography obtained by hand-held cameras was useful in determining specifications of operational mission requirements for conventional smaller-scaled vertical photography. Remote sensing techniques were used to assess the rapid destruction of marsh areas at Pointe Mouillee. In an estuarian environment where shoreline features change yearly, there is a need for revision in existing area maps. A land cover inventory, mapped from aerial photography, provided essential data necessary for determining adjacent lands suitable for marshland development. To quantitatively assess the wetlands environment, a detailed inventory of vegetative communities (19 categories) was made using color infrared photography and intensive ground truth. A carefully selected and well laid-out transect was found to be a key asset to photointerpretation and to the analysis of vegetative conditions. Transect data provided the interpreter with locally representative areas of various vegetative types. This facilitated development of a photointerpretation key. Additional information on vegetative conditions in the area was also obtained by evaluating the transect data.
NASA Astrophysics Data System (ADS)
Stoker, C.; Dunagan, S.; Stevens, T.; Amils, R.; Gómez-Elvira, J.; Fernández, D.; Hall, J.; Lynch, K.; Cannon, H.; Zavaleta, J.; Glass, B.; Lemke, L.
2004-03-01
The results of a drilling experiment to search for a subsurface biosphere in a pyritic mineral deposit at Rio Tinto, Spain, are described. The experiment provides ground truth for a simulation of a Mars drilling mission to search for subsurface life.
Forest statistics for Arkansas counties - 1979
Renewable Resources Evaluation Research Work Unit
1979-01-01
This report tabulates information from a new forest survey of Arkansas completed in 1979 by the Renewable Resources Evaluation Research Unit of the Southern Forest Experiment Station. Forest area was estimated from aerial photos with an adjustment for ground truth at selected locations. Sample plots were systematically established at three-mile intervals using a grid...
Active microwave water equivalence
NASA Technical Reports Server (NTRS)
Boyne, H. S.; Ellerbruch, D. A.
1980-01-01
Measurements of water equivalence using an active FM-CW microwave system were conducted over the past three years at various sites in Colorado, Wyoming, and California. The measurement method is described. Measurements of water equivalence and stratigraphy are compared with ground truth. A comparison of microwave, federal sampler, and snow pillow measurements at three sites in Colorado is described.
Medium Spatial Resolution Satellite Characterization
NASA Technical Reports Server (NTRS)
Stensaas, Greg
2007-01-01
This project provides characterization and calibration of aerial and satellite systems in support of quality acquisition and understanding of remote sensing data, and verifies and validates the associated data products with respect to ground and atmospheric truth so that accurate value-added science can be performed. The project also provides assessment of new remote sensing technologies.
EREP geothermal. [northern California
NASA Technical Reports Server (NTRS)
Johnston, E. W. (Principal Investigator); Dunklee, A. L.; Wychgram, D. C.
1974-01-01
The author has identified the following significant results. A reasonably good agreement was found for the radiometric temperatures calculated from the ground truth data and the radiometric temperatures measured by the S192 scanner. This study showed that the S192 scanner data could be used to create good thermal images, particularly with the x-5 detector array.
The Use of Narrative Therapy with Clients Diagnosed with Bipolar Disorder
ERIC Educational Resources Information Center
Ngazimbi, Evadne E.; Lambie, Glenn W.; Shillingford, M. Ann
2008-01-01
Clients diagnosed with bipolar disorder often suffer from mood instability, and research suggests that these clients need both counseling services and pharmacotherapy. Narrative therapy is a social constructionist approach grounded on the premise that there is no single "truth"; individuals may create new meanings and retell their stories to…
ERIC Educational Resources Information Center
Staver, John R.
2010-01-01
Science and religion exhibit multiple relationships as ways of knowing. These connections have been characterized as cousinly, mutually respectful, non-overlapping, competitive, proximate-ultimate, dominant-subordinate, and opposing-conflicting. Some of these ties create stress, and tension between science and religion represents a significant…
Over the past three decades, a number of researchers in the fields of environmental justice (EJ) and environmental public health have highlighted the existence of regional and local scale differences in exposure to air pollution, as well as calculated health risk and impacts of a...
Urban vacant land typology: A tool for managing urban vacant land
Gunwoo Kim; Patrick A. Miller; David J. Nowak
2018-01-01
A typology of urban vacant land was developed, using Roanoke, Virginia, as the study area. A comprehensive literature review, field measurements and observations (including photographs), and a quantitative approach to assessing vacant-land forest structure and values (i-Tree Eco sampling) were utilized, along with aerial photo interpretation and ground-truthing...
Shakespeare: Finding and Teaching the Comic Vision.
ERIC Educational Resources Information Center
Lasser, Michael L.
1969-01-01
Comedy is the middle ground upon which the absurd and the serious meet. Concerned with illuminating pain, human imperfection, and man's failure to measure up to his own or the world's concept of perfection, comedy provides "an escape, not from truth but from despair." If tragedy says that some ideals are worth dying for, comedy asserts…
USDA-ARS?s Scientific Manuscript database
Soil moisture is an intrinsic state variable that varies considerably in space and time. Although soil moisture is highly variable, repeated measurements of soil moisture at the field or small watershed scale can often reveal certain locations as being temporally stable and representative of the are...
NASA Technical Reports Server (NTRS)
1971-01-01
Revised Skylab spacecraft, experiments, and mission planning information is presented for the Earth Resources Experiment Package (EREP) users. The major hardware elements and the medical, scientific, engineering, technology and earth resources experiments are described. Ground truth measurements and EREP data handling procedures are discussed. The mission profile, flight planning, crew activities, and aircraft support are also outlined.
BLOND, a building-level office environment dataset of typical electrical appliances.
Kriechbaumer, Thomas; Jacobsen, Hans-Arno
2018-03-27
Energy metering has gained popularity as conventional meters are replaced by electronic smart meters that promise energy savings and higher comfort levels for occupants. Achieving these goals requires a deeper understanding of consumption patterns to reduce the energy footprint: load profile forecasting, power disaggregation, appliance identification, startup event detection, etc. Publicly available datasets are used to test, verify, and benchmark possible solutions to these problems. For this purpose, we present the BLOND dataset: continuous energy measurements of a typical office environment at high sampling rates with common appliances and load profiles. We provide voltage and current readings for aggregated circuits and matching fully-labeled ground truth data (individual appliance measurements). The dataset contains 53 appliances (16 classes) in a 3-phase power grid. BLOND-50 contains 213 days of measurements sampled at 50kSps (aggregate) and 6.4kSps (individual appliances). BLOND-250 consists of the same setup: 50 days, 250kSps (aggregate), 50kSps (individual appliances). To our knowledge, these are the longest continuous measurements at such high sampling rates with fully-labeled ground truth.
Donovan, Terrence J.; Termain, Patricia A.; Henry, Mitchell E.
1979-01-01
The Cement oil field, Oklahoma, was a test site for an experiment designed to evaluate LANDSAT's capability to detect an alteration zone in surface rocks caused by hydrocarbon microseepage. Loss of iron and impregnation of sandstone by carbonate cements and replacement of gypsum by calcite are the major alteration phenomena at Cement. The bedrock alterations are partially masked by unaltered overlying beds, thick soils, and dense natural and cultivated vegetation. Interpreters biased by detailed ground truth were able to map the alteration zone subjectively using a magnified, filtered, and sinusoidally stretched LANDSAT composite image; other interpreters, unbiased by ground truth data, could not duplicate that interpretation. Similar techniques were applied at a secondary test site (Garza oil field, Texas), where similar alterations in surface rocks occur. Enhanced LANDSAT images resolved the alteration zone to a biased interpreter, and some individual altered outcrops could be mapped using higher resolution SKYLAB color and conventional black and white aerial photographs, suggesting repeat experiments with LANDSAT C and D.
Automatic classification techniques for type of sediment map from multibeam sonar data
NASA Astrophysics Data System (ADS)
Zakariya, R.; Abdullah, M. A.; Che Hasan, R.; Khalil, I.
2018-02-01
Sediment maps provide important information for applications such as oil drilling and environmental and pollution studies. A sediment-mapping study was conducted at a natural reef (rock) in Pulau Payar using Sound Navigation and Ranging (SONAR) technology, specifically a Multibeam Echosounder R2-Sonic. This study aims to determine sediment type from the backscatter and bathymetry data obtained by the multibeam echosounder. Ground truth data were used to verify the classification produced. Ground truth samples were analyzed by particle size analysis (PSA) and dry sieving; different analyses were required because of the different sizes of the sediment samples obtained, with finer sediment analyzed on a CILAS PSA instrument and coarser sediment analyzed by sieving. The acquired multibeam data, including backscatter strength and bathymetry, were processed using QINSy, Qimera, and ArcGIS. This study shows the capability of multibeam data to differentiate four sediment types: i) very coarse sand, ii) coarse sand, iii) very coarse silt, and iv) coarse silt. The accuracy was reported as 92.31% overall accuracy with a 0.88 kappa coefficient.
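Overall accuracy and the kappa coefficient reported above are both standard summaries of a classification confusion matrix; kappa corrects the observed agreement for agreement expected by chance. A minimal sketch on a hypothetical 4-class confusion matrix (not the study's data):

```python
import numpy as np

def overall_accuracy(cm):
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

def kappa(cm):
    # Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # chance agreement from marginals
    return (po - pe) / (1.0 - pe)

# Hypothetical confusion matrix (rows = ground truth, cols = predicted class)
cm = [[12, 1, 0, 0],
      [ 1, 10, 1, 0],
      [ 0, 1, 9, 1],
      [ 0, 0, 1, 8]]
print(round(overall_accuracy(cm), 4), round(kappa(cm), 4))
```

Kappa is always at most the overall accuracy; the gap between them grows as the class distribution becomes more imbalanced.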
Extraction of Capillary Non-perfusion from Fundus Fluorescein Angiogram
NASA Astrophysics Data System (ADS)
Sivaswamy, Jayanthi; Agarwal, Amit; Chawla, Mayank; Rani, Alka; Das, Taraprasad
Capillary Non-Perfusion (CNP) is a condition in diabetic retinopathy where blood ceases to flow to certain parts of the retina, potentially leading to blindness. This paper presents a solution for automatically detecting and segmenting CNP regions from fundus fluorescein angiograms (FFAs). CNPs are modelled as valleys, and a novel technique based on an extrema pyramid is presented for trough-based valley detection. The obtained valley points are used to segment the desired CNP regions by employing a variance-based region growing scheme. The proposed algorithm has been tested on 40 images and validated against expert-marked ground truth. In this paper, we present results of testing and validation of our algorithm against ground truth and compare the segmentation performance against two other methods. The performance of the proposed algorithm is presented as a receiver operating characteristic (ROC) curve. The area under this curve is 0.842 and the distance of the ROC from the ideal point (0,1) is 0.31. The proposed method for CNP segmentation was found to outperform the watershed [1] and heat-flow [2] based methods.
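The two figures of merit quoted above, area under the ROC curve and distance from the ideal point (0,1), can both be computed from (FPR, TPR) operating points. A minimal sketch (toy data, not the paper's):

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    # Sweep detection thresholds; labels: 1 = CNP pixel, 0 = background.
    scores = np.asarray(scores); labels = np.asarray(labels)
    pts = []
    for t in thresholds:
        pred = scores >= t
        tpr = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        fpr = (pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        pts.append((fpr, tpr))
    return sorted(pts)

def auc(points):
    # Trapezoidal area under the ROC polyline (points sorted by FPR).
    fprs, tprs = zip(*points)
    return sum((fprs[i + 1] - fprs[i]) * (tprs[i] + tprs[i + 1]) / 2.0
               for i in range(len(fprs) - 1))

def distance_to_ideal(fpr, tpr):
    # Euclidean distance from an ROC operating point to the ideal corner (0, 1).
    return float(np.hypot(fpr, 1.0 - tpr))

print(distance_to_ideal(0.1, 0.8))  # smaller is better; 0 means a perfect operating point
```

An AUC of 0.842 and a distance of 0.31 thus place the method well above chance (AUC 0.5, distance ~0.71 at the diagonal's closest point).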
Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy
Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki
2013-01-01
We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787
Sevenster, M; Buurman, J; Liu, P; Peters, J F; Chang, P J
2015-01-01
Accumulating quantitative outcome parameters may contribute to constructing a healthcare organization in which outcomes of clinical procedures are reproducible and predictable. In imaging studies, measurements are the principal category of quantitative parameters. The purpose of this work is to develop and evaluate two natural language processing engines that extract finding and organ measurements from narrative radiology reports and to categorize extracted measurements by their "temporality". The measurement extraction engine is developed as a set of regular expressions and was evaluated against a manually created ground truth. Automated categorization of measurement temporality is defined as a machine learning problem. A ground truth was manually developed based on a corpus of radiology reports. A maximum entropy model was created using features that characterize the measurement itself and its narrative context. The model was evaluated in a ten-fold cross-validation protocol. The measurement extraction engine has precision 0.994 and recall 0.991. Accuracy of the measurement classification engine is 0.960. The work contributes to machine understanding of radiology reports and may find application in software that processes medical data.
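The abstract does not give the authors' actual regular expressions, but the idea of regex-based measurement extraction can be sketched with a hypothetical pattern for 1-D, 2-D, and 3-D measurements such as "5 mm", "3.2 x 1.4 cm" (the pattern and helper below are ours, for illustration only):

```python
import re

# Hypothetical pattern: a number, optionally followed by up to two "x number"
# dimension terms, then a length unit. Real report text needs far more cases
# (unit variants, ranges, OCR noise) than this sketch handles.
MEASUREMENT = re.compile(
    r"(?P<dims>\d+(?:\.\d+)?(?:\s*x\s*\d+(?:\.\d+)?){0,2})\s*(?P<unit>mm|cm)",
    re.IGNORECASE,
)

def extract_measurements(report):
    # Return (dimension string, normalized unit) pairs in report order.
    return [(m.group("dims"), m.group("unit").lower())
            for m in MEASUREMENT.finditer(report)]

text = "Lesion measuring 3.2 x 1.4 cm, previously 28 mm."
print(extract_measurements(text))
```

The "previously 28 mm" phrase also hints at why the temporality classifier is needed: the same pattern matches both current and prior measurements, and only narrative context distinguishes them.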
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1990-01-01
The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.
Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.
2017-09-01
It is well known that passive image correction of turbulence distortions often involves using geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive optic correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach is possible to obtain accurate and highly detailed images through turbulent media. The processing algorithm also requires far fewer iterations than conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.
Automatic Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul
2011-01-01
Diabetic macular edema (DME) is a common vision threatening complication of diabetic retinopathy. In a large scale screening environment DME can be assessed by detecting exudates (a type of bright lesions) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on the MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing. Our algorithm is robust to segmentation uncertainties, does not need ground truth at the lesion level, and is very fast, generating a diagnosis in an average of 4.4 seconds per image on a 2.6 GHz platform with an unoptimised Matlab implementation.
A ground truth based comparative study on clustering of gene expression data.
Zhu, Yitan; Wang, Zuyi; Miller, David J; Clarke, Robert; Xuan, Jianhua; Hoffman, Eric P; Wang, Yue
2008-05-01
Given the variety of available clustering methods for gene expression data analysis, it is important to develop an appropriate and rigorous validation scheme to assess the performance and limitations of the most widely used clustering algorithms. In this paper, we present a ground truth based comparative study on the functionality, accuracy, and stability of five data clustering methods, namely hierarchical clustering, K-means clustering, self-organizing maps, standard finite normal mixture fitting, and a caBIG toolkit (VIsual Statistical Data Analyzer--VISDA), tested on sample clustering of seven published microarray gene expression datasets and one synthetic dataset. We examined the performance of these algorithms in both data-sufficient and data-insufficient cases using quantitative performance measures, including cluster number detection accuracy and mean and standard deviation of partition accuracy. The experimental results showed that VISDA, an interactive coarse-to-fine maximum likelihood fitting algorithm, is a solid performer on most of the datasets, while K-means clustering and self-organizing maps optimized by the mean squared compactness criterion generally produce more stable solutions than the other methods.
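The "partition accuracy" used above requires mapping unlabeled clusters onto ground-truth classes before scoring. A minimal sketch of one common simplification, mapping each cluster to its majority ground-truth class (an optimal assignment would instead use e.g. the Hungarian algorithm); the cluster labels below are hypothetical:

```python
import numpy as np

def partition_accuracy(true_labels, cluster_labels):
    # Map each cluster to its majority ground-truth class and score the result.
    true_labels = np.asarray(true_labels)
    cluster_labels = np.asarray(cluster_labels)
    correct = 0
    for c in np.unique(cluster_labels):
        members = true_labels[cluster_labels == c]
        correct += np.bincount(members).max()  # hits under majority mapping
    return correct / true_labels.size

# Hypothetical cluster assignments from two runs against known sample classes
truth = [0, 0, 1, 1, 2, 2]
runs = [[0, 0, 1, 1, 2, 2],   # perfect partition
        [0, 0, 0, 1, 2, 2]]   # one sample mis-clustered
accs = [partition_accuracy(truth, r) for r in runs]
print(float(np.mean(accs)), float(np.std(accs)))
```

Repeating this over many runs gives exactly the mean and standard deviation of partition accuracy that the study uses to compare stability across clustering methods.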
Application of Landsat Thematic Mapper data for coastal thermal plume analysis at Diablo Canyon
NASA Technical Reports Server (NTRS)
Gibbons, D. E.; Wukelic, G. E.; Leighton, J. P.; Doyle, M. J.
1989-01-01
The possibility of using Landsat Thematic Mapper (TM) thermal data to derive absolute temperature distributions in coastal waters that receive cooling effluent from a power plant is demonstrated. Landsat TM band 6 (thermal) data acquired on June 18, 1986, for the Diablo Canyon power plant in California were compared to ground truth temperatures measured at the same time. Higher-resolution band 5 (reflectance) data were used to locate power plant discharge and intake positions and identify locations of thermal pixels containing only water, no land. Local radiosonde measurements, used in LOWTRAN 6 adjustments for atmospheric effects, produced corrected ocean surface radiances that, when converted to temperatures, gave values within approximately 0.6 C of ground truth. A contour plot was produced that compared power plant plume temperatures with those of the ocean and coastal environment. It is concluded that Landsat can provide good estimates of absolute temperatures of the coastal power plant thermal plume. Moreover, quantitative information on ambient ocean surface temperature conditions (e.g., upwelling) may enhance interpretation of numerical model prediction.
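Converting the atmospherically corrected band 6 radiances to temperature uses the inverse-Planck brightness-temperature formula T = K2 / ln(K1/L + 1). A minimal sketch with the commonly published Landsat-5 TM band 6 calibration constants (verify K1 and K2 against the calibration handbook for the actual sensor; the radiance value is illustrative):

```python
import math

K1 = 607.76   # W / (m^2 sr um), commonly published Landsat-5 TM band 6 constant
K2 = 1260.56  # K, commonly published Landsat-5 TM band 6 constant

def brightness_temperature(radiance):
    # Inverse-Planck form: at-sensor brightness temperature in kelvin
    # from spectral radiance L in W / (m^2 sr um).
    return K2 / math.log(K1 / radiance + 1.0)

L = 9.5  # illustrative at-sensor radiance, not a value from the study
print(round(brightness_temperature(L) - 273.15, 2), "deg C")
```

In the study's workflow, the LOWTRAN 6 atmospheric correction is applied to the radiances before this conversion, which is what brought the derived temperatures to within about 0.6 C of ground truth.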
Spatial Statistics for Segmenting Histological Structures in H&E Stained Tissue Images.
Nguyen, Luong; Tosun, Akif Burak; Fine, Jeffrey L; Lee, Adrian V; Taylor, D Lansing; Chennubhotla, S Chakra
2017-07-01
Segmenting a broad class of histological structures in transmitted light and/or fluorescence-based images is a prerequisite for determining the pathological basis of cancer, elucidating spatial interactions between histological structures in tumor microenvironments (e.g., tumor infiltrating lymphocytes), facilitating precision medicine studies with deep molecular profiling, and providing an exploratory tool for pathologists. This paper focuses on segmenting histological structures in hematoxylin- and eosin-stained images of breast tissues, e.g., invasive carcinoma, carcinoma in situ, atypical and normal ducts, adipose tissue, and lymphocytes. We propose two graph-theoretic segmentation methods based on local spatial color and nuclei neighborhood statistics. For benchmarking, we curated a data set of 232 high-power field breast tissue images together with expertly annotated ground truth. To accurately model the preference for histological structures (ducts, vessels, tumor nets, adipose, etc.) over the remaining connective tissue and non-tissue areas in ground truth annotations, we propose a new region-based score for evaluating segmentation algorithms. We demonstrate the improvement of our proposed methods over the state-of-the-art algorithms in both region- and boundary-based performance measures.
Allen, Y.C.; Wilson, C.A.; Roberts, H.H.; Supan, J.
2005-01-01
Sidescan sonar holds great promise as a tool to quantitatively depict the distribution and extent of benthic habitats in Louisiana's turbid estuaries. In this study, we describe an effective protocol for acoustic sampling in this environment. We also compared three methods of classification in detail: mean-based thresholding, supervised, and unsupervised techniques to classify sidescan imagery into categories of mud and shell. Classification results were compared to ground truth results using quadrat and dredge sampling. Supervised classification gave the best overall result (kappa = 75%) when compared to quadrat results. Classification accuracy was less robust when compared to all dredge samples (kappa = 21-56%), but increased greatly (90-100%) when only dredge samples taken from acoustically homogeneous areas were considered. Sidescan sonar when combined with ground truth sampling at an appropriate scale can be effectively used to establish an accurate substrate base map for both research applications and shellfish management. The sidescan imagery presented here also provides, for the first time, a detailed presentation of oyster habitat patchiness and scale in a productive oyster growing area.
NASA Technical Reports Server (NTRS)
Dennis, T. B. (Principal Investigator)
1980-01-01
The author has identified the following significant results. The most apparent contributors to the problem of poor temporal extension of LIST are the drastic changes in the brightness keys and an inadequate set of AI responses in Phase 3. The brightness trajectories change drastically from Phase 3 to the transition year (TY). Removing brightness channels from the discriminant does not completely correct the lack of extendability. Removing brightness increases the accuracy of the extension from Phase 3 to TY from 57.7 percent to 64.18 percent. The additional removal of the AI keys increases accuracy to 65.76 percent. Although the latter increase appears insignificant when compared to the first, the removal of only the AI keys increased accuracy to 63.58 percent. Proper weighting of the responses explains 73.8 percent of the ground truth labels but only 56.7 percent of the AI labels. By contrast, the TY responses, which were weighted to explain the TY ground truth labels, fared equally well, explaining 73.6 percent of those labels and 87.1 percent of the AI labels.
NASA Astrophysics Data System (ADS)
Kalisperakis, I.; Stentoumis, Ch.; Grammatikopoulos, L.; Karantzalos, K.
2015-08-01
The indirect estimation of leaf area index (LAI) over large spatial scales is crucial for several environmental and agricultural applications. To this end, in this paper, we compare and evaluate LAI estimation in vineyards from different UAV imaging datasets. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthophotomosaics, and (iii) 3D crop surface models. The computed canopy levels were used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (r2 > 73%) with the in-situ, ground truth LAI measurements. As expected, the lowest correlations were derived from the calculated greenness levels of the 2D RGB orthomosaics. The highest correlation rates were established with the hyperspectral canopy greenness and the 3D canopy surface models. For the latter, accurate detection of canopy, soil, and other materials between the vine rows is required. All approaches tend to overestimate LAI in cases with sparse, weak, or unhealthy plants and canopy.
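The r2 values quoted above are coefficients of determination for a least-squares line fit between the estimated canopy level and the measured LAI. A minimal sketch with hypothetical canopy/LAI pairs (not the study's data):

```python
import numpy as np

def r_squared(x, y):
    # Coefficient of determination for a least-squares line fit of y on x.
    x = np.asarray(x, float); y = np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = (residuals ** 2).sum()              # unexplained variation
    ss_tot = ((y - y.mean()) ** 2).sum()         # total variation
    return 1.0 - ss_res / ss_tot

# Hypothetical canopy-level vs measured-LAI pairs (illustration only)
canopy = [0.20, 0.35, 0.41, 0.55, 0.62, 0.78]
lai    = [0.9, 1.4, 1.6, 2.2, 2.4, 3.1]
print(round(r_squared(canopy, lai), 3))
```

An r2 above 0.73, as reported, means the fitted line explains at least 73% of the variation in the ground-truth LAI.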
NASA Astrophysics Data System (ADS)
Oommen, T.; Chatterjee, S.
2017-12-01
NASA and the Indian Space Research Organization (ISRO) are generating Earth surface features data using the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) within the 380 to 2500 nm spectral range. This research focuses on the utilization of such data to better understand the mineral potential in India and to demonstrate the application of spectral data in rock type discrimination and mapping for mineral exploration by using automated mapping techniques. The primary focus area of this research is the Hutti-Maski greenstone belt, located in Karnataka, India. The AVIRIS-NG data was integrated with field-analyzed data (laboratory-scale compositional analysis, mineralogy, and spectral library) to characterize minerals and rock types. An expert system was developed to produce mineral maps from AVIRIS-NG data automatically. The ground truth data from the study areas was obtained from the existing literature and collaborators from India. The Bayesian spectral unmixing algorithm was used on the AVIRIS-NG data for endmember selection. The classification maps of the minerals and rock types were developed using a support vector machine algorithm. The ground truth data was used to verify the mineral maps.
Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas
2017-03-18
Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability, which influences the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) An expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations. (2) An automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces segmentation performances of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data is suited to evaluate image segmentation pipelines more efficiently and reproducibly than is possible on manually annotated real micrographs.
Hoffman, R.A.; Kothari, S.; Phan, J.H.; Wang, M.D.
2016-01-01
Computational analysis of histopathological whole slide images (WSIs) has emerged as a potential means for improving cancer diagnosis and prognosis. However, an open issue relating to the automated processing of WSIs is the identification of biological regions such as tumor, stroma, and necrotic tissue on the slide. We develop a method for classifying WSI portions (512x512-pixel tiles) into biological regions by (1) extracting a set of 461 image features from each WSI tile, (2) optimizing tile-level prediction models using nested cross-validation on a small (600 tile) manually annotated tile-level training set, and (3) validating the models against a much larger (1.7x10^6 tile) data set for which ground truth was available on the whole-slide level. We calculated the predicted prevalence of each tissue region and compared this prevalence to the ground truth prevalence for each image in an independent validation set. Results show significant correlation between the predicted (using automated system) and reported biological region prevalences with p < 0.001 for eight of nine cases considered. PMID:27532012
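The correlation test in the validation step can be sketched as a Pearson correlation between predicted and reported prevalences, with a t-statistic (n - 2 degrees of freedom) used to obtain the significance level. The prevalence values below are hypothetical:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between two paired samples.
    x = np.asarray(x, float); y = np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def t_statistic(r, n):
    # t value for testing r != 0 with n paired samples (n - 2 degrees of
    # freedom); look up the p-value in a t-table or with scipy.stats.
    return float(r * np.sqrt((n - 2) / (1.0 - r ** 2)))

# Hypothetical per-image tissue-region prevalences (fractions of tile area)
predicted = [0.10, 0.25, 0.40, 0.55, 0.70, 0.20, 0.35, 0.60]
reported  = [0.12, 0.22, 0.43, 0.50, 0.73, 0.18, 0.38, 0.58]
r = pearson_r(predicted, reported)
print(round(r, 3), round(t_statistic(r, len(predicted)), 2))
```

A p < 0.001 result, as reported for eight of nine cases, corresponds to a t-statistic far in the tail of the t-distribution for the given number of validation images.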
A large dataset of synthetic SEM images of powder materials and their ground truth 3D structures.
DeCost, Brian L; Holm, Elizabeth A
2016-12-01
This data article presents a data set comprising 2048 synthetic scanning electron microscope (SEM) images of powder materials and descriptions of the corresponding 3D structures that they represent. These images were created using open source rendering software, and the generating scripts are included with the data set. Eight particle size distributions are represented with 256 independent images from each. The particle size distributions are relatively similar to each other, so that the dataset offers a useful benchmark to assess the fidelity of image analysis techniques. The characteristics of the PSDs and the resulting images are described and analyzed in more detail in the research article "Characterizing powder materials using keypoint-based computer vision methods" (B.L. DeCost, E.A. Holm, 2016) [1]. These data are freely available in a Mendeley Data archive "A large dataset of synthetic SEM images of powder materials and their ground truth 3D structures" (B.L. DeCost, E.A. Holm, 2016) located at http://dx.doi.org/10.17632/tj4syyj9mr.1 [2] for any academic, educational, or research purposes.
Field calibration and validation of remote-sensing surveys
Pe'eri, Shachak; McLeod, Andy; Lavoie, Paul; Ackerman, Seth D.; Gardner, James; Parrish, Christopher
2013-01-01
The Optical Collection Suite (OCS) is a ground-truth sampling system designed to perform in situ measurements that help calibrate and validate optical remote-sensing and swath-sonar surveys for mapping and monitoring coastal ecosystems and ocean planning. The OCS system enables researchers to collect underwater imagery with real-time feedback, measure the spectral response, and quantify the water clarity with simple and relatively inexpensive instruments that can be hand-deployed from a small vessel. This article reviews the design and performance of the system, based on operational and logistical considerations, as well as the data requirements to support a number of coastal science and management projects. The OCS system has been operational since 2009 and has been used in several ground-truth missions that overlapped with airborne lidar bathymetry (ALB), hyperspectral imagery (HSI), and swath-sonar bathymetric surveys in the Gulf of Maine, southwest Alaska, and the US Virgin Islands (USVI). Research projects that have used the system include a comparison of backscatter intensity derived from acoustic (multibeam/interferometric sonars) versus active optical (ALB) sensors, ALB bottom detection, and seafloor characterization using HSI and ALB.
On-orbit characterization of hyperspectral imagers
NASA Astrophysics Data System (ADS)
McCorkel, Joel
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne- and satellite-based sensors. Ground-truth measurements at these test sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal of the cross-calibration method is to transfer the calibration of a well-known sensor to a different sensor. This dissertation presents a method for determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on a multispectral sensor, the Moderate-resolution Imaging Spectroradiometer (MODIS), as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. A method to predict hyperspectral surface reflectance using a combination of MODIS data and spectral shape information is developed and applied for the characterization of Hyperion. Spectral shape information is based on RSG's historical in situ data for the Railroad Valley test site and spectral library data for the Libyan test site. Average atmospheric parameters, also based on historical measurements, are used in reflectance prediction and transfer to space. Results of several cross-calibration scenarios that differ in image acquisition coincidence, test site, and reference sensor are found for the characterization of Hyperion. These are compared with results from the reflectance-based approach of vicarious calibration, a well-documented method developed by the RSG that serves as a baseline for calibration performance for the cross-calibration method developed here. Cross-calibration provides results that are within 2% of the reflectance-based results in most spectral regions. Larger disagreements exist for the shorter wavelengths studied in this work, as well as in spectral regions that experience atmospheric absorption.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, P; Schreibmann, E; Fox, T
2014-06-15
Purpose: Severe CT artifacts can impair our ability to accurately calculate proton range, thereby resulting in a clinically unacceptable treatment plan. In this work, we investigated a novel CT artifact correction method based on a coregistered MRI and investigated its ability to estimate CT HU and proton range in the presence of severe CT artifacts. Methods: The proposed method corrects corrupted CT data using a coregistered MRI to guide the mapping of CT values from a nearby artifact-free region. First, patient MRI and CT images were registered using 3D deformable image registration software based on B-splines and mutual information. The CT slice with severe artifacts was selected, as well as a nearby slice free of artifacts (e.g. 1 cm away from the artifact). The two sets of paired MRI and CT images at different slice locations were further registered by applying 2D deformable image registration. Based on the artifact-free paired MRI and CT images, a comprehensive geospatial analysis was performed to predict the correct CT HU of the CT image with severe artifacts. As a proof of concept, a known artifact was introduced that changed the ground truth CT HU value by up to 30% and introduced up to 5 cm of error in proton range. The ability of the proposed method to recover the ground truth was quantified using a selected head and neck case. Results: A significant improvement in image quality was observed visually. Our proof-of-concept study showed that 90% of the area that had 30% errors in CT HU was corrected to within 3% of its ground truth value. Furthermore, the maximum proton range error of up to 5 cm was reduced to a 4 mm error. Conclusion: The MRI-based CT artifact correction method can improve CT image quality and proton range calculation for patients with severe CT artifacts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J; Gu, X; Lu, W
Purpose: A novel distance-dose weighting method for label fusion was developed to increase segmentation accuracy in dosimetrically important regions for prostate radiation therapy. Methods: Label fusion as implemented in the original SIMPLE (OS) for multi-atlas segmentation relies iteratively on the majority vote to generate an estimated ground truth and on the DICE similarity measure to screen candidates. The proposed distance-dose weighting puts more weight on dosimetrically important regions when calculating the similarity measure. Specifically, we introduced the distance-to-dose error (DDE), which converts distance to dosimetric importance, into the performance evaluation. The DDE calculates an estimated DE error derived from surface distance differences between the candidate and the estimated ground truth label by multiplying by a regression coefficient. To determine the coefficient at each simulation point on the rectum, we fitted the DE error with respect to simulated voxel shift. The DEs were calculated by the multi-OAR geometry-dosimetry training model previously developed in our research group. Results: For both the OS and the distance-dose weighted SIMPLE (WS) results, the evaluation metrics for twenty patients were calculated using the ground truth segmentation. The mean differences in DICE, Hausdorff distance, and mean absolute distance (MAD) between OS and WS were 0, 0.10, and 0.11, respectively. For the partial MAD of WS, which calculates MAD within a certain PTV expansion voxel distance, lower MADs than those of OS were observed at the closer distances, from 1 to 8. The DE results showed that the segmentation from WS produced more accurate results than OS. The mean DE errors of V75, V70, V65, and V60 were decreased by 1.16%, 1.17%, 1.14%, and 1.12%, respectively. Conclusion: We have demonstrated that the method can increase segmentation accuracy in rectum regions adjacent to the PTV. As a result, segmentation using WS showed improved dosimetric accuracy compared with OS. The WS provides a dosimetrically important label selection strategy in multi-atlas segmentation. CPRIT grant RP150485.
A Decade Remote Sensing River Bathymetry with the Experimental Advanced Airborne Research LiDAR
NASA Astrophysics Data System (ADS)
Kinzel, P. J.; Legleiter, C. J.; Nelson, J. M.; Skinner, K.
2012-12-01
Since 2002, the first generation of the Experimental Advanced Airborne Research LiDAR (EAARL-A) sensor has been deployed for mapping rivers and streams. We present and summarize the results of comparisons between ground truth surveys and bathymetry collected by the EAARL-A sensor in a suite of rivers across the United States. These comparisons include reaches on the Platte River (NE), Boise and Deadwood Rivers (ID), Blue and Colorado Rivers (CO), Klamath and Trinity Rivers (CA), and the Shenandoah River (VA). In addition to diverse channel morphologies (braided, single thread, and meandering), these rivers possess a variety of substrates (sand, gravel, and bedrock) and a wide range of optical characteristics which influence the attenuation and scattering of laser energy through the water column. Root mean square errors between ground truth elevations and those measured by the EAARL-A ranged from 0.15 m in rivers with relatively low turbidity and highly reflective sandy bottoms to over 0.5 m in turbid rivers with less reflective substrates. Mapping accuracy with the EAARL-A has proved challenging in pools where bottom returns are either absent in waveforms or are of such low intensity that they are treated as noise by waveform processing algorithms. Resolving bathymetry in shallow depths where near-surface and bottom returns are typically convolved also presents difficulties for waveform processing routines. The results of these evaluations provide an empirical framework to discuss the capabilities and limitations of the EAARL-A sensor as well as previous generations of post-processing software for extracting bathymetry from complex waveforms. These experiences and field studies not only provide benchmarks for the evaluation of the next generation of bathymetric LiDARs for use in river mapping, but also highlight the importance of developing and standardizing more rigorous methods to characterize substrate reflectance and in-situ optical properties at study sites. They also point out the continued necessity of ground truth data for algorithm refinement and survey verification.
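The accuracy figures quoted above are root mean square errors between ground-truth survey elevations and lidar-derived elevations. As a minimal illustration (not the survey team's code; the example values are made up):

```python
import numpy as np

def rmse(ground_truth, measured):
    """Root mean square error between survey and lidar elevations (same units)."""
    d = np.asarray(measured, float) - np.asarray(ground_truth, float)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical paired elevations (meters) at four check points
r = rmse([0.0, 0.0, 0.0, 0.0], [0.1, -0.1, 0.2, -0.2])
```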
2014-01-01
Background Recently it was shown that retinal vessel diameters can be measured using spectral domain optical coherence tomography (OCT). It has also been suggested that retinal vessels manifest different features on spectral domain OCT (SD-OCT) depending on whether they are arteries or veins. Our study aimed to present a reliable SD-OCT-assisted method of differentiating retinal arteries from veins. Methods Patients who underwent circular OCT scans centred at the optic disc using a Spectralis OCT (Heidelberg Engineering, Heidelberg, Germany) were retrospectively reviewed. Individual retinal vessels were identified on infrared reflectance (IR) images and given unique labels for subsequent grading. Vessel types (artery, vein or uncertain) assessed by IR and/or fluorescein angiography (FA) were referenced as ground truth. From OCT, the presence or absence of the hyperreflective lower border reflectivity feature was assessed. Presence of this feature was considered indicative of retinal arteries and compared with the ground truth. Results A total of 452 vessels from 26 eyes of 18 patients were labelled, and 398 with documented vessel type (302 by IR and 96 by FA only) were included in the study. Using SD-OCT, 338 vessels were assigned a final grade, of which 86.4% (292 vessels) were classified correctly. Forty-three vessels (15 arteries and 28 veins) that IR failed to differentiate were correctly classified by SD-OCT. When using only IR-based ground truth for vessel type, the SD-OCT-based classification approach reached a sensitivity of 0.8758/0.9297 and a specificity of 0.9297/0.8758 for arteries/veins, respectively. Conclusion Our method was able to classify retinal arteries and veins with a commercially available SD-OCT alone, and achieved high classification performance. Paired with OCT-based vessel measurements, our study has expanded the potential clinical implication of SD-OCT in the evaluation of a variety of retinal and systemic vascular diseases. PMID:24884611
Energy accounting and optimization for mobile systems
NASA Astrophysics Data System (ADS)
Dong, Mian
Energy accounting determines how much a software process contributes to the total system energy consumption. It is the foundation for evaluating software and has been widely used by operating-system-based energy management. While various energy accounting policies have been tried, there is no known way to evaluate them directly, simply because it is hard to track every hardware use by software in a heterogeneous multi-core system like modern smartphones and tablets. In this thesis, we provide a ground truth for energy accounting based on multi-player game theory and offer the first evaluation of existing energy accounting policies, revealing their important flaws. The proposed ground truth is based on the Shapley value, a single-value solution to multi-player games whose four axiomatic properties are natural and self-evident for energy accounting. To obtain the Shapley-value-based ground truth, one only needs to know whether a process is active during the time under question and the system energy consumption during the same time. We further provide a utility optimization formulation of energy management and show, surprisingly, that energy accounting does not matter for existing energy management solutions that control the energy use of a process by giving it an energy budget, or budget-based energy management (BEM). We show that an optimal energy management (OEM) framework can always outperform BEM. While OEM does not require any form of energy accounting, it is related to the Shapley value in that both require the system energy consumption for all possible combinations of the processes under question. We provide a novel system solution that meets this requirement by acquiring system energy consumption in situ for an OS scheduler period, i.e., 10 ms. We report a prototype implementation of both Shapley-value-based energy accounting and OEM-based scheduling. Using this prototype and smartphone workloads, we experimentally demonstrate how erroneous existing energy accounting policies can be, and show that existing BEM solutions are unnecessarily complicated yet underperform OEM by 20%.
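The Shapley-value ground truth described here has a standard closed form: each process is credited with its average marginal contribution to measured system energy over all process orderings. A brute-force sketch follows (exponential in the number of processes; the subset energies in the example are hypothetical numbers, not measurements from the thesis):

```python
from itertools import combinations
from math import factorial

def shapley_energy(processes, energy_of):
    """Exact Shapley attribution of system energy to each process.

    `energy_of` maps a frozenset of active processes to the measured system
    energy for the interval; it must be defined for every subset.
    """
    n = len(processes)
    phi = {}
    for p in processes:
        others = [q for q in processes if q != p]
        total = 0.0
        for k in range(n):
            for coal in combinations(others, k):
                s = frozenset(coal)
                # weight = |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (energy_of(s | {p}) - energy_of(s))
        phi[p] = total
    return phi

# Hypothetical interval energies (joules): idle baseline plus two processes
v = {frozenset(): 100.0, frozenset({'A'}): 150.0,
     frozenset({'B'}): 140.0, frozenset({'A', 'B'}): 170.0}
phi = shapley_energy(['A', 'B'], lambda s: v[frozenset(s)])
```

With these numbers the attribution is 40.0 J to A and 30.0 J to B; the attributions sum to the total energy above idle (170 − 100), illustrating the efficiency axiom.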
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Chen, H; Mutic, S
Purpose: A persistent challenge for the quality assessment of radiation therapy treatments (e.g. contouring accuracy) is the absence of a known ground truth for patient data. Moreover, assessment results are often patient-dependent. Computer simulation studies utilizing numerical phantoms can be performed for quality assessment with a known ground truth. However, previously reported numerical phantoms do not include the statistical properties of inter-patient variations, as their models are based on only one patient. In addition, these models do not incorporate tumor data. In this study, a methodology was developed for generating numerical phantoms which encapsulate the statistical variations of patients within radiation therapy, including tumors. Methods: Based on previous work in contouring assessment, geometric attribute distribution (GAD) models were employed to model both the deterministic and stochastic properties of individual organs via principal component analysis. Using pre-existing radiation therapy contour data, the GAD models are trained to model the shape and centroid distributions of each organ. Then, organs with different shapes and positions can be generated by assigning statistically sound weights to the GAD model parameters. Organ contour data from 20 retrospective prostate patient cases were manually extracted and utilized to train the GAD models. As a demonstration, computer-simulated CT images of generated numerical phantoms were calculated and assessed subjectively and objectively for realism. Results: A cohort of numerical phantoms of the male human pelvis was generated. CT images were deemed realistic both subjectively and objectively in terms of the image noise power spectrum. Conclusion: A methodology has been developed to generate realistic numerical anthropomorphic phantoms using pre-existing radiation therapy data. The GAD models guarantee that generated organs span the statistical distribution of observed radiation therapy patients, according to the training dataset. The methodology enables radiation therapy treatment assessment with multi-modality imaging and a known ground truth, without patient-dependent bias.
Ground truth and detection threshold from WWII naval clean-up in Denmark
NASA Astrophysics Data System (ADS)
Larsen, Tine B.; Dahl-Jensen, Trine; Voss, Peter
2013-04-01
The sea bed below the Danish territorial waters is still littered with unexploded mines and other ammunition from World War II. The mines were air dropped by the RAF and the positions of the mines are unknown. As the mines still pose a potential threat to fishery and other marine activities, the Admiral Danish Fleet under the Danish Navy searches for the mines and destroy them by detonation, where they are found. The largest mines destroyed in this manner in 2012 are equivalent to 800 kg TNT each. The Seismological Service at the National Geological Survey of Denmark and Greenland is notified by the navy when ammunition in excess of 100 kg TNT is detonated. The notifications include information about position, detonation time and the estimated amount of explosives. The larger explosions are clearly registered not only on the Danish seismographs, but also on seismographs in the neighbouring countries. This includes the large seismograph arrays in Norway, Sweden, and Finland. Until recently the information from the Danish navy was only utilized to rid the Danish earthquake catalogue of explosions. But the high quality information provided by the navy enables us to use these ground truth events to assess the quality of our earthquake catalogue. The mines are scattered though out the Danish territorial waters, thus we can use the explosions to test the accuracy of the determined epicentres in all parts of the country. E.g. a detonation of 135 kg in Begstrup Vig in the central part of Denmark was located using Danish, Norwegian and Swedish stations with an accuracy of less than 2 km from ground truth. A systematic study of the explosions will sharpen our understanding of the seismicity in Denmark, and result in a more detailed understanding of the detection threshold. Furthermore the study will shed light on the sensitivity of the network to various seismograph outages.
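Epicentre accuracy relative to ground truth, as in the 2-km Begstrup Vig example, is the great-circle distance between the navy's reported detonation position and the seismologically determined epicentre. A standard haversine sketch (illustrative only; the Seismological Service's actual tooling is not described in this abstract):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = radians(lat1), radians(lat2)
    dp = radians(lat2 - lat1)
    dl = radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * R * asin(sqrt(a))
```

One degree of latitude is roughly 111 km, so a mislocation of 2 km corresponds to under 0.02 degrees at these latitudes.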
Stephens, David; Diesing, Markus
2014-01-01
Detailed seabed substrate maps are increasingly in demand for effective planning and management of marine ecosystems and resources. It has become common to use remotely sensed multibeam echosounder data in the form of bathymetry and acoustic backscatter in conjunction with ground-truth sampling data to inform the mapping of seabed substrates. Whilst, until recently, such data sets have typically been classified by expert interpretation, it is now obvious that more objective, faster and repeatable methods of seabed classification are required. This study compares the performances of a range of supervised classification techniques for predicting substrate type from multibeam echosounder data. The study area is located in the North Sea, off the north-east coast of England. A total of 258 ground-truth samples were classified into four substrate classes. Multibeam bathymetry and backscatter data, and a range of secondary features derived from these datasets were used in this study. Six supervised classification techniques were tested: Classification Trees, Support Vector Machines, k-Nearest Neighbour, Neural Networks, Random Forest and Naive Bayes. Each classifier was trained multiple times using different input features, including i) the two primary features of bathymetry and backscatter, ii) a subset of the features chosen by a feature selection process and iii) all of the input features. The predictive performances of the models were validated using a separate test set of ground-truth samples. The statistical significance of model performances relative to a simple baseline model (Nearest Neighbour predictions on bathymetry and backscatter) were tested to assess the benefits of using more sophisticated approaches. The best performing models were tree based methods and Naive Bayes, which achieved accuracies of around 0.8 and kappa coefficients of up to 0.5 on the test set. The models that used all input features did not generally perform well, highlighting the need for some means of feature selection.
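The accuracy and kappa coefficients reported above are standard agreement measures between predicted and ground-truth substrate classes. A minimal sketch of Cohen's kappa (illustrative, not the study's code; the example labels are hypothetical):

```python
import numpy as np

def cohens_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    po = np.trace(cm) / n                          # observed accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical ground-truth vs predicted substrate classes for four samples
k = cohens_kappa([0, 1, 2, 0], [0, 1, 2, 1], 3)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why it complements raw accuracy for imbalanced class frequencies.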
TU-F-17A-03: A 4D Lung Phantom for Coupled Registration/Segmentation Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markel, D; El Naqa, I; Levesque, I
2014-06-15
Purpose: Coupling the processes of segmentation and registration ("regmentation") is a recent development that allows improved efficiency and accuracy for both steps and may improve the clinical feasibility of online adaptive radiotherapy. Presented is a multimodality animal tissue model designed specifically to provide a ground truth for simultaneously evaluating segmentation and registration errors during respiratory motion. Methods: Tumor surrogates were constructed from vacuum-sealed hydrated natural sea sponges with catheters used for the injection of PET radiotracer. These contained two compartments, allowing for two concentrations of radiotracer mimicking both tumor and background signals. The lungs were inflated to different volumes using an air pump and flow valve and scanned using PET/CT and MRI. Anatomical landmarks were used to evaluate the registration accuracy using an automated bifurcation tracking pipeline for reproducibility. The bifurcation tracking accuracy was assessed using virtual deformations of 2.6 cm, 5.2 cm, and 7.8 cm of a CT scan of a corresponding human thorax. Bifurcations were detected in the deformed dataset and compared to known deformation coordinates for 76 points. Results: The bifurcation tracking accuracy was found to have a mean error of −0.94, 0.79, and −0.57 voxels in the left-right, anterior-posterior, and inferior-superior axes using a 1×1×5 mm3 resolution after the CT volume was deformed 7.8 cm. The tumor surrogates provided a segmentation ground truth after being registered to the phantom image. Conclusion: A swine lung model in conjunction with vacuum-sealed sponges and a bifurcation tracking algorithm is presented that is MRI, PET, and CT compatible and anatomically and kinetically realistic. Corresponding software for tracking anatomical landmarks within the phantom shows sub-voxel accuracy. Vacuum-sealed sponges provide a realistic tumor surrogate with a known boundary. A ground truth with minimal uncertainty is thus realized that can be used for comparing the performance of registration and segmentation algorithms.
Law, Bradley; Caccamo, Gabriele; Roe, Paul; Truskinger, Anthony; Brassil, Traecey; Gonsalves, Leroy; McConville, Anna; Stanton, Matthew
2017-09-01
Species distribution models have great potential to efficiently guide management for threatened species, especially for those that are rare or cryptic. We used MaxEnt to develop a regional-scale model for the koala Phascolarctos cinereus at a resolution (250 m) that could be used to guide management. To ensure the model was fit for purpose, we placed emphasis on validating the model using independently-collected field data. We reduced substantial spatial clustering of records in coastal urban areas using a 2-km spatial filter and by modeling separately two subregions separated by the 500-m elevational contour. A bias file was prepared that accounted for variable survey effort. Frequency of wildfire, soil type, floristics and elevation had the highest relative contribution to the model, while a number of other variables made minor contributions. The model was effective in discriminating different habitat suitability classes when compared with koala records not used in modeling. We validated the MaxEnt model at 65 ground-truth sites using independent data on koala occupancy (acoustic sampling) and habitat quality (browse tree availability). Koala bellows (n = 276) were analyzed in an occupancy modeling framework, while site habitat quality was indexed based on browse trees. Field validation demonstrated a linear increase in koala occupancy with higher modeled habitat suitability at ground-truth sites. Similarly, a site habitat quality index at ground-truth sites was correlated positively with modeled habitat suitability. The MaxEnt model provided a better fit to estimated koala occupancy than the site-based habitat quality index, probably because many variables were considered simultaneously by the model rather than just browse species. The positive relationship of the model with both site occupancy and habitat quality indicates that the model is fit for application at relevant management scales. Field-validated models of similar resolution would assist in guiding management of conservation-dependent species.
Dual-Tracer PET Using Generalized Factor Analysis of Dynamic Sequences
Fakhri, Georges El; Trott, Cathryn M.; Sitek, Arkadiusz; Bonab, Ali; Alpert, Nathaniel M.
2013-01-01
Purpose With single-photon emission computed tomography, simultaneous imaging of two physiological processes relies on discrimination of the energy of the emitted gamma rays, whereas the application of dual-tracer imaging to positron emission tomography (PET) imaging has been limited by the characteristic 511-keV emissions. Procedures To address this limitation, we developed a novel approach based on generalized factor analysis of dynamic sequences (GFADS) that exploits spatio-temporal differences between radiotracers and applied it to near-simultaneous imaging of 2-deoxy-2-[18F]fluoro-D-glucose (FDG) (brain metabolism) and 11C-raclopride (D2) with simulated human data and experimental rhesus monkey data. We show theoretically and verify by simulation and measurement that GFADS can separate FDG and raclopride measurements that are made nearly simultaneously. Results The theoretical development shows that GFADS can decompose the studies at several levels: (1) It decomposes the FDG and raclopride study so that they can be analyzed as though they were obtained separately. (2) If additional physiologic/anatomic constraints can be imposed, further decomposition is possible. (3) For the example of raclopride, specific and nonspecific binding can be determined on a pixel-by-pixel basis. We found good agreement between the estimated GFADS factors and the simulated ground truth time activity curves (TACs), and between the GFADS factor images and the corresponding ground truth activity distributions, with errors less than 7.3±1.3%. Biases in estimation of specific D2 binding and relative metabolism activity were within 5.9±3.6% compared to the ground truth values. We also evaluated our approach in simultaneous dual-isotope brain PET studies in a rhesus monkey and obtained accuracy of better than 6% in a mid-striatal volume for striatal activity estimation. Conclusions Dynamic image sequences acquired following near-simultaneous injection of two PET radiopharmaceuticals can be separated into components based on the differences in the kinetics, provided their kinetic behaviors are distinct. PMID:23636489
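GFADS itself is not specified in this abstract. As a loose illustration of the underlying idea, separating a mixed dynamic sequence into factors with distinct kinetics, here is a generic nonnegative matrix factorization on synthetic time-activity curves. All curve shapes, sizes, and iteration counts are made up for the demo and are not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: two time-activity curves with distinct kinetics
t = np.linspace(0, 60, 40)
tac = np.stack([np.exp(-t / 30.0),                # tracer 1: slow washout
                (t / 60.0) * np.exp(-t / 10.0)])  # tracer 2: uptake then washout

A_true = rng.uniform(0.0, 1.0, size=(500, 2))     # per-voxel mixing weights
Y = A_true @ tac                                  # mixed sequence (voxels x frames)

def nmf(Y, k, iters=500, seed=1):
    """Plain multiplicative-update NMF: Y ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.1, 1.0, (Y.shape[0], k))
    H = rng.uniform(0.1, 1.0, (k, Y.shape[1]))
    for _ in range(iters):
        H *= (W.T @ Y) / (W.T @ W @ H + 1e-12)
        W *= (Y @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

W, H = nmf(Y, 2)
rel_err = np.linalg.norm(Y - W @ H) / np.linalg.norm(Y)
```

Because the synthetic data are exactly rank 2 and nonnegative, the factorization recovers the mixture closely; the recovered rows of H play the role of the estimated factor TACs.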
Identifying retail food stores to evaluate the food environment.
Hosler, Akiko S; Dharssi, Aliza
2010-07-01
The availability of food stores is the most frequently used measure of the food environment, but identifying them poses a technical challenge. This study evaluated eight administrative lists of retailers for identifying food stores in an urban community. Lists of inspected food stores (IFS), cigarette retailers, liquor licenses, lottery retailers, gasoline retailers, farmers' markets, and authorized WIC (Program for Women, Infants, and Children) and Supplemental Nutrition Assistance Program (SNAP) retailers for Albany NY were obtained from government agencies. Sensitivity and positive predictive value (PPV) were assessed, using ground-truthing as the validation measure. Stores were also grouped by the number of lists they were documented on, and the proportion of food stores in each group was obtained. Data were collected and analyzed in 2009. A total of 166 stores, including four from ground-truthing, were identified. Forty-three stores were disqualified, as a result of having no targeted foods (n=17); being in the access-restricted area of a building (n=15); and being out of business (n=11). Sensitivity was highest in IFS (87.0%), followed by the cigarette retailers' list (76.4%). PPV was highest in WIC and farmers' markets lists (100%), followed by SNAP (97.8%). None of the lists had both sensitivity and PPV greater than 90%. All stores that were listed by four or more lists were food stores. The proportion of food stores was lowest (33.3%) for stores listed by only one list. Individual lists had limited utility for identifying food stores, but when they were combined, the likelihood of a retail store being a food store could be predicted by the number of lists the store was documented on. This information can be used to increase the efficiency of ground-truthing. Copyright 2010 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
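Sensitivity and positive predictive value, as used in this evaluation, reduce to simple ratios of counts against the ground-truthed store list. A minimal sketch with hypothetical counts (not the study's data):

```python
def sensitivity_ppv(true_pos, false_neg, false_pos):
    """Sensitivity = TP/(TP+FN); positive predictive value = TP/(TP+FP)."""
    sens = true_pos / (true_pos + false_neg)
    ppv = true_pos / (true_pos + false_pos)
    return sens, ppv

# Hypothetical list: 90 true food stores found, 10 missed, 5 non-food listings
s, p = sensitivity_ppv(90, 10, 5)
```

Here sensitivity measures how completely a list covers the ground-truthed food stores, while PPV measures how many of its entries are actually food stores; as the abstract notes, no single list scored above 90% on both.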
Progress in utilization of a mobile laboratory for making storm electricity measurements
NASA Technical Reports Server (NTRS)
Rust, W. David
1988-01-01
A mobile atmospheric science laboratory has been used to intercept and track storms on the Great Plains region of the U.S., with the intention of combining the data obtained with those from Doppler and conventional radars, NASA U-2 aircraft overflights, balloon soundings, and fixed-base storm electricity measurements. The mobile lab has proven to be valuable in the gathering of ground truth verifications for the two commercially operated lightning ground-strike locating systems. Data acquisition has recently been expanded by means of mobile ballooning before and during storms.
NASA Technical Reports Server (NTRS)
1981-01-01
General information and administrative instructions are provided for individuals gathering ground truth data to support research and development of techniques for estimating crop acreage and production by satellite remote sensing. Procedures are given for personal safety with regard to organophosphorus insecticides, for conducting interviews for periodic observations, for coding the crops identified and their growth stages, and for selecting sites for placing rain gages. Forms are included for citizens who agree to monitor the gages and record the rainfall. Segment selection is also considered.
Impact of Conifer Forest Litter on Microwave Emission at L-Band
NASA Technical Reports Server (NTRS)
Kurum, Mehmet; O'Neill, Peggy E.; Lang, Roger H.; Cosh, Michael H.; Joseph, Alicia T.; Jackson, Thomas J.
2011-01-01
This study reports on the utilization of microwave modeling, together with ground truth, and L-band (1.4-GHz) brightness temperatures to investigate the passive microwave characteristics of a conifer forest floor. The microwave data were acquired over a natural Virginia Pine forest in Maryland by a ground-based microwave active/passive instrument system in 2008/2009. Ground measurements of the tree biophysical parameters and forest floor characteristics were obtained during the field campaign. The test site consisted of medium-sized evergreen conifers with an average height of 12 m and an average diameter at breast height of 12.6 cm. The site is a typical pine forest site in that there is a surface layer of loose debris/needles and an organic transition layer above the mineral soil. In an effort to characterize and model the impact of the surface litter layer, an experiment was conducted on a day with wet soil conditions, which involved removing the surface litter layer from one half of the test site while keeping the other half undisturbed. The observations showed a detectable decrease in emissivity for both polarizations after the surface litter layer was removed. A first-order radiative transfer model of the forest stands, including the multilayer nature of the forest floor, is used in conjunction with the ground truth data to compute forest emission. The model calculations reproduced the major features of the experimental data over the entire duration, including the effects of surface litter and ground moisture content on overall emission. Both theory and experimental results confirm that the litter layer increases the observed canopy brightness temperature and obscures the soil emission.
Optimizing weather radar observations using an adaptive multiquadric surface fitting algorithm
NASA Astrophysics Data System (ADS)
Martens, Brecht; Cabus, Pieter; De Jongh, Inge; Verhoest, Niko
2013-04-01
Real time forecasting of river flow is an essential tool in operational water management. Such real time modelling systems require well calibrated models which can make use of spatially distributed rainfall observations. Weather radars provide spatial data; however, since radar measurements are sensitive to a large range of error sources, a discrepancy is often observed between radar observations and ground-based measurements, which are mostly considered as ground truth. Through merging ground observations with the radar product, often referred to as data merging, one may force the radar observations to better correspond to the ground-based measurements, without losing the spatial information. In this paper, radar images and ground-based measurements of rainfall are merged based on interpolated gauge-adjustment factors (Moore et al., 1989; Cole and Moore, 2008), or scaling factors. Scaling factors C(xα) are calculated at each position xα where a gauge measurement Ig(xα) is available: C(xα) = (Ig(xα) + ε) / (Ir(xα) + ε) (1), where Ir(xα) is the radar-based observation in the pixel overlapping the rain gauge and ε is a constant ensuring that the scaling factor can be calculated when Ir(xα) is zero. These scaling factors are interpolated on the radar grid, resulting in a unique scaling factor for each pixel. Multiquadric surface fitting is used as the interpolation algorithm (Hardy, 1971): C*(x0) = a^T v + a0 (2), where C*(x0) is the prediction at location x0, the vector a (N x 1, with N the number of ground-based measurements used) and the constant a0 are parameters describing the surface, and v is an N x 1 vector containing the (Euclidean) distance between each point xα used in the interpolation and the point x0. The parameters describing the surface are derived by forcing the surface to be an exact interpolator and by imposing that the sum of the parameters in a be zero. Often, however, the surface is allowed to pass near the observations (i.e.
the observed scaling factors C(xα)) within a distance aαK by introducing an offset parameter K, which results in slightly different equations for a and a0. The described technique is currently used by the Flemish Environmental Agency in an online forecasting system of river discharges within Flanders (Belgium). However, rescaling the radar data using the described algorithm does not always yield an improved weather radar product. One of the main reasons is probably that the parameters K and ε are implemented as constants. It can be expected that, among other factors, different parameter values should be used depending on the characteristics of the rainfall. Adaptation of the parameter values is achieved by an online calibration of K and ε at each time step (every 15 minutes), using validated rain gauge measurements as ground truth. Results demonstrate that rescaling radar images using optimized values for K and ε at each time step leads to a significant improvement of the rainfall estimation, which in turn will result in higher quality discharge predictions. Moreover, it is shown that calibrated values for K and ε can be obtained in near-real time. References Cole, S. J., and Moore, R. J. (2008). Hydrological modelling using raingauge- and radar-based estimators of areal rainfall. Journal of Hydrology, 358(3-4), 159-181. Hardy, R. L. (1971). Multiquadric equations of topography and other irregular surfaces. Journal of Geophysical Research, 76(8), 1905-1915. Moore, R. J., Watson, B. C., Jones, D. A. and Black, K. B. (1989). London weather radar local calibration study. Technical report, Institute of Hydrology.
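The interpolation step described above can be sketched as follows. This sketch assumes a simplified distance basis φ(r) = r for the multiquadric surface and implements the offset parameter K as a diagonal term in the linear system (one common way to realize the "pass near the observations" relaxation); the agency's operational implementation may differ in detail:

```python
import numpy as np

def fit_multiquadric(points, values, K=0.0):
    """Fit a multiquadric surface C*(x) = a^T v(x) + a0 (Hardy, 1971).

    points -- (N, 2) gauge locations; values -- (N,) observed scaling factors.
    K = 0 forces exact interpolation; K > 0 lets the surface pass near,
    rather than through, the observations. The sum of the coefficients
    in `a` is constrained to zero.
    """
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(points)
    # Pairwise Euclidean distances between gauge locations
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Augmented system: [D + K*I, 1; 1^T, 0] [a; a0] = [C; 0]
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = D + K * np.eye(n)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    sol = np.linalg.solve(A, np.append(values, 0.0))
    a, a0 = sol[:n], sol[n]

    def predict(x0):
        # v: distances from the prediction location to every gauge
        v = np.linalg.norm(points - np.asarray(x0, dtype=float), axis=1)
        return a @ v + a0

    return predict

# Illustrative scaling factors at four gauges on a 10 x 10 km grid
pred = fit_multiquadric([[0, 0], [10, 0], [0, 10], [10, 10]],
                        [1.2, 0.9, 1.1, 1.0], K=0.0)
print(round(float(pred([0, 0])), 6))  # 1.2 -- exact interpolation at a gauge
```

Evaluating `predict` on every radar pixel centre yields the per-pixel scaling factors used to rescale the radar field.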
Evaluation of motion artifact metrics for coronary CT angiography.
Ma, Hongfeng; Gros, Eric; Szabo, Aniko; Baginski, Scott G; Laste, Zachary R; Kulkarni, Naveen M; Okerlund, Darin; Schmidt, Taly G
2018-02-01
This study quantified the performance of coronary artery motion artifact metrics relative to human observer ratings. Motion artifact metrics have been used as part of motion correction and best-phase selection algorithms for Coronary Computed Tomography Angiography (CCTA). However, the lack of ground truth makes it difficult to validate how well the metrics quantify the level of motion artifact. This study investigated five motion artifact metrics, including two novel metrics, using a dynamic phantom, clinical CCTA images, and an observer study that provided ground-truth motion artifact scores from a series of pairwise comparisons. Five motion artifact metrics were calculated for the coronary artery regions on both phantom and clinical CCTA images: positivity, entropy, normalized circularity, Fold Overlap Ratio (FOR), and Low-Intensity Region Score (LIRS). CT images were acquired of a dynamic cardiac phantom that simulated cardiac motion and contained six iodine-filled vessels of varying diameter and with regions of soft plaque and calcifications. Scans were repeated with different gantry start angles. Images were reconstructed at five phases of the motion cycle. Clinical images were acquired from 14 CCTA exams with patient heart rates ranging from 52 to 82 bpm. The vessel and shading artifacts were manually segmented by three readers and combined to create ground-truth artifact regions. Motion artifact levels were also assessed by readers using a pairwise comparison method to establish a ground-truth reader score. The Kendall's Tau coefficients were calculated to evaluate the statistical agreement in ranking between the motion artifacts metrics and reader scores. Linear regression between the reader scores and the metrics was also performed. 
On phantom images, the Kendall's Tau coefficients of the five motion artifact metrics were 0.50 (normalized circularity), 0.35 (entropy), 0.82 (positivity), 0.77 (FOR), 0.77 (LIRS), where higher Kendall's Tau signifies higher agreement. The FOR, LIRS, and transformed positivity (the fourth root of the positivity) were further evaluated in the study of clinical images. The Kendall's Tau coefficients of the selected metrics were 0.59 (FOR), 0.53 (LIRS), and 0.21 (transformed positivity). In the study of clinical data, a Motion Artifact Score, defined as the product of FOR and LIRS metrics, further improved agreement with reader scores, with a Kendall's Tau coefficient of 0.65. The metrics of FOR, LIRS, and the product of the two metrics provided the highest agreement in motion artifact ranking when compared to the readers, and the highest linear correlation to the reader scores. The validated motion artifact metrics may be useful for developing and evaluating methods to reduce motion in Coronary Computed Tomography Angiography (CCTA) images. © 2017 American Association of Physicists in Medicine.
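Kendall's Tau, the agreement statistic used throughout this study, can be computed directly from pairwise comparisons of two score lists. A minimal tau-a sketch (no tie correction; the study's statistics software may use a tie-corrected variant):

```python
from itertools import combinations

def kendall_tau(metric_scores, reader_scores):
    """Kendall's tau-a: agreement in pairwise ranking between an artifact
    metric and ground-truth reader scores. +1 = identical ranking,
    -1 = fully reversed, 0 = no association."""
    concordant = discordant = 0
    for (m1, r1), (m2, r2) in combinations(zip(metric_scores, reader_scores), 2):
        s = (m1 - m2) * (r1 - r2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
        # ties (s == 0) contribute to neither count
    n = len(metric_scores)
    return (concordant - discordant) / (n * (n - 1) / 2)

# Illustrative scores for five images (not the study's data):
# the metric swaps the ranks of images 2 and 3 relative to the readers.
metric = [0.1, 0.4, 0.3, 0.8, 0.9]
reader = [1, 2, 3, 4, 5]
print(kendall_tau(metric, reader))  # 0.8 -- one discordant pair out of ten
```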
Technical note: tree truthing: how accurate are substrate estimates in primate field studies?
Bezanson, Michelle; Watts, Sean M; Jobin, Matthew J
2012-04-01
Field studies of primate positional behavior typically rely on ground-level estimates of substrate size, angle, and canopy location. These estimates potentially influence the identification of positional modes by the observer recording behaviors. In this study we aim to test ground-level estimates against direct measurements of support angles, diameters, and canopy heights in trees at La Suerte Biological Research Station in Costa Rica. After reviewing methods that have been used by past researchers, we provide data collected within trees that are compared to estimates obtained from the ground. We climbed five trees and measured 20 supports. Four observers collected measurements of each support from different locations on the ground. Diameter estimates varied from the direct tree measures by 0-28 cm (Mean: 5.44 ± 4.55). Substrate angles varied by 1-55° (Mean: 14.76 ± 14.02). Height in the tree was best estimated using a clinometer, as estimates with a two-meter reference placed by the tree varied by 3-11 meters (Mean: 5.31 ± 2.44). We determined that the best support size estimates were those generated relative to the size of the focal animal and divided into broader categories. Support angles were best estimated in 5° increments and then checked using a Haglöf clinometer in combination with a laser pointer. We conclude that three major factors should be addressed when estimating support features: observer error (e.g., experience and distance from the target), support deformity, and how support size and angle influence the positional mode selected by a primate individual. Copyright © 2012 Wiley Periodicals, Inc.
Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing
NASA Astrophysics Data System (ADS)
Williams, McKay D.
Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly-selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically-significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class were most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. 
Application of validated AMRD to specific coarse scale imagery involved three main parts: 1) spatial alignment of coarse and fine scale imagery, 2) aggregation of fine scale abundances to produce coarse scale imagery-specific AMRD, and 3) demonstration of comparisons between coarse scale unmixing abundances and AMRD. Spatial alignment was performed using our scene-wide spectral comparison (SWSC) algorithm, which aligned imagery with accuracy approaching the distance of a single fine scale pixel. We compared simple rectangular aggregation to coarse sensor point spread function (PSF) aggregation, and found that the PSF approach returned lower error, but that rectangular aggregation more accurately estimated true abundances at ground level. We demonstrated various metrics for comparing unmixing results to AMRD, including mean absolute error (MAE) and linear regression (LR). We additionally introduced reference data mean adjusted MAE (MA-MAE), and reference data confidence interval adjusted MAE (CIA-MAE), which account for known error in the reference data itself. MA-MAE analysis indicated that fully constrained linear unmixing of coarse scale imagery across all three scenes returned an error of 10.83% per class and pixel, with regression analysis yielding a slope = 0.85, intercept = 0.04, and R2 = 0.81. Our reference data research has demonstrated a viable methodology to efficiently generate, validate, and apply AMRD to specific examples of airborne remote sensing imagery, thereby enabling direct quantitative assessment of spectral unmixing performance.
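The MAE comparison between unmixing abundances and AMRD is straightforward to sketch. The MA-MAE variant below is a hypothetical reading of "reference data mean adjusted": it subtracts an assumed known mean error (`reference_bias`) from the reference values before scoring, and is not necessarily the authors' exact definition:

```python
def mae(estimates, reference):
    """Mean absolute error between unmixing abundances and reference data."""
    return sum(abs(e - r) for e, r in zip(estimates, reference)) / len(estimates)

def ma_mae(estimates, reference, reference_bias):
    """Hypothetical reference-data mean adjusted MAE (MA-MAE) sketch:
    correct the reference values by their own known mean error
    (reference_bias, e.g. per class) before computing MAE."""
    adjusted = [r - reference_bias for r in reference]
    return mae(estimates, adjusted)

# Illustrative per-pixel abundances for one class (fractions of pixel area)
est = [0.20, 0.55, 0.10, 0.80]
ref = [0.25, 0.60, 0.15, 0.70]
print(round(mae(est, ref), 4))              # 0.0625
print(round(ma_mae(est, ref, 0.02), 4))     # 0.0525
```

Averaging these per-class, per-pixel errors across scenes gives a summary figure comparable to the 10.83% reported above.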
Solution to the spectral filter problem of residual terrain modelling (RTM)
NASA Astrophysics Data System (ADS)
Rexer, Moritz; Hirt, Christian; Bucha, Blažej; Holmes, Simon
2018-06-01
In physical geodesy, the residual terrain modelling (RTM) technique is frequently used for high-frequency gravity forward modelling. In the RTM technique, a detailed elevation model is high-pass-filtered in the topography domain, which is not equivalent to filtering in the gravity domain. This in-equivalence, denoted as spectral filter problem of the RTM technique, gives rise to two imperfections (errors). The first imperfection is unwanted low-frequency (LF) gravity signals, and the second imperfection is missing high-frequency (HF) signals in the forward-modelled RTM gravity signal. This paper presents new solutions to the RTM spectral filter problem. Our solutions are based on explicit modelling of the two imperfections via corrections. The HF correction is computed using spectral domain gravity forward modelling that delivers the HF gravity signal generated by the long-wavelength RTM reference topography. The LF correction is obtained from pre-computed global RTM gravity grids that are low-pass-filtered using surface or solid spherical harmonics. A numerical case study reveals maximum absolute signal strengths of ˜ 44 mGal (0.5 mGal RMS) for the HF correction and ˜ 33 mGal (0.6 mGal RMS) for the LF correction w.r.t. a degree-2160 reference topography within the data coverage of the SRTM topography model (56°S ≤ φ ≤ 60°N). Application of the LF and HF corrections to pre-computed global gravity models (here the GGMplus gravity maps) demonstrates the efficiency of the new corrections over topographically rugged terrain. Over Switzerland, consideration of the HF and LF corrections reduced the RMS of the residuals between GGMplus and ground-truth gravity from 4.41 to 3.27 mGal, which translates into ˜ 26% improvement. Over a second test area (Canada), our corrections reduced the RMS of the residuals between GGMplus and ground-truth gravity from 5.65 to 5.30 mGal (˜ 6% improvement). Particularly over Switzerland, geophysical signals (associated, e.g. 
with valley fillings) were found to stand out more clearly in the RTM-reduced gravity measurements when the HF and LF correction are taken into account. In summary, the new RTM filter corrections can be easily computed and applied to improve the spectral filter characteristics of the popular RTM approach. Benefits are expected, e.g. in the context of the development of future ultra-high-resolution global gravity models, smoothing of observed gravity data in mountainous terrain and geophysical interpretations of RTM-reduced gravity measurements.
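The quoted percentage improvements follow directly from the before/after RMS values, as a quick arithmetic check:

```python
def rms_improvement(rms_before, rms_after):
    """Relative RMS reduction in per cent, as quoted for the GGMplus
    ground-truth comparisons."""
    return 100.0 * (rms_before - rms_after) / rms_before

print(round(rms_improvement(4.41, 3.27)))  # 26 (Switzerland, ~26% improvement)
print(round(rms_improvement(5.65, 5.30)))  # 6  (Canada, ~6% improvement)
```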
Link, Daphna; Braginsky, Michael B; Joskowicz, Leo; Ben Sira, Liat; Harel, Shaul; Many, Ariel; Tarrasch, Ricardo; Malinger, Gustavo; Artzi, Moran; Kapoor, Cassandra; Miller, Elka; Ben Bashat, Dafna
2018-01-01
Accurate fetal brain volume estimation is of paramount importance in evaluating fetal development. The aim of this study was to develop an automatic method for fetal brain segmentation from magnetic resonance imaging (MRI) data, and to create for the first time a normal volumetric growth chart based on a large cohort. A semi-automatic segmentation method based on Seeded Region Growing algorithm was developed and applied to MRI data of 199 typically developed fetuses between 18 and 37 weeks' gestation. The accuracy of the algorithm was tested against a sub-cohort of ground truth manual segmentations. A quadratic regression analysis was used to create normal growth charts. The sensitivity of the method to identify developmental disorders was demonstrated on 9 fetuses with intrauterine growth restriction (IUGR). The developed method showed high correlation with manual segmentation (r2 = 0.9183, p < 0.001) as well as mean volume and volume overlap differences of 4.77 and 18.13%, respectively. New reference data on 199 normal fetuses were created, and all 9 IUGR fetuses were at or below the third percentile of the normal growth chart. The proposed method is fast, accurate, reproducible, user independent, applicable with retrospective data, and is suggested for use in routine clinical practice. © 2017 S. Karger AG, Basel.
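A Seeded Region Growing segmentation of the kind named above can be sketched in a few lines. This is a generic 2-D illustration of the algorithm class, not the authors' fetal-MRI implementation (which operates on 3-D volumes with its own homogeneity criterion):

```python
from collections import deque

def seeded_region_growing(image, seed, tol):
    """Grow a region from `seed` to 4-connected neighbours whose intensity
    stays within `tol` of the running region mean (2-D list-of-lists image)."""
    rows, cols = len(image), len(image[0])
    region = {seed}
    total = image[seed[0]][seed[1]]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        mean = total / len(region)  # current region mean intensity
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(image[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    queue.append((nr, nc))
    return region

# Toy "slice": a bright 2x2 blob (9s) on a dark background (1s)
img = [[1, 1, 1, 1],
       [1, 9, 9, 1],
       [1, 9, 9, 1],
       [1, 1, 1, 1]]
print(sorted(seeded_region_growing(img, (1, 1), tol=2)))
# [(1, 1), (1, 2), (2, 1), (2, 2)]
```

Volume estimation then reduces to counting region voxels and multiplying by the voxel volume.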
Image resolution: Its significance in a wildland area
NASA Technical Reports Server (NTRS)
Lauer, D. T.; Thaman, R. R.
1970-01-01
The information content of simulated space photos as a function of various levels of image resolution was determined by identifying major vegetation-terrain types in a series of images purposely degraded optically to different levels of ground resolvable distance. Comparison of cumulative interpretation results with actual ground truth data indicates that although there is a definite decrease in interpretability as ground resolvable distance increases, some valuable information is gained by using even the poorest aerial photography. The results demonstrate the importance of shape and texture for correct identification of broadleaf or coniferous vegetation types, and their relative unimportance for the recognition of grassland, water bodies, and nonvegetated areas. Imagery must have a ground resolvable distance of at least 50 feet to correctly discriminate between primary types of woody vegetation.
Realism without truth: a review of Giere's science without laws and scientific perspectivism.
Hackenberg, Timothy D
2009-05-01
An increasingly popular view among philosophers of science is that of science as action-as the collective activity of scientists working in socially-coordinated communities. Scientists are seen not as dispassionate pursuers of Truth, but as active participants in a social enterprise, and science is viewed on a continuum with other human activities. When taken to an extreme, the science-as-social-process view can be taken to imply that science is no different from any other human activity, and therefore can make no privileged claims about its knowledge of the world. Such extreme views are normally contrasted with equally extreme views of classical science, as uncovering Universal Truth. In Science Without Laws and Scientific Perspectivism, Giere outlines an approach to understanding science that finds a middle ground between these extremes. He acknowledges that science occurs in a social and historical context, and that scientific models are constructions designed and created to serve human ends. At the same time, however, scientific models correspond to parts of the world in ways that can legitimately be termed objective. Giere's position, perspectival realism, shares important common ground with Skinner's writings on science, some of which are explored in this review. Perhaps most fundamentally, Giere shares with Skinner the view that science itself is amenable to scientific inquiry: scientific principles can and should be brought to bear on the process of science. The two approaches offer different but complementary perspectives on the nature of science, both of which are needed in a comprehensive understanding of science.
NASA Astrophysics Data System (ADS)
Negraru, Petru; Golden, Paul
2017-04-01
Long-term ground truth observations were collected at two infrasound arrays in Nevada to investigate how seasonal atmospheric variations affect the detection, traveltime and signal characteristics (azimuth, trace velocity, frequency content and amplitudes) of infrasonic arrivals at regional distances. The arrays were located in different azimuthal directions from a munition disposal facility in Nevada. FNIAR, located 154 km north of the source, has a high detection rate throughout the year. Over 90 per cent of the detonations have traveltimes indicative of stratospheric arrivals, while tropospheric waveguides are observed from only 27 per cent of the detonations. The second array, DNIAR, located 293 km southeast of the source, exhibits strong seasonal variations, with high stratospheric detection rates in winter and a virtual absence of stratospheric arrivals in summer. Tropospheric waveguides and thermospheric arrivals are also observed at DNIAR. Modeling with the Naval Research Laboratory Ground-to-Space atmospheric sound speeds yields mixed results: FNIAR arrivals are usually not predicted at all (either stratospheric or tropospheric), while DNIAR arrivals are usually correctly predicted, although summer arrivals show a consistent traveltime bias. Finally, we show the improvement in location possible with empirically calibrated traveltime and azimuth observations: using Bayesian Infrasound Source Localization, we can decrease the area enclosed by the 90 per cent credibility contours by a factor of 2.5.
MAX-91: Polarimetric SAR results on Montespertoli site
NASA Technical Reports Server (NTRS)
Baronti, S.; Luciani, S.; Moretti, S.; Paloscia, S.; Schiavon, G.; Sigismondi, S.
1993-01-01
The polarimetric Synthetic Aperture Radar (SAR) is a powerful sensor for high resolution ocean and land mapping and particularly for monitoring hydrological parameters in large watersheds. There is currently much research in progress to assess the SAR operational capability as well as to estimate the accuracy achievable in the measurements of geophysical parameters with the presently available airborne and spaceborne sensors. An important goal of this research is to improve our understanding of the basic mechanisms that control the interaction of electro-magnetic waves with soil and vegetation. This can be done both by developing electromagnetic models and by analyzing statistical relations between backscattering and ground truth data. A systematic investigation, which aims at a better understanding of the information obtainable from the multi-frequency polarimetric SAR to be used in agro-hydrology, is in progress by our groups within the framework of SIR-C/X-SAR Project and has achieved a most significant milestone with the NASA/JPL Aircraft Campaign named MAC-91. Indeed this experiment allowed us to collect a large and meaningful data set including multi-temporal multi-frequency polarimetric SAR measurements and ground truth. This paper presents some significant results obtained over an agricultural flat area within the Montespertoli site, where intensive ground measurements were carried out. The results are critically discussed with special regard to the information associated with polarimetric data.
Ground truth methods for optical cross-section modeling of biological aerosols
NASA Astrophysics Data System (ADS)
Kalter, J.; Thrush, E.; Santarpia, J.; Chaudhry, Z.; Gilberry, J.; Brown, D. M.; Brown, A.; Carter, C. C.
2011-05-01
Light detection and ranging (LIDAR) systems have demonstrated some capability to meet the need for a fast-response standoff biological detection method, for simulants in open air conditions. These systems are designed to exploit various cloud signatures, such as differential elastic backscatter, fluorescence, and depolarization, in order to detect biological warfare agents (BWAs). However, because the release of BWAs in open air is forbidden, methods must be developed to predict candidate system performance against real agents. In support of such efforts, the Johns Hopkins University Applied Physics Lab (JHU/APL) has developed a modeling approach to predict the optical properties of agent materials from relatively simple, Biosafety Level 3-compatible bench-top measurements. JHU/APL has fielded new ground truth instruments (in addition to standard particle sizers, such as the Aerodynamic Particle Sizer (APS) or GRIMM aerosol monitor) to more thoroughly characterize the simulant aerosols released in recent field tests at Dugway Proving Ground (DPG). These instruments include the Scanning Mobility Particle Sizer (SMPS), the Ultraviolet Aerodynamic Particle Sizer (UVAPS), and the Aspect Aerosol Size and Shape Analyser (Aspect). The SMPS was employed as a means of measuring small-particle concentrations for more accurate Mie scattering simulations; the UVAPS, which measures size-resolved fluorescence intensity, was employed as a path toward fluorescence cross-section modeling; and the Aspect, which measures particle shape, was employed as a path toward depolarization modeling.
Jaton, Florian
2017-01-01
This article documents the practical efforts of a group of scientists designing an image-processing algorithm for saliency detection. By following the actors of this computer science project, the article shows that the problems often considered to be the starting points of computational models are in fact provisional results of time-consuming, collective and highly material processes that engage habits, desires, skills and values. In the project being studied, problematization processes lead to the constitution of referential databases called ‘ground truths’ that enable both the effective shaping of algorithms and the evaluation of their performances. Working as important common touchstones for research communities in image processing, the ground truths are inherited from prior problematization processes and may be imparted to subsequent ones. The ethnographic results of this study suggest two complementary analytical perspectives on algorithms: (1) an ‘axiomatic’ perspective that understands algorithms as sets of instructions designed to solve given problems computationally in the best possible way, and (2) a ‘problem-oriented’ perspective that understands algorithms as sets of instructions designed to computationally retrieve outputs designed and designated during specific problematization processes. If the axiomatic perspective on algorithms puts the emphasis on the numerical transformations of inputs into outputs, the problem-oriented perspective puts the emphasis on the definition of both inputs and outputs. PMID:28950802
Ground Truth Mineralogy vs. Orbital Observations at the Bagnold Dune Field
NASA Technical Reports Server (NTRS)
Achilles, C. N.; Downs, R. T.; Ming, D. W.; Rampe, E. B.; Morris, R. V.; Treiman, A. H.; Morrison, S. M.; Blake, D. F.; Vaniman, D. T.; Bristow, T. F.
2017-01-01
The Mars Science Laboratory (MSL) rover, Curiosity, is analyzing rock and sediments in Gale crater to provide in situ sedimentological, geochemical, and mineralogical assessments of the crater's geologic history. Curiosity's recent traverse through an active, basaltic eolian deposit, informally named the Bagnold Dunes, provided the opportunity for a multi-instrument investigation of the dune field.
Accuracy assessment of percent canopy cover, cover type, and size class
H. T. Schreuder; S. Bain; R. C. Czaplewski
2003-01-01
Truth for vegetation cover percent and type is obtained from very large-scale photography (VLSP); truth for stand structure, as measured by size classes, and for vegetation types is obtained from a combination of VLSP and ground sampling. We recommend using the Kappa statistic with bootstrap confidence intervals for overall accuracy, and similarly bootstrap confidence intervals for percent...
Man and the Biosphere: Ground Truthing Coral Reefs for the St. John Island Biosphere Reserve.
ERIC Educational Resources Information Center
Brody, Michael J.; And Others
Research on the coral species composition of St. John's reefs in the Virgin Islands was conducted through the School for Field Studies (SFS) Coral Reef Ecology course (winter 1984). A cooperative study program based on the United Nations Educational, Scientific, and Cultural Organization's (Unesco) program, Man and the Biosphere, was undertaken by…
Healing the Past through Story
ERIC Educational Resources Information Center
Mullet, Judy H.; Akerson, Nels M. K.; Turman, Allison
2013-01-01
Stories matter, and the stories we tell ourselves matter most. Truth has many layers, and narrative helps us make sense of our multilayered reality. We live a personal narrative that is grounded in our past experience, but embodied in our present. As such, it filters what we see and how we interpret events. Attachment theorists tell us our early…
Data and Network Science for Noisy Heterogeneous Systems
ERIC Educational Resources Information Center
Rider, Andrew Kent
2013-01-01
Data in many growing fields has an underlying network structure that can be taken advantage of. In this dissertation we apply data and network science to problems in the domains of systems biology and healthcare. Data challenges in these fields include noisy, heterogeneous data, and a lack of ground truth. The primary thesis of this work is that…
Cloud Study Investigators: Using NASA's CERES S'COOL in Problem-Based Learning
ERIC Educational Resources Information Center
Moore, Susan; Popiolkowski, Gary
2011-01-01
This article describes how, by incorporating NASA's Students' Cloud Observations On-Line (S'COOL) project into a problem-based learning (PBL) activity, middle school students are engaged in authentic scientific research where they observe and record information about clouds and contribute ground truth data to NASA's Clouds and the Earth's…
Forest statistics for Arkansas' delta counties
Richard T. Quick; Mary S. Hedlund
1979-01-01
These tables were derived from data obtained during a 1978 inventory of 21 counties comprising the North and South Delta Units of Arkansas (fig. 1). Forest area was estimated from aerial photos with an adjustment for ground truth at selected locations. Sample plots were systematically established at three-mile intervals using a grid oriented roughly N-S and E-W. At...
Forest statistics for Arkansas' Ouachita counties
T. Richard Quick; Mary S. Hedlund
1979-01-01
These tables were derived from data obtained during a 1978 inventory of 10 counties comprising the Ouachita Unit of Arkansas (fig. 1). Forest area was estimated from aerial photos with an adjustment for ground truth at selected locations. Sample plots were systematically established at three-mile intervals using a grid oriented roughly N-S and E-W. At each location,...
Forest statistics for Arkansas' Ozark counties
T. Richard Quick; Mary S. Hedlund
1979-01-01
These tables were derived from data obtained during a 1978 inventory of 24 counties comprising the Ozark Unit of Arkansas (fig. 1). Forest area was estimated from aerial photos with an adjustment for ground truth at selected locations. Sample plots were systematically established at three-mile intervals using a grid oriented roughly N-S and E-W. At each location,...
Exploring Meaning in Life in the Tel Hai Gifted Children’s Center
ERIC Educational Resources Information Center
Kasler, Jon; Goldfarb-Rivlin, Sima; Levi, Jossef; Elias, Maurice J.
2013-01-01
While high IQ is likely to be an advantage in moral reasoning, it does not guarantee students' putting those morals into practice. A clearly defined sense of purpose grounded in values of social responsibility, exploration of values, and the search for ultimate truths, both personal and collective, is paramount. The Meaning in Life program in…
Modifications to the accuracy assessment analysis routine SPATL to produce an output file
NASA Technical Reports Server (NTRS)
Carnes, J. G.
1978-01-01
SPATL is an analysis program in the Accuracy Assessment Software System which makes comparisons between ground truth information and dot labeling for an individual segment. In order to facilitate the aggregation of this information, SPATL was modified to produce a disk output file containing the necessary information about each segment.
No Neutral Ground: Standing by the Values We Prize in Higher Education.
ERIC Educational Resources Information Center
Young, Robert B.
This book is a call to those within higher education to remain clear and consistent about the core values--service, truth, freedom, equality, individuation, justice, and community--that play a critical role in American society. It provides suggestions to help administrators and faculty to incorporate these values into their own practice and…
Margaret Brittingham; Patrick Drohan; Joseph Bishop
2013-01-01
Marcellus shale development is occurring rapidly across Pennsylvania. We conducted a geographic information system (GIS) analysis using available Pennsylvania Department of Environmental Protection permit data, before-and-after photos, ground-truthing, and field measurements to describe landscape change within the first 3 years of active Marcellus exploration and...
Application of LANDSAT TM images to assess circulation and dispersion in coastal lagoons
NASA Technical Reports Server (NTRS)
Kjerfve, B.; Jensen, J. R.; Magill, K. E.
1986-01-01
The main objectives are formulated around a four pronged work approach, consisting of tasks related to: image processing and analysis of LANDSAT thematic mapping; numerical modeling of circulation and dispersion; hydrographic and spectral radiation field sampling/ground truth data collection; and special efforts to focus the investigation on turbid coastal/estuarine fronts.
Regrounding in Place: Paths to Native American Truths at the Margins
ERIC Educational Resources Information Center
Lucas, Michael
2013-01-01
Margin acts as ground to receive the figure of the text. Margin is initially unreadable, but as suggested by gestalt studies, may be reversed, or regrounded. A humanities course, "Native American Architecture and Place," was created for a polytechnic student population, looking to place as an inroad for access to the margins of a better…
SpinSat Mission Ground Truth Characterization
2014-09-01
SpinSat launched via the SpaceX Falcon 9 CRS-4 mission on 12 Sept. 2014 and is to be deployed from the International Space Station (ISS) on 29 Sept. 2014. It was carried to the ISS as part of the soft-stow cargo allotment on the SpaceX Dragon spacecraft, launched by the SpaceX Falcon 9 two-stage-to-orbit launch vehicle.
Pina, Pedro; Vieira, Gonçalo; Bandeira, Lourenço; Mora, Carla
2016-12-15
The ice-free areas of Maritime Antarctica show complex mosaics of surface covers, with wide patches of diverse bare soils and rock, together with various vegetation communities dominated by lichens and mosses. The microscale variability is difficult to characterize and quantify, but is essential for ground-truthing and for defining classifiers for large areas using, for example, high-resolution satellite imagery or even ultra-high-resolution unmanned aerial vehicle (UAV) imagery. The main objective of this paper is to verify the ability and robustness of an automated approach to discriminate the variety of surface types in digital photographs acquired at ground level in ice-free regions of Maritime Antarctica. The proposed method is based on an object-based classification procedure built in two main steps: first, the automated delineation of homogeneous regions (the objects) of the images through the watershed transform, with adequate filtering to avoid over-segmentation, and second, the labelling of each identified object with a supervised decision classifier trained with samples of representative objects of ice-free surface types (bare rock, bare soil, moss and lichen formations). The method is evaluated with images acquired in summer campaigns in the Fildes and Barton peninsulas (King George Island, South Shetlands). The best performances for the datasets of the two peninsulas are achieved with an SVM classifier, with overall accuracies of about 92% and kappa values around 0.89. These excellent performances validate the adequacy of the approach for obtaining accurate surface reference data at the complete pixel scale (sub-metric) of current very high resolution (VHR) satellite images, instead of a common single-point sampling. Copyright © 2016 Elsevier B.V. All rights reserved.
A Gauss-Seidel Iteration Scheme for Reference-Free 3-D Histological Image Reconstruction
Daum, Volker; Steidl, Stefan; Maier, Andreas; Köstler, Harald; Hornegger, Joachim
2015-01-01
Three-dimensional (3-D) reconstruction of histological slice sequences offers great benefits in the investigation of different morphologies. It features very high resolution, which is still unmatched by in-vivo 3-D imaging modalities, and tissue staining further enhances visibility and contrast. One important step during reconstruction is the reversal of slice deformations introduced during histological slice preparation, a process also called image unwarping. Most methods use an external reference, or rely on conservative stopping criteria during the unwarping optimization to prevent straightening of naturally curved morphology. Our approach shows that the problem of unwarping is based on the superposition of low-frequency anatomy and high-frequency errors. We present an iterative scheme that transfers the ideas of the Gauss-Seidel method to image stacks to separate the anatomy from the deformation. In particular, the scheme is universally applicable without restriction to a specific unwarping method, and uses no external reference. The deformation artifacts are effectively reduced in the resulting histology volumes, while the natural curvature of the anatomy is preserved. The validity of our method is shown on synthetic data, simulated histology data using a CT data set, and real histology data. In the case of the simulated histology, where the ground truth was known, the mean Target Registration Error (TRE) between the unwarped and original volume could be reduced to less than 1 pixel on average after 6 iterations of our proposed method. PMID:25312918
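The numerical idea the authors transfer to image stacks is the classic Gauss-Seidel sweep, in which each unknown is updated immediately using the newest values of its neighbours. Below is a minimal sketch of that scheme on a plain linear system Ax = b; it is illustrative only, not the authors' unwarping algorithm.

```python
def gauss_seidel(A, b, iters=100, x0=None):
    """Classic Gauss-Seidel iteration for A x = b.

    Converges for diagonally dominant A. Each component x[i] is updated
    in place, so later components in the same sweep already see the
    newest values -- the defining feature of Gauss-Seidel vs. Jacobi.
    """
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # sum of off-diagonal terms using the most recent x values
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

In the paper's setting the "unknowns" are per-slice deformations and the sweeps alternate over the stack, but the update-in-place structure is the same.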
Patient identification using a near-infrared laser scanner
NASA Astrophysics Data System (ADS)
Manit, Jirapong; Bremer, Christina; Schweikard, Achim; Ernst, Floris
2017-03-01
We propose a new biometric approach where the tissue thickness of a person's forehead is used as a biometric feature. Given that the spatial registration of two 3D laser scans of the same human face usually produces a low error value, the principle of point cloud registration and its error metric can be applied to human classification techniques. However, by only considering the spatial error, it is not possible to reliably verify a person's identity. We propose to use a novel near-infrared laser-based head tracking system to determine an additional feature, the tissue thickness, and include this in the error metric. Using MRI as a ground truth, data from the foreheads of 30 subjects were collected, from which a 4D reference point cloud was created for each subject. The measurements from the near-infrared system were registered with all reference point clouds using the ICP algorithm. Afterwards, the spatial and tissue thickness errors were extracted, forming a 2D feature space. For all subjects, the lowest feature distance resulted from the registration of a measurement and the reference point cloud of the same person. The combined registration error features yielded two clusters in the feature space, one from the same subject and another from the other subjects. When only the tissue thickness error was considered, these clusters were less distinct but still present. These findings could help to raise safety standards for head and neck cancer patients and lay the foundation for a future human identification technique.
NASA Astrophysics Data System (ADS)
Jaumann, Ralf; Bibring, Jean-Pierre; Glassmeier, Karl-Heinz; Grott, Mathias; Ho, Tra-Mi; Ulamec, Stefan; Schmitz, Nicole; Auster, Ulrich; Biele, Jens; Kuninaka, Hitoshi; Okada, Tatsuaki; Yoshikawa, Makoto; Watanabe, Sei-ichiro; Fujimoto, Masaki; Spohn, Tilman; Koncz, Alexander; Hercik, David; Michaelis, Harald
2015-04-01
MASCOT, a Mobile Asteroid Surface Scout, will support JAXA's Hayabusa 2 mission to investigate the C-type asteroid 1999 JU3 (1). The German Aerospace Center (DLR) develops MASCOT with contributions from CNES (France) (2,3,4). The main objective is to map in situ the asteroid's geomorphology, the intimate mixture, texture and composition of the regolith (dust, soil and rocks), and the thermal, mechanical, and magnetic properties of the surface in order to provide ground truth for the orbiter remote measurements, support the selection of sampling sites, and provide context information for the returned samples. MASCOT comprises a payload of four scientific instruments: camera, radiometer, magnetometer and hyperspectral microscope. C- and D-type asteroids hold clues to the origin of the solar system, the formation of planets, the origins of water and life on Earth, the protection of Earth from impacts, and resources for future human exploration. C- and D-types are dark and difficult to study from Earth, and have only been glimpsed by spacecraft. While results from recent missions (e.g., Hayabusa, NEAR (5, 6, 7)) have dramatically increased our understanding of asteroids, important questions remain open. For example, characterizing the properties of asteroid regolith in situ would deliver important ground truth for further understanding telescopic and orbital observations and samples of such asteroids. MASCOT will descend and land on the asteroid and will change its own position up to two times by hopping. This enables measurements during descent, at the landing and hopping positions #1-3, and during hopping. Hayabusa 2, together with MASCOT, launched on December 3rd, 2014, will arrive at 1999 JU3 in 2018 and return samples back to Earth in 2020.
References: (1) Vilas, F., Astronomical J., 1101-1105, 2008; (2) Ulamec, S., et al., Acta Astronautica, Vol. 93, pp. 460-466; (3) Jaumann et al., 45th LPSC, #1812, Houston; (4) Ho et al., 45th LPSC, #2535, Houston; (5) Special Issue, Science, Vol. 312, no. 5778, 2006; (6) Special Issue, Science, Vol. 333, no. 6046, 2011; (7) Bell, L., Mitton, J., Cambridge Univ. Press, 2002.
Assessing uncertainties of GRACE-derived terrestrial water-storage fields
NASA Astrophysics Data System (ADS)
Ferreira, Vagner; Montecino, Henry
2017-04-01
Space-borne sensors are producing large volumes of remotely sensed data and, consequently, different measurements of the same field are available to end users. Furthermore, different satellite processing centres produce extensive products based on the data of a single mission. This is exactly the case with the Gravity Recovery and Climate Experiment (GRACE) mission, which has been monitoring terrestrial water storage (TWS) since April 2002, while the Centre for Space Research (CSR), the Jet Propulsion Laboratory (JPL), the GeoForschungsZentrum (GFZ), the Groupe de Recherche de Géodésie Spatiale (GRGS), among others, provide individual monthly solutions in the form of Stokes's coefficients. The inverted TWS maps from Stokes's coefficients are being used in many applications and, therefore, as no ground truth data exist, the uncertainties are unknown. An assessment of the uncertainties associated with these different products is mandatory in order to guide data producers and support the users to choose the best dataset. However, the estimation of uncertainties of space-borne products often relies on ground truth data, and in the absence of such data, an assessment of their quality is a challenge. A recent study (Ferreira et al. 2016) evaluates the quality of each processing centre (CSR, JPL, GFZ, and GRGS) by estimating their individual uncertainties using a generalised formulation of the three-cornered hat (TCH) method. It was found that the TCH results for the study period of August 2002 to June 2014 indicate that on a global scale, the CSR, GFZ, GRGS, and JPL present uncertainties of 9.4, 13.7, 14.8, and 13.2 mm, respectively. On a basin scale, the overall good performance of the CSR is observed at 91 river basins. The TCH-based results are confirmed by a comparison with an ensemble solution from the four GRACE processing centres.
Reference Ferreira VG, Montecino HDC, Yakubu CI and Heck B (2016) Uncertainties of the Gravity Recovery and Climate Experiment time-variable gravity-field solutions based on three-cornered hat method. Journal of Applied Remote Sensing, 10(1), pp 015015-(1-20). doi: 10.1117/1.JRS.10.015015
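The three-cornered hat estimate used in the cited study can be illustrated for the basic three-series case: under the assumption of mutually uncorrelated errors, each product's error variance follows from the variances of the pairwise differences. This is a minimal sketch of that classic case only; the generalised formulation of Ferreira et al. (for four or more centres) is more involved.

```python
from statistics import pvariance

def three_cornered_hat(a, b, c):
    """Classic 3-series TCH: individual error variances from pairwise
    difference variances. Assumes the three error series are mutually
    uncorrelated; negative estimates are clipped to zero."""
    vab = pvariance([x - y for x, y in zip(a, b)])  # var(ea) + var(eb)
    vac = pvariance([x - y for x, y in zip(a, c)])  # var(ea) + var(ec)
    vbc = pvariance([x - y for x, y in zip(b, c)])  # var(eb) + var(ec)
    va = max(0.0, (vab + vac - vbc) / 2)
    vb = max(0.0, (vab + vbc - vac) / 2)
    vc = max(0.0, (vac + vbc - vab) / 2)
    return va, vb, vc
```

The common (unknown) signal cancels in every pairwise difference, which is why no ground truth is needed: only the error variances remain.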
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Q; Zhang, Y; Liu, Y
2014-06-15
Purpose: Hyperpolarized gas (HP) tagging MRI is a novel imaging technique for direct measurement of lung motion during breathing. This study aims to quantitatively evaluate the accuracy of deformable image registration (DIR) in lung motion estimation using HP tagging MRI as the reference. Methods: Three healthy subjects were imaged using HP MR tagging, as well as a high-resolution 3D proton MR sequence (TrueFISP), at the end-of-inhalation (EOI) and the end-of-exhalation (EOE). Ground truth of lung motion and the corresponding displacement vector field (tDVF) was derived from HP tagging MRI by manually tracking the displacement of tagging grids between EOI and EOE. Seven different DIR methods were applied to the high-resolution TrueFISP MR images (EOI and EOE) to generate the DIR-based DVFs (dDVF). The DIR methods include Velocity (VEL), MIM, Mirada, multi-grid B-spline from Elastix (MGB) and 3 other algorithms from the DIRART toolbox (Double Force Demons (DFD), Improved Lucas-Kanade (ILK), and Iterative Optical Flow (IOF)). All registrations were performed by independent experts. Target registration error (TRE) was calculated as tDVF − dDVF. Analysis was performed for the entire lungs, and separately for the upper and lower lungs. Results: Significant differences between tDVF and dDVF were observed. Apart from the DFD and IOF algorithms, the dDVFs showed similar deformation magnitude distributions, which nevertheless deviated from the ground truth. The average TRE for the entire lung ranged 2.5−23.7 mm (mean = 8.8 mm), depending on the DIR method and the subject's breathing amplitude. Larger TRE (13.3−23.7 mm) was found in the subject with the larger breathing amplitude of 45.6 mm. TRE was greater in the lower lung (2.5−33.9 mm, mean = 12.4 mm) than in the upper lung (2.5−11.9 mm, mean = 5.8 mm). Conclusion: Significant differences were observed in lung motion estimation between the HP gas tagging MRI method and the DIR methods, especially when lung motion is large.
Large variation among different DIR methods was also observed.
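The target registration error reported in this record is the per-point magnitude of the difference between the tagging-derived and DIR-derived displacement fields. A minimal sketch, with the vector fields represented as lists of 3-D displacements (function and variable names are ours):

```python
import math

def target_registration_error(tdvf, ddvf):
    """Per-point TRE magnitudes |tDVF - dDVF| and their mean.

    tdvf, ddvf: iterables of 3-D displacement vectors (mm), paired
    point-by-point (same anatomical landmarks in both fields)."""
    tres = [math.dist(t, d) for t, d in zip(tdvf, ddvf)]
    return tres, sum(tres) / len(tres)
```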
NASA Astrophysics Data System (ADS)
Montereale Gavazzi, G.; Madricardo, F.; Janowski, L.; Kruss, A.; Blondel, P.; Sigovini, M.; Foglini, F.
2016-03-01
Recent technological developments of multibeam echosounder systems (MBES) allow mapping of benthic habitats with unprecedented detail. MBES can now be employed in extremely shallow waters, challenging data acquisition (as these instruments were often designed for deeper waters) and data interpretation (honed on datasets with resolution sometimes orders of magnitude lower). With extremely high-resolution bathymetry and co-located backscatter data, it is now possible to map the spatial distribution of fine-scale benthic habitats, even identifying the acoustic signatures of single sponges. In this context, it is necessary to understand which of the commonly used segmentation methods is best suited to account for such level of detail. At the same time, new sampling protocols for precisely geo-referenced ground truth data need to be developed to validate the benthic environmental classification. This study focuses on a dataset collected in a shallow (2-10 m deep) tidal channel of the Lagoon of Venice, Italy. Using 0.05-m and 0.2-m raster grids, we compared a range of classification approaches, both pixel-based and object-based, including manual, Maximum Likelihood Classifier, Jenks Optimization clustering, textural analysis and Object Based Image Analysis. Through a comprehensive and accurately geo-referenced ground truth dataset, we were able to identify five different classes of the substrate composition, including sponges, mixed submerged aquatic vegetation, mixed detritic bottom (fine and coarse) and unconsolidated bare sediment. We computed estimates of accuracy (namely Overall, User, Producer Accuracies and the Kappa statistic) by cross-tabulating predicted and reference instances. Overall, pixel-based segmentations produced the highest accuracies, and the accuracy assessment is strongly dependent on the number of classes chosen for the thematic output.
Tidal channels in the Venice Lagoon are extremely important in terms of habitats and sediment distribution, particularly within the context of the new tidal barrier being built. However, they had remained largely unexplored until now, because of the surveying challenges. The application of this remote sensing approach, combined with targeted sampling, opens a new perspective in the monitoring of benthic habitats in view of a knowledge-based management of natural resources in shallow coastal areas.
NASA Astrophysics Data System (ADS)
Remy, Charlotte; Lalonde, Arthur; Béliveau-Nadeau, Dominic; Carrier, Jean-François; Bouchard, Hugo
2018-01-01
The purpose of this study is to evaluate the impact of a novel tissue characterization method using dual-energy over single-energy computed tomography (DECT and SECT) on Monte Carlo (MC) dose calculations for low-dose-rate (LDR) prostate brachytherapy performed in a patient-like geometry. A virtual patient geometry is created using contours from a real patient pelvis CT scan, where known elemental compositions and varying densities are overwritten in each voxel. A second phantom is made with additional calcifications. Both phantoms are the ground truth with which all results are compared. Simulated CT images are generated from them using attenuation coefficients taken from the XCOM database, with a 100 kVp spectrum for SECT and 80 and 140Sn kVp for DECT. Tissue segmentation for Monte Carlo dose calculation is made using a stoichiometric calibration method for the simulated SECT images. For the DECT images, Bayesian eigentissue decomposition is used. An LDR prostate brachytherapy plan is defined with 125I sources and then calculated using the EGSnrc user code Brachydose for each case. Dose distributions and dose-volume histograms (DVH) are compared to ground truth to assess the accuracy of tissue segmentation. For noiseless images, DECT-based tissue segmentation outperforms the SECT procedure, with root mean square (RMS) errors on relative dose differences of 2.39% versus 7.77%, respectively, and provides DVHs closest to the reference DVHs for all tissues. For a medium level of CT noise, Bayesian eigentissue decomposition still performs better on the overall dose calculation, as the RMS error is 7.83% compared to 9.15% for SECT. Both methods give a similar DVH for the prostate, while the DECT segmentation remains more accurate for organs at risk and in the presence of calcifications, with RMS errors below 5% within the calcifications versus up to 154% for SECT.
In a patient-like geometry, DECT-based tissue segmentation provides dose distributions with the highest accuracy and the least bias compared to SECT. When imaging noise is considered, benefits of DECT are noticeable if important calcifications are found within the prostate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juneja, P; Harris, E; Bamber, J
2014-06-01
Purpose: There is substantial observer variability in the delineation of target volumes for post-surgical partial breast radiotherapy because the tumour bed has poor x-ray contrast. This variability may result in substantial variations in planned dose distribution. Ultrasound elastography (USE) has an ability to detect mechanical discontinuities and therefore, the potential to image the scar and distortion in breast tissue architecture. The goal of this study was to compare USE techniques: strain elastography (SE), shear wave elastography (SWE) and acoustic radiation force impulse (ARFI) imaging using phantoms that simulate features of the tumour bed, for the purpose of incorporating USE in breast radiotherapy planning. Methods: Three gelatine-based phantoms (10% w/v) containing: a stiff inclusion (gelatine 16% w/v) with adhered boundaries, a stiff inclusion (gelatine 16% w/v) with mobile boundaries and fluid cavity inclusion (to mimic seroma), were constructed and used to investigate the USE techniques. The accuracy of the elastography techniques was quantified by comparing the imaged inclusion with the modelled ground-truth using the Dice similarity coefficient (DSC). For two regions of interest (ROI), the DSC measures their spatial overlap. Ground-truth ROIs were modelled using geometrical measurements from B-mode images. Results: The phantoms simulating stiff scar tissue with adhered and mobile boundaries and seroma were successfully developed and imaged using SE and SWE. The edges of the stiff inclusions were more clearly visible in SE than in SWE. Subsequently, for all these phantoms the measured DSCs were found to be higher for SE (DSCs: 0.91–0.97) than SWE (DSCs: 0.68–0.79) with an average relative difference of 23%. In the case of seroma phantom, DSC values for SE and SWE were similar. Conclusion: This study presents a first attempt to identify the most suitable elastography technique for use in breast radiotherapy planning.
Further analysis will include comparison of ARFI with SE and SWE. This work is supported by the EPSRC Platform Grant, reference number EP/H046526/1.
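The Dice similarity coefficient used above to score imaged inclusions against the modelled ground truth is twice the overlap divided by the total size of the two regions. A minimal sketch over binary masks (flattened to 1-D for simplicity):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    Masks are equal-length iterables of 0/1 (or truthy) values.
    Returns 1.0 when both masks are empty, by convention."""
    a = [bool(x) for x in mask_a]
    b = [bool(x) for x in mask_b]
    inter = sum(x and y for x, y in zip(a, b))  # overlapping pixels
    size = sum(a) + sum(b)                      # total pixels in both ROIs
    return 2 * inter / size if size else 1.0
```

DSC = 1 means perfect overlap; the study's SE values of 0.91–0.97 indicate near-complete spatial agreement with the B-mode-derived reference ROIs.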
Niazi, Muhammad Khalid Khan; Abas, Fazly Salleh; Senaras, Caglar; Pennell, Michael; Sahiner, Berkman; Chen, Weijie; Opfer, John; Hasserjian, Robert; Louissaint, Abner; Shana'ah, Arwa; Lozanski, Gerard; Gurcan, Metin N
2018-01-01
Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC positive cells. Our phantom development consists of two main steps: 1) extraction of individual nuclei as well as nuclei clumps, both positive and negative, from real WSI images, and 2) systematic placement of the extracted nuclei clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board-certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologists' estimates and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers show substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that can approximate the true ratio from the area of the positive and negative nuclei, hence avoiding the need to detect individual nuclei.
The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.
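The concordance correlation coefficient (CCC) used to compare reader and algorithm estimates with the true ratio is Lin's statistic, which penalises both poor correlation and systematic offset from the identity line. A minimal sketch using population moments (an illustrative implementation, not the study's statistical code):

```python
from statistics import fmean

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient.

    CCC = 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2),
    computed with population (1/n) moments. Equals 1 only when the
    points lie exactly on the 45-degree identity line."""
    mx, my = fmean(x), fmean(y)
    n = len(x)
    sxx = sum((a - mx) ** 2 for a in x) / n
    syy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sxx + syy + (mx - my) ** 2)
```

Unlike Pearson's r, CCC drops below 1 for perfectly correlated but biased readings, which is exactly the failure mode a reader panel exhibits against known ground truth.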
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y; Sharp, G; Winey, B
Purpose: An unpredictable movement of a patient can occur during SBRT even when immobilization devices are applied. In SBRT treatments using a conventional linear accelerator, detection of such movements relies heavily on human interaction and monitoring. This study aims to detect such positional abnormalities in real time by assessing intra-fractional gantry-mounted kV projection images of a patient's spine. Methods: We propose a self-CBCT-image-based spine tracking method consisting of the following steps: (1) Acquire a pre-treatment CBCT image; (2) Transform the CBCT volume according to the couch correction; (3) Acquire kV projections during treatment beam delivery; (4) Simultaneously with each acquisition, generate a DRR from the CBCT volume based on the current projection geometry; (5) Perform an intensity-gradient-based 2D registration between spine ROI images of the projection and the DRR images; (6) Report an alarm if the detected 2D displacement is beyond a threshold value. To demonstrate the feasibility, retrospective simulations were performed on 1,896 projections from nine CBCT sessions of three patients who received lung SBRT. The unpredictable movements were simulated by applying random rotations and translations to the reference CBCT prior to each DRR generation. As the ground truth, the 3D translations and/or rotations causing >3 mm displacement of the midpoint of the thoracic spine were regarded as abnormal. In the measurements, different threshold values of 2D displacement were tested to investigate the sensitivity and specificity of the proposed method. Results: A linear relationship between the ground-truth 3D displacement and the detected 2D displacement was observed (R² = 0.44). When the 2D displacement threshold was set to 3.6 mm, the overall sensitivity and specificity were 77.7±5.7% and 77.9±3.5%, respectively.
Conclusion: In this simulation study, it was demonstrated that intrafractional kV projections from an on-board CBCT system have a potential to detect unpredictable patient movement during SBRT. This research is funded by Interfractional Imaging Research Grant from Elekta.
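The threshold test described in the Results can be sketched directly: frames whose detected 2-D displacement exceeds the threshold are flagged, and sensitivity/specificity follow from comparison with the ground-truth abnormality labels. Names are illustrative, not from the paper's software.

```python
def sens_spec(displacements_2d, abnormal, threshold_mm):
    """Sensitivity/specificity of flagging projections whose detected 2-D
    displacement exceeds threshold_mm, against ground-truth labels."""
    tp = fp = tn = fn = 0
    for d, truth in zip(displacements_2d, abnormal):
        flagged = d > threshold_mm
        if flagged and truth:
            tp += 1          # correctly raised alarm
        elif flagged and not truth:
            fp += 1          # false alarm
        elif not flagged and truth:
            fn += 1          # missed abnormal movement
        else:
            tn += 1          # correctly quiet
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec
```

Sweeping `threshold_mm` over a range traces out the sensitivity/specificity trade-off from which the paper's 3.6 mm operating point was chosen.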
NASA Astrophysics Data System (ADS)
Parker, James; Pryse, Eleri; Jackson-Booth, Natasha
2017-04-01
The main ionospheric trough is a large-scale spatial depletion in the ionospheric electron density that commonly separates the auroral and mid-latitude regions. The feature covers several degrees in latitude and is extended in longitude. It exhibits substantial day-to-day variability in both the location of its minimum ionisation density and in its latitudinal structure. Observations from the UK have shown the trough to be a night-time feature, appearing in early evening to the north of the mainland and progressing equatorward during the course of the night. At dawn, photoionisation fills in the feature. Under increasing levels of geomagnetic activity, the trough moves progressively to lower latitudes. Steep gradients on the trough walls and their variability can cause problems for radio applications. EDAM can be used to model the ionosphere at the trough latitudes by assimilating ionospheric observations from this region into the International Reference Ionosphere (IRI). In this study, troughs modelled by EDAM, assimilating data for a period from September to December 2002, are presented and are verified by comparisons with independent observations. Measurements of slant total electron content (sTEC) between GPS satellites and forty ground receivers in Europe were assimilated into EDAM to model the ionospheric electron density. The Vertical Total Electron Content (VTEC) was then calculated through the model, with the values at the longitude of 0.0E considered to obtain statistical characteristics of identified trough parameters. Comparisons of the parameters with those obtained previously, using transmissions from the satellites of NIMS (Navy Ionospheric Monitoring System) orbiting at altitudes lower than GPS, revealed consistent results. Further support for the EDAM trough was obtained by comparisons of the model with independent GPS measurements. For this, sTEC observations to a GPS ground station not used in the assimilation served as independent "truth" data.
Comparisons of these independent truth data with sTEC calculated through the model were used to determine the accuracy of EDAM in the vicinity of the trough.
Orion Exploration Flight Test-1 Contingency Drogue Deploy Velocity Trigger
NASA Technical Reports Server (NTRS)
Gay, Robert S.; Stochowiak, Susan; Smith, Kelly
2013-01-01
As a backup to the GPS-aided Kalman filter and the barometric altimeter, an "adjusted" velocity trigger is used during entry to trigger the chain of events that leads to drogue chute deploy for the Orion Multi-Purpose Crew Vehicle (MPCV) Exploration Flight Test-1 (EFT-1). Even though this scenario is multiple failures deep, the Orion Guidance, Navigation, and Control (GN&C) software makes use of a clever technique taken from the Mars Science Laboratory (MSL) program, which recently successfully landed the Curiosity rover on Mars. MSL used this technique to jettison the heat shield at the proper time during descent. Originally, Orion used the un-adjusted navigated velocity, but the removal of the Star Tracker to save costs for EFT-1 increased attitude errors, which in turn increased inertial propagation errors to the point where the un-adjusted velocity caused altitude dispersions at drogue deploy to be too large. Thus, to reduce dispersions, the velocity vector is projected onto a "reference" vector that represents the nominal "truth" vector at the desired point in the trajectory. Because the navigation errors are largely perpendicular to the truth vector, this projection significantly reduces dispersions in the velocity magnitude. This paper details the evolution of this trigger method for the Orion project and covers the various methods tested to determine the reference "truth" vector, and at what point in the trajectory it should be computed.
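The projection idea above can be sketched in a few lines. This is an illustrative reconstruction, not flight software: the vectors and error magnitudes are made up, and `adjusted_speed` is a hypothetical name for the projection step the abstract describes.

```python
import numpy as np

# Sketch of the "adjusted" velocity trigger idea: the navigated velocity is
# projected onto a pre-computed reference ("truth") velocity direction, so
# navigation errors perpendicular to that direction barely perturb the
# trigger quantity. All values are illustrative, not flight values.

def adjusted_speed(v_nav, v_ref):
    """Project the navigated velocity onto the reference direction."""
    u = v_ref / np.linalg.norm(v_ref)   # unit vector along the truth velocity
    return float(np.dot(v_nav, u))      # signed speed along the reference

# Reference velocity at the desired trajectory point (made-up values).
v_ref = np.array([-7000.0, 1200.0, -300.0])

# Navigated velocity = truth + an error made exactly perpendicular to v_ref.
err_perp = np.array([10.0, 120.0, 200.0])
err_perp -= np.dot(err_perp, v_ref) / np.dot(v_ref, v_ref) * v_ref
v_nav = v_ref + err_perp

# The perpendicular error leaves the projected (adjusted) speed essentially
# unchanged, while it inflates the raw speed magnitude.
raw_speed = np.linalg.norm(v_nav)
adj_speed = adjusted_speed(v_nav, v_ref)
true_speed = np.linalg.norm(v_ref)
```

With a purely perpendicular error, the adjusted speed matches the true speed while the raw magnitude is biased high, which is the dispersion-reduction effect the abstract reports.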
Huang, Yunzhi; Zhang, Junpeng; Cui, Yuan; Yang, Gang; Liu, Qi; Yin, Guangfu
2018-01-01
Sensor-level functional connectivity topography (sFCT) contributes significantly to our understanding of brain networks. sFCT can be constructed using either electroencephalography (EEG) or magnetoencephalography (MEG). Here, we compared sFCT within the EEG modality and between the EEG and MEG modalities. We first used simulations to examine how different EEG references—including the Reference Electrode Standardization Technique (REST), average reference (AR), linked mastoids (LM), and left mastoid reference (LR)—affect EEG-based sFCT. The results showed that REST decreased the reference effects on scalp EEG recordings, making REST-based sFCT closer to the ground truth (sFCT based on ideal recordings). For the inter-modality simulation comparisons, we compared each type of EEG-sFCT with MEG-sFCT using three metrics to quantify the differences: relative error (RE), overlap rate (OR), and Hamming distance (HD). When two sFCTs are similar, RE and HD are low, while OR is high. Results showed that among all reference schemes, EEG- and MEG-sFCT were most similar when the EEG was REST-based and the EEG and MEG were recorded simultaneously. Next, we analyzed simultaneously recorded MEG and EEG data from publicly available face-recognition experiments using a procedure similar to that of the simulations. The results showed that (1) if MEG-sFCT is the standard, REST- and LM-based sFCT provided results closer to this standard in terms of HD; (2) REST-based sFCT and MEG-sFCT had the highest similarity in terms of RE; and (3) REST-based sFCT had the most overlapping edges with MEG-sFCT in terms of OR. This study thus provides new insights into the effect of different reference schemes on sFCT and the similarity between MEG and EEG in terms of sFCT. PMID:29867395
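The three metrics can be made concrete on toy connectivity graphs. These are common conventions for RE, OR, and HD applied to adjacency matrices; the study's exact definitions may differ, so treat this as a hedged sketch, not the paper's implementation.

```python
import numpy as np

# Toy versions of the three similarity metrics named above, applied to two
# sensor-level connectivity graphs (adjacency matrices). Assumed conventions:
#   RE — relative error between weighted connectivity matrices,
#   OR — fraction of edges shared by both graphs (Jaccard-style overlap),
#   HD — fraction of edge slots where the two graphs disagree (Hamming).

def relative_error(A, B):
    return np.linalg.norm(A - B) / np.linalg.norm(B)

def overlap_rate(A, B):
    a, b = A > 0, B > 0
    return np.sum(a & b) / np.sum(a | b)

def hamming_distance(A, B):
    a, b = A > 0, B > 0
    return np.mean(a != b)

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
B = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

re_, or_, hd_ = relative_error(A, B), overlap_rate(A, B), hamming_distance(A, B)
```

As the abstract states, similar graphs drive RE and HD down and OR up; here the two graphs share only one of three undirected edges, so OR is 1/3.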
Public health applications of remote sensing of vector borne and parasitic diseases
NASA Technical Reports Server (NTRS)
1976-01-01
Results of an investigation of the potential application of remote sensing to various fields of public health are presented. Specific topics discussed include: detection of snail habitats in connection with the epidemiology of schistosomiasis; the detection of certain Anopheles breeding sites, and location of transient human populations, both in connection with malaria eradication programs; and detection of overwintering population sites for the primary screwworm (Cochliomyia americana). Emphasis was placed on the determination of ground truth data on the biological, chemical, and physical characteristics of ground waters which would or would not support the growth of significant populations of mosquitoes.
Evaluation of large area crop estimation techniques using LANDSAT and ground-derived data. [Missouri
NASA Technical Reports Server (NTRS)
Amis, M. L.; Lennington, R. K.; Martin, M. V.; Mcguire, W. G.; Shen, S. S. (Principal Investigator)
1981-01-01
The results of the Domestic Crops and Land Cover Classification and Clustering study on large area crop estimation using LANDSAT and ground truth data are reported. The current crop area estimation approach of the Economics and Statistics Service of the U.S. Department of Agriculture was evaluated in terms of the factors that are likely to influence the bias and variance of the estimator. Also, alternative procedures involving replacements for the clustering algorithm, the classifier, or the regression model used in the original U.S. Department of Agriculture procedures were investigated.
NASA Astrophysics Data System (ADS)
Martín-Torres, Javier; Paz Zorzano, María; Pla-García, Jorge; Rafkin, Scot; Lepinette, Alain; Sebastián, Eduardo; Gómez-Elvira, Javier; REMS Team
2013-04-01
Due to the low density of the Martian atmosphere, the temperature of the surface is controlled primarily by solar heating and by infrared cooling to the atmosphere and space, rather than by heat exchange with the atmosphere. In the absence of solar radiation, the infrared (IR) cooling, and thus the nighttime surface temperatures, are directly controlled by soil thermal inertia and by the atmospheric optical thickness (τ) at infrared wavelengths. Under calm conditions, and assuming no processes involving latent heat changes at the surface, for a particular site where the rover stands the main parameter controlling the IR cooling will be τ. The minimal ground temperature values at a fixed position may thus be used to detect local variations in the total dust/aerosol/cloud thickness. The Ground Temperature Sensor (GTS) and Air Temperature Sensor (ATS) in the Rover Environmental Monitoring Station (REMS) on board the Mars Science Laboratory (MSL) Curiosity rover provide hourly ground and air temperature measurements, respectively. During the first 100 sols of operation of the rover, within the area of low thermal inertia, the minimal nighttime ground temperatures reached values between 180 K and 190 K. For this season the expected frost point temperature is 200 K. Variations of up to 10 K have been observed, associated with dust loading at Gale at the onset of the dust season. We will use these measurements, together with line-by-line radiative transfer simulations using the Full Transfer By Optimized LINe-by-line (FUTBOLIN) code [Martín-Torres and Mlynczak, 2005], to estimate the IR atmospheric opacity, and hence the dust/cloud coverage over the rover, during the course of the MSL mission. Monitoring the dust loading and IR nighttime cooling evolution during the dust season will allow for a better understanding of the influence of the atmosphere on the ground temperature and will provide ground truth for models and orbiter measurements. References Martín-Torres, F. J. and M. G. 
Mlynczak, Application of FUTBOLIN (FUll Transfer By Ordinary Line-by-Line) to the analysis of the solar system and extrasolar planetary atmospheres, Bulletin of the American Astronomical Society, Vol. 37, p.1566, 2005
Global Mapping of Cyber Attacks
2014-01-01
from any particular vendor is wrong. Moreover, researchers often use anti-virus labels as ground-truth for evaluating new approaches [5, 17, 36...Intrusions and Defenses (RAID), September 2007. [5] U. Bayer, P. M. Comparetti, C. Hlauschek, C. Kruegel, and E. Kirda. Scalable, behavior-based...International Development and Conflict Management, International Crisis Behavior project, http://www.cidcm.umd.edu/icb/. Last accessed: December 2011
Baseline data on the oceanography of Cook Inlet, Alaska
NASA Technical Reports Server (NTRS)
Gatto, L. W.
1975-01-01
Regional relationships between river hydrology, sediment transport, circulation and coastal processes were analyzed utilizing aircraft, ERTS-1, and NOAA-2 and -3 imagery and corroborative ground truth data. The use of satellite and aircraft imagery provides a means of acquiring synoptic information for analyzing the dynamic processes of Cook Inlet in a fashion not previously possible.
2008-08-01
identified for static experiments, target arrays have been designed and ground truth systems are already in place. Participation in field ...key objectives are rapid launch and on-orbit checkout, theater commanding, and near-real-time theater data integration. It will also feature a rapid...Organisation (DSTO) plan to participate in TacSat-3 experiments. 1. INTRODUCTION In future conflicts, military space forces will likely face
Oceanographic features in the lee of the windward and leeward islands: ERTS and ship data
NASA Technical Reports Server (NTRS)
Hanson, K. J.; Hebard, F.; Cram, R.
1973-01-01
Analysis of the ERTS data in portions of the eastern Caribbean are presented for October 1972 showing features which are, as yet, not explained. Ground truth data obtained in that area during November 1972 are presented. These include vertical temperature structure in the mixed layer and thermocline, and surface measurements of salinity, temperature, and chlorophyll.
ERIC Educational Resources Information Center
Trifonas, Peter
2003-01-01
The principle of reason "as principle of grounding, foundation or institution" has tended to guide the science of research toward techno-practical ends. From this epistemic superintendence of the terms of knowledge and inquiry, there has arisen the traditional notion of academic responsibility that is tied to the pursuit of truth via a conception…
2006-10-05
the likely existence of a small foreshock. 2. BACKGROUND 2.1. InSAR The most well-known examples of InSAR used as a geodetic tool involve...the event. We have used the seismic waveforms in the Sultan Dag event to identify a small foreshock preceding the main shock by about 3 seconds
Inter-Vehicular Ad Hoc Networks: From the Ground Truth to Algorithm Design and Testbed Architecture
ERIC Educational Resources Information Center
Giordano, Eugenio
2011-01-01
Many of the devices we interact with on a daily basis are currently equipped with wireless connectivity. Soon this will be extended to the vehicles we drive/ride every day. Wirelessly connected vehicles will form a new kind of network that will enable a wide set of innovative applications ranging from enhanced safety to entertainment. To…
NASA Technical Reports Server (NTRS)
Mourad, A. G.; Gopalapillai, S.; Kuhner, M.
1975-01-01
The Skylab Altimeter Experiment has proven the capability of the altimeter for measurement of sea surface topography. The geometric determination of the geoid/mean sea level from satellite altimetry is a new approach with significant applications in many disciplines, including geodesy and oceanography. A generalized least squares collocation technique was developed for determination of the geoid from altimetry data. The technique solves for the altimetry geoid and determines one bias term for the combined effect of sea state, orbit, tides, geoid, and instrument error using sparse ground truth data. The influence of errors in orbit and a priori geoid values is discussed. Although the Skylab altimeter instrument accuracy is about ±1 m, significant results were obtained in the identification of large geoidal features, such as over the Puerto Rico Trench. Comparison of the results of several passes shows good agreement between the general slopes of the altimeter geoid and the ground truth, and that the altimeter appears capable of providing more detail than is available with the best known geoids. The altimetry geoidal profiles show excellent correlations with bathymetry and gravity. Potential applications of altimetry results to geodesy, oceanography, and geophysics are discussed.
Shepherd, T; Teras, M; Beichel, RR; Boellaard, R; Bruynooghe, M; Dicken, V; Gooding, MJ; Julyan, PJ; Lee, JA; Lefèvre, S; Mix, M; Naranjo, V; Wu, X; Zaidi, H; Zeng, Z; Minn, H
2017-01-01
The impact of positron emission tomography (PET) on radiation therapy is held back by poor methods of defining functional volumes of interest. Many new software tools are being proposed for contouring target volumes but the different approaches are not adequately compared and their accuracy is poorly evaluated due to the ill-definition of ground truth. This paper compares the largest cohort to date of established, emerging and proposed PET contouring methods, in terms of accuracy and variability. We emphasize spatial accuracy and present a new metric that addresses the lack of unique ground truth. Thirty methods are used at 13 different institutions to contour functional volumes of interest in clinical PET/CT and a custom-built PET phantom representing typical problems in image guided radiotherapy. Contouring methods are grouped according to algorithmic type, level of interactivity and how they exploit structural information in hybrid images. Experiments reveal benefits of high levels of user interaction, as well as simultaneous visualization of CT images and PET gradients to guide interactive procedures. Method-wise evaluation identifies the danger of over-automation and the value of prior knowledge built into an algorithm. PMID:22692898
A study to explore the use of orbital remote sensing to determine native arid plant distribution
NASA Technical Reports Server (NTRS)
Mcginnies, W. G. (Principal Investigator); Haase, E. F.; Musick, H. B. (Compiler)
1973-01-01
The author has identified the following significant results. A theory has been developed of a method for determining the reflectivities of natural areas from ERTS-1 data. This method requires the following measurements: (1) ground truth reflectivity data from two different calibration areas; (2) radiance data from ERTS-1 MSS imagery for the same two calibration areas; and (3) radiance data from ERTS-1 MSS imagery for the area(s) in which reflectivity is to be determined. The method takes into account sun angle effects and atmospheric effects on the radiance seen by the space sensor. If certain assumptions are made, the ground truth data collection need not be simultaneous with the ERTS-1 overflight. The method allows the calculation of a conversion factor for converting ERTS-1 MSS radiance measurements of a given overflight to reflectivity values. This conversion factor can be used to determine the reflectivity of any area in the general vicinity of the calibration areas which has a relatively similar overlying atmosphere. This method, or some modification of it, may be useful in ERTS investigations which require the determination of spectral signatures of areas from spacecraft data.
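The two-calibration-area scheme above amounts to a two-point linear calibration. The sketch below is a hedged illustration of that idea; the function names and the radiance/reflectivity numbers are invented for the example, not taken from the ERTS-1 investigation.

```python
# Two-point calibration implied above: given ground-truth reflectivities and
# ERTS-1 MSS radiances for two calibration areas, solve for a gain/offset
# that absorbs sun-angle and atmospheric effects, then convert radiance of
# any nearby area to reflectivity. All values are illustrative only.

def fit_two_point(L1, r1, L2, r2):
    """Linear model r = gain * L + offset from two calibration areas."""
    gain = (r2 - r1) / (L2 - L1)
    offset = r1 - gain * L1
    return gain, offset

def radiance_to_reflectance(L, gain, offset):
    return gain * L + offset

# Two calibration areas: (measured MSS radiance, ground-truth reflectivity).
gain, offset = fit_two_point(40.0, 0.10, 120.0, 0.50)

# Convert the radiance of an uncalibrated area in the same scene.
r = radiance_to_reflectance(80.0, gain, offset)
```

The fitted gain/offset plays the role of the "conversion factor" in the abstract: it is valid for areas under a relatively similar overlying atmosphere, as the text notes.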
Enhancing Deep-Water Low-Resolution Gridded Bathymetry Using Single Image Super-Resolution
NASA Astrophysics Data System (ADS)
Elmore, P. A.; Nock, K.; Bonanno, D.; Smith, L.; Ferrini, V. L.; Petry, F. E.
2017-12-01
We present research to employ single-image super-resolution (SISR) algorithms to enhance knowledge of the seafloor using the 1-minute GEBCO 2014 grid when 100 m grids from high-resolution sonar systems are available for training. We performed ×15 upscaling experiments on the GEBCO grid in three areas of the Eastern Pacific Ocean along mid-ocean ridge systems where we have 100 m gridded bathymetry data sets, which we accept as ground truth. We show that four SISR algorithms can enhance this low-resolution knowledge of bathymetry versus bicubic or Spline-In-Tension algorithms through upscaling under these conditions: 1) rough topography is present in both the training and testing areas, and 2) the range of depths and features in the training area contains the range of depths in the enhancement area. We judged an SISR enhancement successful versus bicubic interpolation when Student's t-tests showed significant improvement of the root-mean-square error (RMSE) between the upscaled bathymetry and the 100 m gridded ground-truth bathymetry at p < 0.05. In addition, we found evidence that random-forest-based SISR methods may provide more robust enhancements than non-forest-based SISR algorithms.
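The evaluation criterion (RMSE improvement judged by a paired test at p < 0.05) can be sketched as follows. The grids are synthetic stand-ins, not GEBCO or sonar data, and the paired t-test on per-cell absolute errors is one plausible reading of the abstract's "Student's hypothesis testing".

```python
import numpy as np
from scipy import stats

# Illustrative evaluation: compare per-cell errors of an "SISR-upscaled"
# grid against a bicubic baseline using RMSE and a paired t-test.
rng = np.random.default_rng(0)
truth = rng.normal(-3000.0, 200.0, size=(50, 50))          # ground-truth depths (m)
bicubic = truth + rng.normal(0.0, 30.0, size=truth.shape)  # larger baseline errors
sisr = truth + rng.normal(0.0, 10.0, size=truth.shape)     # smaller SISR errors

def rmse(est, ref):
    return float(np.sqrt(np.mean((est - ref) ** 2)))

# Paired t-test on per-cell absolute errors.
t_stat, p_val = stats.ttest_rel(np.abs(bicubic - truth).ravel(),
                                np.abs(sisr - truth).ravel())

improved = rmse(sisr, truth) < rmse(bicubic, truth) and p_val < 0.05
```

Pairing the test per cell controls for the spatial structure shared by both estimates, which is why a paired rather than independent-samples test fits this setup.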
Brain tumor classification and segmentation using sparse coding and dictionary learning.
Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo
2016-08-01
This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and its associated ground truth segmentation: a feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray-scale voxel values of the training image data and the associated label voxel values of the ground truth segmentation of the training data. The proposed framework is evaluated quantitatively using several metrics. The segmentation results on the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, computed with the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves accurate brain tumor classification and segmentation and outperforms state-of-the-art methods.
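The core idea of classifying by dictionary reconstruction can be shown with a toy example. This is not the paper's K-SVD pipeline: the dictionaries below are random stand-ins, and plain least squares replaces sparse coding purely for brevity.

```python
import numpy as np

# Classification by reconstruction residual over per-class dictionaries,
# in the spirit of the dictionary-learning framework above. Dictionaries
# here are toy random matrices, not K-SVD-trained atoms.
rng = np.random.default_rng(1)

# One small dictionary of feature atoms per tumor class (columns = atoms).
dictionaries = {
    "class_A": rng.normal(size=(20, 5)),
    "class_B": rng.normal(size=(20, 5)),
}

def classify(x, dictionaries):
    """Assign x to the class whose dictionary reconstructs it best."""
    best, best_res = None, np.inf
    for label, D in dictionaries.items():
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)  # sparse-coding stand-in
        res = np.linalg.norm(x - D @ coef)
        if res < best_res:
            best, best_res = label, res
    return best

# A feature vector generated from class_A's atoms is attributed to class_A,
# since that dictionary reconstructs it with near-zero residual.
x = dictionaries["class_A"] @ rng.normal(size=5)
label = classify(x, dictionaries)
```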
2004 Savannah River Cooling Tower Collection (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, Alfred; Parker, Matthew J.; Villa-Aleman, E.
2005-05-01
The Savannah River National Laboratory (SRNL) collected ground truth in and around the Savannah River Site (SRS) F-Area cooling tower during the spring and summer of 2004. The ground truth data consisted of air temperatures and humidity inside and around the cooling tower, wind speed and direction, cooling water temperatures entering, inside, and leaving the cooling tower, cooling tower fan exhaust velocities, and thermal images taken from helicopters. The F-Area cooling tower had six cells, some of which were operated with fans off during long periods of the collection. The operating status (fan on or off) for each of the six cells was derived from operations logbooks and added to the collection database. SRNL collected the F-Area cooling tower data to produce a database suitable for validation of a cooling tower model used by one of SRNL's customer agencies. SRNL considers the data to be accurate enough for use in a model validation effort. Also, the thermal images of the cooling tower decks and throats, combined with the temperature measurements inside the tower, provide valuable information about the appearance of cooling towers as a function of fan operating status and time of day.
Creating experimental color harmony map
NASA Astrophysics Data System (ADS)
Chamaret, Christel; Urban, Fabrice; Lepinel, Josselin
2014-02-01
Starting in the 17th century with Newton, color harmony is a topic that has not yet reached a consensus on definition, representation or modeling. Previous work highlighted specific characteristics of color harmony for combinations of color doublets or triplets by means of human ratings on a harmony scale. However, there has been no investigation involving complex stimuli or pointing out how harmony is spatially located within a picture. A model of this concept, as well as a reliable ground truth, would be of high value for the community, since the applications are wide and concern several communities, from psychology to computer graphics. We propose a protocol for creating color harmony maps from a controlled experiment. Through an eye-tracking protocol, we focus on the identification of disharmonious colors in pictures. The experiment was composed of a free-viewing pass, to let the observer become familiar with the content, followed by a second pass in which we asked observers "to search for the most disharmonious areas in the picture". Twenty-seven observers participated in the experiment, which comprised a total of 30 different stimuli. The high inter-observer agreement, as well as a cross-validation, confirms the validity of the proposed ground truth.
Eppenhof, Koen A J; Pluim, Josien P W
2018-04-01
Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
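The patch-wise construction of an error map can be sketched mechanically. The `predict_error` function below is an explicitly hypothetical stand-in for the paper's trained 3-D CNN; only the sliding-patch bookkeeping reflects the method described above.

```python
import numpy as np

# Sketch of patch-wise error-map construction: a regressor is applied to a
# patch pair centred on every voxel, producing a dense registration-error map.

def predict_error(patch_fixed, patch_moving):
    # Hypothetical stand-in: the real method uses a trained 3-D CNN here.
    return float(np.mean(np.abs(patch_fixed - patch_moving)))

def error_map(fixed, moving, half=1):
    """Slide a (2*half+1)^3 patch over every interior voxel."""
    out = np.zeros_like(fixed)
    z, y, x = fixed.shape
    for i in range(half, z - half):
        for j in range(half, y - half):
            for k in range(half, x - half):
                sl = (slice(i - half, i + half + 1),
                      slice(j - half, j + half + 1),
                      slice(k - half, k + half + 1))
                out[i, j, k] = predict_error(fixed[sl], moving[sl])
    return out

fixed = np.zeros((6, 6, 6))
moving = np.ones((6, 6, 6))   # constant misalignment surrogate
emap = error_map(fixed, moving)
```

In practice the per-patch loop is the expensive step, which is why the paper evaluates whole error maps against a gold standard rather than only at landmarks.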
2011-01-01
Background Image segmentation is a crucial step in quantitative microscopy that helps to define regions of tissues, cells or subcellular compartments. Depending on the degree of user interaction, segmentation methods can be divided into manual, automated or semi-automated approaches. 3D image stacks usually require automated methods due to their large number of optical sections. However, certain applications benefit from manual or semi-automated approaches. Scenarios include the quantification of 3D images with poor signal-to-noise ratios or the generation of so-called ground truth segmentations that are used to evaluate the accuracy of automated segmentation methods. Results We have developed Gebiss, an ImageJ plugin for the interactive segmentation, visualisation and quantification of 3D microscopic image stacks. We integrated a variety of existing plugins for threshold-based segmentation and volume visualisation. Conclusions We demonstrate the application of Gebiss to the segmentation of nuclei in live Drosophila embryos and the quantification of neurodegeneration in Drosophila larval brains. Gebiss was developed as a cross-platform ImageJ plugin and is freely available on the web at http://imaging.bii.a-star.edu.sg/projects/gebiss/. PMID:21668958
On Sea Ice Characterisation By Multi-Frequency SAR
NASA Astrophysics Data System (ADS)
Grahn, Jakob; Brekke, Camilla; Eltoft, Torbjorn; Holt, Benjamin
2013-12-01
By means of polarimetric target decomposition, quad-pol SAR data of sea ice are analysed at two frequency bands. In particular, the non-negative eigenvalue decomposition (NNED) is applied to L- and C-band NASA/JPL AIRSAR data acquired over the Beaufort Sea in 2004. The decomposition separates the scattered radar signal into three types, dominated by double, volume and single bounce scattering, respectively. Using ground truth derived from RADARSAT-1 and meteorological data, we investigate how the different frequency bands compare in terms of these scattering types. The ground truth contains multi-year ice and three types of first-year ice of different age and thickness. We find that C-band yields a higher scattered intensity in most ice and scattering types, as well as a more homogeneous intensity. L-band, on the other hand, yields more pronounced deformation features, such as ridges. The mean intensity contrast between the two thinnest ice types is highest in the double scattering component of C-band, although the contrast of the total signal is greater in L-band. This may indicate that the choice of polarimetric parameters is important for discriminating thin ice types.
NASA Technical Reports Server (NTRS)
Colliander, Andreas; Chan, Steven; Yueh, Simon; Cosh, Michael; Bindlish, Rajat; Jackson, Tom; Njoku, Eni
2010-01-01
Field experiment data sets that include coincident remote sensing measurements and in situ sampling will be valuable in the development and validation of the soil moisture algorithms of NASA's future SMAP (Soil Moisture Active Passive) mission. This paper presents an overview of the field experiment data collected from the SGP99, SMEX02, CLASIC and SMAPVEX08 campaigns. Common to these campaigns were observations of the airborne PALS (Passive and Active L- and S-band) instrument, which was developed to acquire radar and radiometer measurements at low frequencies. The combined set of PALS measurements and ground truth obtained from all these campaigns was studied. The investigation shows that the data set contains a range of soil moisture values collected under a limited number of conditions. The quality of both the PALS and ground truth data meets the needs of SMAP algorithm development and validation. The data set has already made a significant impact on the science behind the SMAP mission. The areas where complementing the data would be most beneficial are also discussed.
A rapid method to characterize seabed habitats and associated macro-organisms
Anderson, T.J.; Cochrane, G.R.; Roberts, D.A.; Chezar, H.; Hatcher, G.; ,
2007-01-01
This study presents a method for rapidly collecting, processing, and interrogating real-time abiotic and biotic seabed data to determine seabed habitat classifications. This is done from data collected over a large area of an acoustically derived seabed map, along multidirectional transects, using a towed small camera-sled. The seabed within the newly designated Point Harris Marine Reserve on the northern coast of San Miguel Island, California, was acoustically imaged using sidescan sonar and then ground-truthed using a towed small camera-sled. Seabed characterizations were made from video observations and were logged to a laptop computer (PC) in real time. To ground-truth the acoustic mosaic, and to characterize abiotic and biotic aspects of the seabed, a three-tiered characterization scheme was employed that described the substratum type, the physical structure (i.e., bedform or vertical relief), and the occurrence of benthic macrofauna and flora. A crucial advantage of the method described here is that preliminary seabed characterizations can be interrogated and mapped over the sidescan mosaic and other seabed information within hours of data collection. This ability to rapidly process seabed data is invaluable to scientists and managers, particularly in modifying concurrent or planning subsequent surveys.
Dynamic data integration and stochastic inversion of a confined aquifer
NASA Astrophysics Data System (ADS)
Wang, D.; Zhang, Y.; Irsa, J.; Huang, H.; Wang, L.
2013-12-01
Much work has been done in developing and applying inverse methods to aquifer modeling. The scope of this paper is to investigate the applicability of a new direct method for large inversion problems and to incorporate uncertainty measures in the inversion outcomes (Wang et al., 2013). The problem considered is a two-dimensional inverse model (50×50 grid) of steady-state flow for a heterogeneous ground truth model (500×500 grid) with two hydrofacies. From the ground truth model, decreasing numbers of wells (12, 6, 3) were sampled for facies types, from which experimental indicator histograms and directional variograms were computed. These parameters and models were used by Sequential Indicator Simulation to generate 100 realizations of hydrofacies patterns on a 100×100 (geostatistical) grid, conditioned to the facies measurements at the wells. These realizations were smoothed with Simulated Annealing and coarsened to the 50×50 inverse grid before they were conditioned with the direct method to the dynamic data, i.e., observed heads and groundwater fluxes at the same sampled wells. A set of realizations of estimated hydraulic conductivities (Ks), flow fields, and boundary conditions was created, which centered on the 'true' solutions from solving the ground truth model. Both hydrofacies conductivities were computed with an estimation accuracy of ±10% (12 wells), ±20% (6 wells), and ±35% (3 wells) of the true values. For boundary condition estimation, the accuracy was within ±15% (12 wells), ±30% (6 wells), and ±50% (3 wells) of the true values. The inversion system of equations was solved with LSQR (Paige and Saunders, 1982), for which a coordinate transform and a matrix scaling preprocessor were used to improve the condition number (CN) of the coefficient matrix. However, when the inverse grid was refined to 100×100, Gaussian noise perturbation was used to limit the growth of the CN before the matrix solve. 
To scale the inverse problem up (i.e., without smoothing and coarsening, and thereby reducing the associated estimation uncertainty), a parallel LSQR solver was written and verified. For the 50×50 grid, the parallel solver sped up the serial solution time by 14× using 4 CPUs (research on parallel performance and scaling is ongoing). A sensitivity analysis was conducted to examine the relation between the observed data and the inversion outcomes, where measurement errors of increasing magnitudes (i.e., ±1, 2, 5, and 10% of the total head variation and up to ±2% of the total flux variation) were imposed on the observed data. Inversion results were stable, but the accuracy of the Ks and boundary estimation degraded with increasing errors, as expected. In particular, the quality of the observed heads is critical to hydraulic head recovery, while the quality of the observed fluxes plays a dominant role in K estimation. References: Wang, D., Y. Zhang, J. Irsa, H. Huang, and L. Wang (2013), Data integration and stochastic inversion of a confined aquifer with high performance computing, Advances in Water Resources, in preparation. Paige, C. C., and M. A. Saunders (1982), LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Transactions on Mathematical Software, 8(1), 43-71.
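The LSQR-with-scaling step above can be illustrated on a toy system. The matrix below is a small overdetermined stand-in, not the aquifer coefficient matrix; column scaling is used as a simple proxy for the preprocessing the abstract describes for improving the condition number.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import lsqr

# Toy least-squares system A x = b solved with LSQR (Paige and Saunders, 1982).
A = csr_matrix(np.array([[3.0, 0.0],
                         [0.0, 2.0],
                         [1.0, 1.0]]))
x_true = np.array([2.0, -1.0])
b = A @ x_true

# Column scaling: a simple diagonal preconditioner standing in for the
# matrix-scaling preprocessor mentioned above.
col_norms = np.sqrt(np.array(A.power(2).sum(axis=0)).ravel())
A_scaled = A @ diags(1.0 / col_norms)

y = lsqr(A_scaled, b)[0]   # solve the scaled least-squares problem
x = y / col_norms          # undo the scaling to recover the original unknowns
```

Scaling leaves the solution unchanged (after undoing it) while making the columns comparable in norm, which is what keeps the condition number of the iterated matrix in check.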
NASA Astrophysics Data System (ADS)
Misra, Gourav; Buras, Allan; Asam, Sarah; Menzel, Annette
2017-04-01
Past work in remote sensing of land surface phenology has mapped vegetation cycles at multiple scales. Much has been discussed and debated about the uncertainties associated with the selection of data, data processing and the conclusions eventually drawn. Several studies do, however, provide evidence of strong links between different land surface phenology (LSP) metrics and specific ground phenology (GP) (Fisher and Mustard, 2007; Misra et al., 2016). Most importantly, the use of high temporal and spatial resolution remote sensing data and ground truth information is critical for such studies. In this study, we use a high temporal resolution 4-day MODIS NDVI product developed by EURAC (Asam et al., in prep.) for the Bavarian Forest National Park during the 2002-2015 period and extract various phenological metrics covering different vegetation phenophases (start of season, SOS, and end of season, EOS). We found LSP-SOS to be more strongly linked to the elevation of the area than LSP-EOS, which has been cited as harder to detect (Stöckli et al., 2008). The LSP metrics were also correlated with GP information at 4 different stations covering elevations ranging from approximately 500 to 1500 metres. Results show that, among the five dominant species in the area, i.e. European ash, Norway spruce, European beech, Norway maple and orchard grass, only particular GP observations for some species show stronger correlations with LSP than others. Spatial variations in the LSP-GP correlations were also observed, with certain areas of the National Park showing positive correlations and others negative. An analysis of temporal trends of LSP also indicates the possibility of detecting those areas in the National Park that were affected by extreme events. Further investigations are planned to explain the heterogeneity in the derived LSP metrics using high-resolution ground truth data and multivariate statistical analyses. 
Acknowledgement: This research received funding from the Bavarian State Ministry of the Environment and Consumer Protection. References: 1. Fisher, J.I.; Mustard, J.F. Cross-scalar satellite phenology from ground, Landsat, and MODIS data. Remote Sens. Environ. 2007, 109, 261-273. 2. Misra, G.; Buras, A.; Menzel, A. Effects of Different Methods on the Comparison between Land Surface and Ground Phenology—A Methodological Case Study from South-Western Germany. Remote Sens. 2016, 8, 753. 3. Asam, S.; Callegari, M.; Fiore, G.; Matiu, M.; De Gregorio, L.; Jacob, A.; Staab, J.; Menzel, A.; Notarnicola, C. Analysis of spatiotemporal variations of climate, snow cover and plant phenology over the Alps. 2017 (in preparation). 4. Stöckli, R.; Rutishauser, T.; Dragoni, D.; O'Keefe, J.; Thornton, P. E.; Jolly, M.; Lu, L.; Denning, A. S. Remote sensing data assimilation for a prognostic phenology model. Journal of Geophysical Research: Biogeosciences. 2008, 113.
NASA Astrophysics Data System (ADS)
Stoffer, P. W.; Hernández, L.; Messina, P.; Dearaujo, J.; Li, A.; Hicks, A.; White, L.
2008-12-01
Natural gamma radiation measurements were collected with a hand-held Geiger counter at nearly 400 locations on two general transects across the southwestern United States. The data are used to provide ground-truth comparison to published airborne radiation surveys of the region. The first transect was collected by high school students in the SF-ROCKS program at San Francisco State University in the summer of 2008 starting in San Francisco. Data were collected across the Sierra Nevada Range on I-80, and across Highway 50 in Nevada, and I-70 in Utah. Data were collected in and around Great Basin, Arches, Capitol Reef, Bryce, and Zion National Parks, and Grand Staircase-Escalante National Monument. A second transect extends from San José, California to Flagstaff, Arizona and includes the Mojave National Reserve, Death Valley region, and locations throughout the Navajo Reservation region in northern Arizona and western New Mexico. Radiation data (with GPS reference) were collected from all the major sedimentary rock formations and igneous rocks of the Colorado Plateau and from many igneous and metamorphic rocks throughout the Great Basin and southern California deserts. Anomalously high localized levels were noted in selected sedimentary units associated with uranium exploration targets in the Colorado Plateau region, and in caverns and rock fissures where radon gas (and accumulation of derivative fission products) are the inferred sources.
NASA Astrophysics Data System (ADS)
Lines, A.; Elliott, J.; Ray, L.; Albert, M. R.
2017-12-01
Understanding the surface mass balance (SMB) of the Greenland ice sheet is critical to evaluating its response to a changing climate. A key factor in translating satellite and airborne elevation measurements of the ice sheet to SMB is understanding the natural variability of firn layer depth and the relative compaction rate of these layers. A site near Summit Station, Greenland was chosen to investigate the variation in layering across a 100 m by 100 m grid using 900 MHz and 2.6 GHz ground penetrating radar (GPR) antennas. These radargrams were ground-truthed with depth-density profiles of five 2 m snow pits and five 5 m firn cores within the grid. Combining these measurements with accumulation data from the nearby ICECAPS weekly bamboo forest measurements, it is possible to see how the snow deposition from individual storm events can vary over a small area. Five metal reflectors were also placed on the surface of the snow within the bounds of the grid to serve as reference reflectors for similar measurements that will be taken in the 2018 field season at Summit Station. This will assist in understanding how one year of accumulation in the dry snow zone impacts compaction and how this rate can vary over a small area.
Effects of temporal variability in ground data collection on classification accuracy
Hoch, G.A.; Cully, J.F.
1999-01-01
This research tested whether the timing of ground data collection can significantly impact the accuracy of land cover classification. Ft. Riley Military Reservation, Kansas, USA was used to test this hypothesis. The U.S. Army's Land Condition Trend Analysis (LCTA) data annually collected at military bases was used to ground truth disturbance patterns. Ground data collected over an entire growing season and data collected one year after the imagery had a kappa statistic of 0.33. When using ground data from only within two weeks of image acquisition the kappa statistic improved to 0.55. Potential sources of this discrepancy are identified. These data demonstrate that there can be significant amounts of land cover change within a narrow time window on military reservations. To accurately conduct land cover classification at military reservations, ground data need to be collected in as narrow a window of time as possible and be closely synchronized with the date of the satellite imagery.
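The kappa statistics cited above measure classification agreement corrected for chance; a minimal sketch of Cohen's kappa computed from a confusion matrix (the matrix values below are hypothetical for illustration, not Ft. Riley data):

```python
import numpy as np

def cohens_kappa(confusion: np.ndarray) -> float:
    """Cohen's kappa from a square confusion matrix (rows: reference, cols: predicted)."""
    n = confusion.sum()
    observed = np.trace(confusion) / n                              # observed agreement p_o
    expected = (confusion.sum(0) * confusion.sum(1)).sum() / n**2   # chance agreement p_e
    return (observed - expected) / (1 - expected)

# Hypothetical 2-class confusion matrix
cm = np.array([[40, 10],
               [10, 40]])
print(round(cohens_kappa(cm), 2))  # ≈ 0.6
```

A kappa of 0.55 versus 0.33, as reported above, thus reflects substantially better chance-corrected agreement when ground data are tightly synchronized with image acquisition.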
Images Are Not the (Only) Truth: Brain Mapping, Visual Knowledge, and Iconoclasm.
ERIC Educational Resources Information Center
Beaulieu, Anne
2002-01-01
Debates the paradoxical nature of claims about the emerging contributions of functional brain mapping. Examines the various ways that images are deployed and rejected and highlights an approach that provides insight into the current demarcation of imaging. (Contains 68 references.) (DDR)
Langford, Zachary; Kumar, Jitendra; Hoffman, Forrest; ...
2016-09-06
Multi-scale modeling of Arctic tundra vegetation requires characterization of the heterogeneous tundra landscape, which includes representation of distinct plant functional types (PFTs). We combined high-resolution multi-spectral remote sensing imagery from the WorldView-2 satellite with light detection and ranging (LiDAR)-derived digital elevation models (DEM) to characterize the tundra landscape in and around the Barrow Environmental Observatory (BEO), a 3021-hectare research reserve located at the northern edge of the Alaskan Arctic Coastal Plain. Vegetation surveys were conducted during the growing season (June-August) of 2012 from 48 1 m × 1 m plots in the study region for estimating the percent cover of PFTs (i.e., sedges, grasses, forbs, shrubs, lichens and mosses). Statistical relationships were developed between spectral and topographic remote sensing characteristics and PFT fractions at the vegetation plots from field surveys. These derived relationships were employed to statistically upscale PFT fractions for our study region of 586 hectares at 0.25 m resolution around the sampling areas within the BEO, which was bounded by the LiDAR footprint. We employed unsupervised clustering for stratification of this polygonal tundra landscape and used the clusters for segregating the field data for our upscaling algorithm over the study region, which was an inverse distance weighted (IDW) interpolation. We describe two versions of PFT distribution maps upscaled by IDW from WorldView-2 imagery and LiDAR: (1) a version computed from a single image in the middle of the growing season; and (2) a version computed from multiple images through the growing season. This approach allowed us to quantify the value of phenology for improving PFT distribution estimates. We also evaluated the representativeness of the field surveys by measuring the Euclidean distance between every pixel and the surveyed plots.
This guided the ground-truthing campaign in late July of 2014, which addressed uncertainty from the representativeness analysis by selecting 24 1 m × 1 m plots that were well and poorly represented. Ground-truthing indicated that including phenology gave better accuracy (R² = 0.75, RMSE = 9.94) than the single-image upscaling (R² = 0.63, RMSE = 12.05) predicted from IDW. We also updated our upscaling approach to include the 24 ground-truthing plots, and a second ground-truthing campaign in late August of 2014 indicated better accuracy for the phenology model (R² = 0.61, RMSE = 13.78) than using only the original 48 plots (R² = 0.23, RMSE = 17.49). Overall, we believe that the cluster-based IDW upscaling approach and the representativeness analysis offer new insights for upscaling high-resolution data in fragmented landscapes. This analysis and approach provide PFT maps needed to inform land surface models in Arctic ecosystems.
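The upscaling step described above rests on inverse distance weighted interpolation; a generic IDW sketch (function name, power parameter, and coordinates are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighted interpolation.

    xy_known: (n, 2) plot coordinates; values: (n,) observed PFT fractions;
    xy_query: (m, 2) pixel coordinates to estimate. Returns (m,) estimates."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                 # avoid division by zero at plot locations
    w = 1.0 / d**power                       # closer plots weigh more
    return (w * values).sum(axis=1) / w.sum(axis=1)

plots = np.array([[0.0, 0.0], [1.0, 0.0]])
fractions = np.array([0.0, 1.0])
print(idw_interpolate(plots, fractions, np.array([[0.5, 0.0]])))  # midpoint → [0.5]
```

In the study's scheme, such an interpolation would be run separately within each unsupervised cluster, so only field plots from the same landscape stratum contribute to a pixel's estimate.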
An evaluation of consensus techniques for diagnostic interpretation
NASA Astrophysics Data System (ADS)
Sauter, Jake N.; LaBarre, Victoria M.; Furst, Jacob D.; Raicu, Daniela S.
2018-02-01
Learning diagnostic labels from image content has been the standard in computer-aided diagnosis. Most computer-aided diagnosis systems use low-level image features extracted directly from image content to train and test machine learning classifiers for diagnostic label prediction. When the ground truth for the diagnostic labels is not available, reference truth is generated from the experts' diagnostic interpretations of the image/region of interest. More specifically, when the label is uncertain, e.g. when multiple experts label an image and their interpretations differ, techniques to handle the label variability are necessary. In this paper, we compare three consensus techniques that are typically used to encode the variability in the experts' labeling of the medical data: mean, median and mode, and their effects on simple classifiers that can handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees). Given that the NIH/NCI Lung Image Database Consortium (LIDC) data provides interpretations for lung nodules by up to four radiologists, we leverage the LIDC data to evaluate and compare these consensus approaches when creating computer-aided diagnosis systems for lung nodules. First, low-level image features of nodules are extracted and paired with their radiologists' semantic ratings (1 = most likely benign, …, 5 = most likely malignant); second, machine learning multi-class classifiers that handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees) are built to predict the lung nodules' semantic ratings. We show that the mean-based consensus generates the most robust classifier overall when compared to the median- and mode-based consensus. Lastly, the results of this study show that, when building CAD systems with uncertain diagnostic interpretation, it is important to evaluate different strategies for encoding and predicting the diagnostic label.
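The three consensus rules compared in the paper can be sketched in a few lines; the sample ratings and the rounding of mean and median back to the 1-5 scale are illustrative assumptions, not the paper's exact encoding:

```python
import statistics

def consensus(ratings, method="mean"):
    """Collapse multiple radiologists' ratings (1-5) into a single label.

    ratings: list of integer ratings from up to four readers (illustrative)."""
    if method == "mean":
        return round(statistics.mean(ratings))    # round back to nearest semantic class
    if method == "median":
        return round(statistics.median(ratings))
    if method == "mode":
        return statistics.mode(ratings)           # most frequent rating
    raise ValueError(f"unknown method: {method}")

# The choice of rule matters when readers disagree:
print(consensus([1, 1, 5], "mean"), consensus([1, 1, 5], "median"))  # 2 1
```

With unanimous or near-unanimous ratings the three rules coincide; they diverge exactly in the uncertain cases the paper targets, which is why the choice affects downstream classifier robustness.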
Remote sensing for grassland management in the arid Southwest
Marsett, R.C.; Qi, J.; Heilman, P.; Biedenbender, S.H.; Watson, M.C.; Amer, S.; Weltz, M.; Goodrich, D.; Marsett, R.
2006-01-01
We surveyed a group of rangeland managers in the Southwest about vegetation monitoring needs on grassland. Based on their responses, the objective of the RANGES (Rangeland Analysis Utilizing Geospatial Information Science) project was defined to be the accurate conversion of remotely sensed data (satellite imagery) to quantitative estimates of total (green and senescent) standing cover and biomass on grasslands and semidesert grasslands. Although remote sensing has been used to estimate green vegetation cover, in arid grasslands herbaceous vegetation is senescent much of the year and is not detected by current remote sensing techniques. We developed a ground truth protocol compatible with both range management requirements and Landsat's 30 m resolution imagery. The resulting ground-truth data were then used to develop image processing algorithms that quantified total herbaceous vegetation cover, height, and biomass. Cover was calculated based on a newly developed Soil Adjusted Total Vegetation Index (SATVI), and height and biomass were estimated based on reflectance in the near infrared (NIR) band. Comparison of the remotely sensed estimates with independent ground measurements produced r² values of 0.80, 0.85, and 0.77 and Nash-Sutcliffe values of 0.78, 0.70, and 0.77 for the cover, plant height, and biomass, respectively. The approach for estimating plant height and biomass did not work for sites where forbs comprised more than 30% of total vegetative cover. The ground reconnaissance protocol and image processing techniques together offer land managers accurate and timely methods for monitoring extensive grasslands. The time-consuming requirement to collect concurrent data in the field for each image implies a need to share the high fixed costs of processing an image across multiple users to reduce the costs for individual rangeland managers.
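A sketch of the SATVI computation as the index is commonly written for Landsat TM red, SWIR1 and SWIR2 reflectances; the soil-adjustment factor L = 0.5 and the sample reflectances are assumptions for illustration, so consult the paper for the exact formulation:

```python
def satvi(red, swir1, swir2, L=0.5):
    """Soil Adjusted Total Vegetation Index (commonly stated form).

    red, swir1, swir2: surface reflectances (Landsat TM bands 3, 5, 7);
    L: soil-adjustment factor, analogous to SAVI's L term."""
    return (swir1 - red) / (swir1 + red + L) * (1 + L) - swir2 / 2.0

# Illustrative reflectances for a sparsely vegetated pixel
print(round(satvi(red=0.15, swir1=0.30, swir2=0.20), 3))
```

Because SATVI draws on the shortwave-infrared bands, it responds to senescent as well as green herbaceous material, which is the property the RANGES project needed for total standing cover.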
The simulators: truth and power in the psychiatry of José Ingenieros.
Caponi, Sandra
2016-01-01
Using Michel Foucault's lectures on "Psychiatric power" as its starting point, this article analyzes the book Simulación de la locura (The simulation of madness), published in 1903 by the Argentine psychiatrist José Ingenieros. Foucault argues that the problem of simulation permeates the entire history of modern psychiatry. After initial analysis of José Ingenieros's references to the question of simulation in the struggle for existence, the issue of simulation in pathological states in general is examined, and lastly the simulation of madness and the problem of degeneration. Ingenieros participates in the epistemological and political struggle that took place between experts-psychiatrists and simulators over the question of truth.
Robinson, Marci M.; Dowsett, Harry J.; Stoll, Danielle K.
2018-01-30
Despite the wealth of global paleoclimate data available for the warm period in the middle of the Piacenzian Stage of the Pliocene Epoch (about 3.3 to 3.0 million years ago [Ma]; Dowsett and others, 2013, and references therein), the Indian Ocean has remained a region of sparse geographic coverage in terms of microfossil analysis. In an effort to characterize the surface Indian Ocean during this interval, we examined the planktic foraminifera from Ocean Drilling Program (ODP) sites 709, 716, 722, 754, 757, 758, and 763, encompassing a wide range of oceanographic conditions. We quantitatively analyzed the data for sea surface temperature (SST) estimation using both the modern analog technique (MAT) and a factor analytic transfer function. The data will contribute to the U.S. Geological Survey (USGS) Pliocene Research, Interpretation and Synoptic Mapping (PRISM) Project’s global SST reconstruction and climate model SST boundary condition for the mid-Piacenzian and will become part of the PRISM verification dataset designed to ground-truth Pliocene climate model simulations (Dowsett and others, 2013).
Semiautomated Segmentation of Polycystic Kidneys in T2-Weighted MR Images.
Kline, Timothy L; Edwards, Marie E; Korfiatis, Panagiotis; Akkus, Zeynettin; Torres, Vicente E; Erickson, Bradley J
2016-09-01
The objective of the present study is to develop and validate a fast, accurate, and reproducible method that will increase and improve institutional measurement of total kidney volume and thereby avoid the higher costs, increased operator processing time, and inherent subjectivity associated with manual contour tracing. We developed a semiautomated segmentation approach, known as the minimal interaction rapid organ segmentation (MIROS) method, which results in human interaction during measurement of total kidney volume on MR images being reduced to a few minutes. This software tool automatically steps through slices and requires rough definition of kidney boundaries supplied by the user. The approach was verified on T2-weighted MR images of 40 patients with autosomal dominant polycystic kidney disease of varying degrees of severity. The MIROS approach required less than 5 minutes of user interaction in all cases. When compared with the ground-truth reference standard, MIROS showed no significant bias and had low variability (mean ± 2 SD, 0.19% ± 6.96%). The MIROS method will greatly facilitate future research studies in which accurate and reproducible measurements of cystic organ volumes are needed.
Design of CT reconstruction kernel specifically for clinical lung imaging
NASA Astrophysics Data System (ADS)
Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.
2005-04-01
In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new reconstruction kernel. This new kernel combined a low-pass and a high-pass kernel into a single reconstruction kernel, called the "Hybrid" kernel. The performance of this Hybrid kernel fell between the two kernels on which it was based, as expected. This Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.
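One plausible reading of "combination of a low-pass and a high-pass kernel" is a weighted blend of the two kernels' frequency responses; the linear mix, the weight, and the sample responses below are assumptions for illustration, not the authors' published recipe:

```python
import numpy as np

def hybrid_kernel(lowpass_mtf, highpass_mtf, weight=0.5):
    """Blend two reconstruction-kernel frequency responses into one.

    lowpass_mtf, highpass_mtf: sampled frequency responses of the base
    kernels; weight: fraction taken from the high-pass (sharper) kernel.
    weight=0 reproduces the smooth kernel, weight=1 the sharp one."""
    lo = np.asarray(lowpass_mtf, dtype=float)
    hi = np.asarray(highpass_mtf, dtype=float)
    return (1.0 - weight) * lo + weight * hi

# Toy responses at a few spatial frequencies: smooth kernel rolls off,
# sharp kernel boosts mid frequencies before rolling off.
smooth = [1.0, 0.8, 0.4, 0.1]
sharp = [1.0, 1.1, 0.9, 0.3]
print(hybrid_kernel(smooth, sharp, weight=0.5))
```

Such a blend would, by construction, fall between its parents in both detail and noise, consistent with the observation above that the Hybrid kernel's performance lay between the two kernels on which it was based.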
Behavior Analysis of Novel Wearable Indoor Mapping System Based on 3D-SLAM.
Lagüela, Susana; Dorado, Iago; Gesto, Manuel; Arias, Pedro; González-Aguilera, Diego; Lorenzo, Henrique
2018-03-02
This paper presents a Wearable Prototype for indoor mapping developed by the University of Vigo. The system is based on a Velodyne LiDAR, acquiring points with 16 rays for a simple, low-density 3D representation of reality. With this, a Simultaneous Localization and Mapping (3D-SLAM) method is developed for the mapping and generation of 3D point clouds of scenarios deprived of GNSS signal. The quality of the system presented is validated through comparison with a commercial indoor mapping system, Zeb-Revo, from the company GeoSLAM, and with a terrestrial LiDAR, Faro Focus 3D X330. The first is considered a relative reference among mobile systems and is chosen because it uses the same mapping principle, SLAM techniques based on Robot Operating System (ROS), while the second is taken as ground truth for determining the final accuracy of the system with respect to reality. Results show that the accuracy of the system is mainly determined by the accuracy of the sensor, with little additional error introduced by the mapping algorithm.
Satellite-based monitoring of cotton evapotranspiration
NASA Astrophysics Data System (ADS)
Dalezios, Nicolas; Dercas, Nicholas; Tarquis, Ana Maria
2016-04-01
Water for agricultural use represents the largest share among all water uses. Vulnerability in agriculture is influenced, among others, by extended periods of water shortage in regions exposed to droughts. Advanced technological approaches and methodologies, including remote sensing, are increasingly incorporated for the assessment of irrigation water requirements. In this paper, remote sensing techniques are integrated for the estimation and monitoring of crop evapotranspiration ETc. The study area is Thessaly, central Greece, a drought-prone agricultural region. Cotton fields in a small agricultural sub-catchment in Thessaly are used as an experimental site. Daily meteorological data and weekly field data are recorded throughout seven growing seasons (2004-2010) for the computation of reference evapotranspiration ETo, crop coefficient Kc and cotton crop ETc based on conventional data. Satellite data (Landsat TM) for the corresponding period are processed to estimate the cotton crop coefficient Kc and cotton crop ETc and delineate their spatiotemporal variability. The methodology is applied for monitoring Kc and ETc during the growing season in the selected sub-catchment. Several error statistics are used, showing very good agreement with ground-truth observations.
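The quantities named above are linked by the standard crop-coefficient relation ETc = Kc × ETo (the FAO-56 single-coefficient approach); a minimal sketch, where the cotton Kc and ETo values are illustrative, not measurements from the study:

```python
def crop_et(eto_mm_day: float, kc: float) -> float:
    """Crop evapotranspiration from reference ET and crop coefficient.

    ETc = Kc * ETo; eto_mm_day in mm/day, kc dimensionless."""
    return kc * eto_mm_day

# Illustrative mid-season cotton values: ETo = 6.0 mm/day, Kc = 1.15
print(round(crop_et(6.0, 1.15), 2))  # 6.9 mm/day
```

In the satellite-based workflow, Kc is estimated per pixel from Landsat TM imagery rather than taken from tables, so the same relation yields a spatially distributed ETc map.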
Action change detection in video using a bilateral spatial-temporal constraint
NASA Astrophysics Data System (ADS)
Tian, Jing; Chen, Li
2016-08-01
Action change detection in video aims to detect action discontinuity in video. Silhouette-based features are desirable for action change detection. This paper studies the problem of silhouette-quality assessment. A non-reference approach, requiring no ground truth, is proposed to evaluate the quality of silhouettes by exploiting both the boundary contrast of the silhouettes in the spatial domain and the consistency of the silhouettes in the temporal domain. This contrasts with conventional approaches, which exploit either only spatial or only temporal information of the silhouettes. Experiments are conducted using artificially generated degraded silhouettes to show that the proposed approach outperforms conventional approaches with more accurate quality assessment. Furthermore, experiments show that the proposed approach improves the accuracy of conventional action change detection approaches on two human action video datasets. The average runtime of the proposed approach on the Weizmann action video dataset is 0.08 seconds per frame using the Matlab programming language. It is computationally efficient and has potential for real-time implementation.
U.S. Army War College Key Strategic Issues List
2007-07-01
modeled outcome to the post-event ground truth. Considering the elements of PMESII (political, military, economic, social, information and...to the soldier? 6. Leadership, Personnel Management, and Culture: a. What is the future of telecommuting in the Army and its implications on...and Intelligence Reform Act requirements. 6. Underground Facilities as a National Security Challenge: a. The construction and employment of Hard and
The U.S. Environmental Protection Agency (EPA) is charged by Congress to protect the nation’s natural resources. Under the mandate of national environmental laws, the EPA strives to formulate and implement actions leading to a compatible balance between human activities and the a...
Terrain Categorization using LIDAR and Multi-Spectral Data
2007-01-01
the same spatial resolution cell will be distinguished. 3. PROCESSING The LIDAR data set used in this study was from a discrete-return...smoothing in the spatial dimension. While it was possible to distinguish different classes of materials using this technique, the spatial resolution was...alone and a combination of the two data-types. Results are compared to significant ground truth information. Keywords: LIDAR, multi-spectral
Christopher Daly; Melissa E. Slater; Joshua A. Roberti; Stephanie H. Laseter; Lloyd W. Swift
2017-01-01
A 69-station, densely spaced rain gauge network was maintained over the period 1951-1958 in the Coweeta Hydrologic Laboratory, located in the southern Appalachians in western North Carolina, USA. This unique dataset was used to develop the first digital seasonal and annual precipitation maps for the Coweeta basin, using elevation regression functions and...
NASA Technical Reports Server (NTRS)
Jones, E. B.; Olt, S. E.
1975-01-01
A compilation of soils information obtained as the result of a library search of data on the Lafayette, Indiana, site and St. Charles, Missouri, site is presented. Soils data for the Lafayette, Indiana, site are shown in Plates 1 and 2; and soils data for the St. Charles, Missouri, site are shown in Plates 3 and 4.
James Legilisho-Kiyiapi
2000-01-01
Through combined use of satellite imagery, aerial photographs, and ground truthing, a multilevel assessment was conducted in a forest block that forms a unique dispersal zone to the Maasai Mara National Reserve ecosystem. Results of the survey revealed considerable ecological diversity on an area-scale basis - in terms of ecotypes. Forest types ranged from Afro-montane...
Ground Truth Events with Source Geometry in Eurasia and the Middle East
2016-06-02
source properties, including seismic moment, corner frequency, radiated energy, and stress drop have been obtained using spectra for S waves following...PARAMETERS Other source parameters, including radiated energy, corner frequency, seismic moment, and static stress drop were calculated using a spectral...technique (Richardson & Jordan, 2002; Andrews, 1986). The process entails separating event and station spectra and median-stacking each event's
Searching for the Essence of Red Teaming: Linearity Overcoming Rationality Toward Sensemaking
2010-05-21
philosopher Descartes perhaps most heavily influenced the scientific revolution after the Enlightenment. Analysts expended considerable research and energy...presented to my mind so clearly and distinctly as to exclude all ground of doubt. Descartes was the father of the scientific method and attempted...15 René Descartes, Discourse on the Method of Rightly Conducting the Reason, and Seeking Truth in the Sciences
Collection and Analysis of Ground Truth Infrasound Data in Kazakhstan and Russia
2006-05-01
Infrasound signals generated by large mining explosions at Ekibastuz coal mines in Northern Kazakhstan have been detected by a 4-element infrasound array...380 km) and Kokchetav (distance=74 km). Detection of infrasound signals at these distance ranges at mid-latitude (50 degrees N) suggests the...infrasound array, contour plot of beam power and array beam trace... 9 5 Infrasound signals from the