Ground truth management system to support multispectral scanner /MSS/ digital analysis
NASA Technical Reports Server (NTRS)
Coiner, J. C.; Ungar, S. G.
1977-01-01
A computerized geographic information system for management of ground truth has been designed and implemented to relate MSS classification results to in situ observations. The ground truth system transforms, generalizes, and rectifies ground observations to conform to the pixel size and shape of high-resolution MSS aircraft data. These observations can then be aggregated for comparison to lower-resolution sensor data. Construction of a digital ground truth array allows direct pixel-by-pixel comparison between classification results of MSS data and ground truth. By making these comparisons, analysts can identify the spatial distribution of error within the MSS data as well as the usual figures of merit for the classifications. Use of the ground truth system permits investigators to compare a variety of environmental or anthropogenic data, such as soil color or tillage patterns, with classification results and allows direct inclusion of such data into classification operations. To illustrate the system, examples from classification of simulated Thematic Mapper data for agricultural test sites in North Dakota and Kansas are provided.
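The aggregation step described above (generalizing high-resolution ground truth to coarser sensor pixels) can be sketched as a majority vote over pixel blocks. A minimal illustration, with a synthetic label array standing in for real ground observations:

```python
import numpy as np

def aggregate_labels(labels, factor):
    """Aggregate a high-resolution ground-truth label array to coarser
    sensor pixels by majority vote over factor-by-factor blocks."""
    h, w = labels.shape
    out = np.empty((h // factor, w // factor), dtype=labels.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            block = labels[i * factor:(i + 1) * factor,
                           j * factor:(j + 1) * factor]
            vals, counts = np.unique(block, return_counts=True)
            out[i, j] = vals[counts.argmax()]   # most frequent class wins
    return out

# 4x4 ground-truth array (e.g. crop classes) aggregated to 2x2 sensor pixels
truth = np.array([[1, 1, 2, 2],
                  [1, 3, 2, 2],
                  [3, 3, 1, 1],
                  [3, 3, 1, 2]])
coarse = aggregate_labels(truth, 2)
```

The pixel-by-pixel comparison against classification results then reduces to an element-wise equality test (or a confusion matrix) on the aggregated array.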
NASA Astrophysics Data System (ADS)
Bonaccorsi, R.; Stoker, C. R.; Marte Project Science Team
2007-03-01
The Mars Analog Rio Tinto Experiment (MARTE) performed a simulation of a Mars drilling experiment at the Rio Tinto (Spain). Ground-truth and contamination issues during the distribution of bulk organics and their CN isotopic composition in hematite and go
Caspi, Caitlin Eicher; Friebur, Robin
2016-03-17
A major concern in food environment research is the lack of accuracy in commercial business listings of food stores, which are convenient and commonly used. Accuracy concerns may be particularly pronounced in rural areas. Ground-truthing, or on-site verification, has been deemed the necessary standard to validate business listings, but researchers perceive this process to be costly and time-consuming. This study calculated the accuracy and cost of ground-truthing three town/rural areas in Minnesota, USA (an area of 564 miles, or 908 km), and simulated a modified validation process to increase efficiency without compromising accuracy. For traditional ground-truthing, all streets in the study area were driven, while the route and geographic coordinates of food stores were recorded. The process required 1510 miles (2430 km) of driving and 114 staff hours. The ground-truthed list of stores was compared with commercial business listings, which had an average positive predictive value (PPV) of 0.57 and sensitivity of 0.62 across the three sites. Using observations from the field, a modified process was proposed in which only the streets located within central commercial clusters (the 1/8 mile or 200 m buffer around any cluster of 2 stores) would be validated. Modified ground-truthing would have yielded an estimated PPV of 1.00 and sensitivity of 0.95, and would have reduced mileage costs by approximately 88%. We conclude that ground-truthing is necessary in town/rural settings. The modified ground-truthing process, with excellent accuracy at a fraction of the costs, suggests a new standard and warrants further evaluation.
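The PPV and sensitivity figures above reduce to set comparisons between the commercial listing and the ground-truthed store list. A small sketch using hypothetical store IDs:

```python
def validation_metrics(listed, ground_truth):
    """Compare a commercial business listing against ground-truthed stores."""
    listed, ground_truth = set(listed), set(ground_truth)
    tp = len(listed & ground_truth)   # listed and actually present on site
    fp = len(listed - ground_truth)   # listed but not found on site
    fn = len(ground_truth - listed)   # found on site but missing from list
    ppv = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    return ppv, sensitivity

# hypothetical store IDs, for illustration only
ppv, sens = validation_metrics({"A", "B", "C", "D"}, {"A", "B", "E"})
```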
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, W; Yin, F; Cai, J
Purpose: To develop a technique to generate on-board VC-MRI using patient prior 4D-MRI, motion modeling and on-board 2D-cine MRI for real-time 3D target verification of liver and lung radiotherapy. Methods: The end-expiration phase images of a 4D-MRI acquired during patient simulation are used as patient prior images. Principal component analysis (PCA) is used to extract 3 major respiratory deformation patterns from the Deformation Field Maps (DFMs) generated between end-expiration phase and all other phases. On-board 2D-cine MRI images are acquired in the axial view. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI at the end-expiration phase. The DFM is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by matching the corresponding 2D slice of the estimated VC-MRI with the acquired single 2D-cine MRI. The method was evaluated using both XCAT (a computerized patient model) simulation of lung cancer patients and MRI data from a real liver cancer patient. The 3D-MRI at every phase except end-expiration phase was used to simulate the ground-truth on-board VC-MRI at different instances, and the center-tumor slice was selected to simulate the on-board 2D-cine images. Results: Image subtraction of ground truth with estimated on-board VC-MRI shows fewer differences than image subtraction of ground truth with prior image. Excellent agreement between profiles was achieved. The normalized cross correlation coefficients between the estimated and ground-truth in the axial, coronal and sagittal views for each time step were >= 0.982, 0.905, 0.961 for XCAT data and >= 0.998, 0.911, 0.9541 for patient data. For XCAT data, the maximum-Volume-Percent-Difference between ground-truth and estimated tumor volumes was 1.6% and the maximum-Center-of-Mass-Shift was 0.9 mm.
Conclusion: Preliminary studies demonstrated the feasibility of estimating real-time VC-MRI for on-board target localization before or during radiotherapy treatments. National Institutes of Health Grant No. R01-CA184173; Varian Medical Systems.
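The estimation step — representing the on-board deformation as a linear combination of a few PCA-derived respiratory patterns, with coefficients solved from a single 2D-cine slice — can be sketched as a least-squares problem. Synthetic arrays stand in for real deformation field maps here; the dimensions and data are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_phases, k = 500, 9, 3

# DFMs between end-expiration and the other phases, flattened to vectors
# (random data in place of real deformation field maps)
dfms = rng.standard_normal((n_phases, n_vox))

# PCA via SVD of the mean-centred DFMs: k major respiratory patterns
mean_dfm = dfms.mean(axis=0)
_, _, vt = np.linalg.svd(dfms - mean_dfm, full_matrices=False)
patterns = vt[:k]                          # shape (k, n_vox)

# On-board, only one 2D-cine slice (a subset of voxels) is observed
slice_idx = np.arange(0, n_vox, 5)
true_w = np.array([1.5, -0.7, 0.3])        # "ground-truth" coefficients
observed = (mean_dfm + true_w @ patterns)[slice_idx]

# Solve for the k coefficients from the slice alone (least squares),
# then apply them to deform the full 3D prior
A = patterns[:, slice_idx].T
w, *_ = np.linalg.lstsq(A, observed - mean_dfm[slice_idx], rcond=None)
vc_dfm = mean_dfm + w @ patterns           # deformation for the full volume
```

Because the observed slice is consistent with the pattern basis in this synthetic setup, the three coefficients are recovered exactly; with real cine data the fit is approximate.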
Yu, Qingbao; Du, Yuhui; Chen, Jiayu; He, Hao; Sui, Jing; Pearlson, Godfrey; Calhoun, Vince D
2017-11-01
A key challenge in building a brain graph using fMRI data is how to define the nodes. Spatial brain components estimated by independent component analysis (ICA) and regions of interest (ROIs) determined by a brain atlas are two popular methods of defining nodes in brain graphs. It is difficult to evaluate which method is better in real fMRI data. Here we perform a simulation study and evaluate the accuracy of a few graph metrics in graphs with nodes of ICA components, ROIs, or modified ROIs in four simulation scenarios. Graph measures with ICA nodes are more accurate than graphs with ROI nodes in all cases. Graph measures with modified ROI nodes are modulated by artifacts. The correlations of graph metrics across subjects between graphs with ICA nodes and ground truth are higher than the correlations between graphs with ROI nodes and ground truth in scenarios with large overlapping spatial sources. Moreover, moving the location of ROIs largely decreases the correlations in all scenarios. Evaluating graphs with different nodes is more promising in simulated data than in real data because different scenarios can be simulated and the measures of different graphs can be compared with a known ground truth. Since ROIs defined using a brain atlas may not correspond well to real functional boundaries, the overall findings of this work suggest that it is more appropriate to define nodes using data-driven ICA than ROI approaches in real fMRI data.
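As a concrete instance of the kind of graph metric being compared, node degree from a thresholded functional-connectivity matrix can be computed as below. This is a generic sketch: the nodes could be ICA component or ROI time courses, and the threshold value is an arbitrary choice for illustration:

```python
import numpy as np

def graph_degree(timecourses, threshold=0.3):
    """Node degrees of a brain graph built from node time courses:
    correlate, threshold the absolute correlations, count edges."""
    c = np.corrcoef(timecourses)      # functional connectivity matrix
    np.fill_diagonal(c, 0.0)          # ignore self-connections
    adj = np.abs(c) > threshold       # binarize edges
    return adj.sum(axis=1)

rng = np.random.default_rng(1)
tc = rng.standard_normal((10, 200))   # 10 nodes, 200 time points
deg = graph_degree(tc)
```

Other metrics (clustering coefficient, path length) follow from the same adjacency matrix, and accuracy can be assessed by correlating each metric with its value on the known ground-truth graph across simulated subjects.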
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Yang, Y; Young, L
Purpose: Radiomic texture features derived from oncologic PET have recently come under intense investigation in the context of patient stratification and treatment outcome prediction in a variety of cancer types; however, their validity has not yet been examined. This work aims to validate radiomic PET texture metrics through the use of realistic simulations in a ground truth setting. Methods: Simulation of FDG-PET was conducted by applying the Zubal phantom as an attenuation map to the SimSET software package, which employs Monte Carlo techniques to model the physical process of emission imaging. A total of 15 irregularly-shaped lesions featuring heterogeneous activity distributions were simulated. For each simulated lesion, 28 texture features relating to intensity histograms (GLIH), grey-level co-occurrence matrices (GLCOM), neighborhood difference matrices (GLNDM), and zone size matrices (GLZSM) were evaluated and compared with their respective values extracted from the ground truth activity map. Results: In reference to the values from the ground truth images, texture parameters computed on the simulated data varied over a range of 0.73–3026.2% for GLIH-based, 0.02–100.1% for GLCOM-based, 1.11–173.8% for GLNDM-based, and 0.35–66.3% for GLZSM-based features. For the majority of the examined texture metrics (16/28), values on the simulated data differed significantly from those on the ground truth images (P-values ranging from <0.0001 to 0.04). Features not exhibiting significant differences comprised GLIH-based standard deviation; GLCOM-based energy and entropy; GLNDM-based coarseness and contrast; and GLZSM-based low gray-level zone emphasis, high gray-level zone emphasis, short zone low gray-level emphasis, long zone low gray-level emphasis, long zone high gray-level emphasis, and zone size nonuniformity. Conclusion: The extent to which PET imaging disturbs texture appearance is feature-dependent and can be substantial.
It is thus advised that the use of PET texture parameters for predictive and prognostic purposes in the oncologic setting should await further systematic and critical evaluation.
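For illustration, two of the co-occurrence-based features named above (energy and entropy) computed from a toy grey-level image. This is a minimal sketch of a single-offset GLCM, not necessarily the exact feature definitions used in the study:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset, normalized."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def energy_entropy(p):
    """GLCM energy (angular second moment) and entropy in bits."""
    nz = p[p > 0]
    return (p ** 2).sum(), -(nz * np.log2(nz)).sum()

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 3]])
e, h = energy_entropy(glcm(img, levels=4))
```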
The challenge of mapping the human connectome based on diffusion tractography.
Maier-Hein, Klaus H; Neher, Peter F; Houde, Jean-Christophe; Côté, Marc-Alexandre; Garyfallidis, Eleftherios; Zhong, Jidan; Chamberland, Maxime; Yeh, Fang-Cheng; Lin, Ying-Chia; Ji, Qing; Reddick, Wilburn E; Glass, John O; Chen, David Qixiang; Feng, Yuanjing; Gao, Chengfeng; Wu, Ye; Ma, Jieyan; Renjie, H; Li, Qiang; Westin, Carl-Fredrik; Deslauriers-Gauthier, Samuel; González, J Omar Ocegueda; Paquette, Michael; St-Jean, Samuel; Girard, Gabriel; Rheault, François; Sidhu, Jasmeen; Tax, Chantal M W; Guo, Fenghua; Mesri, Hamed Y; Dávid, Szabolcs; Froeling, Martijn; Heemskerk, Anneriet M; Leemans, Alexander; Boré, Arnaud; Pinsard, Basile; Bedetti, Christophe; Desrosiers, Matthieu; Brambati, Simona; Doyon, Julien; Sarica, Alessia; Vasta, Roberta; Cerasa, Antonio; Quattrone, Aldo; Yeatman, Jason; Khan, Ali R; Hodges, Wes; Alexander, Simon; Romascano, David; Barakovic, Muhamed; Auría, Anna; Esteban, Oscar; Lemkaddem, Alia; Thiran, Jean-Philippe; Cetingul, H Ertan; Odry, Benjamin L; Mailhe, Boris; Nadar, Mariappan S; Pizzagalli, Fabrizio; Prasad, Gautam; Villalon-Reina, Julio E; Galvis, Justin; Thompson, Paul M; Requejo, Francisco De Santiago; Laguna, Pedro Luque; Lacerda, Luis Miguel; Barrett, Rachel; Dell'Acqua, Flavio; Catani, Marco; Petit, Laurent; Caruyer, Emmanuel; Daducci, Alessandro; Dyrby, Tim B; Holland-Letz, Tim; Hilgetag, Claus C; Stieltjes, Bram; Descoteaux, Maxime
2017-11-07
Tractography based on non-invasive diffusion imaging is central to the study of human brain connectivity. To date, the approach has not been systematically validated in ground truth studies. Based on a simulated human brain data set with ground truth tracts, we organized an open international tractography challenge, which resulted in 96 distinct submissions from 20 research groups. Here, we report the encouraging finding that most state-of-the-art algorithms produce tractograms containing 90% of the ground truth bundles (to at least some extent). However, the same tractograms contain many more invalid than valid bundles, and half of these invalid bundles occur systematically across research groups. Taken together, our results demonstrate and confirm fundamental ambiguities inherent in tract reconstruction based on orientation information alone, which need to be considered when interpreting tractography and connectivity results. Our approach provides a novel framework for estimating reliability of tractography and encourages innovation to address its current limitations.
NASA Technical Reports Server (NTRS)
Jones, E. B.
1983-01-01
As remote sensing increasingly becomes an operational tool in the field of snow management and snow hydrology, there is a need for some degree of standardization of "snowpack ground truth" techniques. This manual provides a first step in standardizing these procedures and was prepared to meet the needs of remote sensing researchers in planning missions requiring ground truth, as well as those providing the ground truth. The focus is on ground truth for remote sensors operating primarily in the microwave portion of the electromagnetic spectrum; nevertheless, the manual should be of value to other types of sensor programs. This first edition of ground truth procedures must be updated as new or modified techniques are developed.
Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas
2017-03-18
Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability, which influences the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) an expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations; (2) an automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces the segmentation performance of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data are suited to evaluating image segmentation pipelines more efficiently and reproducibly than is possible with manually annotated real micrographs.
Modeling and Simulation in Support of Testing and Evaluation
1997-03-01
contains standardized automated test methodology, synthetic stimuli and environments based on TECOM Ground Truth data and physics. The VPG is a distributed...Systems Acquisition Management (FSAM) coursebook, Defense Systems Management College, January 1994. Crocker, Charles M. "Application of the Simulation
NASA Astrophysics Data System (ADS)
Matsumoto, M.; Yoshimura, M.; Naoki, K.; Cho, K.; Wakabayashi, H.
2018-04-01
Observation of sea ice thickness is one of the key issues in understanding the regional effects of global warming. One approach to monitoring sea ice over large areas is microwave remote sensing data analysis; however, ground truth is necessary to assess the effectiveness of this kind of approach. The conventional method of acquiring ground truth for ice thickness is to drill through the ice layer and measure the thickness directly with a ruler. However, this method is destructive, time-consuming, and limited in spatial resolution. Although there are several non-destructive methods of acquiring ice thickness, ground penetrating radar (GPR) can be an effective solution because it can discriminate the snow-ice and ice-seawater interfaces. In this paper, we carried out GPR measurements in Lake Saroma over a relatively large area (approximately 200 m by 300 m) with the aim of obtaining ground truth for remote sensing data. The GPR survey was conducted at 5 locations in the area. Direct measurements were also conducted simultaneously in order to calibrate the GPR data for thickness estimation and to validate the results. Although the GPR B-scan image obtained at 600 MHz contains a reflection that may come from a structure under the snow, the origin of the reflection is not obvious. Therefore, further analysis and interpretation of the GPR image, such as numerical simulation, additional signal processing, and use of the 200 MHz antenna, are required before moving on to thickness estimation.
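The calibration-and-estimation step described above follows from the two-way travel relation d = v·t/2: a drilled hole gives a known thickness for one trace, which calibrates the radar velocity, and that velocity then converts travel times elsewhere into thickness. A sketch with hypothetical numbers (the 0.16 m/ns result is close to typical radar velocities in ice, but all values here are illustrative):

```python
def calibrate_velocity(thickness_m, twt_ns):
    """Radar wave velocity in ice from one direct (drilled) measurement,
    given the two-way travel time of the ice-bottom reflection."""
    return 2.0 * thickness_m / twt_ns      # m/ns

def thickness_from_twt(twt_ns, v):
    """Ice thickness from two-way travel time and calibrated velocity."""
    return v * twt_ns / 2.0

# hypothetical calibration point: 0.60 m of ice, 7.5 ns two-way time
v = calibrate_velocity(0.60, 7.5)
ice = thickness_from_twt(9.0, v)           # thickness at another trace
```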
Simulation of brain tumors in MR images for evaluation of segmentation efficacy.
Prastawa, Marcel; Bullitt, Elizabeth; Gerig, Guido
2009-04-01
Obtaining validation data and comparison metrics for segmentation of magnetic resonance images (MRI) are difficult tasks due to the lack of reliable ground truth. This problem is even more evident for images presenting pathology, which can both alter tissue appearance through infiltration and cause geometric distortions. Systems for generating synthetic images with user-defined degradation by noise and intensity inhomogeneity offer the possibility for testing and comparison of segmentation methods. Such systems do not yet offer simulation of sufficiently realistic looking pathology. This paper presents a system that combines physical and statistical modeling to generate synthetic multi-modal 3D brain MRI with tumor and edema, along with the underlying anatomical ground truth. Main emphasis is placed on simulation of the major effects known for tumor MRI, such as contrast enhancement, local distortion of healthy tissue, infiltrating edema adjacent to tumors, destruction and deformation of fiber tracts, and multi-modal MRI contrast of healthy tissue and pathology. The new method synthesizes pathology in multi-modal MRI and diffusion tensor imaging (DTI) by simulating mass effect, warping and destruction of white matter fibers, and infiltration of brain tissues by tumor cells. We generate synthetic contrast enhanced MR images by simulating the accumulation of contrast agent within the brain. The appearance of the brain tissue and tumor in MRI is simulated by synthesizing texture images from real MR images. The proposed method is able to generate synthetic ground truth and synthesized MR images with tumor and edema that exhibit comparable segmentation challenges to real tumor MRI. Such image data sets will find use in segmentation reliability studies, comparison and validation of different segmentation methods, training and teaching, or even in evaluating standards for tumor size like the RECIST criteria (response evaluation criteria in solid tumors).
Attractor learning in synchronized chaotic systems in the presence of unresolved scales
NASA Astrophysics Data System (ADS)
Wiegerinck, W.; Selten, F. M.
2017-12-01
Recently, supermodels consisting of an ensemble of interacting models, synchronizing on a common solution, have been proposed as an alternative to the common non-interactive multi-model ensembles in order to improve climate predictions. The connection terms in the interacting ensemble are to be optimized based on the data. The supermodel approach has been successfully demonstrated in a number of simulation experiments with an assumed ground truth and a set of good, but imperfect models. The supermodels were optimized with respect to their short-term prediction error. Nevertheless, they produced long-term climatological behavior that was close to the long-term behavior of the assumed ground truth, even in cases where the long-term behavior of the imperfect models was very different. In these supermodel experiments, however, a perfect model class scenario was assumed, in which the ground truth and imperfect models belong to the same model class and only differ in parameter setting. In this paper, we consider the imperfect model class scenario, in which the ground truth model class is more complex than the model class of imperfect models due to unresolved scales. We perform two supermodel experiments in two toy problems. The first one consists of a chaotically driven Lorenz 63 oscillator ground truth and two Lorenz 63 oscillators with constant forcings as imperfect models. The second one is more realistic and consists of a global atmosphere model as ground truth and imperfect models that have perturbed parameters and reduced spatial resolution. In both problems, we find that supermodel optimization with respect to short-term prediction error can lead to a long-term climatological behavior that is worse than that of the imperfect models. However, we also show that attractor learning can remedy this problem, leading to supermodels with long-term behavior superior to the imperfect models.
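A minimal sketch of the supermodel idea in the first toy problem: two imperfect Lorenz 63 models coupled by connection terms that nudge them toward a common solution. For illustration the imperfection is a mismatch in ρ and the connection coefficient is a single scalar; the paper's actual models, forcings, and optimized connection terms differ:

```python
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 63 system."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def supermodel_step(states, c, dt=0.005, rhos=(26.0, 30.0)):
    """One Euler step of two imperfect models plus connection terms
    c * (x_j - x_i) pulling each model toward the other."""
    new = []
    for i, rho in enumerate(rhos):
        j = 1 - i
        dx = lorenz63(states[i], rho=rho) + c * (states[j] - states[i])
        new.append(states[i] + dt * dx)
    return new

states = [np.array([1.0, 1.0, 1.0]), np.array([1.1, 0.9, 1.2])]
for _ in range(4000):
    states = supermodel_step(states, c=30.0)
gap = np.linalg.norm(states[0] - states[1])
```

With a sufficiently strong connection the two imperfect trajectories synchronize to within a small residual set by the parameter mismatch; optimizing the connection coefficients (and, per the paper, the attractor statistics) is the supermodel learning problem.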
Ground-truth collections at the MTI core sites
NASA Astrophysics Data System (ADS)
Garrett, Alfred J.; Kurzeja, Robert J.; Parker, Matthew J.; O'Steen, Byron L.; Pendergast, Malcolm M.; Villa-Aleman, Eliel
2001-08-01
The Savannah River Technology Center (SRTC) selected 13 sites across the continental US and one site in the western Pacific to serve as the primary or core sites for collection of ground truth data for validation of MTI science algorithms. Imagery and ground truth data from several of these sites are presented in this paper. These sites are the Comanche Peak, Pilgrim and Turkey Point power plants, Ivanpah playas, Crater Lake, Stennis Space Center and the Tropical Western Pacific ARM site on the island of Nauru. Ground truth data include water temperatures (bulk and skin), radiometric data, meteorological data and plant operating data. The organizations that manage these sites assist SRTC with its ground truth data collections and also give the MTI project a variety of ground truth measurements that they make for their own purposes. Collectively, the ground truth data from the 14 core sites constitute a comprehensive database for science algorithm validation.
Is STAPLE algorithm confident to assess segmentation methods in PET imaging?
NASA Astrophysics Data System (ADS)
Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien
2015-12-01
Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observers variability. In this paper, we evaluated how this algorithm could accurately estimate the ground truth in PET imaging. Complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. Consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than manual delineations themselves (80% of overlap). An improvement of the accuracy was also observed when applying the STAPLE algorithm to automatic segmentations results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentations results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.
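A drastically simplified, binary-label sketch of the STAPLE idea: EM alternates between estimating a probabilistic consensus (E-step) and per-rater sensitivity/specificity (M-step). The implementation distributed by the Computational Radiology Laboratory has many more options (spatial priors, multi-label support); the fixed prior and toy data below are assumptions for illustration:

```python
import numpy as np

def staple_binary(D, n_iter=50):
    """Simplified binary STAPLE. D is (raters, voxels) of 0/1 labels.
    Returns consensus probabilities W and per-rater sensitivity p,
    specificity q."""
    R, V = D.shape
    p = np.full(R, 0.9)          # initial sensitivities
    q = np.full(R, 0.9)          # initial specificities
    prior = D.mean()             # crude, fixed foreground prior
    for _ in range(n_iter):
        # E-step: probability each voxel is truly foreground
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b)
        # M-step: re-estimate rater performance against the consensus
        p = (W * D).sum(axis=1) / W.sum()
        q = ((1 - W) * (1 - D)).sum(axis=1) / (1 - W).sum()
    return W, p, q

# three hypothetical raters over 8 voxels; rater 1 matches the truth
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0])
D = np.array([truth,
              [1, 1, 1, 0, 0, 0, 0, 0],   # one missed voxel
              [1, 0, 1, 1, 1, 0, 0, 0]])  # one miss, one false positive
W, p, q = staple_binary(D)
```

Thresholding W at 0.5 gives the estimated ground truth; here it recovers the simulated truth despite the disagreeing raters.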
Is STAPLE algorithm confident to assess segmentation methods in PET imaging?
Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien
2015-12-21
WE-AB-202-09: Feasibility and Quantitative Analysis of 4DCT-Based High Precision Lung Elastography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasse, K; Neylon, J; Low, D
2016-06-15
Purpose: The purpose of this project is to derive high precision elastography measurements from 4DCT lung scans to facilitate the implementation of elastography in a radiotherapy context. Methods: 4DCT scans of the lungs were acquired, and breathing stages were subsequently registered to each other using an optical flow DIR algorithm. The displacement of each voxel gleaned from the registration was taken to be the ground-truth deformation. These vectors, along with the 4DCT source datasets, were used to generate a GPU-based biomechanical simulation that acted as a forward model to solve the inverse elasticity problem. The lung surface displacements were applied as boundary constraints for the model-guided lung tissue elastography, while the inner voxels were allowed to deform according to the linear elastic forces within the model. A biomechanically-based anisotropic convergence magnification technique was applied to the inner voxels in order to amplify the subtleties of the interior deformation. Solving the inverse elasticity problem was accomplished by modifying the tissue elasticity and iteratively deforming the biomechanical model. Convergence occurred when each voxel was within 0.5 mm of the ground-truth deformation and 1 kPa of the ground-truth elasticity distribution. To analyze the feasibility of the model-guided approach, we present the results for regions of low ventilation, specifically, the apex. Results: The maximum apical boundary expansion was observed to be between 2 and 6 mm. Simulating this expansion within an apical lung model, it was observed that 100% of voxels converged within 0.5 mm of ground-truth deformation, while 91.8% converged within 1 kPa of the ground-truth elasticity distribution. A mean elasticity error of 0.6 kPa illustrates the high precision of our technique.
Conclusion: By utilizing 4DCT lung data coupled with a biomechanical model, high precision lung elastography can be accurately performed, even in low ventilation regions of the lungs. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144087.
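A toy 1D analogue of the model-guided inverse-elasticity idea: a two-spring chain with a prescribed boundary displacement, where an unknown stiffness is iterated until the modelled interior displacement matches the "ground truth". Bisection stands in for the iterative biomechanical update, and all numbers are illustrative:

```python
def interior_displacement(u0, k1, k2):
    """Two springs in series (stiffnesses k1, k2), bottom end fixed and
    top end displaced by u0: the node between them moves by
    u0 * k1 / (k1 + k2)."""
    return u0 * k1 / (k1 + k2)

def solve_stiffness(u0, k1, u_obs, lo=0.1, hi=100.0, tol=1e-6):
    """Bisection on k2 until the modelled interior displacement matches
    the observed (ground-truth) displacement u_obs. The displacement is
    monotonically decreasing in k2, so bisection converges."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if interior_displacement(u0, k1, mid) > u_obs:
            lo = mid    # model too soft: displacement too large, raise k2
        else:
            hi = mid
    return 0.5 * (lo + hi)

# "ground truth": k2 = 5.0; recover it from the observed displacement
u_obs = interior_displacement(6.0, 2.0, 5.0)
k2_est = solve_stiffness(6.0, 2.0, u_obs)
```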
Reanimating patients: cardio-respiratory CT and MR motion phantoms based on clinical CT patient data
NASA Astrophysics Data System (ADS)
Mayer, Johannes; Sauppe, Sebastian; Rank, Christopher M.; Sawall, Stefan; Kachelrieß, Marc
2017-03-01
To date, several algorithms have been developed that reduce or avoid artifacts caused by cardiac and respiratory motion in computed tomography (CT). The motion information is converted into so-called motion vector fields (MVFs) and used for motion compensation (MoCo) during image reconstruction. To analyze these algorithms quantitatively, there is a need for ground truth patient data displaying realistic motion. We developed a method to generate a digital ground truth displaying realistic cardiac and respiratory motion that can be used as a tool to assess MoCo algorithms. Using available MoCo methods, we measured the motion in CT scans with high spatial and temporal resolution and transferred the motion information onto patient data with different anatomy or imaging modality, thereby virtually reanimating the patient. In addition to these images, the ground truth motion information in the form of MVFs is available and can be used to benchmark the MVF estimation of MoCo algorithms. We here applied the method to generate 20 CT volumes displaying detailed cardiac motion that can be used for cone-beam CT (CBCT) simulations, and a set of 8 MR volumes displaying respiratory motion. Our method is able to reanimate patient data virtually. In combination with the MVFs, it serves as a digital ground truth and provides an improved framework to assess MoCo algorithms.
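Applying an MVF to reanimate image data amounts to pulling intensities from displaced coordinates. A nearest-neighbour 2D sketch in plain NumPy (real MoCo pipelines work on 3D volumes with higher-order interpolation; the constant one-voxel shift below is a stand-in for a measured cardiac or respiratory field):

```python
import numpy as np

def warp_with_mvf(img, mvf_y, mvf_x):
    """Pull-style warp: output(y, x) = img(y + mvf_y, x + mvf_x),
    rounded to the nearest voxel and clamped at the borders."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.round(ys + mvf_y).astype(int), 0, h - 1)
    sx = np.clip(np.round(xs + mvf_x).astype(int), 0, w - 1)
    return img[sy, sx]

rng = np.random.default_rng(3)
img = rng.random((16, 16))
# constant one-voxel shift as a stand-in for a real MVF
warped = warp_with_mvf(img, np.ones((16, 16)), np.ones((16, 16)))
```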
A hybrid approach to estimate the complex motions of clouds in sky images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Zhenzhou; Yu, Dantong; Huang, Dong
Tracking the motion of clouds is essential to forecasting the weather and predicting short-term solar energy generation. Existing techniques mainly fall into two categories: variational optical flow and block matching. In this article, we summarize recent advances in estimating cloud motion using ground-based sky imagers and quantitatively evaluate state-of-the-art approaches. We then propose a hybrid tracking framework that incorporates the strengths of both block matching and optical flow models. To validate the accuracy of the proposed approach, we introduce a series of synthetic images to simulate cloud movement and deformation, and thereafter comprehensively compare our hybrid approach with several representative tracking algorithms over both simulated and real images collected from various sites/imagers. The results show that our hybrid approach outperforms state-of-the-art models, reducing motion estimation errors by at least 30% relative to the ground-truth motions in most of the simulated image sequences. Furthermore, our hybrid model demonstrates its superior efficiency on several real cloud image datasets, lowering the Mean Absolute Error (MAE) between predicted and ground-truth images by at least 15%.
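The block-matching half of such a hybrid can be sketched as an exhaustive search for the per-block motion vector minimizing MAE. The synthetic shift below plays the role of the simulated cloud motion, and is illustrative only:

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Exhaustive block matching: for each block of `prev`, find the
    in-bounds displacement (dy, dx) minimizing MAE against `curr`."""
    h, w = prev.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block]
            best, best_mae = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = curr[y:y + block, x:x + block]
                        mae = np.abs(ref - cand).mean()
                        if mae < best_mae:
                            best_mae, best = mae, (dy, dx)
            vectors[(by, bx)] = best
    return vectors

# synthetic "cloud" frame shifted by (2, 3): recover the motion per block
rng = np.random.default_rng(2)
img = rng.random((32, 32))
shifted = np.roll(img, (2, 3), axis=(0, 1))
mv = block_match(img, shifted)
```

An optical-flow stage would then refine these coarse vectors to sub-pixel, per-pixel motion, which is the strength the hybrid combines.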
A hybrid approach to estimate the complex motions of clouds in sky images
Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...
2016-09-14
Deeley, M A; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, E; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Yei, F; Koyama, T; Ding, G X; Dawant, B M
2011-01-01
The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation (STAPLE) algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8–0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4–0.5. Similarly low DSCs have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (−4.3, +5.4) mm for the automatic system to (−3.9, +7.5) mm for the experts considered as a group. In a ranking of true positive rates at a 2 mm threshold from the simulated ground truth over all structures, the automatic system ranked second of the nine raters. This work underscores the need for large-scale studies utilizing statistically robust numbers of patients and experts in evaluating the quality of automatic algorithms. PMID:21725140
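The Dice similarity coefficient used throughout these comparisons is a simple overlap measure, 2|A∩B|/(|A|+|B|). A sketch on hypothetical binary masks standing in for an automatic and an expert contour:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# hypothetical 2D masks: two offset 4x4 squares on an 8x8 grid
auto = np.zeros((8, 8), dtype=int)
auto[2:6, 2:6] = 1       # 16 voxels
expert = np.zeros((8, 8), dtype=int)
expert[3:7, 3:7] = 1     # 16 voxels, overlapping in a 3x3 region
d = dice(auto, expert)
```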
Development of mine explosion ground truth smart sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Steven R.; Harben, Phillip E.; Jarpe, Steve
Accurate seismo-acoustic source location is one of the fundamental aspects of nuclear explosion monitoring. Critical to improved location is the compilation of ground truth data sets for which origin time and location are accurately known. Substantial efforts by the National Laboratories and other seismic monitoring groups have been undertaken to acquire and develop ground truth catalogs that form the basis of location efforts (e.g. Sweeney, 1998; Bergmann et al., 2009; Waldhauser and Richards, 2004). In particular, more GT1 (Ground Truth 1 km) events are required to improve three-dimensional velocity models that are currently under development. Mine seismicity can form the basis of accurate ground truth datasets. Although the location of mining explosions can often be accurately determined using array methods (e.g. Harris, 1991) and from overhead observations (e.g. MacCarthy et al., 2008), accurate origin time estimation can be difficult. Occasionally, mine operators will share shot time, location, explosion size, and even shot configuration, but this is rarely done, especially in foreign countries. Additionally, shot times provided by mine operators are often inaccurate. An inexpensive ground truth event detector that could be mailed to a contact, placed in close proximity (< 5 km) to mining regions or earthquake aftershock regions, and that automatically transmits back ground-truth parameters would greatly aid the development of ground truth datasets that could be used to improve nuclear explosion monitoring capabilities. We are developing an inexpensive, compact, lightweight smart sensor unit (or units) that could be used in the development of ground truth datasets for the purpose of improving nuclear explosion monitoring capabilities.
The units must be easy to deploy, be able to operate autonomously for a significant period of time (> 6 months), and be inexpensive enough to be discarded after useful operations have expired (although this may not be part of our business plan). Key parameters to be automatically determined are event origin time (within 0.1 sec), location (within 1 km), and size (within 0.3 magnitude units), without any human intervention. The key ground truth parameters from explosions greater than magnitude 2.5 will be transmitted to a recording and transmitting site. Because we have identified a limited-bandwidth, inexpensive two-way satellite communication service (ORBCOMM), we have devised the concept of an accompanying Ground-Truth Processing Center that would enable calibration and ground-truth accuracy to improve over the duration of a deployment.
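The abstract does not specify the picking algorithm, but a classic STA/LTA (short-term average over long-term average) trigger is one conventional way such a smart sensor could pick an automatic origin time. A minimal sketch, with a synthetic trace and all parameters chosen purely for illustration:

```python
import numpy as np

def first_trigger(x, ns=20, nl=200, thresh=5.0):
    """First sample where the short-term / long-term average energy
    ratio exceeds `thresh` (a classic STA/LTA event pick)."""
    e = np.asarray(x, dtype=float) ** 2
    for i in range(nl, len(e)):
        sta = e[i - ns:i].mean()   # short-term average energy
        lta = e[i - nl:i].mean()   # long-term background average
        if lta > 0 and sta / lta > thresh:
            return i
    return None

# synthetic trace: low-level noise with an impulsive arrival at sample 600
rng = np.random.default_rng(0)
trace = 0.01 * rng.standard_normal(1000)
trace[600:650] += np.sin(np.linspace(0, 25, 50))
pick = first_trigger(trace)   # lands a few samples after 600
```

A fielded detector would of course add magnitude estimation and timestamping against GPS time, which this sketch omits.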
Validation of neural spike sorting algorithms without ground-truth information.
Barnett, Alex H; Magland, Jeremy F; Greengard, Leslie F
2016-05-01
The throughput of electrophysiological recording is growing rapidly, allowing thousands of simultaneous channels, and there is a growing variety of spike sorting algorithms designed to extract neural firing events from such data. This creates an urgent need for standardized, automatic evaluation of the quality of neural units output by such algorithms. We introduce a suite of validation metrics that assess the credibility of a given automatic spike sorting algorithm applied to a given dataset. By rerunning the spike sorter two or more times, the metrics measure stability under various perturbations consistent with variations in the data itself, making no assumptions about the internal workings of the algorithm, and minimal assumptions about the noise. We illustrate the new metrics on standard sorting algorithms applied to both in vivo and ex vivo recordings, including a time series with overlapping spikes. We compare the metrics to existing quality measures, and to ground-truth accuracy in simulated time series. We provide a software implementation. Metrics have until now relied on ground-truth, simulated data, internal algorithm variables (e.g. cluster separation), or refractory violations. By contrast, by standardizing the interface, our metrics assess the reliability of any automatic algorithm without reference to internal variables (e.g. feature space) or physiological criteria. Stability is a prerequisite for reproducibility of results. Such metrics could reduce the significant human labor currently spent on validation, and should form an essential part of large-scale automated spike sorting and systematic benchmarking of algorithms. Copyright © 2016 Elsevier B.V. All rights reserved.
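The rerun-stability idea can be illustrated with a toy metric (this is not the paper's actual validation suite): label the same events with two runs of a sorter and score, per unit, how concentrated its events remain in the rerun. The label vectors below are invented:

```python
import numpy as np

def unit_agreement(run1, run2):
    """For each unit in run1, the fraction of its events assigned to a
    single best-matching unit in a rerun (1.0 = perfectly reproducible)."""
    run1, run2 = np.asarray(run1), np.asarray(run2)
    scores = {}
    for u in np.unique(run1):
        members = run2[run1 == u]
        _, counts = np.unique(members, return_counts=True)
        scores[int(u)] = float(counts.max() / len(members))
    return scores

r1 = [0, 0, 0, 1, 1, 1, 2, 2]       # unit labels from the first run
r2 = [5, 5, 5, 7, 7, 9, 4, 4]       # rerun splits unit 1 across units 7 and 9
stability = unit_agreement(r1, r2)  # unit 1 scores ~0.67, others 1.0
```

The metrics in the paper go further, perturbing the data itself between runs; the scoring step, however, reduces to a comparison of this kind.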
Simulation of bright-field microscopy images depicting pap-smear specimen
Malm, Patrik; Brun, Anders; Bengtsson, Ewert
2015-01-01
As digital imaging is becoming a fundamental part of medical and biomedical research, the demand for computer-based evaluation using advanced image analysis is becoming an integral part of many research projects. A common problem when developing new image analysis algorithms is the need of large datasets with ground truth on which the algorithms can be tested and optimized. Generating such datasets is often tedious and introduces subjectivity and interindividual and intraindividual variations. An alternative to manually created ground-truth data is to generate synthetic images where the ground truth is known. The challenge then is to make the images sufficiently similar to the real ones to be useful in algorithm development. One of the first and most widely studied medical image analysis tasks is to automate screening for cervical cancer through Pap-smear analysis. As part of an effort to develop a new generation cervical cancer screening system, we have developed a framework for the creation of realistic synthetic bright-field microscopy images that can be used for algorithm development and benchmarking. The resulting framework has been assessed through a visual evaluation by experts with extensive experience of Pap-smear images. The results show that images produced using our described methods are realistic enough to be mistaken for real microscopy images. The developed simulation framework is very flexible and can be modified to mimic many other types of bright-field microscopy images. © 2015 The Authors. Published by Wiley Periodicals, Inc. on behalf of ISAC PMID:25573002
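As a much-simplified analogue of such a synthesis framework (Gaussian "nuclei" darkening a noisy bright-field background; every shape and parameter here is invented for illustration), one can generate an image together with its exactly known ground-truth mask:

```python
import numpy as np

def synth_cell_image(shape=(64, 64), centers=((20, 20), (44, 40)), r=6, seed=0):
    """Toy bright-field image with dark 'nuclei' and its ground-truth mask."""
    rng = np.random.default_rng(seed)
    img = rng.normal(0.8, 0.02, shape)          # bright, slightly noisy field
    mask = np.zeros(shape, dtype=bool)
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    for cy, cx in centers:
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        mask |= d2 <= r * r                     # exact ground-truth region
        img -= 0.5 * np.exp(-d2 / (2 * (r / 2) ** 2))  # absorbing "nucleus"
    return img, mask

img, mask = synth_cell_image()
```

Because the mask is constructed rather than annotated, a segmentation algorithm run on `img` can be scored against it with no inter-observer variation, which is exactly the appeal of synthetic ground truth.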
Soffientini, Chiara D; De Bernardi, Elisabetta; Casati, Rosangela; Baselli, Giuseppe; Zito, Felicia
2017-01-01
Design, realization, scan, and characterization of a phantom for PET Automatic Segmentation (PET-AS) assessment are presented. Radioactive zeolites immersed in a radioactive heterogeneous background simulate realistic wall-less lesions with known irregular shape and known homogeneous or heterogeneous internal activity. Three different zeolite families were evaluated in terms of radioactive uptake homogeneity, necessary to define the activity and contour ground truth. Heterogeneous lesions were simulated by the perfect matching of two portions of a broken zeolite, soaked in two different 18 F-FDG radioactive solutions. Heterogeneous backgrounds were obtained with tissue paper balls and sponge pieces immersed in radioactive solutions. Natural clinoptilolite proved to be the most suitable zeolite for the construction of artificial objects mimicking homogeneous and heterogeneous uptakes in 18 F-FDG PET lesions. Heterogeneous backgrounds showed coefficients of variation equal to 269% and 443% of that of a uniform radioactive solution. The assembled phantom included eight lesions with volumes ranging from 1.86 to 7.24 ml and lesion-to-background contrasts ranging from 4.8:1 to 21.7:1. A novel phantom for the evaluation of PET-AS algorithms was developed. It is provided with both reference contours and activity ground truth, and it covers a wide range of volumes and lesion-to-background contrasts. The dataset is open to the community of PET-AS developers and utilizers. © 2016 American Association of Physicists in Medicine.
ASM Based Synthesis of Handwritten Arabic Text Pages
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed
2015-01-01
Document analysis tasks such as text recognition, word spotting, or segmentation are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time, and as a matter of fact there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient naturally ground-truthed data is available. PMID:26295059
Limits of Risk Predictability in a Cascading Alternating Renewal Process Model.
Lin, Xin; Moussawi, Alaa; Korniss, Gyorgy; Bakdash, Jonathan Z; Szymanski, Boleslaw K
2017-07-27
Most risk analysis models systematically underestimate the probability and impact of catastrophic events (e.g., economic crises, natural disasters, and terrorism) by not taking into account the interconnectivity and interdependence of risks. To address this weakness, we propose the Cascading Alternating Renewal Process (CARP) to forecast interconnected global risks. However, assessments of the model's prediction precision are limited by a lack of sufficient ground truth data. Here, we establish prediction precision as a function of input data size by using alternative long ground truth data generated by simulations of the CARP model with known parameters. We illustrate the approach on a model of fires in artificial cities assembled from basic city blocks with diverse housing. The results confirm that parameter recovery variance exhibits power-law decay as a function of the length of available ground truth data. Using CARP, we also demonstrate estimation on a disparate dataset that also has dependencies: real-world prediction precision for the global risk model based on the World Economic Forum Global Risk Report. We conclude that the CARP model is an efficient method for predicting catastrophic cascading events, with potential applications to emerging local and global interconnected risks.
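The reported power-law decay of parameter-recovery variance can be reproduced in miniature. The sketch below is not the CARP model itself, just an alternating-renewal toy with exponential up/down times and an invented mean, but it shows the recovery error falling with the length of the simulated ground-truth record:

```python
import numpy as np

rng = np.random.default_rng(42)
TRUE_MEAN_UP = 10.0   # invented mean up-time between failures

def mse_of_recovery(n_cycles, trials=200):
    """Mean squared error of recovering the up-time parameter from
    n_cycles of simulated alternating-renewal ground truth."""
    ests = np.array([rng.exponential(TRUE_MEAN_UP, n_cycles).mean()
                     for _ in range(trials)])
    return float(np.mean((ests - TRUE_MEAN_UP) ** 2))

# recovery error falls roughly as 1 / (length of the ground-truth record)
mse_short, mse_long = mse_of_recovery(10), mse_of_recovery(1000)
```

For this estimator the decay is the textbook 1/n of a sample mean; the paper's point is that an analogous decay holds for the coupled, cascading process, where no closed form is available.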
Development of a Scalable Testbed for Mobile Olfaction Verification.
Zakaria, Syed Muhammad Mamduh Syed; Visvanathan, Retnam; Kamarudin, Kamarulzaman; Yeon, Ahmad Shakaff Ali; Md Shakaff, Ali Yeon; Zakaria, Ammar; Kamarudin, Latifah Munirah
2015-12-09
The lack of information on ground truth gas dispersion and experiment verification has impeded the development of mobile olfaction systems, especially for real-world conditions. In this paper, an integrated testbed for mobile gas sensing experiments is presented. The integrated 3 m × 6 m testbed was built to provide real-time ground truth information for mobile olfaction system development. The testbed consists of a 72-gas-sensor array, namely the Large Gas Sensor Array (LGSA), a localization system based on cameras, and a wireless communication backbone for robot communication and integration into the testbed system. Furthermore, the data collected from the testbed may be streamed into a simulation environment to expedite development. Calibration results using ethanol have shown that using a large number of gas sensors in the LGSA is feasible and can produce coherent signals when the sensors are exposed to the same concentrations. The results have shown that the testbed was able to capture the time-varying characteristics and the variability of a gas plume in a 2 h experiment, thus providing time-dependent ground truth concentration maps. The authors have demonstrated the ability of the mobile olfaction testbed to monitor, verify, and thus provide insight into gas distribution mapping experiments.
Ground truth crop proportion summaries for US segments, 1976-1979
NASA Technical Reports Server (NTRS)
Horvath, R. (Principal Investigator); Rice, D.; Wessling, T.
1981-01-01
The original ground truth data were collected, digitized, and registered to LANDSAT data for use in the LACIE and AgRISTARS projects. The numerous ground truth categories were consolidated into fewer classes of crops or crop conditions, and occurrences of these classes were counted for each segment. Tables are presented in which the individual entries are the percentage of total segment area assigned to a given class. The ground truth summaries were prepared from a 20% sample of the scene. An analysis indicates that a sample of this size provides sufficient accuracy for use of the data in initial segment screening.
Liang, Jennifer J; Tsou, Ching-Huei; Devarakonda, Murthy V
2017-01-01
Natural language processing (NLP) holds the promise of effectively analyzing patient record data to reduce cognitive load on physicians and clinicians in patient care, clinical research, and hospital operations management. A critical need in developing such methods is the "ground truth" dataset needed for training and testing the algorithms. Beyond localizable, relatively simple tasks, ground truth creation is a significant challenge because medical experts, just as physicians in patient care, have to assimilate vast amounts of data in EHR systems. To mitigate the potential inaccuracies arising from these cognitive challenges, we present an iterative vetting approach for creating the ground truth for complex NLP tasks. In this paper, we present the methodology and report on its use for an automated problem list generation task, its effect on ground truth quality and system accuracy, and lessons learned from the effort.
Semi-automated based ground-truthing GUI for airborne imagery
NASA Astrophysics Data System (ADS)
Phan, Chung; Lydic, Rich; Moore, Tim; Trang, Anh; Agarwal, Sanjeev; Tiwari, Spandan
2005-06-01
Over the past several years, an enormous amount of airborne imagery in various formats has been collected, and collection will continue into the future to support airborne mine/minefield detection processes, improve algorithm development, and aid in imaging sensor development. The ground-truthing of imagery is an essential part of the algorithm development process, helping to validate the detection performance of the sensor and to improve algorithm techniques. The GUI (Graphical User Interface) called SemiTruth was developed using Matlab software, incorporating the signal processing, image processing, and statistics toolboxes, to aid in ground-truthing imagery. The semi-automated ground-truthing GUI is made possible by the current data collection method, which includes UTM/GPS (Universal Transverse Mercator/Global Positioning System) coordinate measurements for the mine target and fiducial locations on the given minefield layout, supporting identification of the targets in the raw imagery. This semi-automated ground-truthing effort was developed by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division, Airborne Application Branch, with some support from the University of Missouri-Rolla.
Ground truth methods for optical cross-section modeling of biological aerosols
NASA Astrophysics Data System (ADS)
Kalter, J.; Thrush, E.; Santarpia, J.; Chaudhry, Z.; Gilberry, J.; Brown, D. M.; Brown, A.; Carter, C. C.
2011-05-01
Light detection and ranging (LIDAR) systems have demonstrated some capability to meet the needs of a fast-response standoff biological detection method for simulants in open air conditions. These systems are designed to exploit various cloud signatures, such as differential elastic backscatter, fluorescence, and depolarization, in order to detect biological warfare agents (BWAs). However, because the release of BWAs in open air is forbidden, methods must be developed to predict candidate system performance against real agents. In support of such efforts, the Johns Hopkins University Applied Physics Lab (JHU/APL) has developed a modeling approach to predict the optical properties of agent materials from relatively simple, Biosafety Level 3-compatible benchtop measurements. JHU/APL has fielded new ground truth instruments (in addition to standard particle sizers, such as the Aerodynamic Particle Sizer (APS) or GRIMM aerosol monitor) to more thoroughly characterize the simulant aerosols released in recent field tests at Dugway Proving Ground (DPG). These instruments include the Scanning Mobility Particle Sizer (SMPS), the Ultraviolet Aerodynamic Particle Sizer (UVAPS), and the Aspect Aerosol Size and Shape Analyser (Aspect). The SMPS was employed as a means of measuring small-particle concentrations for more accurate Mie scattering simulations; the UVAPS, which measures size-resolved fluorescence intensity, was employed as a path toward fluorescence cross-section modeling; and the Aspect, which measures particle shape, was employed as a path toward depolarization modeling.
2010-09-01
MULTIPLE-ARRAY DETECTION, ASSOCIATION AND LOCATION OF INFRASOUND AND SEISMO-ACOUSTIC EVENTS – UTILIZATION OF GROUND TRUTH INFORMATION. Stephen J… …and infrasound data from seismo-acoustic arrays and apply the methodology to regional networks for validation with ground truth information. In the initial year of the project, automated techniques for detecting, associating and locating infrasound signals were developed. Recently, the location…
Gupta, Rahul; Audhkhasi, Kartik; Jacokes, Zach; Rozga, Agata; Narayanan, Shrikanth
2018-01-01
Studies of time-continuous human behavioral phenomena often rely on ratings from multiple annotators. Since the ground truth of the target construct is often latent, the standard practice is to use ad-hoc metrics (such as averaging annotator ratings). Despite being easy to compute, such metrics may not provide accurate representations of the underlying construct. In this paper, we present a novel method for modeling multiple time series annotations over a continuous variable that computes the ground truth by modeling annotator-specific distortions. We condition the ground truth on a set of features extracted from the data and further assume that the annotators provide their ratings as modifications of the ground truth, with each annotator having specific distortion tendencies. We train the model using an Expectation-Maximization based algorithm and evaluate it on a study involving natural interaction between a child and a psychologist, predicting confidence ratings of the children's smiles. We compare and analyze the model against two baselines where: (i) the ground truth is considered to be the framewise mean of ratings from the various annotators, and (ii) each annotator is assumed to bear a distinct time delay in annotation, and their annotations are aligned before computing the framewise mean.
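Baseline (ii), aligning out per-annotator delays before averaging, can be sketched as follows with a synthetic latent signal and integer delays invented for illustration (the paper's EM-based distortion model is more general than this):

```python
import numpy as np

def align_and_average(ratings, max_lag=10):
    """Estimate each annotator's delay against a reference rater,
    undo it, and average (a sketch of the delay-alignment baseline)."""
    ratings = [np.asarray(r, dtype=float) for r in ratings]
    ref = ratings[0]
    aligned = []
    for r in ratings:
        # circular shift that best matches the reference
        lags = range(-max_lag, max_lag + 1)
        best = min(lags, key=lambda s: float(np.mean((np.roll(r, -s) - ref) ** 2)))
        aligned.append(np.roll(r, -best))
    return np.mean(aligned, axis=0)

latent = np.sin(np.linspace(0, 4 * np.pi, 200))    # invented latent construct
ratings = [np.roll(latent, d) for d in (0, 3, 6)]  # delayed annotator copies
fused = align_and_average(ratings)                 # recovers the latent signal
```

With real annotations, the delays are neither integer nor constant, which is one motivation for modeling annotator distortions explicitly rather than aligning and averaging.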
Usability of calcium carbide gas pressure method in hydrological sciences
NASA Astrophysics Data System (ADS)
Arsoy, S.; Ozgur, M.; Keskin, E.; Yilmaz, C.
2013-10-01
Soil moisture is a key engineering variable with major influence on ecological and hydrological processes as well as on climate, weather, agricultural, civil, and geotechnical applications. Methods for quantification of soil moisture fall into three main groups: (i) measurement with remote sensing, (ii) estimation via (soil water balance) simulation models, and (iii) ground-based measurement in the field. Remote sensing and simulation modeling require rapid ground truthing with one of the ground-based methods. The calcium carbide gas pressure (CCGP) method is a rapid measurement procedure for obtaining soil moisture that relies on the chemical reaction of the calcium carbide reagent with the water in soil pores. However, the method is overlooked in hydrological science applications. Therefore, the purpose of this study is to evaluate the usability of the CCGP method in comparison with standard oven-drying and dielectric methods in terms of accuracy, time efficiency, operational ease, cost effectiveness, and safety, for quantification of soil moisture over a wide range of soil types. The research involved over 250 tests carried out on 15 different soil types. It was found that the accuracy of the method is mostly within a ±1% soil-moisture deviation range relative to oven-drying, and that the CCGP method has significant advantages over dielectric methods in terms of accuracy, cost, operational ease, and time efficiency for the purpose of ground truthing.
An Enhanced Collaborative-Software Environment for Information Fusion at the Unit of Action
2007-12-07
[Figure 4: Semantic-Aggregation Hierarchy, showing ground-truth and blue-force aggregation levels (BSGs, GTGs, BSOs, GTOs, reports); evaluation-use only.] Finally, GTOs can be aggregated into GTGs (ground-truth groups) using the provided ground-truth force structure hierarchy for GTOs. GTGs can only be…
NASA Astrophysics Data System (ADS)
Stoker, C.; Dunagan, S.; Stevens, T.; Amils, R.; Gómez-Elvira, J.; Fernández, D.; Hall, J.; Lynch, K.; Cannon, H.; Zavaleta, J.; Glass, B.; Lemke, L.
2004-03-01
The results of a drilling experiment to search for a subsurface biosphere in a pyritic mineral deposit at Rio Tinto, Spain, are described. The experiment provides ground truth for a simulation of a Mars drilling mission to search for subsurface life.
Community annotation experiment for ground truth generation for the i2b2 medication challenge
Solti, Imre; Xia, Fei; Cadag, Eithon
2010-01-01
Objective Within the context of the Third i2b2 Workshop on Natural Language Processing Challenges for Clinical Records, the authors (also referred to as ‘the i2b2 medication challenge team’ or ‘the i2b2 team’ for short) organized a community annotation experiment. Design For this experiment, the authors released annotation guidelines and a small set of annotated discharge summaries. They asked the participants of the Third i2b2 Workshop to annotate 10 discharge summaries per person; each discharge summary was annotated by two annotators from two different teams, and a third annotator from a third team resolved disagreements. Measurements In order to evaluate the reliability of the annotations thus produced, the authors measured community inter-annotator agreement and compared it with the inter-annotator agreement of expert annotators when both the community and the expert annotators generated ground truth based on pooled system outputs. For this purpose, the pool consisted of the three most densely populated automatic annotations of each record. The authors also compared the community inter-annotator agreement with expert inter-annotator agreement when the experts annotated raw records without using the pool. Finally, they measured the quality of the community ground truth by comparing it with the expert ground truth. Results and conclusions The authors found that the community annotators achieved comparable inter-annotator agreement to expert annotators, regardless of whether the experts annotated from the pool. Furthermore, the ground truth generated by the community obtained F-measures above 0.90 against the ground truth of the experts, indicating the value of the community as a source of high-quality ground truth even on intricate and domain-specific annotation tasks. PMID:20819855
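The span-level F-measure used above to compare community and expert ground truth can be sketched as exact-match F1 over sets of annotation tuples; the character offsets and drug names below are invented purely for illustration:

```python
def f_measure(system, gold):
    """Exact-match F1 between two annotation sets, e.g. sets of
    (start, end, medication) tuples from a discharge summary."""
    system, gold = set(system), set(gold)
    tp = len(system & gold)
    p = tp / len(system) if system else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# invented offsets and drug names, purely for illustration
community = {(10, 18, "aspirin"), (40, 52, "metoprolol"), (60, 70, "lisinopril")}
expert    = {(10, 18, "aspirin"), (40, 52, "metoprolol"), (80, 90, "warfarin")}
score = f_measure(community, expert)   # 2 matches of 3 each: P = R = F1 = 2/3
```

Real evaluations often also score partial-span and attribute-level matches, but the F > 0.90 figures cited above come down to agreement ratios of this form.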
Dual-Tracer PET Using Generalized Factor Analysis of Dynamic Sequences
Fakhri, Georges El; Trott, Cathryn M.; Sitek, Arkadiusz; Bonab, Ali; Alpert, Nathaniel M.
2013-01-01
Purpose With single-photon emission computed tomography, simultaneous imaging of two physiological processes relies on discrimination of the energy of the emitted gamma rays, whereas the application of dual-tracer imaging to positron emission tomography (PET) imaging has been limited by the characteristic 511-keV emissions. Procedures To address this limitation, we developed a novel approach based on generalized factor analysis of dynamic sequences (GFADS) that exploits spatio-temporal differences between radiotracers and applied it to near-simultaneous imaging of 2-deoxy-2-[18F]fluoro-D-glucose (FDG) (brain metabolism) and 11C-raclopride (D2) with simulated human data and experimental rhesus monkey data. We show theoretically and verify by simulation and measurement that GFADS can separate FDG and raclopride measurements that are made nearly simultaneously. Results The theoretical development shows that GFADS can decompose the studies at several levels: (1) It decomposes the FDG and raclopride study so that they can be analyzed as though they were obtained separately. (2) If additional physiologic/anatomic constraints can be imposed, further decomposition is possible. (3) For the example of raclopride, specific and nonspecific binding can be determined on a pixel-by-pixel basis. We found good agreement between the estimated GFADS factors and the simulated ground truth time activity curves (TACs), and between the GFADS factor images and the corresponding ground truth activity distributions with errors less than 7.3±1.3 %. Biases in estimation of specific D2 binding and relative metabolism activity were within 5.9±3.6 % compared to the ground truth values. We also evaluated our approach in simultaneous dual-isotope brain PET studies in a rhesus monkey and obtained accuracy of better than 6 % in a mid-striatal volume, for striatal activity estimation. 
Conclusions Dynamic image sequences acquired following near-simultaneous injection of two PET radiopharmaceuticals can be separated into components based on the differences in the kinetics, provided their kinetic behaviors are distinct. PMID:23636489
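GFADS jointly estimates the factors and factor images under physiological constraints. The easier half of that decomposition, recovering per-pixel factor images when the two tracers' time-activity curves are known and kinetically distinct, reduces to least squares; all curves and coefficients below are invented for illustration:

```python
import numpy as np

# invented time-activity curves (TACs) for two near-simultaneously
# injected tracers with distinct kinetics
t = np.linspace(0, 60, 61)                      # minutes
tac_fdg = 1 - np.exp(-0.1 * t)                  # slowly accumulating
tac_rac = np.exp(-0.05 * t) * (1 - np.exp(-t))  # uptake then washout
F = np.stack([tac_fdg, tac_rac], axis=1)        # factor matrix, shape (61, 2)

# ground-truth factor "images": mixing coefficients for three pixels
C_true = np.array([[1.0, 0.0], [0.0, 2.0], [0.5, 1.0]])
Y = C_true @ F.T                                # noiseless dynamic sequence

# recover the factor images by least squares against the known factors
C_est = np.linalg.lstsq(F, Y.T, rcond=None)[0].T
```

The separation works here because the two TACs are linearly independent; when the kinetics are similar, the problem becomes ill-conditioned, which matches the paper's caveat that the kinetic behaviors must be distinct.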
NASA Technical Reports Server (NTRS)
Edgerton, A. T.; Trexler, D. T.; Sakamoto, S.; Jenkins, J. E.
1969-01-01
The field measurement program conducted at the NASA/USGS Southern California Test Site is discussed. Ground truth data and multifrequency microwave brightness data were acquired by a mobile field laboratory operating in conjunction with airborne instruments. The ground based investigations were performed at a number of locales representing a variety of terrains including open desert, cultivated fields, barren fields, portions of the San Andreas Fault Zone, and the Salton Sea. The measurements acquired ground truth data and microwave brightness data at wavelengths of 0.8 cm, 2.2 cm, and 21 cm.
Modeling Being "Lost": Imperfect Situation Awareness
NASA Technical Reports Server (NTRS)
Middleton, Victor E.
2011-01-01
Being "lost" is an exemplar of imperfect Situation Awareness/Situation Understanding (SA/SU) -- information/knowledge that is uncertain, incomplete, and/or just wrong. Being "lost" may be a geo-spatial condition - not knowing/being wrong about where to go or how to get there. More broadly, being "lost" can serve as a metaphor for uncertainty and/or inaccuracy - not knowing/being wrong about how one fits into a larger world view, what one wants to do, or how to do it. This paper discusses using agent based modeling (ABM) to explore imperfect SA/SU, simulating geo-spatially "lost" intelligent agents trying to navigate in a virtual world. Each agent has a unique "mental map" -- its idiosyncratic view of its geo-spatial environment. Its decisions are based on this idiosyncratic view, but behavior outcomes are based on ground truth. Consequently, the rate and degree to which an agent's expectations diverge from ground truth provide measures of that agent's SA/SU.
A Method for Assessing Ground-Truth Accuracy of the 5DCT Technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dou, Tai H., E-mail: tdou@mednet.ucla.edu; Thomas, David H.; O'Connell, Dylan P.
2015-11-15
Purpose: To develop a technique that assesses the accuracy of the breathing phase-specific volume image generation process by a patient-specific breathing motion model, using the original free-breathing computed tomographic (CT) scans as ground truth. Methods: Sixteen lung cancer patients underwent a previously published protocol in which 25 free-breathing fast helical CT scans were acquired with a simultaneous breathing surrogate. A patient-specific motion model was constructed based on the tissue displacements determined by a state-of-the-art deformable image registration. The first image was arbitrarily selected as the reference image. The motion model was used, along with the free-breathing phase information of the original 25 image datasets, to generate a set of deformation vector fields that mapped the reference image to the 24 nonreference images. The high-pitch helically acquired original scans served as ground truths because they captured the instantaneous tissue positions during free breathing. Image similarity between the simulated and the original scans was assessed using deformable registration that evaluated the pointwise discordance throughout the lungs. Results: Qualitative comparisons using image overlays showed excellent agreement between the simulated images and the original images. Even large 2-cm diaphragm displacements were very well modeled, as was sliding motion across the lung–chest wall boundary. The mean error across the patient cohort was 1.15 ± 0.37 mm, and the mean 95th percentile error was 2.47 ± 0.78 mm. Conclusion: The proposed ground truth–based technique provided voxel-by-voxel accuracy analysis that could identify organ-specific or tumor-specific motion modeling errors for treatment planning. Despite a large variety of breathing patterns and lung deformations during the free-breathing scanning session, the 5-dimensional CT technique was able to accurately reproduce the original helical CT scans, suggesting its applicability to a wide range of patients.
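The published motion model is patient-specific and registration-driven; as a simplified sketch of the general idea, the toy example below fits voxelwise coefficients that map a breathing surrogate (amplitude v and rate f) to tissue displacement by least squares. All sizes, signals, and coefficients are simulated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 25, 500

# Breathing surrogate at each of the 25 scans: amplitude v and rate f.
v = rng.random(n_scans)
f = rng.standard_normal(n_scans)

# Hypothetical ground-truth voxelwise model coefficients.
alpha = rng.standard_normal(n_voxels)
beta = 0.5 * rng.standard_normal(n_voxels)

# Displacements "observed" by deformable registration, with small noise.
d = np.outer(v, alpha) + np.outer(f, beta) \
    + 0.01 * rng.standard_normal((n_scans, n_voxels))

# Least-squares fit of the motion model d(t, x) = v(t)*alpha(x) + f(t)*beta(x).
S = np.column_stack([v, f])                   # (n_scans, 2) design matrix
coef, *_ = np.linalg.lstsq(S, d, rcond=None)  # (2, n_voxels) fitted coefficients

# The fitted model can synthesize a deformation field for any (v, f), e.g. to
# reproduce one of the original scans as a self-consistency check.
d_hat = S @ coef
```

With 25 scans per voxel the two coefficients are well constrained, which mirrors why the original free-breathing scans can serve as a ground truth for the simulated images.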
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammi, A; Weber, D; Lomax, A
2016-06-15
Purpose: In clinical pencil-beam-scanned (PBS) proton therapy, the advantage of the characteristic sharp dose fall-off after the Bragg Peak (BP) becomes a disadvantage if the BP positions of a plan’s constituent pencil beams are shifted, e.g., due to anatomical changes. Thus, for fractionated PBS proton therapy, accurate knowledge of the water equivalent path length (WEPL) of the traversed anatomy is critical. In this work we investigate the feasibility of using 2D proton range maps (proton radiography, PR) with the active-scanning gantry at PSI. Methods: We used Monte Carlo (MC) methods to simulate proton beam interactions in patients using clinical imaging data. We selected six head and neck cases having significant anatomical changes detected in per-treatment CTs. PRs (two, at 0° and 90°) were generated from MC simulations of low-dose pencil beams at 230 MeV. Each beam’s residual depth-dose was propagated through the patient geometry (from CT) and detected on exiting the patient anatomy in an ideal depth-resolved detector (e.g., a range telescope). Firstly, to validate the technique, proton radiographs were compared to the ground truth, which was the WEPL from ray-tracing in the patient CT at the pencil beam location. Secondly, WEPL difference maps (per-treatment – planning imaging timepoints) were then generated to locate the anatomical changes, both in the CT (ground truth) and in the PRs. Binomial classification was performed to evaluate the efficacy of the technique relative to CT. Results: Over the projections simulated for all six patients, 70%, 79% and 95% of the grid points agreed with the ground truth proton range to within ±0.5%, ±1%, and ±3% respectively. The sensitivity, specificity, precision and accuracy were high (mean±1σ: 83±8%, 87±13%, 95±10%, 83±7% respectively). Conclusion: We show that proton-based radiographic images can accurately monitor patient positioning and in vivo range verification, while providing equivalent WEPL information to a CT scan, with the advantage of a much lower imaging dose.
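The binomial classification figures quoted above follow from standard confusion-matrix definitions; a minimal helper is sketched below (the example counts are arbitrary, not the study's data).

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, precision, and accuracy from confusion counts,
    e.g. for scoring detected anatomical-change regions against a CT ground truth."""
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    precision = tp / (tp + fp)            # positive predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, precision, accuracy

# Arbitrary example counts for illustration.
sens, spec, prec, acc = binary_metrics(tp=8, fp=2, tn=9, fn=1)
```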
Relating ground truth collection to model sensitivity
NASA Technical Reports Server (NTRS)
Amar, Faouzi; Fung, Adrian K.; Karam, Mostafa A.; Mougin, Eric
1993-01-01
The importance of collecting high quality ground truth before a SAR mission over a forested area is twofold. First, the ground truth is used in the analysis and interpretation of the measured backscattering properties; second, it helps to justify the use of a scattering model to fit the measurements. Unfortunately, ground truth is often collected based on visual assessment of what is perceived to be important without regard to the mission itself. Sites are selected based on brief surveys of large areas, and the ground truth is collected by a process of selecting and grouping different scatterers. After the fact, it may turn out that some of the relevant parameters are missing. A three-layer canopy model based on the radiative transfer equations is used to determine, beforehand, the relevant parameters to be collected. Detailed analysis of the contribution to scattering and attenuation of various forest components is carried out. The goal is to identify the forest parameters which most influence the backscattering as a function of frequency (P-, L-, and C-bands) and incident angle. The influence on backscattering and attenuation of branch diameters, lengths, angular distribution, and permittivity; trunk diameters, lengths, and permittivity; and needle sizes, their angular distribution, and permittivity are studied in order to maximize the efficiency of the ground truth collection efforts. Preliminary results indicate that while a scatterer may not contribute to the total backscattering, its contribution to attenuation may be significant depending on the frequency.
A photogrammetric technique for generation of an accurate multispectral optical flow dataset
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2017-06-01
The availability of an accurate dataset is a key requirement for the successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets were developed in recent years and gave rise to many powerful algorithms. However, most of the datasets include only images captured in the visible spectrum. This paper is focused on the creation of a multispectral optical flow dataset with an accurate ground truth. The generation of an accurate ground truth optical flow is a rather complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modelling, or laser scanning. Such techniques either work only with synthetic optical flow or provide only a sparse ground truth optical flow. In this paper a new photogrammetric method for generation of an accurate ground truth optical flow is proposed. The method combines the accuracy and density of synthetic optical flow datasets with the flexibility of laser-scanning-based techniques. A multispectral dataset including various image sequences was generated using the developed method. The dataset is freely available on the accompanying web site.
A Technique for Generating Volumetric Cine-Magnetic Resonance Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Wendy; Ren, Lei, E-mail: lei.ren@duke.edu; Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina
Purpose: The purpose of this study was to develop a technique to generate on-board volumetric cine-magnetic resonance imaging (VC-MRI) using patient prior images, motion modeling, and on-board 2-dimensional cine MRI. Methods and Materials: One phase of a 4-dimensional MRI acquired during patient simulation is used as patient prior images. Three major respiratory deformation patterns of the patient are extracted from 4-dimensional MRI based on principal-component analysis. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI. The deformation field is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by the data fidelity constraint using the acquired on-board single 2-dimensional cine MRI. The method was evaluated using both digital extended-cardiac torso (XCAT) simulation of lung cancer patients and MRI data from 4 real liver cancer patients. The accuracy of the estimated VC-MRI was quantitatively evaluated using volume-percent-difference (VPD), center-of-mass-shift (COMS), and target tracking errors. Effects of acquisition orientation, region-of-interest (ROI) selection, patient breathing pattern change, and noise on the estimation accuracy were also evaluated. Results: Image subtraction of ground-truth with estimated on-board VC-MRI shows fewer differences than image subtraction of ground-truth with prior image. Agreement between normalized profiles in the estimated and ground-truth VC-MRI was achieved with less than 6% error for both XCAT and patient data. Among all XCAT scenarios, the VPD between ground-truth and estimated lesion volumes was, on average, 8.43 ± 1.52% and the COMS was, on average, 0.93 ± 0.58 mm across all time steps for estimation based on the ROI region in the sagittal cine images. Matching to ROI in the sagittal view achieved better accuracy when there was substantial breathing pattern change. The technique was robust against noise levels up to SNR = 20. For patient data, average tracking errors were less than 2 mm in all directions for all patients. Conclusions: Preliminary studies demonstrated the feasibility of generating real-time VC-MRI for on-board localization of moving targets in radiation therapy.
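The coefficient-solving step described above (a deformation field expressed as a linear combination of a few principal deformation patterns, with the coefficients recovered from a single 2D slice) can be sketched with a noiseless toy example. The grid sizes and "deformation patterns" here are invented; the real method derives the patterns from 4-dimensional MRI and includes a data-fidelity term on image intensities rather than on the field itself.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nz, k = 16, 16, 16, 3

# Three hypothetical principal deformation patterns (one displacement
# component per voxel, flattened), standing in for PCA modes of a 4D-MRI.
P = rng.standard_normal((k, nx * ny * nz))

# "True" on-board state: an unknown combination of the patterns.
w_true = np.array([1.2, -0.4, 0.7])
field = (w_true @ P).reshape(nx, ny, nz)

# Data fidelity uses only a single 2D sagittal slice (x = nx // 2),
# mimicking the single on-board cine-MRI slice.
sl = field[nx // 2].ravel()
P_slice = P.reshape(k, nx, ny, nz)[:, nx // 2].reshape(k, -1)

# Solve for the coefficients from the slice alone, then reconstruct
# the full volumetric deformation from the recovered coefficients.
w_est, *_ = np.linalg.lstsq(P_slice.T, sl, rcond=None)
field_est = (w_est @ P).reshape(nx, ny, nz)
```

Because only k coefficients are unknown, one slice vastly overdetermines them, which is the intuition behind estimating a volume from a single on-board 2D acquisition.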
NASA Technical Reports Server (NTRS)
Koshak, William; Solakiewicz, Richard
2013-01-01
An analytic perturbation method is introduced for estimating the lightning ground flash fraction in a set of N lightning flashes observed by a satellite lightning mapper. The value of N is large, typically in the thousands, and the observations consist of the maximum optical group area produced by each flash. The method is tested using simulated observations that are based on Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS) data. National Lightning Detection Network™ (NLDN) data is used to determine the flash type (ground or cloud) of the satellite-observed flashes, and provides the ground flash fraction truth for the simulation runs. It is found that the mean ground flash fraction retrieval errors are below 0.04 across the full range 0-1 under certain simulation conditions. In general, it is demonstrated that the retrieval errors depend on many factors (e.g., the number, N, of satellite observations, the magnitude of random and systematic measurement errors, and the number of samples used to form certain climate distributions employed in the model).
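The authors' analytic perturbation method is not reproduced here; as a simpler illustration of retrieving a mixture fraction from one observable per flash, the method-of-moments sketch below uses invented population distributions for the maximum optical group area of ground and cloud flashes (the gamma shapes and mean values are assumptions, not OTD/LIS statistics).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population means of max optical group area (arbitrary units)
# for ground and cloud flashes, playing the role of climate distributions.
mu_g, mu_c = 350.0, 150.0

# Simulate N satellite-observed flashes with true ground flash fraction alpha.
N, alpha_true = 5000, 0.3
is_ground = rng.random(N) < alpha_true
area = np.where(is_ground,
                rng.gamma(4.0, mu_g / 4.0, N),   # ground-flash areas
                rng.gamma(4.0, mu_c / 4.0, N))   # cloud-flash areas

# Method-of-moments retrieval: the observed mean is a mixture of the two
# population means, so alpha = (mean - mu_c) / (mu_g - mu_c).
alpha_est = (area.mean() - mu_c) / (mu_g - mu_c)
```

The retrieval error shrinks as N grows, consistent with the abstract's observation that accuracy depends on the number of satellite observations and on how well the underlying climate distributions are known.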
NASA Ground-Truthing Capabilities Demonstrated
NASA Technical Reports Server (NTRS)
Lopez, Isaac; Seibert, Marc A.
2004-01-01
NASA Research and Education Network (NREN) ground truthing is a method of verifying the scientific validity of satellite images and clarifying irregularities in the imagery. Ground-truthed imagery can be used to locate geological compositions of interest for a given area. On Mars, astronaut scientists could ground truth satellite imagery from the planet surface and then pinpoint optimum areas to explore. These astronauts would be able to ground truth imagery, get results back, and use the results during extravehicular activity without returning to Earth to process the data from the mission. NASA's first ground-truthing experiment, performed on June 25 in the Utah desert, demonstrated the ability to extend powerful computing resources to remote locations. Designed by Dr. Richard Beck of the Department of Geography at the University of Cincinnati, who is serving as the lead field scientist, and assisted by Dr. Robert Vincent of Bowling Green State University, the demonstration also involved researchers from the NASA Glenn Research Center and the NASA Ames Research Center, who worked with the university field scientists to design, perform, and analyze results of the experiment. In the demonstration, real-time Hyperion satellite imagery is sent to a mass storage facility, while scientists at a remote (Utah) site upload ground spectra to a second mass storage facility. The grid pulls data from both mass storage facilities and performs up to 64 simultaneous band ratio conversions on the data. Moments later, the results from the grid are accessed by local scientists and sent directly to the remote science team. The results are used by the remote science team to locate and explore new critical compositions of interest. The process can be repeated as required to continue to validate the data set or to converge on alternate geophysical areas of interest.
Morris, Alan; Burgon, Nathan; McGann, Christopher; MacLeod, Robert; Cates, Joshua
2013-01-01
Radiofrequency ablation is a promising procedure for treating atrial fibrillation (AF) that relies on accurate lesion delivery in the left atrial (LA) wall for success. Late Gadolinium Enhancement MRI (LGE MRI) at three months post-ablation has proven effective for noninvasive assessment of the location and extent of scar formation, which are important factors for predicting patient outcome and planning of redo ablation procedures. We have developed an algorithm for automatic classification in LGE MRI of scar tissue in the LA wall and have evaluated accuracy and consistency compared to manual scar classifications by expert observers. Our approach clusters voxels based on normalized intensity and was chosen through a systematic comparison of the performance of multivariate clustering on many combinations of image texture. Algorithm performance was determined by overlap with ground truth, using multiple overlap measures, and the accuracy of the estimation of the total amount of scar in the LA. Ground truth was determined using the STAPLE algorithm, which produces a probabilistic estimate of the true scar classification from multiple expert manual segmentations. Evaluation of the ground truth data set was based on both inter- and intra-observer agreement, with variation among expert classifiers indicating the difficulty of scar classification for a given a dataset. Our proposed automatic scar classification algorithm performs well for both scar localization and estimation of scar volume: for ground truth datasets considered easy, variability from the ground truth was low; for those considered difficult, variability from ground truth was on par with the variability across experts. PMID:24236224
NASA Astrophysics Data System (ADS)
Perry, Daniel; Morris, Alan; Burgon, Nathan; McGann, Christopher; MacLeod, Robert; Cates, Joshua
2012-03-01
Radiofrequency ablation is a promising procedure for treating atrial fibrillation (AF) that relies on accurate lesion delivery in the left atrial (LA) wall for success. Late Gadolinium Enhancement MRI (LGE MRI) at three months post-ablation has proven effective for noninvasive assessment of the location and extent of scar formation, which are important factors for predicting patient outcome and planning of redo ablation procedures. We have developed an algorithm for automatic classification in LGE MRI of scar tissue in the LA wall and have evaluated accuracy and consistency compared to manual scar classifications by expert observers. Our approach clusters voxels based on normalized intensity and was chosen through a systematic comparison of the performance of multivariate clustering on many combinations of image texture. Algorithm performance was determined by overlap with ground truth, using multiple overlap measures, and the accuracy of the estimation of the total amount of scar in the LA. Ground truth was determined using the STAPLE algorithm, which produces a probabilistic estimate of the true scar classification from multiple expert manual segmentations. Evaluation of the ground truth data set was based on both inter- and intra-observer agreement, with variation among expert classifiers indicating the difficulty of scar classification for a given a dataset. Our proposed automatic scar classification algorithm performs well for both scar localization and estimation of scar volume: for ground truth datasets considered easy, variability from the ground truth was low; for those considered difficult, variability from ground truth was on par with the variability across experts.
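Overlap with ground truth, as used in the evaluation above, is commonly quantified with the Dice coefficient; a minimal sketch follows (the masks are toy examples, not LGE MRI segmentations).

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((8, 8), bool)
auto[2:6, 2:6] = True      # hypothetical automatic scar mask (16 px)
truth = np.zeros((8, 8), bool)
truth[3:7, 2:6] = True     # hypothetical consensus (e.g. STAPLE) mask (16 px)

score = dice(auto, truth)  # 12 px overlap, 16 + 16 px total -> 0.75
```

Reporting Dice alongside a volume error, as the study does, guards against the case where two masks have similar total size but poor spatial agreement.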
NASA Technical Reports Server (NTRS)
Freeman, Anthony; Dubois, Pascale; Leberl, Franz; Norikane, L.; Way, Jobea
1991-01-01
Viewgraphs on Geographic Information System for fusion and analysis of high-resolution remote sensing and ground truth data are presented. Topics covered include: scientific objectives; schedule; and Geographic Information System.
Zito, Felicia; De Bernardi, Elisabetta; Soffientini, Chiara; Canzi, Cristina; Casati, Rosangela; Gerundini, Paolo; Baselli, Giuseppe
2012-09-01
In recent years, segmentation algorithms and activity quantification methods have been proposed for oncological (18)F-fluorodeoxyglucose (FDG) PET. A full assessment of these algorithms, necessary for a clinical transfer, requires a validation on data sets provided with a reliable ground truth as to the imaged activity distribution, which must be as realistic as possible. The aim of this work is to propose a strategy to simulate lesions of uniform uptake and irregular shape in an anthropomorphic phantom, with the possibility to easily obtain a ground truth as to lesion activity and borders. Lesions were simulated with samples of clinoptilolite, a family of natural zeolites of irregular shape, able to absorb aqueous solutions of (18)F-FDG, available in a wide size range, and nontoxic. Zeolites were soaked in solutions of (18)F-FDG for increasing times up to 120 min and their absorptive properties were characterized as function of soaking duration, solution concentration, and zeolite dry weight. Saturated zeolites were wrapped in Parafilm, positioned inside an Alderson thorax-abdomen phantom and imaged with a PET-CT scanner. The ground truth for the activity distribution of each zeolite was obtained by segmenting high-resolution finely aligned CT images, on the basis of independently obtained volume measurements. The fine alignment between CT and PET was validated by comparing the CT-derived ground truth to a set of zeolites' PET threshold segmentations in terms of Dice index and volume error. The soaking time necessary to achieve saturation increases with zeolite dry weight, with a maximum of about 90 min for the largest sample. At saturation, a linear dependence of the uptake normalized to the solution concentration on zeolite dry weight (R(2) = 0.988), as well as a uniform distribution of the activity over the entire zeolite volume from PET imaging were demonstrated. 
These findings indicate that the (18)F-FDG solution is able to saturate the zeolite pores and that the concentration does not influence the distribution uniformity of both solution and solute, at least at the trace concentrations used for zeolite activation. An additional proof of uniformity of zeolite saturation was obtained by observing a correspondence between uptake and adsorbed volume of solution, corresponding to about 27.8% of zeolite volume. Regarding the ground truth for zeolites positioned inside the phantom, the segmentation of finely aligned CT images provided reliable borders, as demonstrated by a mean absolute volume error of 2.8% with respect to the PET threshold segmentation corresponding to the maximum Dice. The proposed methodology yielded an experimental phantom data set that can be used to test and validate quantification and segmentation algorithms for PET in oncology. The phantom is currently under consideration for inclusion in a benchmark designed by AAPM TG211, which will be available to the community to evaluate PET automatic segmentation methods.
NASA Technical Reports Server (NTRS)
Anikouchine, W. A. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Radiance profiles drawn along cruise tracks have been examined for use in correlating digital radiance levels with ground truth data. Preliminary examination results are encouraging. Adding weighted levels from the 4 MSS bands appears to enhance specular surface reflections while rendering sensor noise white. Comparing each band signature to the added specular signature ought to enhance non-specular effects caused by ocean turbidity. Preliminary examination of radiance profiles and ground truth turbidity measurements revealed substantial correlation.
Isolated effect of geometry on mitral valve function for in silico model development.
Siefert, Andrew William; Rabbah, Jean-Pierre Michel; Saikrishnan, Neelakantan; Kunzelman, Karyn Susanne; Yoganathan, Ajit Prithivaraj
2015-01-01
Computational models for the heart's mitral valve (MV) exhibit several uncertainties that may be reduced by further developing these models using ground-truth data-sets. This study generated a ground-truth data-set by quantifying the effects of isolated mitral annular flattening, symmetric annular dilatation, symmetric papillary muscle (PM) displacement and asymmetric PM displacement on leaflet coaptation, mitral regurgitation (MR) and anterior leaflet strain. MVs were mounted in an in vitro left heart simulator and tested under pulsatile haemodynamics. Mitral leaflet coaptation length, coaptation depth, tenting area, MR volume, MR jet direction and anterior leaflet strain in the radial and circumferential directions were successfully quantified at increasing levels of geometric distortion. From these data, increase in the levels of isolated PM displacement resulted in the greatest mean change in coaptation depth (70% increase), tenting area (150% increase) and radial leaflet strain (37% increase) while annular dilatation resulted in the largest mean change in coaptation length (50% decrease) and regurgitation volume (134% increase). Regurgitant jets were centrally located for symmetric annular dilatation and symmetric PM displacement. Asymmetric PM displacement resulted in asymmetrically directed jets. Peak changes in anterior leaflet strain in the circumferential direction were smaller and exhibited non-significant differences across the tested conditions. When used together, this ground-truth data-set may be used to parametrically evaluate and develop modelling assumptions for both the MV leaflets and subvalvular apparatus. This novel data may improve MV computational models and provide a platform for the development of future surgical planning tools.
A Ground Truthing Method for AVIRIS Overflights Using Canopy Absorption Spectra
NASA Technical Reports Server (NTRS)
Gamon, John A.; Serrano, Lydia; Roberts, Dar A.; Ustin, Susan L.
1996-01-01
Remote sensing for ecological field studies requires ground truthing for accurate interpretation of remote imagery. However, traditional vegetation sampling methods are time consuming and hard to relate to the scale of an AVIRIS scene. The large errors associated with manual field sampling, the contrasting formats of remote and ground data, and problems with coregistration of field sites with AVIRIS pixels can lead to difficulties in interpreting AVIRIS data. As part of a larger study of fire risk in the Santa Monica Mountains of southern California, we explored a ground-based optical method of sampling vegetation using spectrometers mounted both above and below vegetation canopies. The goal was to use optical methods to provide a rapid, consistent, and objective means of "ground truthing" that could be related both to AVIRIS imagery and to conventional ground sampling (e.g., plot harvests and pigment assays).
Van Hecke, Wim; Sijbers, Jan; De Backer, Steve; Poot, Dirk; Parizel, Paul M; Leemans, Alexander
2009-07-01
Although many studies are starting to use voxel-based analysis (VBA) methods to compare diffusion tensor images between healthy and diseased subjects, it has been demonstrated that VBA results depend heavily on parameter settings and implementation strategies, such as the applied coregistration technique, smoothing kernel width, statistical analysis, etc. In order to investigate the effect of different parameter settings and implementations on the accuracy and precision of the VBA results quantitatively, ground truth knowledge regarding the underlying microstructural alterations is required. To address the lack of such a gold standard, simulated diffusion tensor data sets are developed, which can model an array of anomalies in the diffusion properties of a predefined location. These data sets can be employed to evaluate the numerous parameters that characterize the pipeline of a VBA algorithm and to compare the accuracy, precision, and reproducibility of different post-processing approaches quantitatively. We are convinced that the use of these simulated data sets can improve the understanding of how different diffusion tensor image post-processing techniques affect the outcome of VBA. In turn, this may possibly lead to a more standardized and reliable evaluation of diffusion tensor data sets of large study groups with a wide range of white matter altering pathologies. The simulated DTI data sets will be made available online (http://www.dti.ua.ac.be).
Tian, Jing; Varga, Boglarka; Tatrai, Erika; Fanni, Palya; Somfai, Gabor Mark; Smiddy, William E.
2016-01-01
Over the past two decades a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexities of the majority of widely available retinal features observed in clinical settings. In addition, there does not exist an appropriate OCT dataset with ground truth that reflects the realities of everyday retinal features observed in clinical settings. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, validation of segmentation algorithms has usually been performed by comparison with manual labelings from each study, and there has been a lack of a common ground truth. Therefore, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research-oriented tools for automated segmentation of the retinal tissue on OCT images. It also evaluates and compares the performance of these software tools with a common ground truth. PMID:27159849
NASA Astrophysics Data System (ADS)
Bonaccorsi, Rosalba; Stoker, Carol R.
2008-10-01
Science results from a field-simulated lander payload and post-mission laboratory investigations provided "ground truth" to interpret remote science observations made as part of the 2005 Mars Astrobiology Research and Technology Experiment (MARTE) drilling mission simulation. The experiment was successful in detecting evidence for life, habitability, and preservation potential of organics in a relevant astrobiological analogue of Mars. Science results. Borehole 7 was drilled near the Río Tinto headwaters at Peña de Hierro (Spain) in the upper oxidized remnant of an acid rock drainage system. Analysis of 29 cores (215 cm of core was recovered from 606 cm penetrated depth) revealed a matrix of goethite- (42-94%) and hematite-rich (47-87%) rocks with pockets of phyllosilicates (47-74%) and fine- to coarse-grained loose material. Post-mission X-ray diffraction (XRD) analysis confirmed the range of hematite:goethite mixtures that were visually recognizable (˜1:1, ˜1:2, and ˜1:3 mixtures displayed a yellowish-red color whereas 3:1 mixtures displayed a dark reddish-brown color). Organic carbon was poorly preserved in hematite/goethite-rich materials (Corg <0.05 wt %) beneath the biologically active organic-rich soil horizon (Corg ˜3-11 wt %) in contrast to the phyllosilicate-rich zones (Corg ˜0.23 wt %). Ground truth vs. remote science analysis. Laboratory-based analytical results were compared to the analyses obtained by a Remote Science Team (RST) using a blind protocol. Ferric iron phases, lithostratigraphy, and inferred geologic history were correctly identified by the RST with the exception of phyllosilicate-rich materials that were misinterpreted as weathered igneous rock. Adenosine 5′-triphosphate (ATP) luminometry, a tool available to the RST, revealed ATP amounts above background noise, i.e., 278-876 Relative Luminosity Units (RLUs) in only 6 cores, whereas organic carbon was detected in all cores. Our manned vs. remote observations based on automated acquisitions during the project provide insights for the preparation of future astrobiology-driven Mars missions.
Bonaccorsi, Rosalba; Stoker, Carol R
2008-10-01
Science results from a field-simulated lander payload and post-mission laboratory investigations provided "ground truth" to interpret remote science observations made as part of the 2005 Mars Astrobiology Research and Technology Experiment (MARTE) drilling mission simulation. The experiment was successful in detecting evidence for life, habitability, and preservation potential of organics in a relevant astrobiological analogue of Mars. SCIENCE RESULTS: Borehole 7 was drilled near the Río Tinto headwaters at Peña de Hierro (Spain) in the upper oxidized remnant of an acid rock drainage system. Analysis of 29 cores (215 cm of core was recovered from 606 cm penetrated depth) revealed a matrix of goethite- (42-94%) and hematite-rich (47-87%) rocks with pockets of phyllosilicates (47-74%) and fine- to coarse-grained loose material. Post-mission X-ray diffraction (XRD) analysis confirmed the range of hematite:goethite mixtures that were visually recognizable (approximately 1:1, approximately 1:2, and approximately 1:3 mixtures displayed a yellowish-red color whereas 3:1 mixtures displayed a dark reddish-brown color). Organic carbon was poorly preserved in hematite/goethite-rich materials (C(org) <0.05 wt %) beneath the biologically active organic-rich soil horizon (C(org) approximately 3-11 wt %) in contrast to the phyllosilicate-rich zones (C(org) approximately 0.23 wt %). GROUND TRUTH VS. REMOTE SCIENCE ANALYSIS: Laboratory-based analytical results were compared to the analyses obtained by a Remote Science Team (RST) using a blind protocol. Ferric iron phases, lithostratigraphy, and inferred geologic history were correctly identified by the RST with the exception of phyllosilicate-rich materials that were misinterpreted as weathered igneous rock. 
Adenosine 5'-triphosphate (ATP) luminometry, a tool available to the RST, revealed ATP amounts above background noise, i.e., 278-876 Relative Luminosity Units (RLUs) in only 6 cores, whereas organic carbon was detected in all cores. Our manned vs. remote observations based on automated acquisitions during the project provide insights for the preparation of future astrobiology-driven Mars missions.
Karnowski, Thomas P; Govindasamy, V; Tobin, Kenneth W; Chaum, Edward; Abramoff, M D
2008-01-01
In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data were used to create post-processing filters to separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance blobs and true lesions were achieved on two data sets of 86 and 1296 images.
As-built design specification for segment map (Sgmap) program
NASA Technical Reports Server (NTRS)
Tompkins, M. A. (Principal Investigator)
1981-01-01
The segment map program (SGMAP), which is part of the CLASFYT package, is described in detail. This program is designed to output symbolic maps or numerical dumps from LANDSAT cluster/classification files or aircraft ground truth/processed ground truth files which are in 'universal' format.
Investigations using data from LANDSAT 2. [Bangladesh
NASA Technical Reports Server (NTRS)
Hossain, A. (Principal Investigator)
1978-01-01
The author has identified the following significant results. Ground truth data collected in coastal areas confirm the sedimentation base line at depths of five fathoms and less. Forestry ground truth at Supati in the Sunderbans was found to conform with stratifications in aerial photos and in some satellite imagery.
Satellite markers: a simple method for ground truth car pose on stereo video
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco
2018-04-01
Predicting the future location of other cars is a must for advanced safety systems. The remote estimation of car pose, and particularly of its heading angle, is key to predicting its future location. Stereo vision systems make it possible to recover the 3D information of a scene. Ground truth in this specific context is associated with referential information about the depth, shape, and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is a method to generate ground truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system when it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for the 3D ground truth generation. In our case study, we focus on accurate car heading angle estimation of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, and the instantaneous spatial orientation for each camera at frame level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J; Gu, X; Lu, W
Purpose: A novel distance-dose weighting method for label fusion was developed to increase segmentation accuracy in dosimetrically important regions for prostate radiation therapy. Methods: Label fusion as implemented in the original SIMPLE (OS) for multi-atlas segmentation relies iteratively on a majority vote to generate an estimated ground truth and on a DICE similarity measure to screen candidates. The proposed distance-dose weighting puts more weight on dosimetrically important regions when calculating the similarity measure. Specifically, we introduced the distance-to-dose error (DDE), which converts distance to dosimetric importance, into the performance evaluation. The DDE estimates the dose error (DE) from surface distance differences between the candidate and the estimated ground truth label by multiplying them by a regression coefficient. To determine the coefficient at each simulation point on the rectum, we fitted the DE with respect to simulated voxel shift. The DEs were calculated by the multi-OAR geometry-dosimetry training model previously developed in our research group. Results: For both the OS and the distance-dose weighted SIMPLE (WS) results, the evaluation metrics for twenty patients were calculated using the ground truth segmentation. The mean differences in DICE, Hausdorff distance, and mean absolute distance (MAD) between OS and WS were 0, 0.10, and 0.11, respectively. For the partial MAD of WS, which calculates MAD within a certain PTV expansion voxel distance, lower MADs than those of OS were observed at the closer distances from 1 to 8. The DE results showed that the segmentation from WS produced more accurate results than OS. The mean DE errors of V75, V70, V65, and V60 were decreased by 1.16%, 1.17%, 1.14%, and 1.12%, respectively. Conclusion: We have demonstrated that the method can increase the segmentation accuracy in rectum regions adjacent to the PTV. As a result, segmentation using WS showed improved dosimetric accuracy over OS. The WS will provide a dosimetrically important label selection strategy in multi-atlas segmentation. CPRIT grant RP150485.
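As a reading aid, the candidate-screening step at the heart of SIMPLE-style label fusion (majority vote to estimate a ground truth, DICE similarity to screen candidates) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code; the function names are ours:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DICE similarity coefficient between two binary label masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks are identical by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

def majority_vote(labels: list) -> np.ndarray:
    """Estimated ground truth: a voxel is foreground if most candidates agree."""
    stacked = np.stack([l.astype(bool) for l in labels])
    return stacked.sum(axis=0) > (len(labels) / 2)
```

In the OS loop, candidates whose DICE against the vote falls below a threshold are dropped and the vote is recomputed; the paper's WS variant reweights this similarity by dosimetric importance.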
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Chen, H; Mutic, S
Purpose: A persistent challenge for the quality assessment of radiation therapy treatments (e.g. contouring accuracy) is the absence of a known ground truth for patient data. Moreover, assessment results are often patient-dependent. Computer simulation studies utilizing numerical phantoms can be performed for quality assessment with a known ground truth. However, previously reported numerical phantoms do not include the statistical properties of inter-patient variations, as their models are based on only one patient. In addition, these models do not incorporate tumor data. In this study, a methodology was developed for generating numerical phantoms which encapsulate the statistical variations of patients within radiation therapy, including tumors. Methods: Based on previous work in contouring assessment, geometric attribute distribution (GAD) models were employed to model both the deterministic and stochastic properties of individual organs via principal component analysis. Using pre-existing radiation therapy contour data, the GAD models are trained to model the shape and centroid distributions of each organ. Then, organs with different shapes and positions can be generated by assigning statistically sound weights to the GAD model parameters. Organ contour data from 20 retrospective prostate patient cases were manually extracted and utilized to train the GAD models. As a demonstration, computer-simulated CT images of generated numerical phantoms were calculated and assessed subjectively and objectively for realism. Results: A cohort of numerical phantoms of the male human pelvis was generated. CT images were deemed realistic both subjectively and objectively in terms of the image noise power spectrum. Conclusion: A methodology has been developed to generate realistic numerical anthropomorphic phantoms using pre-existing radiation therapy data. The GAD models guarantee that generated organs span the statistical distribution of observed radiation therapy patients, according to the training dataset. The methodology enables radiation therapy treatment assessment with multi-modality imaging and a known ground truth, and without patient-dependent bias.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, C; Yin, F; Harris, W
Purpose: To develop a technique generating ultrafast on-board VC-MRI using prior 4D-MRI and on-board phase-skipped encoding k-space acquisition for real-time 3D target tracking in liver and lung radiotherapy. Methods: The end-of-expiration (EOE) volume in 4D-MRI acquired during simulation was selected as the prior volume. Three major respiratory deformation patterns were extracted through principal component analysis of the deformation field maps (DFMs) generated between EOE and all other phases. The on-board VC-MRI at each instant was considered as a deformation of the prior volume, and the deformation was modeled as a linear combination of the 3 extracted major deformation patterns. To solve for the weighting coefficients of the 3 major patterns, a 2D slice was extracted from the VC-MRI volume to match with the 2D on-board sampling data, which was generated by 8-fold phase skipped-encoding k-space acquisition (i.e., sampling 1 phase-encoding line out of every 8 lines) to achieve an ultrafast 16-24 volumes/s frame rate. The method was evaluated using the XCAT digital phantom to simulate lung cancer patients. The 3D volume of the end-of-inhalation (EOI) phase on the treatment day was used as the ground-truth on-board VC-MRI, with simulated changes in 1) breathing amplitude and 2) breathing amplitude/phase from the simulation day. A liver cancer patient case was evaluated for in-vivo feasibility demonstration. Results: The comparison between ground truth and estimated on-board VC-MRI shows good agreement. In the XCAT study with changed breathing amplitude, the volume percent difference (VPD) between ground-truth and estimated tumor volumes at EOI was 6.28% and the center-of-mass shift (COMS) was 0.82mm; with changed breathing amplitude and phase, the VPD was 8.50% and the COMS was 0.54mm. The study of the liver patient case also demonstrated promising in vivo feasibility of the proposed method. Conclusion: Preliminary results suggest the feasibility of estimating ultrafast VC-MRI for on-board target localization with phase skipped-encoding k-space acquisition. Research grant from NIH R01-184173.
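The deformation model described above (principal component analysis of deformation field maps, then a least-squares solve for the pattern weights against sparsely sampled on-board data) can be sketched as follows. This is a simplified illustration on flattened arrays, with our own function names; it ignores the 2D slice extraction and k-space details of the actual method:

```python
import numpy as np

def principal_deformations(dfms: np.ndarray, k: int = 3):
    """PCA of deformation field maps (each row is a flattened DFM).
    Returns the mean DFM and the k major deformation patterns."""
    mean = dfms.mean(axis=0)
    centered = dfms - mean
    # SVD gives the principal components without forming a covariance matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def fit_weights(components: np.ndarray, mean: np.ndarray,
                observed: np.ndarray, sample_idx: np.ndarray) -> np.ndarray:
    """Least-squares weights so that mean + w @ components matches the
    sparsely sampled on-board data at the sampled entries only."""
    A = components[:, sample_idx].T          # (n_samples, k)
    b = observed - mean[sample_idx]
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```

The estimated on-board volume is then the prior volume warped by `mean + w @ components` reshaped back to a deformation field.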
Detailed analysis of CAMS procedures for phase 3 using ground truth inventories
NASA Technical Reports Server (NTRS)
Carnes, J. G.
1979-01-01
The results of a study of Procedure 1 as used during LACIE Phase 3 are presented. The study was performed by comparing the Procedure 1 classification results with digitized ground-truth inventories. The proportion estimation accuracy, dot labeling accuracy, and clustering effectiveness are discussed.
Ground Truth Studies. Teacher Handbook. Second Edition.
ERIC Educational Resources Information Center
Boyce, Jesse; And Others
Ground Truth Studies is an interdisciplinary activity-based program that draws on the broad range of sciences that make up the study of global change and the complementary technology of remote sensing. It integrates local environmental issues with global change topics, such as the greenhouse effect, loss of biological diversity, and ozone…
Rossi, Marcel M; Alderson, Jacqueline; El-Sallam, Amar; Dowling, James; Reinbolt, Jeffrey; Donnelly, Cyril J
2016-12-08
The aims of this study were to: (i) establish a new criterion method to validate inertia tensor estimates by setting the experimental angular velocity data of an airborne object as ground truth against simulations run with the estimated tensors, and (ii) test the sensitivity of the simulations to changes in the inertia tensor components. A rigid steel cylinder was covered with reflective kinematic markers and projected through a calibrated motion capture volume. Simulations of the airborne motion were run with two models, using inertia tensors estimated with a geometric formula or the compound pendulum technique. The deviation angles between experimental (ground truth) and simulated angular velocity vectors and the root mean squared deviation angle were computed for every simulation. Monte Carlo analyses were performed to assess the sensitivity of simulations to changes in magnitude of principal moments of inertia within ±10% and to changes in orientation of principal axes of inertia within ±10° (of the geometric-based inertia tensor). Root mean squared deviation angles ranged between 2.9° and 4.3° for the inertia tensor estimated geometrically, and between 11.7° and 15.2° for the compound pendulum values. Errors up to 10% in magnitude of principal moments of inertia yielded root mean squared deviation angles ranging between 3.2° and 6.6°, and between 5.5° and 7.9° when lumped with errors of 10° in principal axes of inertia orientation. The proposed technique can effectively validate inertia tensors from novel estimation methods of body segment inertial parameters. Principal axes of inertia orientation should not be neglected when modelling human/animal mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
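The criterion metric used here, the deviation angle between experimental and simulated angular velocity vectors and its root mean square, is straightforward to compute. A minimal NumPy sketch (our own naming, assuming one 3D vector per time step in each row):

```python
import numpy as np

def deviation_angles(omega_exp: np.ndarray, omega_sim: np.ndarray) -> np.ndarray:
    """Angle in degrees between paired angular velocity vectors,
    one pair per time step (rows of the two (T, 3) arrays)."""
    dots = np.einsum('ij,ij->i', omega_exp, omega_sim)
    norms = np.linalg.norm(omega_exp, axis=1) * np.linalg.norm(omega_sim, axis=1)
    cosang = np.clip(dots / norms, -1.0, 1.0)  # clip guards against rounding
    return np.degrees(np.arccos(cosang))

def rms_deviation(angles_deg: np.ndarray) -> float:
    """Root mean squared deviation angle over the simulation."""
    return float(np.sqrt(np.mean(angles_deg ** 2)))
```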
NASA Technical Reports Server (NTRS)
Joyce, A. T.
1978-01-01
Procedures for gathering ground truth information for a supervised approach to a computer-implemented land cover classification of LANDSAT acquired multispectral scanner data are provided in a step by step manner. Criteria for determining size, number, uniformity, and predominant land cover of training sample sites are established. Suggestions are made for the organization and orientation of field team personnel, the procedures used in the field, and the format of the forms to be used. Estimates are made of the probable expenditures in time and costs. Examples of ground truth forms and definitions and criteria of major land cover categories are provided in appendixes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porras-Chaverri, M; University of Costa Rica, San Jose; Galavis, P
Purpose: Evaluate mammographic mean glandular dose (MGD) coefficients for particular known tissue distributions using a novel formalism that incorporates the effect of the heterogeneous glandular tissue distribution, by comparing them with MGD coefficients derived from the corresponding anthropomorphic computer breast phantom. Methods: MGD coefficients were obtained using MCNP5 simulations with the currently used homogeneous assumption and the heterogeneously-layered breast (HLB) geometry, and compared against those from the computer phantom (ground truth). The tissue distribution for the HLB geometry was estimated using glandularity map image pairs corrected for the presence of non-glandular fibrous tissue. Heterogeneity of the tissue distribution was quantified using the glandular tissue distribution index, Idist. The phantom had a 5 cm compressed breast thickness (MLO and CC views) and 29% whole breast glandular percentage. Results: Differences as high as 116% were found between the MGD coefficients with the homogeneous breast core assumption and those from the corresponding ground truth. Higher differences were found for cases with a more heterogeneous distribution of glandular tissue. The Idist for all cases was in the [−0.8, +0.3] range. The use of the methods presented in this work results in better agreement with ground truth, with an improvement as high as 105 pp. The decrease in difference across all phantom cases was in the [9, 105] pp range, dependent on the distribution of glandular tissue, and was larger for the cases with the highest Idist values. Conclusion: Our results suggest that the use of corrected glandularity image pairs, as well as the HLB geometry, improves the estimates of MGD conversion coefficients by accounting for the distribution of glandular tissue within the breast. The accuracy of this approach with respect to ground truth is highly dependent on the particular glandular tissue distribution studied. Predrag Bakic discloses current funding from NIH, NSF, and DoD, former funding from Real Time Tomography, LLC, and a current research collaboration with Barco and Hologic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markel, D; Levesque, I R.; Larkin, J
Purpose: To produce multi-modality compatible, realistic datasets for the joint evaluation of segmentation and registration with a reliable ground truth using a 4D biomechanical lung phantom. The further development of a computer-controlled air flow system for recreation of real patient breathing patterns is incorporated for additional evaluation of motion prediction algorithms. Methods: A pair of preserved porcine lungs was pneumatically manipulated using an in-house computer-controlled respirator. The respirator consisted of a set of bellows actuated by a 186 W computer-controlled industrial motor. Patient breathing traces were recorded using a respiratory bellows belt during CT simulation and input into a control program incorporating a proportional-integral-derivative (PID) feedback controller in LabVIEW. Mock tumors were created using dual-compartment vacuum-sealed sea sponges. 65% iohexol, a gadolinium-based contrast agent, and 18F-FDG were used to produce contrast and thus determine a segmentation ground truth. The intensity distributions of the compartments were then digitally matched for the final dataset. A bifurcation tracking pipeline provided a registration ground truth using the bronchi of the lung. The lungs were scanned using a GE Discovery-ST PET/CT scanner and a Phillips Panorama 0.23T MRI using a T1-weighted 3D fast field echo (FFE) protocol. Results: The standard deviation of the error between the patient breathing trace and the encoder feedback from the respirator was found to be ±4.2%. Bifurcation tracking error using CT (0.97×0.97×3.27 mm³ resolution) was found to be sub-voxel up to 7.8 cm displacement for human lungs and less than 1.32 voxel widths in any axis up to 2.3 cm for the porcine lungs. Conclusion: An MRI/PET/CT-compatible, anatomically and temporally realistic swine lung phantom was developed for the evaluation of simultaneous registration and segmentation algorithms. With the addition of custom software and mock tumors, the entire package offers ground truths for benchmarking performance with high fidelity.
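The respirator's breathing-trace tracking is described as a PID feedback loop (implemented by the authors in LabVIEW). As an illustration only, a minimal discrete PID controller looks like the sketch below; the gains and the integrator plant in the usage note are made up, not those of the actual control program:

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch)."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint: float, measured: float) -> float:
        """One control step: returns the actuator command."""
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Driving a simple integrator plant (e.g. `x += u * dt`) with `PID(kp=2.0, ki=1.0, kd=0.0, dt=0.01)` converges the output to the setpoint, which is the behavior the encoder-feedback loop exploits to track the recorded breathing trace.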
NASA Technical Reports Server (NTRS)
1974-01-01
Varied small scale imagery was used for detecting and assessing damage by the southern pine beetle. The usefulness of ERTS scanner imagery for vegetation classification and pine beetle damage detection and assessment is evaluated. Ground truth acquisition for forest identification using multispectral aerial photographs is reviewed.
NASA Astrophysics Data System (ADS)
Salamuniccar, G.; Loncaric, S.
2008-03-01
The Catalogue from our previous work was merged with the data of Barlow, Rodionova, Boyce, and Kuzmin. The resulting ground truth catalogue with 57,633 craters was registered, using MOLA data, with the THEMIS-DIR, MDIM, and MOC data sets.
Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.
2017-01-01
Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382
Automated microwave ablation therapy planning with single and multiple entry points
NASA Astrophysics Data System (ADS)
Liu, Sheena X.; Dalal, Sandeep; Kruecker, Jochen
2012-02-01
Microwave ablation (MWA) has become a recommended treatment modality for interventional cancer treatment. Compared with radiofrequency ablation (RFA), MWA provides more rapid and larger-volume tissue heating. It allows simultaneous ablation from different entry points and allows users to change the ablation size by controlling the power/time parameters. Ablation planning systems have been proposed in the past, mainly addressing the needs for RFA procedures. Thus a planning system addressing MWA-specific parameters and workflows is highly desirable to help physicians achieve better microwave ablation results. In this paper, we design and implement an automated MWA planning system that provides precise probe locations for complete coverage of tumor and margin. We model the thermal ablation lesion as an ellipsoidal object with three known radii varying with the duration of the ablation and the power supplied to the probe. The search for the best ablation coverage can be seen as an iterative optimization problem. The ablation centers are steered toward the location which minimizes both un-ablated tumor tissue and the collateral damage caused to the healthy tissue. We assess the performance of our algorithm using simulated lesions with known "ground truth" optimal coverage. The Mean Localization Error (MLE) between the computed ablation center in 3D and the ground truth ablation center achieves 1.75mm (Standard deviation of the mean (STD): 0.69mm). The Mean Radial Error (MRE) which is estimated by comparing the computed ablation radii with the ground truth radii reaches 0.64mm (STD: 0.43mm). These preliminary results demonstrate the accuracy and robustness of the described planning algorithm.
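The two evaluation metrics reported above, Mean Localization Error and Mean Radial Error, amount to simple norms of differences against the ground truth. A short NumPy sketch with our own (hypothetical) function names:

```python
import numpy as np

def mean_localization_error(centers: np.ndarray, truth: np.ndarray):
    """Mean and standard deviation of the 3D Euclidean distance between
    computed and ground-truth ablation centers (one row per lesion)."""
    d = np.linalg.norm(centers - truth, axis=1)
    return d.mean(), d.std()

def mean_radial_error(radii: np.ndarray, truth: np.ndarray):
    """Mean and standard deviation of the absolute differences between
    computed and ground-truth ellipsoid radii (columns: three semi-axes)."""
    e = np.abs(radii - truth)
    return e.mean(), e.std()
```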
Isolated Effect of Geometry on Mitral Valve Function for In-Silico Model Development
Siefert, Andrew William; Rabbah, Jean-Pierre Michel; Saikrishnan, Neelakantan; Kunzelman, Karyn Susanne; Yoganathan, Ajit Prithivaraj
2013-01-01
Computational models for the heart’s mitral valve (MV) exhibit several uncertainties which may be reduced by further developing these models using ground-truth data sets. The present study generated a ground-truth data set by quantifying the effects of isolated mitral annular flattening, symmetric annular dilatation, symmetric papillary muscle displacement, and asymmetric papillary muscle displacement on leaflet coaptation, mitral regurgitation (MR), and anterior leaflet strain. MVs were mounted in an in vitro left heart simulator and tested under pulsatile hemodynamics. Mitral leaflet coaptation length, coaptation depth, tenting area, MR volume, MR jet direction, and anterior leaflet strain in the radial and circumferential directions were successfully quantified for increasing levels of geometric distortion. From these data, increasing levels of isolated papillary muscle displacement resulted in the greatest mean change in coaptation depth (70% increase), tenting area (150% increase), and radial leaflet strain (37% increase) while annular dilatation resulted in the largest mean change in coaptation length (50% decrease) and regurgitation volume (134% increase). Regurgitant jets were centrally located for symmetric annular dilatation and symmetric papillary muscle displacement. Asymmetric papillary muscle displacement resulted in asymmetrically directed jets. Peak changes in anterior leaflet strain in the circumferential direction were smaller and exhibited non-significant differences across the tested conditions. When used together, this ground-truth data may be used to parametrically evaluate and develop modeling assumptions for both the MV leaflets and subvalvular apparatus. This novel data may improve MV computational models and provide a platform for the development of future surgical planning tools. PMID:24059354
Image resolution: Its significance in a wildland area
NASA Technical Reports Server (NTRS)
Lauer, D. T.; Thaman, R. R.
1970-01-01
The information content of simulated space photos as a function of various levels of image resolution was determined by identifying major vegetation-terrain types in a series of images purposely degraded optically to different levels of ground resolvable distance. Comparison of cumulative interpretation results with actual ground truth data indicates that although there is a definite decrease in interpretability as ground resolvable distance increases, some valuable information is gained by using even the poorest aerial photography. The results establish the importance of shape and texture for correct identification of broadleaf or coniferous vegetation types, and the relative unimportance of shape and texture for the recognition of grassland, water bodies, and nonvegetated areas. Imagery must have a ground resolvable distance of at least 50 feet to correctly discriminate between primary types of woody vegetation.
The Need for Careful Data Collection for Pattern Recognition in Digital Pathology.
Marée, Raphaël
2017-01-01
Effective pattern recognition requires carefully designed ground-truth datasets. In this technical note, we first summarize potential data collection issues in digital pathology and then propose guidelines to build more realistic ground-truth datasets and to control their quality. We hope our comments will foster the effective application of pattern recognition approaches in digital pathology.
USDA-ARS?s Scientific Manuscript database
Successful development of approaches to quantify impacts of diverse landuse and associated agricultural management practices on ecosystem services is frequently limited by lack of historical and contemporary landuse data. We hypothesized that recent ground truth data could be used to extrapolate pre...
Leibniz on the metaphysical foundation of physics
NASA Astrophysics Data System (ADS)
Temple, Daniel R.
This thesis examines how and why Leibniz felt that physics must be grounded in metaphysics. I argue that one of the strongest motivations Leibniz had for attempting to ground physics in metaphysics was his concern over the problem of induction. Even in his early writings, Leibniz was well aware of the problem of induction and how this problem threatened the very possibility of physics. Both his early and later theories of truth are geared towards solving this deep problem in the philosophy of science. In his early theory of truth, all truths are ultimately grounded in (but not necessarily reducible to) an identity. Hence, all truths are ultimately based in logic. Consequently, the problem of induction is seemingly solved, since everything that happens, happens with the force of logical necessity. Unfortunately, this theory is incompatible with Leibniz's theory of possible worlds and hence jeopardizes the liberty of God. In his later theory of truth, Leibniz tries to overcome this weakness by acknowledging truths that are grounded in the free but moral necessity of God's actions. Since God's benevolence is responsible for the actualization of this world, this world must possess rational laws. Furthermore, since God's rationality ensures that everything obeys the principle of sufficient reason, we can use this principle to determine the fundamental laws of the universe. Leibniz himself attempts to derive these laws using this principle. Kant attempted to continue this work of securing the possibility of science, and the problems he encountered helped to shape his critical philosophy. I conclude with a comparative analysis of Leibniz and Kant on the foundations of physics.
Unmanned aerial vehicle-based structure from motion biomass inventory estimates
NASA Astrophysics Data System (ADS)
Bedell, Emily; Leslie, Monique; Fankhauser, Katie; Burnett, Jonathan; Wing, Michael G.; Thomas, Evan A.
2017-04-01
Riparian vegetation restoration efforts require cost-effective, accurate, and replicable impact assessments. We present a method that uses an unmanned aerial vehicle (UAV) equipped with a GoPro digital camera to collect photogrammetric data of a 0.8-ha riparian restoration. A three-dimensional point cloud was created from the photos using "structure from motion" techniques. The point cloud was analyzed and compared to traditional, ground-based monitoring techniques. Ground-truth data were collected on 6.3% of the study site and averaged across the entire site to report stem heights in stems/ha in three height classes. The project site was divided into four analysis sections, one for derivation of parameters used in the UAV data analysis and the remaining three sections reserved for method validation. Comparing the ground-truth data to the UAV-generated data produced an overall error of 21.6% and indicated an R2 value of 0.98. A Bland-Altman analysis indicated a 95% probability that the UAV stems/section result will be within 61 stems/section of the ground-truth data. The ground-truth data are reported with an 80% confidence interval of ±1032 stems/ha; thus, the UAV was able to estimate stems well within this confidence interval.
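The Bland-Altman analysis used above to compare UAV and ground-truth counts reduces to the bias and limits of agreement of the paired differences. A minimal sketch of the standard formulation (not the authors' code):

```python
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray, k: float = 1.96):
    """Bland-Altman agreement between two measurement methods.
    Returns the bias (mean difference) and the lower/upper limits of
    agreement, bias -/+ k*SD, covering ~95% of differences for k=1.96."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias, bias - k * sd, bias + k * sd
```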
SLAPex Freeze/Thaw 2015: The First Dedicated Soil Freeze/Thaw Airborne Campaign
NASA Technical Reports Server (NTRS)
Kim, Edward; Wu, Albert; DeMarco, Eugenia; Powers, Jarrett; Berg, Aaron; Rowlandson, Tracy; Freeman, Jacqueline; Gottfried, Kurt; Toose, Peter; Roy, Alexandre;
2016-01-01
Soil freezing and thawing is an important process in the terrestrial water, energy, and carbon cycles, marking the change between two very different hydraulic, thermal, and biological regimes. NASA's Soil Moisture Active/Passive (SMAP) mission includes a binary freeze/thaw data product. While there have been ground-based remote sensing field measurements observing soil freeze/thaw at the point scale, and airborne campaigns that observed some frozen soil areas (e.g., BOREAS), the recently-completed SLAPex Freeze/Thaw (F/T) campaign is the first airborne campaign dedicated solely to observing frozen/thawed soil with both passive and active microwave sensors and dedicated ground truth, in order to enable detailed process-level exploration of the remote sensing signatures and in situ soil conditions. SLAPex F/T utilized the Scanning L-band Active/Passive (SLAP) instrument, an airborne simulator of SMAP developed at NASA's Goddard Space Flight Center, and was conducted near Winnipeg, Manitoba, Canada, in October/November, 2015. Future soil moisture missions are also expected to include soil freeze/thaw products, and the loss of the radar on SMAP means that airborne radar-radiometer observations like those that SLAP provides are unique assets for freeze/thaw algorithm development. This paper will present an overview of SLAPex F/T, including descriptions of the site, airborne and ground-based remote sensing, ground truth, as well as preliminary results.
Law, Andrew J.; Sharma, Gaurav; Schieber, Marc H.
2014-01-01
We present a methodology for detecting effective connections between simultaneously recorded neurons using an information transmission measure to identify the presence and direction of information flow from one neuron to another. Using simulated and experimentally-measured data, we evaluate the performance of our proposed method and compare it to the traditional transfer entropy approach. In simulations, our measure of information transmission outperforms transfer entropy in identifying the effective connectivity structure of a neuron ensemble. For experimentally recorded data, where ground truth is unavailable, the proposed method also yields a more plausible connectivity structure than transfer entropy. PMID:21096617
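For context, the baseline the authors benchmark against, transfer entropy with history length 1, can be estimated from binary spike trains with a simple histogram estimator. This sketch is our own illustration of that baseline, not the paper's information transmission measure:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x: np.ndarray, y: np.ndarray) -> float:
    """Histogram-based transfer entropy (bits) from binary series x to y
    with history length 1: TE = I(y_{t+1}; x_t | y_t)."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    singles_y = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_yx = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
    return te
```

A driving connection (y echoing x with a one-step lag) yields a TE near 1 bit for random binary input, while an unconnected pair stays near 0, which is the kind of directed-influence contrast both measures are meant to detect.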
Regolith Volatile Recovery at Simulated Lunar Environments
NASA Technical Reports Server (NTRS)
Kleinhenz, Julie; Paulsen, Gale; Zacny, Kris; Schmidt, Sherry; Boucher, Dale
2016-01-01
Lunar Polar Volatiles: Permanently shadowed craters at the lunar poles contain water, 5 wt% according to LCROSS, motivating interest in water for ISRU applications and a desire to ground-truth the water through surface prospecting, e.g., Resource Prospector and RESOLVE. Open questions are how to access subsurface water resources and accurately measure their quantity, since excavation operations and exposure to the lunar environment may affect the results. Volatile capture tests: A series of ground-based dirty thermal vacuum tests is being conducted to better understand subsurface sampling operations, including sample removal and transfer, volatiles loss during sampling operations, concepts of operations, and instrumentation. This presentation is a progress report on volatiles capture results from these tests with lunar polar drill prototype hardware.
Aeolian dunes as ground truth for atmospheric modeling on Mars
Hayward, R.K.; Titus, T.N.; Michaels, T.I.; Fenton, L.K.; Colaprete, A.; Christensen, P.R.
2009-01-01
Martian aeolian dunes preserve a record of atmosphere/surface interaction on a variety of scales, serving as ground truth for both Global Climate Models (GCMs) and mesoscale climate models, such as the Mars Regional Atmospheric Modeling System (MRAMS). We hypothesize that the location of dune fields, expressed globally by geographic distribution and locally by dune centroid azimuth (DCA), may record the long-term integration of atmospheric activity across a broad area, preserving GCM-scale atmospheric trends. In contrast, individual dune morphology, as expressed in slipface orientation (SF), may be more sensitive to localized variations in circulation, preserving topographically controlled mesoscale trends. We test this hypothesis by comparing the geographic distribution, DCA, and SF of dunes with output from the Ames Mars GCM and, at a local study site, with output from MRAMS. When compared to the GCM: 1) dunes generally lie adjacent to areas with strongest winds, 2) DCA agrees fairly well with GCM modeled wind directions in smooth-floored craters, and 3) SF does not agree well with GCM modeled wind directions. When compared to MRAMS modeled winds at our study site: 1) DCA generally coincides with the part of the crater where modeled mean winds are weak, and 2) SFs are consistent with some weak, topographically influenced modeled winds. We conclude that: 1) geographic distribution may be valuable as ground truth for GCMs, 2) DCA may be useful as ground truth for both GCM and mesoscale models, and 3) SF may be useful as ground truth for mesoscale models. Copyright 2009 by the American Geophysical Union.
A data set for evaluating the performance of multi-class multi-object video tracking
NASA Astrophysics Data System (ADS)
Chakraborty, Avishek; Stamatescu, Victor; Wong, Sebastien C.; Wigley, Grant; Kearney, David
2017-05-01
One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publicly available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow both the evaluation of tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publicly.
Soil moisture ground truth: Steamboat Springs, Colorado, site and Walden, Colorado, site
NASA Technical Reports Server (NTRS)
Jones, E. B.
1976-01-01
Ground-truth data taken at Steamboat Springs and Walden, Colorado in support of the NASA missions in these areas during the period March 8, 1976 through March 11, 1976 was presented. This includes the following information: snow course data for Steamboat Springs and Walden, snow pit and snow quality data for Steamboat Springs, and soil moisture report.
Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.
André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2011-01-01
Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground truth. When the available ground truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study we first present a tool to generate perceived-similarity ground truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval against the generated ground truth. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived-similarity ground truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.
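The margin-based weight learning step can be sketched as follows. The signatures, labels, loss, and optimizer below are simplified stand-ins for the paper's formulation, chosen only to show the mechanism: a weighted squared distance whose per-word weights are adjusted so that perceived-similar pairs fall under a margin and dissimilar pairs land past it.

```python
import numpy as np

def learn_weights(sig_a, sig_b, similar, n_iter=500, lr=0.05, margin=1.0):
    """Learn a non-negative weight vector for the weighted distance
    d_w(a, b) = sum_k w_k * (a_k - b_k)^2 with a hinge-style cost:
    similar pairs should sit below the margin, dissimilar pairs above."""
    diff2 = (sig_a - sig_b) ** 2              # (n_pairs, n_words)
    w = np.ones(diff2.shape[1])
    for _ in range(n_iter):
        d = diff2 @ w                          # current weighted distances
        grad = np.zeros_like(w)
        for i, s in enumerate(similar):
            if s and d[i] > margin:            # similar pair too far apart
                grad += diff2[i]
            elif not s and d[i] < margin:      # dissimilar pair too close
                grad -= diff2[i]
        w = np.maximum(w - lr * grad / len(d), 0.0)  # keep weights >= 0
    return w

# Hypothetical 2-word signatures: word 0 separates the dissimilar
# pairs, word 1 only carries noise shared by similar pairs.
sig_a = np.array([[0.5, 0.9], [0.5, 0.1], [0.9, 0.5], [0.1, 0.5]])
sig_b = np.array([[0.5, 0.1], [0.5, 0.9], [0.1, 0.5], [0.9, 0.5]])
similar = np.array([True, True, False, False])
w = learn_weights(sig_a, sig_b, similar)
print(w[0] > w[1])  # True: the discriminative word is weighted more heavily
```

The learned weights up-weight words whose differences distinguish dissimilar pairs, which is the qualitative effect the cross-validation in the abstract measures against perceived similarity.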
NASA Technical Reports Server (NTRS)
Smith, R.
1975-01-01
Wallops Station accepted the tasks of providing ground truth to several ERTS investigators, operating a DCP repair depot, designing and building an airborne DCP Data Acquisition System, and providing aircraft underflight support for several other investigators. Additionally, the data bank is generally available for use by ERTS and other investigators that have a scientific interest in data pertaining to the Chesapeake Bay area. Working with DCS has provided a means of evaluating the system as a data collection device possibly applicable to ongoing Earth Resources Program activities in the Chesapeake Bay area as well as providing useful data and services to other ERTS investigators. The two areas of technical support provided by Wallops, ground truth stations and repair for DCPs, are briefly discussed.
Truth-Telling, Ritual Culture, and Latino College Graduates in the Anthropocene
ERIC Educational Resources Information Center
Gildersleeve, Ryan Evely
2017-01-01
This article seeks to trace the cartography of truth-telling through a posthuamanist predicament of ritual culture in higher education and critical inquiry. Ritual culture in higher education such as graduation ceremony produces and reflects the realities of becoming subjects. These spaces are proliferating grounds for truth telling and practical…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panfil, J; Patel, R; Surucu, M
Purpose: To compare markerless template-based tracking of lung tumors using dual energy (DE) cone-beam computed tomography (CBCT) projections versus single energy (SE) CBCT projections. Methods: A RANDO chest phantom with a simulated tumor in the upper right lung was used to investigate the effectiveness of tumor tracking using DE and SE CBCT projections. Planar kV projections from CBCT acquisitions were captured at 60 kVp (4 mAs) and 120 kVp (1 mAs) using the Varian TrueBeam and non-commercial iTools Capture software. Projections were taken at approximately every 0.53° while the gantry rotated. Due to limitations of the phantom, angles for which the shoulders blocked the tumor were excluded from tracking analysis. DE images were constructed using a weighted logarithmic subtraction that removed bony anatomy while preserving soft tissue structures. The tumors were tracked separately on DE and SE (120 kVp) images using a template-based tracking algorithm. The tracking results were compared to ground truth coordinates designated by a physician. Matches with a distance of greater than 3 mm from ground truth were designated as failing to track. Results: 363 frames were analyzed. The algorithm successfully tracked the tumor on 89.8% (326/363) of DE frames compared to 54.3% (197/363) of SE frames (p<0.0001). Average distance between tracking and ground truth coordinates was 1.27 ± 0.67 mm for DE versus 1.83 ± 0.74 mm for SE (p<0.0001). Conclusion: This study demonstrates the effectiveness of markerless template-based tracking using DE CBCT. DE imaging resulted in better detectability with more accurate localization on average versus SE. Supported by a grant from Varian Medical Systems.
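The weighted logarithmic subtraction that cancels bony anatomy can be illustrated with a two-material toy model. The attenuation coefficients and thickness below are made-up illustrative values, not measured data; the key point is that choosing the weight as the ratio of the bone attenuation coefficients at the two energies nulls the bone signal while leaving soft tissue visible.

```python
from math import exp, log

# Illustrative linear attenuation coefficients (cm^-1), not measured values
mu = {"soft": {"low": 0.25, "high": 0.20},
      "bone": {"low": 0.60, "high": 0.30}}
thickness = 2.0  # cm of material in the beam path (assumed)

def intensity(material, energy):
    """Transmitted intensity under the Beer-Lambert law (unit input)."""
    return exp(-mu[material][energy] * thickness)

# Weight that cancels bone in the log-subtracted image
w = mu["bone"]["high"] / mu["bone"]["low"]

def de_signal(material):
    """Dual-energy signal: log(I_high) - w * log(I_low)."""
    return log(intensity(material, "high")) - w * log(intensity(material, "low"))

print(abs(de_signal("bone")) < 1e-9)  # True: bone is cancelled
print(abs(de_signal("soft")) > 0.05)  # True: soft tissue contrast survives
```

Because the weight depends only on the bone coefficients, the same subtraction suppresses bone of any thickness, which is what lets the template tracker see the soft-tissue tumor unobstructed.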
Circular tomosynthesis for neuro perfusion imaging on an interventional C-arm
NASA Astrophysics Data System (ADS)
Claus, Bernhard E.; Langan, David A.; Al Assad, Omar; Wang, Xin
2015-03-01
There is a clinical need to improve cerebral perfusion assessment during the treatment of ischemic stroke in the interventional suite. The clinician is able to determine whether the arterial blockage was successfully opened but is unable to sufficiently assess blood flow through the parenchyma. C-arm spin acquisitions can image the cerebral blood volume (CBV) but are challenged to capture the temporal dynamics of the iodinated contrast bolus, which is required to derive, e.g., cerebral blood flow (CBF) and mean transit time (MTT). Here we propose to utilize a circular tomosynthesis acquisition on the C-arm to achieve the necessary temporal sampling of the volume at the cost of incomplete data. We address the incomplete data problem by using tools from compressed sensing and incorporate temporal interpolation to improve our temporal resolution. A CT neuro perfusion data set is utilized for generating a dynamic (4D) volumetric model from which simulated tomo projections are generated. The 4D model is also used as a ground truth reference for performance evaluation. The performance that may be achieved with the tomo acquisition and 4D reconstruction (under simulation conditions, i.e., without considering data fidelity limitations due to imaging physics and imaging chain) is evaluated. In the considered scenario, good agreement between the ground truth and the tomo reconstruction in the parenchyma was achieved.
Evaluation criteria for software classification inventories, accuracies, and maps
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.
1976-01-01
Statistical criteria are presented for modifying the contingency table used to evaluate tabular classification results obtained from remote sensing and ground truth maps. The modified table contains information on the spatial complexity of the test site, on the relative location of classification errors, and on the agreement of the classification maps with ground truth maps, and it reduces back to the information normally found in the original contingency table.
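The basic pixel-by-pixel contingency-table comparison that such criteria extend can be sketched directly. The tiny maps below are invented for illustration; the extra error map echoes the abstract's point that the relative location of errors matters, not just their count.

```python
import numpy as np

# Toy per-pixel class maps (0, 1, 2 are land-cover classes)
truth = np.array([[0, 0, 1],
                  [0, 1, 1],
                  [2, 2, 1]])
classified = np.array([[0, 1, 1],
                       [0, 1, 1],
                       [2, 2, 2]])

n_classes = 3
table = np.zeros((n_classes, n_classes), dtype=int)
for t, c in zip(truth.ravel(), classified.ravel()):
    table[t, c] += 1                     # rows: ground truth, cols: classified

overall_accuracy = np.trace(table) / table.sum()
error_map = truth != classified          # spatial location of the errors
print(overall_accuracy)                  # 7 of 9 pixels agree
```

The diagonal of the table holds the agreements, the off-diagonal cells show which classes are confused with which, and `error_map` preserves where in the scene the disagreements fall.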
Severe Thunderstorm and Tornado Warnings at Raleigh, North Carolina.
NASA Astrophysics Data System (ADS)
Hoium, Debra K.; Riordan, Allen J.; Monahan, John; Keeter, Kermit K.
1997-11-01
The National Weather Service issues public warnings for severe thunderstorms and tornadoes when these storms appear imminent. A study of the warning process was conducted at the National Weather Service Forecast Office at Raleigh, North Carolina, from 1994 through 1996. The purpose of the study was to examine the decision process by documenting the types of information leading to decisions to warn or not to warn and by describing the sequence and timing of events in the development of warnings. It was found that the evolution of warnings followed a logical sequence beginning with storm monitoring and proceeding with increasingly focused activity. For simplicity, information input to the process was categorized as one of three types: ground truth, radar reflectivity, or radar velocity. Reflectivity, velocity, and ground truth were all equally likely to initiate the investigation process. This investigation took an average of 7 min, after which either a decision was made not to warn or new information triggered the warning. Decisions not to issue warnings were based more on ground truth and reflectivity than radar velocity products. Warnings with investigations of more than 2 min were more likely to be triggered by radar reflectivity than by velocity or ground truth. Warnings with a shorter investigation time, defined here as "immediate trigger warnings," were less frequently based on velocity products and more on ground truth information. Once the decision was made to warn, it took an average of 2.1 min to prepare the warning text. In 85% of cases when warnings were issued, at least one contact was made to emergency management officials or storm spotters in the warned county. Reports of severe weather were usually received soon after the warning was transmitted, almost half of these within 30 min after issue.
A total of 68% were received during the severe weather episode, but some of these storm reports later proved false according to Storm Data. Even though the WSR-88D is a sophisticated tool, ground truth information was found to be a vital part of the warning process. However, the data did not indicate that population density was statistically correlated either with the number of warnings issued or the verification rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurwitz, M; Williams, C; Dhou, S
Purpose: Respiratory motion can vary significantly over the course of simulation and treatment. Our goal is to use volumetric images generated with a respiratory motion model to improve the definition of the internal target volume (ITV) and the estimate of delivered dose. Methods: Ten irregular patient breathing patterns spanning 35 seconds each were incorporated into a digital phantom. Ten images over the first five seconds of breathing were used to emulate a 4DCT scan, build the ITV, and generate a patient-specific respiratory motion model which correlated the measured trajectories of markers placed on the patients’ chests with the motion of the internal anatomy. This model was used to generate volumetric images over the subsequent thirty seconds of breathing. The increase in the ITV taking into account the full 35 seconds of breathing was assessed with ground-truth and model-generated images. For one patient, a treatment plan based on the initial ITV was created and the delivered dose was estimated using images from the first five seconds as well as ground-truth and model-generated images from the next 30 seconds. Results: The increase in the ITV ranged from 0.2 cc to 6.9 cc for the ten patients based on ground-truth information. The model predicted this increase in the ITV with an average error of 0.8 cc. The delivered dose to the tumor (D95) changed significantly from 57 Gy to 41 Gy when estimated using 5 seconds and 30 seconds, respectively. The model captured this effect, giving an estimated D95 of 44 Gy. Conclusion: A respiratory motion model generating volumetric images of the internal patient anatomy could be useful in estimating the increase in the ITV due to irregular breathing during simulation and in assessing delivered dose during treatment. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc. and Radiological Society of North America Research Scholar Grant #RSCH1206.
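The ITV growth being measured amounts to a voxelwise union of tumor masks across breathing phases. A toy sketch with made-up tumor positions and voxel size (not the study's phantom data) shows how sampling an irregular late excursion enlarges the union:

```python
import numpy as np

voxel_cc = 0.1  # assumed volume per voxel, in cc

def tumor_mask(center, grid=20, radius=3):
    """Spherical tumor mask on a small voxel grid (toy geometry)."""
    z, y, x = np.mgrid[:grid, :grid, :grid]
    return ((z - center[0]) ** 2 + (y - center[1]) ** 2
            + (x - center[2]) ** 2) <= radius ** 2

# Tumor centers seen during the emulated 4DCT (first seconds) ...
planning_centers = [(10, 10, 8), (10, 10, 10), (10, 10, 12)]
# ... plus an irregular excursion observed only in later breathing
all_centers = planning_centers + [(10, 10, 15)]

itv_plan = np.any([tumor_mask(c) for c in planning_centers], axis=0)
itv_full = np.any([tumor_mask(c) for c in all_centers], axis=0)

increase_cc = (itv_full.sum() - itv_plan.sum()) * voxel_cc
print(increase_cc > 0)  # True: irregular breathing enlarges the ITV
```

The model-generated images in the abstract serve exactly this role: they supply the extra masks needed to estimate `itv_full` when only the first seconds were imaged directly.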
Blecha, Kevin A.; Alldredge, Mat W.
2015-01-01
Animal space use studies using GPS collar technology are increasingly incorporating behavior-based analysis of spatio-temporal data in order to expand inferences of resource use. GPS location cluster analysis is one such technique applied to large carnivores to identify the timing and location of feeding events. For logistical and financial reasons, researchers often implement predictive models for identifying these events. We present two separate improvements for predictive models that future practitioners can implement. Thus far, feeding prediction models have incorporated a small range of covariates, usually limited to spatio-temporal characteristics of the GPS data. Using GPS-collared cougars (Puma concolor), we include activity sensor data as an additional covariate to increase prediction performance of feeding presence/absence. Integral to the predictive modeling of feeding events is a ground-truthing component, in which GPS location clusters are visited by human observers to confirm the presence or absence of feeding remains. Failing to account for sources of ground-truthing false-absences can bias the number of predicted feeding events to be low. Thus we account for some ground-truthing error sources directly in the model with covariates and when applying model predictions. Accounting for these errors resulted in a 10% increase in the number of clusters predicted to be feeding events. Using a double-observer design, we show that the ground-truthing false-absence rate is relatively low (4%) using a search delay of 2–60 days. Overall, we provide two separate improvements to the GPS cluster analysis techniques that can be expanded upon and implemented in future studies interested in identifying feeding behaviors of large carnivores. PMID:26398546
A method to map errors in the deformable registration of 4DCT images1
Vaman, Constantin; Staub, David; Williamson, Jeffrey; Murphy, Martin J.
2010-01-01
Purpose: To present a new approach to the problem of estimating errors in deformable image registration (DIR) applied to sequential phases of a 4DCT data set. Methods: A set of displacement vector fields (DVFs) are made by registering a sequence of 4DCT phases. The DVFs are assumed to display anatomical movement, with the addition of errors due to the imaging and registration processes. The positions of physical landmarks in each CT phase are measured as ground truth for the physical movement in the DVF. Principal component analysis of the DVFs and the landmarks is used to identify and separate the eigenmodes of physical movement from the error eigenmodes. By subtracting the physical modes from the principal components of the DVFs, the registration errors are exposed and reconstructed as DIR error maps. The method is demonstrated via a simple numerical model of 4DCT DVFs that combines breathing movement with simulated maps of spatially correlated DIR errors. Results: The principal components of the simulated DVFs were observed to share the basic properties of principal components for actual 4DCT data. The simulated error maps were accurately recovered by the estimation method. Conclusions: Deformable image registration errors can have complex spatial distributions. Consequently, point-by-point landmark validation can give unrepresentative results that do not accurately reflect the registration uncertainties away from the landmarks. The authors are developing a method for mapping the complete spatial distribution of DIR errors using only a small number of ground truth validation landmarks. PMID:21158288
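A toy version of the mode-separation idea can be sketched with synthetic DVFs. This uses a single known physical mode and a direct projection rather than the paper's full principal component analysis, and all data below are simulated stand-ins, but it shows the core step: subtracting the physical movement identified by landmarks exposes the spatially correlated registration error.

```python
import numpy as np

rng = np.random.default_rng(1)
n_phases, n_pts = 10, 50

# Synthetic DVFs: one physical breathing mode plus one spatially
# correlated error mode (stand-ins for the eigenmodes in the abstract)
breathing = np.sin(np.linspace(0.0, 2.0 * np.pi, n_phases))
motion_mode = rng.normal(size=n_pts)      # physical displacement pattern
error_mode = rng.normal(size=n_pts)       # spatially correlated DIR error
err_amp = 0.3 * rng.normal(size=n_phases)
dvfs = np.outer(breathing, motion_mode) + np.outer(err_amp, error_mode)

# Landmarks identify the physical mode; subtracting its projection
# removes the breathing signal and leaves the error map
phys = motion_mode / np.linalg.norm(motion_mode)
error_map = dvfs - np.outer(dvfs @ phys, phys)

true_err = np.outer(err_amp, error_mode)
corr = np.corrcoef(error_map.ravel(), true_err.ravel())[0, 1]
print(corr)  # close to 1: the injected error structure is recovered
```

Note the projection can only recover the error component orthogonal to the physical mode; separating overlapping modes is exactly what the full PCA treatment in the paper addresses.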
Rossen, Lauren M; Pollack, Keshia M; Curriero, Frank C
2012-09-01
Obtaining valid and accurate data on community food environments is critical for research evaluating associations between the food environment and health outcomes. This study utilized ground-truthing and remote-sensing technology to validate a food outlet retail list obtained from an urban local health department in Baltimore, Maryland in 2009. Ten percent of outlets (n=169) were assessed, and differences in accuracy were explored by neighborhood characteristics (96 census tracts) to determine if discrepancies were differential or non-differential. Inaccuracies were largely unrelated to a variety of neighborhood-level variables, with the exception of number of vacant housing units. Although remote-sensing technologies are a promising low-cost alternative to direct observation, this study demonstrated only moderate levels of agreement with ground-truthing. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John
2016-01-01
In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to lined up. Because of the shape of a curve plotting input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size image processing of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
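The operation affected by the stair-step artifact, subpixel offset estimation from an image correlation peak, can be sketched in one dimension. This is a generic parabolic-refinement sketch on a synthetic zero-baseline blob, not the IVV code itself, and the 0.3-pixel shift is an invented test value:

```python
import numpy as np

def subpixel_shift(ref, img):
    """Estimate the shift of img relative to ref by cross-correlation
    with a parabolic fit around the integer correlation peak.
    Assumes zero-baseline signals (no mean subtraction needed here)."""
    n = len(ref)
    corr = np.correlate(img, ref, mode="full")  # lags -(n-1) .. n-1
    k = int(np.argmax(corr))
    delta = 0.0
    if 0 < k < len(corr) - 1:  # parabolic interpolation at the peak
        denom = corr[k - 1] - 2.0 * corr[k] + corr[k + 1]
        if denom != 0.0:
            delta = 0.5 * (corr[k - 1] - corr[k + 1]) / denom
    return (k - (n - 1)) + delta

x = np.arange(64, dtype=float)
ref = np.exp(-0.5 * ((x - 32.0) / 4.0) ** 2)
img = np.exp(-0.5 * ((x - 32.3) / 4.0) ** 2)  # true shift: +0.3 pixel
est = subpixel_shift(ref, img)
print(abs(est - 0.3) < 0.05)  # True: the fractional shift is recovered
```

With matched pixel sizes, estimates like this one are biased toward integer lags across a range of true shifts, producing the stair-step curve the abstract describes; correlating against a finer-pixel truth map shrinks that quantization at the cost of more computation.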
The importance of ground truth data in remote sensing
NASA Technical Reports Server (NTRS)
Hoffer, R. M.
1972-01-01
Surface observation data is discussed as an essential part of remote sensing research. One of the most important aspects of ground truth is the collection of measurements and observations about the type, size, condition and other physical or chemical properties of importance concerning the materials on the earth's surface that are being sensed remotely. The use of a variety of sensor systems in combination at different altitudes is emphasized.
Looney, Pádraig; Stevenson, Gordon N; Nicolaides, Kypros H; Plasencia, Walter; Molloholli, Malid; Natsis, Stavros; Collins, Sally L
2018-06-07
We present a new technique to fully automate the segmentation of an organ from 3D ultrasound (3D-US) volumes, using the placenta as the target organ. Image analysis tools to estimate organ volume do exist but are too time-consuming and operator-dependent. Fully automating the segmentation process would potentially allow the use of placental volume to screen for increased risk of pregnancy complications. The placenta was segmented from 2,393 first trimester 3D-US volumes using a semiautomated technique. This was quality-controlled by three operators to produce the "ground-truth" data set. A fully convolutional neural network (OxNNet) was trained using this ground-truth data set to automatically segment the placenta. OxNNet delivered state-of-the-art automatic segmentation. The effect of training set size on the performance of OxNNet demonstrated the need for large data sets. The clinical utility of placental volume was tested by looking at predictions of small-for-gestational-age (SGA) babies at term. The receiver-operating characteristic curves demonstrated almost identical results between OxNNet and the ground truth. Our results demonstrated good similarity to the ground truth and almost identical clinical results for the prediction of SGA.
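Agreement between an automatic segmentation and its ground truth is conventionally scored with the Dice similarity coefficient. A minimal sketch on toy binary masks (the 2-D masks and overlap here are invented for illustration):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), with 1.0 for two empty masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True   # 16 ground-truth voxels
auto = np.zeros((8, 8), dtype=bool)
auto[3:7, 2:6] = True    # 16 predicted voxels, shifted one row

print(dice(truth, auto))  # 0.75: 12 overlapping voxels out of 16 + 16
```

The same formula extends unchanged to 3-D volumes like the placental segmentations above; volume estimates then follow from the voxel count times the voxel size.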
Software Suite to Support In-Flight Characterization of Remote Sensing Systems
NASA Technical Reports Server (NTRS)
Stanley, Thomas; Holekamp, Kara; Gasser, Gerald; Tabor, Wes; Vaughan, Ronald; Ryan, Robert; Pagnutti, Mary; Blonski, Slawomir; Kenton, Ross
2014-01-01
A characterization software suite was developed to facilitate NASA's in-flight characterization of commercial remote sensing systems. Characterization of aerial and satellite systems requires knowledge of ground characteristics, or ground truth. This information is typically obtained with instruments taking measurements prior to or during a remote sensing system overpass. Acquired ground-truth data, which can consist of hundreds of measurements with different data formats, must be processed before it can be used in the characterization. Accurate in-flight characterization of remote sensing systems relies on multiple field data acquisitions that are efficiently processed, with minimal error. To address the need for timely, reproducible ground-truth data, a characterization software suite was developed to automate the data processing methods. The characterization software suite is engineering code, requiring some prior knowledge and expertise to run. The suite consists of component scripts for each of the three main in-flight characterization types: radiometric, geometric, and spatial. The component scripts for the radiometric characterization operate primarily by reading the raw data acquired by the field instruments, combining it with other applicable information, and then reducing it to a format that is appropriate for input into MODTRAN (MODerate resolution atmospheric TRANsmission), an Air Force Research Laboratory-developed radiative transport code used to predict at-sensor measurements. The geometric scripts operate by comparing identified target locations from the remote sensing image to known target locations, producing circular error statistics defined by the Federal Geographic Data Committee Standards. The spatial scripts analyze a target edge within the image, and produce estimates of Relative Edge Response and the value of the Modulation Transfer Function at the Nyquist frequency. 
The software suite enables rapid, efficient, automated processing of ground truth data, which has been used to provide reproducible characterizations on a number of commercial remote sensing systems. Overall, this characterization software suite improves the reliability of ground-truth data processing techniques that are required for remote sensing system in-flight characterizations.
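The spatial-characterization step, going from an edge response to an MTF value at the Nyquist frequency, can be sketched numerically. The Gaussian blur below is an assumed stand-in for a real system's point spread function, not output of the suite:

```python
import numpy as np

x = np.arange(-32, 32, dtype=float)      # pixel grid
sigma = 1.0                              # assumed blur width in pixels
psf = np.exp(-0.5 * (x / sigma) ** 2)
psf /= psf.sum()                         # normalized point spread function

esf = np.cumsum(psf)                     # edge spread function (ideal step edge)
lsf = np.diff(esf)                       # line spread function
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                            # normalize to unity at zero frequency

freqs = np.fft.rfftfreq(lsf.size)        # cycles per pixel
mtf_nyquist = mtf[np.argmin(np.abs(freqs - 0.5))]
print(0.0 < mtf_nyquist < 0.1)           # True: a 1-pixel blur leaves little contrast at Nyquist
```

In practice the edge response is measured from an imaged target edge rather than synthesized, but the differentiate-and-transform chain to the Nyquist MTF value is the same.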
ERIC Educational Resources Information Center
Sadd, James; Morello-Frosch, Rachel; Pastor, Manuel; Matsuoka, Martha; Prichard, Michele; Carter, Vanessa
2014-01-01
Environmental justice advocates often argue that environmental hazards and their health effects vary by neighborhood, income, and race. To assess these patterns and advance preventive policy, their colleagues in the research world often use complex and methodologically sophisticated statistical and geospatial techniques. One way to bridge the gap…
Truth in Testing Legislation and Private Property Concepts.
ERIC Educational Resources Information Center
Burns, Daniel J.
1981-01-01
Truth in testing laws are subject to challenge on the grounds that they invade federally protected rights and interests of the test-makers through the due process clauses of the Constitution and federal copyright protections. (Author/MLF)
GEOS-3 phase B ground truth summary
NASA Technical Reports Server (NTRS)
Parsons, C. L.; Goodman, L. R.
1975-01-01
Ground truth data collected during the experiment systems calibration and evaluation phase of the Geodynamics Experimental Ocean Satellite (GEOS-3) experiment are summarized. Both National Weather Service analyses and aircraft sensor data are included. The data are structured to facilitate the use of the various data products in calibrating the GEOS-3 radar altimeter and in assessing the altimeter's sensitivity to geophysical phenomena. Brief statements are made concerning the quality and completeness of the included data.
Radar modeling of a boreal forest
NASA Technical Reports Server (NTRS)
Chauhan, Narinder S.; Lang, Roger H.; Ranson, K. J.
1991-01-01
Microwave modeling, ground truth, and SAR data are used to investigate the characteristics of forest stands. A mixed coniferous forest stand has been modeled at P, L, and C bands. Extensive measurements of ground truth and canopy geometry parameters were performed in a 200-m-square hemlock-dominated forest plot. About 10 percent of the trees were sampled to determine a distribution of diameter at breast height (DBH). Hemlock trees in the forest are modeled by characterizing tree trunks, branches, and needles as randomly oriented lossy dielectric cylinders whose area and orientation distributions are prescribed. The distorted Born approximation is used to compute the backscatter at P, L, and C bands. The theoretical results are found to be lower than the calibrated ground-truth data. The experiment and model results agree quite closely, however, when the ratios of VV to HH and HV to HH are compared.
Interactive degraded document enhancement and ground truth generation
NASA Astrophysics Data System (ADS)
Bal, G.; Agam, G.; Frieder, O.; Frieder, G.
2008-01-01
Degraded documents are frequently obtained in various situations. Examples of degraded document collections include historical document depositories, documents obtained in legal and security investigations, and legal and medical archives. Degraded document images are hard to read and hard to analyze using computerized techniques. There is hence a need for systems that are capable of enhancing such images. We describe a language-independent semi-automated system for enhancing degraded document images that is capable of exploiting inter- and intra-document coherence. The system is capable of processing document images with high levels of degradation and can be used for ground truthing of degraded document images. Ground truthing of degraded document images is extremely important in several respects: it enables quantitative performance measurements of enhancement systems and facilitates model estimation that can be used to improve performance. Performance evaluation is provided using the historical Frieder diaries collection.
NASA Technical Reports Server (NTRS)
Edwardo, H. A.; Moulis, F. R.; Merry, C. J.; Mckim, H. L.; Kerber, A. G.; Miller, M. A.
1985-01-01
The Pittsburgh District, Corps of Engineers, has conducted feasibility analyses of various procedures for performing flood damage assessments along the main stem of the Ohio River. Procedures using traditional, although highly automated, techniques and those based on geographic information systems have been evaluated at a test site, the City of New Martinsville, Wetzel County, WV. The flood damage assessments of the test site developed from an automated, conventional structure-by-structure appraisal served as the ground truth data set. A geographic information system was developed for the test site which includes data on hydraulic reach, ground and reference flood elevations, and land use/cover. Damage assessments were made using land use mapping developed from an exhaustive field inspection of each tax parcel. This ground truth condition was considered to provide the best comparison of flood damages to the conventional approach. Also, four land use/cover data sets were developed from Thematic Mapper Simulator (TMS) and Landsat-4 Thematic Mapper (TM) data. One of these was also used to develop a damage assessment of the test site. This paper presents the comparative absolute and relative accuracies of land use/cover mapping and flood damage assessments, and the recommended role of geographic information systems aided by remote sensing for conducting flood damage assessments and updates along the main stem of the Ohio River.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neylon, J; Qi, S; Sheng, K
2014-06-15
Purpose: To develop a GPU-based framework that can generate high-resolution, patient-specific biomechanical models from a given simulation CT and contoured structures, optimized to run at interactive speeds, for addressing adaptive radiotherapy objectives. Method: A mass-spring-damping (MSD) model was generated from a given simulation CT. The model's mass elements were generated for every voxel of anatomy and positioned in a deformation space in GPU memory. MSD connections were established between neighboring mass elements in a dense distribution. Contoured internal structures allowed control over the elastic material properties of different tissues. Once the model was initialized in GPU memory, skeletal anatomy was actuated using rigid-body transformations, while soft tissues were governed by elastic corrective forces and constraints, which included tensile forces, shear forces, and spring damping forces. The model was validated by applying a known load to a soft tissue block and comparing the observed deformation to ground truth calculations from established elastic mechanics. Results: Our analyses showed that both local and global load experiments yielded results with a correlation coefficient R² > 0.98 compared to ground truth. Models were generated for several anatomical regions. Head and neck models accurately simulated posture changes by rotating the skeletal anatomy in three dimensions. Pelvic models were developed for realistic deformations for changes in bladder volume. Thoracic models demonstrated breast deformation due to gravity when changing treatment position from supine to prone. The GPU framework performed at greater than 30 iterations per second for over 1 million mass elements with up to 26 MSD connections each. Conclusions: Realistic simulations of site-specific, complex posture and physiological changes were simulated at interactive speeds using patient data.
Incorporating such a model with live patient tracking would facilitate real-time assessment of variations in the actual anatomy and delivered dose for adaptive intervention and re-planning.
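The validation step the abstract describes (a known load on a soft-tissue block compared against elastic-mechanics ground truth) can be sketched in miniature. The following is a hypothetical 1-D illustration, not the authors' GPU framework: a single mass element under a constant load should settle at the Hooke's-law displacement x = F/k.

```python
# Minimal 1-D mass-spring-damper sketch (illustrative, not the paper's code):
# one mass element under a constant load settles at x = F / k,
# the elastic-mechanics "ground truth" used for validation here.

def settle_displacement(force, k, mass=1.0, damping=5.0,
                        dt=1e-3, steps=50_000):
    """Semi-implicit Euler integration of m*x'' = F - k*x - c*x'."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = (force - k * x - damping * v) / mass
        v += a * dt          # update velocity first (semi-implicit Euler)
        x += v * dt
    return x

if __name__ == "__main__":
    F, k = 2.0, 40.0
    print(settle_displacement(F, k), F / k)  # simulated vs analytic, ~0.05 each
```

Transients decay on a timescale of 2m/c, so the final displacement matches F/k to machine precision over this integration window.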
Light Curve Simulation Using Spacecraft CAD Models and Empirical Material Spectral BRDFs
NASA Astrophysics Data System (ADS)
Willison, A.; Bedard, D.
This paper presents a Matlab-based light curve simulation software package that uses computer-aided design (CAD) models of spacecraft and the spectral bidirectional reflectance distribution function (sBRDF) of their homogeneous surface materials. It represents the overall optical reflectance of objects as an sBRDF, a spectrometric quantity obtainable during an optical ground truth experiment. The broadband bidirectional reflectance distribution function (BRDF), the basis of a broadband light curve, is produced by integrating the sBRDF over the optical wavelength range. Colour-filtered BRDFs, the basis of colour-filtered light curves, are produced by first multiplying the sBRDF by colour filters and then integrating the products. The software package's validity is established through comparison of simulated reflectance spectra and broadband light curves with those measured from the CanX-1 Engineering Model (EM) nanosatellite, collected during an optical ground truth experiment. The package is currently being extended to simulate light curves of spacecraft in Earth orbit, using spacecraft Two-Line-Element (TLE) sets, yaw/pitch/roll angles, and observer coordinates. Measured light curves of the NEOSSat spacecraft will be used to validate simulated quantities. The sBRDF was chosen to represent material reflectance because it is spectrometric and a function of illumination and observation geometry. Homogeneous material sBRDFs were obtained using a goniospectrometer for a range of illumination and observation geometries, collected in a controlled environment. The materials analyzed include aluminum alloy, two types of triple-junction photovoltaic (TJPV) cell, white paint, and multi-layer insulation (MLI). Interpolation and extrapolation methods were used to determine the sBRDF for all illumination and observation geometries not measured in the laboratory, resulting in empirical look-up tables.
These look-up tables are referenced when calculating the overall sBRDF of objects, where the contribution of each facet is proportionally integrated.
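The wavelength-integration steps described above can be illustrated with a toy example. This sketch assumes a made-up flat sBRDF sampled on a wavelength grid; the `band_integrate` helper and its filter normalization are illustrative choices, not the paper's code.

```python
# Sketch: integrate a spectral BRDF into a broadband or colour-filtered
# BRDF with trapezoidal quadrature (wavelengths in nm, toy data).

def band_integrate(wavelengths, sbrdf, filt=None):
    """Trapezoid-rule integral of sbrdf (optionally weighted by a colour
    filter) over wavelength, normalized by the filter (or band) integral."""
    w = filt if filt is not None else [1.0] * len(wavelengths)
    num = den = 0.0
    for i in range(len(wavelengths) - 1):
        dl = wavelengths[i + 1] - wavelengths[i]
        num += 0.5 * (sbrdf[i] * w[i] + sbrdf[i + 1] * w[i + 1]) * dl
        den += 0.5 * (w[i] + w[i + 1]) * dl
    return num / den

if __name__ == "__main__":
    wl = list(range(400, 1001, 50))     # 400-1000 nm grid
    flat = [0.2] * len(wl)              # hypothetical grey sBRDF
    print(band_integrate(wl, flat))     # broadband BRDF -> 0.2
```

A non-flat sBRDF would weight differently under each colour filter, which is what separates the colour-filtered light curves from the broadband one.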
The gene normalization task in BioCreative III
2011-01-01
Background We report the Gene Normalization (GN) challenge in BioCreative III where participating teams were asked to return a ranked list of identifiers of the genes detected in full-text articles. For training, 32 fully and 500 partially annotated articles were prepared. A total of 507 articles were selected as the test set. Due to the high annotation cost, it was not feasible to obtain gold-standard human annotations for all test articles. Instead, we developed an Expectation Maximization (EM) algorithm approach for choosing a small number of test articles for manual annotation that were most capable of differentiating team performance. Moreover, the same algorithm was subsequently used for inferring ground truth based solely on team submissions. We report team performance on both gold standard and inferred ground truth using a newly proposed metric called Threshold Average Precision (TAP-k). Results We received a total of 37 runs from 14 different teams for the task. When evaluated using the gold-standard annotations of the 50 articles, the highest TAP-k scores were 0.3297 (k=5), 0.3538 (k=10), and 0.3535 (k=20), respectively. Higher TAP-k scores of 0.4916 (k=5, 10, 20) were observed when evaluated using the inferred ground truth over the full test set. When combining team results using machine learning, the best composite system achieved TAP-k scores of 0.3707 (k=5), 0.4311 (k=10), and 0.4477 (k=20) on the gold standard, representing improvements of 12.4%, 21.8%, and 26.6% over the best team results, respectively. Conclusions By using full text and being species non-specific, the GN task in BioCreative III has moved closer to a real literature curation task than similar tasks in the past and presents additional challenges for the text mining community, as revealed in the overall team results. 
By evaluating teams using the gold standard, we show that the EM algorithm allows team submissions to be differentiated while keeping the manual annotation effort feasible. Using the inferred ground truth we show measures of comparative performance between teams. Finally, by comparing team rankings on gold standard vs. inferred ground truth, we further demonstrate that the inferred ground truth is as effective as the gold standard for detecting good team performance. PMID:22151901
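As a rough illustration of the TAP-k metric reported above, the sketch below follows one common reading of Threshold Average Precision: average the precision at each correct identifier down to the rank of the k-th false positive, add the precision at that threshold once more, and divide by (total relevant + 1). The function and toy ranking are ours; consult the metric's original definition before relying on this.

```python
# Hedged sketch of a TAP-k-style score over a ranked list of predictions.

def tap_k(relevance, k):
    """relevance: ranked list of booleans (True = correct identifier)."""
    total_relevant = sum(relevance)
    fp = 0
    threshold_rank = len(relevance)  # fewer than k FPs: use the whole list
    for rank, rel in enumerate(relevance, start=1):
        if not rel:
            fp += 1
            if fp == k:
                threshold_rank = rank
                break
    hits = 0
    prec_sum = 0.0
    for rank in range(1, threshold_rank + 1):
        if relevance[rank - 1]:
            hits += 1
            prec_sum += hits / rank      # precision at each relevant record
    prec_at_threshold = hits / threshold_rank
    return (prec_sum + prec_at_threshold) / (total_relevant + 1)

if __name__ == "__main__":
    ranked = [True, False, True, True, False]
    print(tap_k(ranked, k=2))  # (1 + 2/3 + 3/4 + 3/5) / 4
```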
The gene normalization task in BioCreative III.
Lu, Zhiyong; Kao, Hung-Yu; Wei, Chih-Hsuan; Huang, Minlie; Liu, Jingchen; Kuo, Cheng-Ju; Hsu, Chun-Nan; Tsai, Richard Tzong-Han; Dai, Hong-Jie; Okazaki, Naoaki; Cho, Han-Cheol; Gerner, Martin; Solt, Illes; Agarwal, Shashank; Liu, Feifan; Vishnyakova, Dina; Ruch, Patrick; Romacker, Martin; Rinaldi, Fabio; Bhattacharya, Sanmitra; Srinivasan, Padmini; Liu, Hongfang; Torii, Manabu; Matos, Sergio; Campos, David; Verspoor, Karin; Livingston, Kevin M; Wilbur, W John
2011-10-03
2012-05-07
AFRL-RV-PS-TP-2012-0017: MULTIPLE-ARRAY DETECTION, ASSOCIATION AND LOCATION OF INFRASOUND AND SEISMO-ACOUSTIC EVENTS – UTILIZATION OF GROUND-TRUTH... (Contract Number FA8718-08-C-0008) ...infrasound signals from both correlated and uncorrelated noise. Approaches to this problem are implementation of the F-detector, which employs the F
Johnson, Cordell; Swarzenski, Peter W.; Richardson, Christina M.; Smith, Christopher G.; Kroeger, Kevin D.; Ganguli, Priya M.
2015-01-01
Rigorous ground-truthing at each field site showed that multi-channel electrical resistivity techniques can reproduce the scales and dynamics of a seepage field when such data are correctly collected and when the model inversions are tuned to field site characteristics. Such information can provide a unique perspective on the scales and dynamics of exchange processes within a coastal aquifer—information essential to scientists and resource managers alike.
On Evaluating Brain Tissue Classifiers without a Ground Truth
Martin-Fernandez, Marcos; Ungar, Lida; Nakamura, Motoaki; Koo, Min-Seong; McCarley, Robert W.; Shenton, Martha E.
2009-01-01
In this paper, we present a set of techniques for the evaluation of brain tissue classifiers on a large data set of MR images of the head. Due to the difficulty of establishing a gold standard for this type of data, we focus our attention on methods that do not require a ground truth but instead rely on a common agreement principle. Three different techniques are presented: the Williams’ index, a measure of common agreement; STAPLE, an Expectation Maximization algorithm that simultaneously estimates performance parameters and constructs an estimated reference standard; and Multidimensional Scaling, a visualization technique for exploring similarity data. We apply these evaluation methodologies to a set of eleven different segmentation algorithms on forty MR images. We then validate our evaluation pipeline by building a ground truth based on human expert tracings. The evaluations with and without a ground truth are compared. Our findings show that comparing classifiers without a gold standard can provide a great deal of useful information. In particular, outliers can be easily detected, strongly consistent or highly variable techniques can be readily discriminated, and the overall similarity between different techniques can be assessed. On the other hand, we also find that some information present in the expert segmentations is not captured by the automatic classifiers, suggesting that common agreement alone may not be sufficient for a precise performance evaluation of brain tissue classifiers. PMID:17532646
NASA Astrophysics Data System (ADS)
Tian, Xin; Li, Hua; Jiang, Xiaoyu; Xie, Jingping; Gore, John C.; Xu, Junzhong
2017-02-01
Two diffusion-based approaches, the CG (constant gradient) and FEXI (filtered exchange imaging) methods, have been previously proposed for measuring the transcytolemmal water exchange rate constant kin, but their accuracy and feasibility have not been comprehensively evaluated and compared. In this work, both computer simulations and cell experiments in vitro were performed to evaluate the two methods. Simulations were done with different cell diameters (5, 10, 20 μm), a broad range of kin values (0.02-30 s-1), and different SNRs, and simulated kin values were directly compared with the ground truth values. Human leukemia K562 cells were cultured and treated with saponin to selectively change cell transmembrane permeability. The agreement between the kin values measured by the two methods was also evaluated. The results suggest that, without noise, the CG method provides reasonably accurate estimates of kin, especially when it is smaller than 10 s-1, which is in the typical physiological range of many biological tissues. Although the FEXI method overestimates kin even with corrections for the effects of extracellular water fraction, it provides reasonable estimates at practical SNRs and, more importantly, the fitted apparent exchange rate AXR shows an approximately linear dependence on the ground truth kin. In conclusion, either the CG or the FEXI method provides a sensitive means to characterize variations in the transcytolemmal water exchange rate constant kin, although accuracy and specificity are usually compromised. The non-imaging CG method provides more accurate estimates of kin but is limited to a large volume of interest. Although the accuracy of FEXI is compromised by the extracellular volume fraction, it is capable of spatially mapping kin in practice.
NASA Astrophysics Data System (ADS)
Staver, John R.
2010-03-01
Science and religion exhibit multiple relationships as ways of knowing. These connections have been characterized as cousinly, mutually respectful, non-overlapping, competitive, proximate-ultimate, dominant-subordinate, and opposing-conflicting. Some of these ties create stress, and tension between science and religion represents a significant chapter in humans' cultural heritage before and since the Enlightenment. Truth, knowledge, and their relation are central to science and religion as ways of knowing, as social institutions, and to their interaction. In religion, truth is revealed through God's word. In science, truth is sought after via empirical methods. Discord can be viewed as a competition for social legitimization between two social institutions whose goals are explaining the world and how it works. Under this view, the root of the discord is truth as correspondence. In this concept of truth, knowledge corresponds to the facts of reality, and conflict is inevitable for many because humans want to ask which one—science or religion—gets the facts correct. But, the root paradox, also known as the problem of the criterion, suggests that seeking to know nature as it is represents a fruitless endeavor. The discord can be set on new ground and resolved by taking a moderately skeptical line of thought, one which employs truth as coherence and a moderate form of constructivist epistemology. Quantum mechanics and evolution as scientific theories and scientific research on human consciousness and vision provide support for this line of argument. Within a constructivist perspective, scientists would relinquish only the pursuit of knowing reality as it is. Scientists would retain everything else. Believers who hold that religion explains reality would come to understand that God never revealed His truth of nature; rather, He revealed His truth in how we are to conduct our lives.
Comparison of preliminary results from Airborne Aster Simulator (AAS) with TIMS data
NASA Technical Reports Server (NTRS)
Kannari, Yoshiaki; Mills, Franklin; Watanabe, Hiroshi; Ezaka, Teruya; Narita, Tatsuhiko; Chang, Sheng-Huei
1992-01-01
The Japanese Advanced Spaceborne Thermal Emission and Reflection radiometer (ASTER), being developed for a NASA EOS-A satellite, will have 3 VNIR, 6 SWIR, and 5 TIR (8-12 micron) bands. An Airborne ASTER Simulator (AAS) was developed for the Japan Resources Observation System Organization (JAROS) by the Geophysical Environmental Research Group (GER) Corp. to research surface temperature and emission features in the MWIR/TIR, to simulate ASTER's TIR bands, and to study the further possibility of MWIR/TIR bands. The AAS has 1 VNIR, 3 MWIR (3-5 micron), and 20 (currently 24) TIR bands. Data were collected over three sites - Cuprite, Nevada; Long Valley/Mono Lake, California; and Death Valley, California - with simultaneous ground truth measurements. Preliminary AAS data for Cuprite, Nevada are presented, and the AAS data are compared with Thermal Infrared Multispectral Scanner (TIMS) data.
2000-09-01
and the Porphyry Copper District (PCD) of east-central Arizona and southwest New Mexico were used in gathering ground truth ranging from mine records... previous studies of large coal cast blasting operations in Wyoming that trigger the IMS (Hedlin et al. 2000), the porphyry copper region of Arizona and... local mines producing the sources. Close cooperation has been developed with the Phelps Dodge mines in Morenci, Arizona and Tyrone, New Mexico, where in
NASA Technical Reports Server (NTRS)
Brook, M.
1986-01-01
An optical lightning detector was constructed and flown, along with Vinton cameras and a Fairchild Line Scan Spectrometer, on a U-2 during the summer of 1979. The U-2 lightning data was obtained in daylight, and was supplemented with ground truth taken at Langmuir Laboratory. Simulations were prepared as required to establish experiment operating procedures and science training for the astronauts who would operate the Night/Day Optical Survey of Thunderstorm Lightning (NOSL) equipment during the STS-2 NOSL experiment on the Space Shuttle. Data was analyzed and papers were prepared for publication.
Ground truth spectrometry and imagery of eruption clouds to maximize utility of satellite imagery
NASA Technical Reports Server (NTRS)
Rose, William I.
1993-01-01
Field experiments with thermal imaging infrared radiometers were performed and a laboratory system was designed for controlled study of simulated ash clouds. Using AVHRR (Advanced Very High Resolution Radiometer) thermal infrared bands 4 and 5, a radiative transfer method was developed to retrieve particle sizes, optical depth, and particle mass in volcanic clouds. A model was developed for measuring the same parameters using TIMS (Thermal Infrared Multispectral Scanner), MODIS (Moderate Resolution Imaging Spectrometer), and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer). Related publications are attached.
Estimation of the sugar cane cultivated area from LANDSAT images using the two phase sampling method
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Cappelletti, C. A.; Mendonca, F. J.; Lee, D. C. L.; Shimabukuro, Y. E.
1982-01-01
A two-phase sampling method and the optimal sampling segment dimensions for the estimation of sugar cane cultivated area were developed. This technique employs visual interpretations of LANDSAT images and panchromatic aerial photographs considered as the ground truth. The estimates, as a mean value of 100 simulated samples, represent 99.3% of the true value with a CV of approximately 1%; the relative efficiency of the two-phase design was 157% when compared with a one-phase aerial photograph sample.
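The two-phase design can be sketched with a toy ratio estimator: a large, cheap phase-1 sample (image interpretation) is corrected by a small phase-2 subsample carrying the ground truth (photo interpretation). All names and the synthetic areas below are illustrative assumptions, not the authors' estimator or data.

```python
# Sketch of two-phase (double) sampling with a ratio estimator:
# phase 1 = cheap, biased measurements on many units;
# phase 2 = "ground truth" on a small subsample of the same units.

import random

def two_phase_estimate(x_phase1, pairs):
    """pairs: (x, y) on the phase-2 subsample; returns an estimate of mean y."""
    mean_x1 = sum(x_phase1) / len(x_phase1)
    mean_x2 = sum(x for x, _ in pairs) / len(pairs)
    mean_y2 = sum(y for _, y in pairs) / len(pairs)
    return mean_x1 * mean_y2 / mean_x2   # ratio correction of the phase-1 mean

if __name__ == "__main__":
    rng = random.Random(0)
    truth = [rng.uniform(50, 150) for _ in range(1000)]   # true cane areas
    cheap = [y * 0.9 + rng.gauss(0, 5) for y in truth]    # biased image calls
    pairs = list(zip(cheap, truth))[:100]                 # phase-2 subsample
    print(two_phase_estimate(cheap, pairs), sum(truth) / len(truth))
```

The ratio correction removes the systematic bias of the cheap measurement while keeping the precision gained from the large phase-1 sample.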
Software for Simulation of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Richtsmeier, Steven C.; Singer-Berk, Alexander; Bernstein, Lawrence S.
2002-01-01
A package of software generates simulated hyperspectral images for use in validating algorithms that generate estimates of Earth-surface spectral reflectance from hyperspectral images acquired by airborne and spaceborne instruments. This software is based on a direct simulation Monte Carlo approach for modeling three-dimensional atmospheric radiative transport as well as surfaces characterized by spatially inhomogeneous bidirectional reflectance distribution functions. In this approach, 'ground truth' is accurately known through input specification of surface and atmospheric properties, and it is practical to consider wide variations of these properties. The software can treat both land and ocean surfaces and the effects of finite clouds with surface shadowing. The spectral/spatial data cubes computed by use of this software can serve both as a substitute for and a supplement to field validation data.
Simulation of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Richtsmeier, Steven C.; Singer-Berk, Alexander; Bernstein, Lawrence S.
2004-01-01
A software package generates simulated hyperspectral imagery for use in validating algorithms that generate estimates of Earth-surface spectral reflectance from hyperspectral images acquired by airborne and spaceborne instruments. This software is based on a direct simulation Monte Carlo approach for modeling three-dimensional atmospheric radiative transport, as well as reflections from surfaces characterized by spatially inhomogeneous bidirectional reflectance distribution functions. In this approach, "ground truth" is accurately known through input specification of surface and atmospheric properties, and it is practical to consider wide variations of these properties. The software can treat both land and ocean surfaces, as well as the effects of finite clouds with surface shadowing. The spectral/spatial data cubes computed by use of this software can serve both as a substitute for, and a supplement to, field validation data.
New SHARE 2010 HSI-LiDAR dataset: re-calibration, detection assessment and delivery
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.
2016-09-01
This paper revisits hyperspectral data collected during the SpecTIR hyperspectral airborne Rochester Experiment (SHARE) in 2010. It was determined that there were calibration issues in the SWIR portion of the data; this calibration issue is discussed and has been rectified. Approaches for calibration to radiance and compensation to reflectance are discussed based on in-scene information and radiative transfer codes. In addition to the entire flight line, a much larger target detection test and evaluation chip has been created, which includes an abundance of potential false alarms. New truth masks are created along with results from target detection algorithms. Co-registered LiDAR data are also presented. Finally, all ground truth information (ground photos, metadata, MODTRAN tape5, ASD ground spectral measurements, target truth masks, etc.), in addition to the HSI flight lines and co-registered LiDAR data, has been organized, packaged, and uploaded to the Center for Imaging Science / Digital Imaging and Remote Sensing Lab web server for public use.
Canny edge-based deformable image registration
NASA Astrophysics Data System (ADS)
Kearney, Vasant; Huang, Yihui; Mao, Weihua; Yuan, Baohong; Tang, Liping
2017-02-01
This work focuses on developing a 2D Canny edge-based deformable image registration (Canny DIR) algorithm to register in vivo white light images taken at various time points. The method uses a sparse interpolation deformation algorithm to sparsely register regions of the image with strong edge information. A stability criterion is enforced that removes regions of edges that do not deform in a smooth, uniform manner. Using a synthetic mouse surface ground truth model, the accuracy of the Canny DIR algorithm was evaluated under axial rotation in the presence of deformation. The accuracy was also tested using fluorescent dye injections, which were then used for gamma analysis to establish a second ground truth. The results indicate that the Canny DIR algorithm performs better than rigid registration, intensity-corrected Demons, and distinctive features for all evaluation metrics and ground truth scenarios. In conclusion, Canny DIR performs well in the presence of the unique lighting and shading variations associated with white-light-based image registration.
First- and third-party ground truth for key frame extraction from consumer video clips
NASA Astrophysics Data System (ADS)
Costello, Kathleen; Luo, Jiebo
2007-02-01
Extracting key frames (KF) from video is of great interest in many applications, such as video summarization, video organization, video compression, and prints from video. KF extraction is not a new problem; however, the current literature has focused mainly on sports or news video. In the consumer video space, the biggest challenges for key frame selection are the unconstrained content and the lack of any preimposed structure. In this study, we conduct ground truth collection of key frames from video clips taken by digital cameras (as opposed to camcorders) using both first- and third-party judges. The goals of this study are: (1) to create a reference database of video clips reasonably representative of the consumer video space; (2) to identify associated key frames against which automated algorithms can be compared and judged for effectiveness; and (3) to uncover the criteria used by both first- and third-party human judges so these criteria can influence algorithm design. The findings from these ground truth collections will be discussed.
The ground truth about metadata and community detection in networks.
Peel, Leto; Larremore, Daniel B; Clauset, Aaron
2017-05-01
Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system's components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks' links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures.
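As a simple, hypothetical illustration of quantifying how well node metadata align with detected communities (one common choice, not the statistical techniques the authors introduce), normalized mutual information between the two label assignments can be computed directly:

```python
# Sketch: normalized mutual information (NMI) between metadata labels
# and detected community labels (toy data, illustrative only).

from collections import Counter
from math import log

def nmi(labels_a, labels_b):
    """NMI between two label assignments over the same nodes."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    cab = Counter(zip(labels_a, labels_b))
    mi = sum(nij / n * log(n * nij / (ca[a] * cb[b]))
             for (a, b), nij in cab.items())
    ha = -sum(c / n * log(c / n) for c in ca.values())
    hb = -sum(c / n * log(c / n) for c in cb.values())
    return mi / (ha * hb) ** 0.5 if ha and hb else 1.0

if __name__ == "__main__":
    metadata = ["x", "x", "y", "y"]      # observed node attributes
    communities = [1, 1, 2, 2]           # detected communities
    print(nmi(metadata, communities))    # perfectly aligned -> 1.0
```

A low NMI, on the paper's argument, need not mean the algorithm failed: the metadata may simply be unrelated to the network's actual community structure.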
Ground truth and benchmarks for performance evaluation
NASA Astrophysics Data System (ADS)
Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.
2003-09-01
Progress in algorithm development and the transfer of results to practical applications such as military robotics requires the establishment of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The fundamental problems include a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multipurpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, a Global Positioning System (GPS), an Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases with which researchers and engineers can evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.
Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K
2018-03-01
The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set size (x) levels, in order to provide guidance on the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different values of x and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different from the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Thus, ground truth statistics of VF sensitivities could be approximated using set sizes significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
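The bootstrap procedure described above can be sketched as follows, using synthetic sensitivities in place of the HFA data. The summary statistics (mean, 5th/95th percentiles, SD across resamples of size x) mirror the abstract; the data and one-variant implementation details are illustrative.

```python
# Bootstrap sketch of the set-size experiment: draw x subjects with
# replacement many times and summarize the resampled means
# (synthetic sensitivities in dB, not the HFA results).

import random
import statistics

def bootstrap_summary(values, set_size, n_resamples=1000, seed=1):
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        sample = [rng.choice(values) for _ in range(set_size)]
        means.append(statistics.mean(sample))
    means.sort()
    return {
        "mean": statistics.mean(means),
        "p5": means[int(0.05 * n_resamples)],   # 5th percentile of means
        "p95": means[int(0.95 * n_resamples)],  # 95th percentile of means
        "sd": statistics.stdev(means),
    }

if __name__ == "__main__":
    sens = [28 + 0.05 * i for i in range(116)]  # synthetic dB sensitivities
    print(bootstrap_summary(sens, set_size=60))
```

Repeating this for increasing x and comparing against the full-cohort statistics is the logic behind finding the smallest x whose limits no longer differ from the original distribution.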
Assessing the validity of commercial and municipal food environment data sets in Vancouver, Canada.
Daepp, Madeleine Ig; Black, Jennifer
2017-10-01
The present study assessed systematic bias and the effects of data set error on the validity of food environment measures in two municipal and two commercial secondary data sets. Sensitivity, positive predictive value (PPV) and concordance were calculated by comparing two municipal and two commercial secondary data sets with ground-truthed data collected within 800 m buffers surrounding twenty-six schools. Logistic regression examined associations of sensitivity and PPV with commercial density and neighbourhood socio-economic deprivation. Kendall's τ estimated correlations between density and proximity of food outlets near schools constructed with secondary data sets v. ground-truthed data. Setting: Vancouver, Canada. Subjects: Food retailers located within 800 m of twenty-six schools. Results: All data sets scored relatively poorly across validity measures, although, overall, municipal data sets had higher levels of validity than did commercial data sets. Food outlets were more likely to be missing from municipal health inspection lists and commercial data sets in neighbourhoods with higher commercial density. Still, both proximity and density measures constructed from all secondary data sets were highly correlated (Kendall's τ>0·70) with measures constructed from ground-truthed data. Despite relatively low levels of validity in all secondary data sets examined, food environment measures constructed from secondary data sets remained highly correlated with ground-truthed data. Findings suggest that secondary data sets can be used to measure the food environment, although estimates should be treated with caution in areas with high commercial density.
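The core validity measures used here and in the ground-truthing study above are straightforward to compute from two store lists. A minimal sketch with toy store IDs (not the Vancouver data):

```python
# Sketch: sensitivity and PPV of a secondary listing against ground truth.
# sensitivity = true positives / all ground-truthed stores
# PPV         = true positives / all listed stores

def sensitivity_ppv(listed, ground_truth):
    listed, ground_truth = set(listed), set(ground_truth)
    true_pos = listed & ground_truth
    sensitivity = len(true_pos) / len(ground_truth)
    ppv = len(true_pos) / len(listed)
    return sensitivity, ppv

if __name__ == "__main__":
    listing = {"A", "B", "C", "D"}       # secondary data set (D is stale)
    truth = {"A", "B", "C", "E", "F"}    # ground-truthed stores (E, F missed)
    print(sensitivity_ppv(listing, truth))  # (0.6, 0.75)
```

Low sensitivity flags stores the listing misses; low PPV flags listed stores that no longer exist, the two distinct error modes these studies report.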
NASA Technical Reports Server (NTRS)
Turner, B. J.; Austin, G. L.
1993-01-01
Three-dimensional radar data for three summer Florida storms are used as input to a microwave radiative transfer model. The model simulates microwave brightness observations by a 19-GHz, nadir-pointing, satellite-borne microwave radiometer. The statistical distribution of rainfall rates for the storms studied, and therefore the optimal conversion between microwave brightness temperatures and rainfall rates, was found to be highly sensitive to the spatial resolution at which observations were made. The optimum relation between the two quantities was less sensitive to the details of the vertical profile of precipitation. Rainfall retrievals were made for a range of microwave sensor footprint sizes. From these simulations, spatial sampling-error estimates were made for microwave radiometers over a range of field-of-view sizes. The necessity of matching the spatial resolution of ground truth to radiometer footprint size is emphasized. A strategy for the combined use of raingages, ground-based radar, microwave, and visible-infrared (VIS-IR) satellite sensors is discussed.
A computer program for the simulation of heat and moisture flow in soils
NASA Technical Reports Server (NTRS)
Camillo, P.; Schmugge, T. J.
1981-01-01
A computer program that simulates the flow of heat and moisture in soils is described. The space-time dependence of temperature and moisture content is described by a set of diffusion-type partial differential equations. The simulator uses a predictor/corrector to numerically integrate them, giving wetness and temperature profiles as a function of time. The simulator was used to generate solutions to diffusion-type partial differential equations for which analytical solutions are known. These equations include both constant and variable diffusivities, and both flux and constant concentration boundary conditions. In all cases, the simulated and analytic solutions agreed to within the error bounds which were imposed on the integrator. Simulations of heat and moisture flow under actual field conditions were also performed. Ground truth data were used for the boundary conditions and soil transport properties. The qualitative agreement between simulated and measured profiles is an indication that the model equations are reasonably accurate representations of the physical processes involved.
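A toy version of such a predictor/corrector integrator, verified against a known analytic solution in the same spirit as the paper's validation, might look like the following (a Heun-type scheme on the 1D heat equation with constant diffusivity; the simulator described above handles coupled heat and moisture equations with variable coefficients):

```python
import math

def simulate_diffusion(n=51, D=1.0, t_end=0.05, dt=1e-5):
    """Heun predictor/corrector on u_t = D u_xx, u(0)=u(1)=0,
    u(x,0)=sin(pi x); analytic solution exp(-D pi^2 t) sin(pi x)."""
    dx = 1.0 / (n - 1)
    u = [math.sin(math.pi * i * dx) for i in range(n)]

    def rhs(v):
        d = [0.0] * n  # boundary values stay fixed at zero
        for i in range(1, n - 1):
            d[i] = D * (v[i - 1] - 2 * v[i] + v[i + 1]) / dx ** 2
        return d

    t = 0.0
    while t < t_end - 1e-12:
        k1 = rhs(u)
        pred = [ui + dt * ki for ui, ki in zip(u, k1)]             # predictor (Euler step)
        k2 = rhs(pred)
        u = [ui + 0.5 * dt * (a + b) for ui, a, b in zip(u, k1, k2)]  # corrector (trapezoid)
        t += dt
    return u, dx, t
```

Agreement with the analytic solution at the grid midpoint provides the same kind of error check the abstract describes.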
A Systematic Approach for Real-Time Operator Functional State Assessment
NASA Technical Reports Server (NTRS)
Zhang, Guangfan; Wang, Wei; Pepe, Aaron; Xu, Roger; Schnell, Thomas; Anderson, Nick; Heitkamp, Dean; Li, Jiang; Li, Feng; McKenzie, Frederick
2012-01-01
A task overload condition often leads to high stress for an operator, causing performance degradation and possibly disastrous consequences. Just as dangerous, with automated flight systems, an operator may experience a task underload condition (during the en-route flight phase, for example), becoming easily bored and finding it difficult to maintain sustained attention. When an unexpected event occurs, either internal or external to the automated system, the disengaged operator may neglect, misunderstand, or respond slowly or inappropriately to the situation. In this paper, we discuss an approach for Operator Functional State (OFS) monitoring in a typical aviation environment. A systematic ground truth finding procedure has been designed based on subjective evaluations, performance measures, and strong physiological indicators. The derived OFS ground truth is continuous in time compared to a very sparse estimation of OFS based on an expert review or subjective evaluations. It can capture the variations of OFS during a mission and thus better guide the training process of the OFS assessment model. Furthermore, an OFS assessment model framework based on advanced machine learning techniques was designed, and the systematic approach was then verified and validated with experimental data collected in a high fidelity Boeing 737 simulator. Preliminary results show highly accurate engagement/disengagement detection, making it suitable for real-time applications to assess pilot engagement.
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: reconstructed conductivity changes along selected vertical lines. For each reconstructed image, as well as the ground truth image, conductivity changes along the selected left and right vertical lines are plotted; in these plots, GT stands for ground truth, TV for the total variation method, and TGV for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also shown.
Svoboda, David; Ulman, Vladimir
2017-01-01
The proper analysis of biological microscopy images is an important and complex task. Therefore, it requires verification of all steps involved in the process, including image segmentation and tracking algorithms. It is generally better to verify algorithms with computer-generated ground truth datasets, which, compared to manually annotated data, nowadays have reached high quality and can be produced in large quantities even for 3D time-lapse image sequences. Here, we propose a novel framework, called MitoGen, which is capable of generating ground truth datasets with fully 3D time-lapse sequences of synthetic fluorescence-stained cell populations. MitoGen shows biologically justified cell motility, shape and texture changes as well as cell divisions. Standard fluorescence microscopy phenomena such as photobleaching, blur with real point spread function (PSF), and several types of noise, are simulated to obtain realistic images. The MitoGen framework is scalable in both space and time. MitoGen generates visually plausible data that shows good agreement with real data in terms of image descriptors and mean square displacement (MSD) trajectory analysis. Additionally, it is also shown in this paper that four publicly available segmentation and tracking algorithms exhibit similar performance on both real and MitoGen-generated data. The implementation of MitoGen is freely available.
Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh
2013-01-01
In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information and a least-square support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using the toolbox FMRIB's automated segmentation tool integrated in the FSL software (FSL-FAST) developed in Oxford Centre for functional MRI of the brain (FMRIB). Then, in the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for LS-SVM are selected from the registered brain atlas. The voxel intensities and spatial positions are selected as the two feature groups for training and test. SVM as a powerful discriminator is able to handle nonlinear classification problems; however, it cannot provide posterior probability. Thus, we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from the simulated magnetic resonance imaging (MRI) using Brainweb MRI simulator and real data provided by Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparing to the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for the quantitative validation of the results. The quantitative results show that the proposed method segments brain tissues accurately with respect to corresponding ground truth. PMID:24696800
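The sigmoid mapping of SVM outputs to probabilities and the Dice/Jaccard validation metrics mentioned above can be sketched as follows (the sigmoid parameters a and b are placeholders that would normally be fitted to held-out data, Platt-style):

```python
import math

def sigmoid_probability(svm_output, a=-1.0, b=0.0):
    """Platt-style sigmoid mapping of an SVM decision value to a
    posterior probability; a and b are illustrative, not fitted."""
    return 1.0 / (1.0 + math.exp(a * svm_output + b))

def dice_jaccard(seg, truth):
    """Dice and Jaccard coefficients for two binary label sequences
    (segmentation vs ground truth)."""
    inter = sum(1 for s, t in zip(seg, truth) if s and t)
    n_seg = sum(1 for s in seg if s)
    n_truth = sum(1 for t in truth if t)
    dice = 2 * inter / (n_seg + n_truth)
    jaccard = inter / (n_seg + n_truth - inter)
    return dice, jaccard
```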
Image quality specification and maintenance for airborne SAR
NASA Astrophysics Data System (ADS)
Clinard, Mark S.
2004-08-01
Specification, verification, and maintenance of image quality over the lifecycle of an operational airborne SAR begin with the specification for the system itself. Verification of image quality-oriented specification compliance can be enhanced by including a specification requirement that a vendor provide appropriate imagery at the various phases of the system life cycle. The nature and content of the imagery appropriate for each stage of the process depends on the nature of the test, the economics of collection, and the availability of techniques to extract the desired information from the data. At the earliest lifecycle stages, Concept and Technology Development (CTD) and System Development and Demonstration (SDD), the test set could include simulated imagery to demonstrate the mathematical and engineering concepts being implemented thus allowing demonstration of compliance, in part, through simulation. For Initial Operational Test and Evaluation (IOT&E), imagery collected from precisely instrumented test ranges and targets of opportunity consisting of a priori or a posteriori ground-truthed cultural and natural features are of value to the analysis of product quality compliance. Regular monitoring of image quality is possible using operational imagery and automated metrics; more precise measurements can be performed with imagery of instrumented scenes, when available. A survey of image quality measurement techniques is presented along with a discussion of the challenges of managing an airborne SAR program with the scarce resources of time, money, and ground-truthed data. Recommendations are provided that should allow an improvement in the product quality specification and maintenance process with a minimal increase in resource demands on the customer, the vendor, the operational personnel, and the asset itself.
NASA Technical Reports Server (NTRS)
Jones, E. B.
1975-01-01
The soil moisture ground-truth measurements and ground-cover descriptions taken at three soil moisture survey sites located near Lafayette, Indiana; St. Charles, Missouri; and Centralia, Missouri are given. The data were taken on November 10, 1975, in connection with airborne remote sensing missions being flown by the Environmental Research Institute of Michigan under the auspices of the National Aeronautics and Space Administration. Emphasis was placed on the soil moisture in bare fields. Soil moisture was sampled in the top 0 to 1 in. and 0 to 6 in. by means of a soil sampling push tube. These samples were then placed in plastic bags to await gravimetric analysis.
Snowpack ground truth: Radar test site, Steamboat Springs, Colorado, 8-16 April 1976
NASA Technical Reports Server (NTRS)
Howell, S.; Jones, E. B.; Leaf, C. F.
1976-01-01
Ground-truth data taken at Steamboat Springs, Colorado are presented. Data taken during the period April 8, 1976 - April 16, 1976 included the following: (1) snow depths and densities at selected locations (using a Mount Rose snow tube); (2) snow pits for temperature, density, and liquid water determinations using the freezing calorimetry technique and vertical layer classification; (3) snow walls of various cross sections, constructed and documented with respect to size and snow characteristics; (4) soil moisture at selected locations; and (5) appropriate air temperature and weather data.
Standardized UXO Technology Demonstration Site Moguls Scoring Record Number 912 (Sky Research, Inc.)
2008-09-01
south from the northern end point. 2) A metallic pin-flag is placed over the midpoint. 3) The operator logs data along the same path...buried UXO or other metallic debris. A 5-meter-length of line is walked in eight cardinal directions (N-S, S-N, E-W, W-E, SE-NW, NW-SE, SW-NE, NE-SW...points have been rounded to protect the ground truth. The overall ground truth is composed of ferrous and nonferrous anomalies. Due to limitations
NASA Astrophysics Data System (ADS)
Crossley, D. J.; Borsa, A. A.; Murphy, T.
2017-12-01
We continue the analysis of superconducting gravimeter (SG) and GPS data at Apache Point Observatory (APO) as part of the astrophysical effort to reduce LLR errors to the mm level. With 8 years of data accumulated, the main impediment to getting benefit from the SG data is the assessment of the hydrology signal that arises mainly from the attraction of local water masses close to the site. Traditional SG processing attempts to remove as much signal as possible from the loading and attraction contributions, but we are limited at APO because there is no hydrology ground truth. Nevertheless, we produce a gravity residual that corresponds to some extent with the rather noisy vertical GPS data from Plate Boundary Site PB07 close to Sunspot observatory 2 km from APO. The main goal of this paper, apart from updating the gravity and GPS corrections using recent models, is to construct simulated SG and GPS time series from the synthetic source functions - ground uplift, hydrology attraction and loading - and to perform an inversion to see what can be recovered of the vertical ground motion. The simulation will also include a first look at the effect of this synthetic local data on the current Planetary Ephemeris Program solution for the lunar distance.
The ground truth about metadata and community detection in networks
Peel, Leto; Larremore, Daniel B.; Clauset, Aaron
2017-01-01
Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system’s components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks’ links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures. PMID:28508065
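One simple way to quantify how much information node metadata carry about detected communities (a common summary statistic, not the model-based techniques introduced in the paper) is the normalized mutual information between the two partitions:

```python
import math
from collections import Counter

def normalized_mutual_info(labels_a, labels_b):
    """NMI between two partitions (e.g. node metadata vs detected
    communities): 1.0 means identical up to relabelling, near 0 means
    the metadata carry no information about community structure."""
    n = len(labels_a)
    pa, pb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    mi = sum(c / n * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in joint.items())
    ha = -sum(c / n * math.log(c / n) for c in pa.values())
    hb = -sum(c / n * math.log(c / n) for c in pb.values())
    return mi / math.sqrt(ha * hb) if ha and hb else 0.0
```

A high NMI indicates the metadata align with one particular community division; a low NMI, as the paper stresses, does not by itself mean the detection algorithm failed.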
Comparing the accuracy of food outlet datasets in an urban environment.
Wong, Michelle S; Peyton, Jennifer M; Shields, Timothy M; Curriero, Frank C; Gudzune, Kimberly A
2017-05-11
Studies that investigate the relationship between the retail food environment and health outcomes often use geospatial datasets. Prior studies have identified challenges of using the most common data sources. Retail food environment datasets created through academic-government partnership present an alternative, but their validity (retail existence, type, location) has not been assessed yet. In our study, we used ground-truth data to compare the validity of two datasets, a 2015 commercial dataset (InfoUSA) and data collected from 2012 to 2014 through the Maryland Food Systems Mapping Project (MFSMP), an academic-government partnership, on the retail food environment in two low-income, inner city neighbourhoods in Baltimore City. We compared sensitivity and positive predictive value (PPV) of the commercial and academic-government partnership data to ground-truth data for two broad categories of unhealthy food retailers: small food retailers and quick-service restaurants. Ground-truth data was collected in 2015 and analysed in 2016. Compared to the ground-truth data, MFSMP and InfoUSA generally had similar sensitivity that was greater than 85%. MFSMP had higher PPV compared to InfoUSA for both small food retailers (MFSMP: 56.3% vs InfoUSA: 40.7%) and quick-service restaurants (MFSMP: 58.6% vs InfoUSA: 36.4%). We conclude that data from academic-government partnerships like MFSMP might be an attractive alternative option and improvement to relying only on commercial data. Other research institutes or cities might consider efforts to create and maintain such an environmental dataset. Even if these datasets cannot be updated on an annual basis, they are likely more accurate than commercial data.
NASA Astrophysics Data System (ADS)
Hutchison, Keith D.; Etherton, Brian J.; Topping, Phillip C.
1996-12-01
Quantitative assessments on the performance of automated cloud analysis algorithms require the creation of highly accurate, manual cloud, no cloud (CNC) images from multispectral meteorological satellite data. In general, the methodology to create ground truth analyses for the evaluation of cloud detection algorithms is relatively straightforward. However, when focus shifts toward quantifying the performance of automated cloud classification algorithms, the task of creating ground truth images becomes much more complicated since these CNC analyses must differentiate between water and ice cloud tops while ensuring that inaccuracies in automated cloud detection are not propagated into the results of the cloud classification algorithm. The process of creating these ground truth CNC analyses may become particularly difficult when little or no spectral signature is evident between a cloud and its background, as appears to be the case when thin cirrus is present over snow-covered surfaces. In this paper, procedures are described that enhance the researcher's ability to manually interpret and differentiate between thin cirrus clouds and snow-covered surfaces in daytime AVHRR imagery. The methodology uses data in up to six AVHRR spectral bands, including an additional band derived from the daytime 3.7 micron channel, which has proven invaluable for the manual discrimination between thin cirrus clouds and snow. It is concluded that while the 1.6 micron channel remains essential for differentiating between thin ice clouds and snow, the capability provided by the 3.7 micron channel may be lost if that channel switches to a nighttime-only transmission with the launch of future NOAA satellites.
The Conflict between Science and Religion: A Discussion on the Possibilities for Settlement
ERIC Educational Resources Information Center
Falcao, Eliane Brigida Morais
2010-01-01
In his article "Skepticism, truth as coherence, and constructivist epistemology: grounds for resolving the discord between science and religion?", John Staver identifies what he considers to be the source of the conflicts between science and religion: the establishment of the relationship between truth and knowledge, from the perspective of those…
First stereo video dataset with ground truth for remote car pose estimation using satellite markers
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Pierini, Marco
2018-04-01
Leading causes of PTW (Powered Two-Wheeler) crashes and near misses in urban areas are a failure or delayed prediction of the changing trajectories of other vehicles. Regrettably, misperception by both car drivers and motorcycle riders results in fatal or serious consequences for riders. Intelligent vehicles could provide early warning about possible collisions, helping to avoid the crash. There is evidence that stereo cameras can be used for estimating the heading angle of other vehicles, which is key to anticipating their imminent location, but there is limited heading ground truth data available in the public domain. Consequently, we employed a marker-based technique for creating ground truth of car pose and built a dataset for computer vision benchmarking purposes. This dataset of a moving vehicle collected from a static mounted stereo camera is a simplification of a complex and dynamic reality, which serves as a test bed for car pose estimation algorithms. The dataset contains the accurate pose of the moving obstacle, and realistic imagery including texture-less and non-lambertian surfaces (e.g. reflectance and transparency).
Simonsen, Daniel; Nielsen, Ida F; Spaich, Erika G; Andersen, Ole K
2017-05-02
The present paper describes the design and evaluation of an automated version of the Modified Jebsen Test of Hand Function (MJT) based on the Microsoft Kinect sensor. The MJT was administered twice to 11 chronic stroke subjects with varying degrees of hand function deficits. The test times of the MJT were evaluated manually by a therapist using a stopwatch, and automatically using the Microsoft Kinect sensor. The ground truth times were assessed based on inspection of the video-recordings. The agreement between the methods was evaluated along with the test-retest performance. The results from Bland-Altman analysis showed better agreement between the ground truth times and the automatic MJT time evaluations compared to the agreement between the ground truth times and the times estimated by the therapist. The results from the test-retest performance showed that the subjects significantly improved their performance in several subtests of the MJT, indicating a practice effect. The results from the test showed that the Kinect can be used for automating the MJT.
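The Bland-Altman comparison used in the evaluation above reduces to a bias (mean difference) and 95% limits of agreement over paired measurements; a minimal sketch:

```python
import statistics

def bland_altman(method_a, method_b):
    """Bland-Altman bias and 95% limits of agreement between two sets
    of paired measurements (e.g. sensor-estimated vs ground-truth
    test times)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Narrower limits of agreement against the ground-truth times indicate better agreement, which is the comparison reported in the abstract.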
New Ground Truth Capability from InSAR Time Series Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, S; Vincent, P; Yang, D
2005-07-13
We demonstrate that next-generation interferometric synthetic aperture radar (InSAR) processing techniques applied to existing data provide rich InSAR ground truth content for exploitation in seismic source identification. InSAR time series analyses utilize tens of interferograms and can be implemented in different ways. In one such approach, conventional InSAR displacement maps are inverted in a final post-processing step. Alternatively, computationally intensive data reduction can be performed with specialized InSAR processing algorithms. The typical final result of these approaches is a synthesized set of cumulative displacement maps. Examples from our recent work demonstrate that these InSAR processing techniques can provide appealing new ground truth capabilities. We construct movies showing the areal and temporal evolution of deformation associated with previous nuclear tests. In other analyses, we extract time histories of centimeter-scale surface displacement associated with tunneling. The potential exists to identify millimeter per year surface movements when sufficient data exists for InSAR techniques to isolate and remove phase signatures associated with digital elevation model errors and the atmosphere.
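The inversion of an interferogram network into a displacement time series mentioned above can be illustrated for a single pixel. The following is an SBAS-style least-squares sketch under my own simplified setup (unit-free phases, no atmospheric or DEM-error terms), not the authors' processing chain:

```python
def invert_insar_network(pairs, phases, n_dates):
    """Least-squares inversion of a small interferogram network into
    incremental displacements between consecutive acquisition dates,
    for one pixel. pairs[r] = (i, j) means interferogram r measures
    displacement between dates i and j; phases[r] is its value."""
    m = n_dates - 1  # unknown increments d_1..d_m
    # design matrix: interferogram (i, j) sums increments i+1..j
    A = [[1.0 if i < k + 1 <= j else 0.0 for k in range(m)] for i, j in pairs]
    # normal equations AtA x = Atb
    AtA = [[sum(A[r][p] * A[r][q] for r in range(len(A))) for q in range(m)]
           for p in range(m)]
    Atb = [sum(A[r][p] * phases[r] for r in range(len(A))) for p in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(AtA[r][col]))
        AtA[col], AtA[piv] = AtA[piv], AtA[col]
        Atb[col], Atb[piv] = Atb[piv], Atb[col]
        for r in range(col + 1, m):
            f = AtA[r][col] / AtA[col][col]
            for c in range(col, m):
                AtA[r][c] -= f * AtA[col][c]
            Atb[r] -= f * Atb[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (Atb[r] - sum(AtA[r][c] * x[c] for c in range(r + 1, m))) / AtA[r][r]
    # cumulative displacement at each date relative to the first
    cum = [0.0]
    for d in x:
        cum.append(cum[-1] + d)
    return cum
```

Applied per pixel over tens of interferograms, this kind of inversion yields the synthesized cumulative displacement maps the abstract describes.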
NASA Astrophysics Data System (ADS)
Klump, Jens; Robertson, Jess
2016-04-01
The spatial and temporal extent of geological phenomena makes experiments in geology difficult to conduct, if not entirely impossible, and collection of data is laborious and expensive - so expensive that most of the time we cannot test a hypothesis. The aim, in many cases, is to gather enough data to build a predictive geological model. Even in a mine, where data are abundant, a model remains incomplete because the information at the level of a blasting block is two orders of magnitude larger than the sample from a drill core, and we have to take measurement errors into account. So, what confidence can we have in a model based on sparse data, uncertainties and measurement error? Our framework consists of two layers: (a) a ground-truth layer that contains geological models, which can be statistically based on historical operations data, and (b) a network of RESTful synthetic sensor microservices which can query the ground truth for underlying properties and produce a simulated measurement to a control layer, which could be a database or LIMS, a machine learner or a company's existing data infrastructure. Ground truth data are generated by an implicit geological model which serves as a host for nested models of geological processes at smaller scales. Our two layers are implemented using Flask and Gunicorn, an open-source Python web application framework and server, the PyData stack (numpy, scipy etc.) and RabbitMQ (an open-source message broker). Sensor data are encoded using a JSON-LD version of the SensorML and Observations and Measurements standards. Containerisation of the synthetic sensors using Docker and CoreOS allows rapid and scalable deployment of large numbers of sensors, as well as sensor discovery to form a self-organized dynamic network of sensors.
Real-time simulation of data sources can be used to investigate crucial questions such as the potential information gain from future sensing capabilities, or from new sampling strategies, or the combination of both, and it enables us to test many "what if?" questions, both in geology and in data engineering. What would we be able to see if we could obtain data at higher resolution? How would real-time data analysis change sampling strategies? Can our data infrastructure handle many new real-time data streams? What feature engineering can be deduced for machine learning approaches? By providing a 'data sandbox' able to scale to realistic geological scenarios we hope to start answering some of these questions. Faults happen in real-world networks. Future work will investigate the effect of failure on dynamic sensor networks and the impact on the predictive capability of machine learning algorithms.
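A stripped-down, stdlib-only sketch of the synthetic-sensor idea follows. The real system serves SensorML/O&M JSON-LD over Flask microservices; here the ground-truth model function, its coefficients, and the JSON field names are all illustrative stand-ins:

```python
import json
import random

def ground_truth_grade(x, y):
    """Toy implicit geological model: ore grade as a smooth function
    of position (a stand-in for the ground-truth layer)."""
    return 2.0 + 0.5 * x - 0.3 * y

def synthetic_sensor_reading(x, y, noise_sd=0.1, seed=None):
    """Query the ground truth and return a simulated measurement with
    Gaussian measurement error, encoded as a small JSON document (a
    stand-in for the payload a RESTful sensor microservice would serve)."""
    rng = random.Random(seed)
    observed = ground_truth_grade(x, y) + rng.gauss(0.0, noise_sd)
    return json.dumps({"x": x, "y": y,
                       "observedProperty": "ore_grade",
                       "result": round(observed, 4)})
```

Because the ground truth is known exactly, the gap between `result` and the model value is a controllable measurement error, which is what makes the "what if?" experiments above possible.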
AMS Ground Truth Measurements: Calibrations and Test Lines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wasiolek, Piotr T.
2015-12-01
Airborne gamma spectrometry is one of the primary techniques used to define the extent of ground contamination after a radiological incident. Its usefulness was demonstrated extensively during the response to the Fukushima NPP accident in March-May 2011. To map ground contamination, a set of scintillation detectors is mounted on an airborne platform (airplane or helicopter) and flown over contaminated areas. The acquisition system collects spectral information together with the aircraft position and altitude every second. To provide useful information to decision makers, the count data, expressed in counts per second (cps), need to be converted to a terrestrial component of the exposure rate at 1 meter (m) above ground, or surface activity of the isotopes of concern. This is done using conversion coefficients derived from calibration flights. During a large-scale radiological event, multiple flights may be necessary and may require use of assets from different agencies. However, because production of a single, consistent map product depicting the ground contamination is the primary goal, it is critical to establish a common calibration line very early into the event. Such a line should be flown periodically in order to normalize data collected from different aerial acquisition systems and that are potentially flown at different flight altitudes and speeds. In order to verify and validate individual aerial systems, the calibration line needs to be characterized in terms of ground truth measurements. This is especially important if the contamination is due to short-lived radionuclides. The process of establishing such a line, as well as necessary ground truth measurements, is described in this document.
AMS Ground Truth Measurements: Calibration and Test Lines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wasiolek, P.
2013-11-01
Airborne gamma spectrometry is one of the primary techniques used to define the extent of ground contamination after a radiological incident. Its usefulness was demonstrated extensively during the response to the Fukushima nuclear power plant (NPP) accident in March-May 2011. To map ground contamination a set of scintillation detectors is mounted on an airborne platform (airplane or helicopter) and flown over contaminated areas. The acquisition system collects spectral information together with the aircraft position and altitude every second. To provide useful information to decision makers, the count rate data expressed in counts per second (cps) need to be converted to the terrestrial component of the exposure rate 1 m above ground, or surface activity of isotopes of concern. This is done using conversion coefficients derived from calibration flights. During a large scale radiological event, multiple flights may be necessary and may require use of assets from different agencies. However, as the production of a single, consistent map product depicting the ground contamination is the primary goal, it is critical to establish very early into the event a common calibration line. Such a line should be flown periodically in order to normalize data collected from different aerial acquisition systems and potentially flown at different flight altitudes and speeds. In order to verify and validate individual aerial systems, the calibration line needs to be characterized in terms of ground truth measurements. This is especially important if the contamination is due to short-lived radionuclides. The process of establishing such a line, as well as necessary ground truth measurements, is described in this document.
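The cps-to-exposure-rate conversion described in both records above can be sketched as follows. The conversion coefficient and the air-attenuation coefficient are hypothetical stand-ins for values that calibration-line flights would supply; the exponential altitude normalization is a standard simplification:

```python
import math

def exposure_rate_from_counts(cps, conv_coeff, altitude_m, ref_altitude_m,
                              mu=0.006):
    """Convert aerial gross counts (cps) to an estimate of the
    terrestrial exposure rate at 1 m above ground. conv_coeff (exposure
    per normalized cps) and mu (air attenuation, per metre) are
    illustrative placeholders, not calibrated values."""
    # normalize counts to the reference survey altitude, then apply
    # the calibration-derived conversion coefficient
    normalized_cps = cps * math.exp(mu * (altitude_m - ref_altitude_m))
    return conv_coeff * normalized_cps
```

Flying different acquisition systems over the same calibration line is what pins down `conv_coeff` (and the altitude correction) consistently across platforms.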
The GSFC Mark-2 three band hand-held radiometer. [thematic mapper for ground truth data collection
NASA Technical Reports Server (NTRS)
Tucker, C. J.; Jones, W. H.; Kley, W. A.; Sundstrom, G. J.
1980-01-01
A self-contained, portable, hand-held radiometer designed for field usage was constructed and tested. The device, consisting of a hand-held probe containing three sensors and a strap-supported electronics module, weighs 4 1/2 kilograms. It is powered by flashlight and transistor radio batteries, utilizes two silicon detectors and one lead sulfide detector, has three liquid crystal displays and sample-and-hold radiometric sampling, and its spectral configuration corresponds to LANDSAT-D's thematic mapper bands. The device was designed to support thematic mapper ground-truth data collection efforts and to facilitate 'in situ' ground-based remote sensing studies of natural materials. Prototype instruments were extensively tested under laboratory and field conditions with excellent results.
NASA Astrophysics Data System (ADS)
Zhang, Liangjing; Dobslaw, Henryk; Dahle, Christoph; Thomas, Maik; Neumayer, Karl-Hans; Flechtner, Frank
2017-04-01
Operating for more than a decade now, the GRACE satellite mission provides valuable information on total water storage (TWS) for hydrological and hydro-meteorological applications. The increasing interest in the use of GRACE-based TWS requires an in-depth assessment of the reliability of the outputs and of their uncertainties. Over years of development, different post-processing methods have been suggested for TWS estimation. However, because GRACE offers a unique way to provide TWS at high spatial and temporal scales, no global ground truth data are available to fully validate the results. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-type gravity field time series based on realistic orbit and instrument error assumptions, as well as background error assumptions from the updated ESA Earth System Model. Three non-isotropic filter methods from Kusche (2007) and a combined filter from DDK1 and DDK3 based on the ground tracks are tested. Rescaling factors estimated from five different hydrological models and the ensemble median are applied to the post-processed simulated GRACE-type TWS estimates to correct for bias and leakage. Time-variant rescaling factors, both as monthly scaling factors and as separate scaling factors for seasonal and long-term variations, are investigated as well. Since TWS anomalies from the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment (Zhang et al., 2016) and will subsequently recommend a processing strategy that shall also be applied to the planned GRACE and GRACE-FO Level-3 products for terrestrial applications provided by GFZ.
Kusche, J., 2007: Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81(11), 733-749, doi:10.1007/s00190-007-0143-3.
Zhang, L., Dobslaw, H., Thomas, M., 2016: Globally gridded terrestrial water storage variations from GRACE satellite gravimetry for hydrometeorological applications. Geophys. J. Int., 206(1), 368-378, doi:10.1093/gji/ggw153.
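The rescaling-factor correction for bias and leakage mentioned above can be illustrated with a toy least-squares gain fit. The actual factor estimation against hydrological models is far more involved; this sketch only shows the basic idea of recovering amplitude damped by filtering.

```python
import numpy as np

def rescaling_factor(model_tws, filtered_tws):
    """Least-squares gain k minimizing ||model - k * filtered||^2 over a time series."""
    f = np.asarray(filtered_tws, dtype=float)
    m = np.asarray(model_tws, dtype=float)
    return float(np.dot(m, f) / np.dot(f, f))

# Toy example: filtering damped a seasonal TWS signal to 60% of its true amplitude
t = np.arange(120)                                # 120 monthly samples
truth = 10.0 * np.sin(2 * np.pi * t / 12)         # "model" annual cycle, cm EWH
filtered = 0.6 * truth                            # amplitude loss from smoothing
k = rescaling_factor(truth, filtered)             # recovers ~1/0.6
```

Multiplying the filtered series by `k` restores the damped amplitude; in practice, separate factors per month or per signal component (seasonal vs. long-term) are estimated, as the abstract notes.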
Ground Truth Sampling and LANDSAT Accuracy Assessment
NASA Technical Reports Server (NTRS)
Robinson, J. W.; Gunther, F. J.; Campbell, W. J.
1982-01-01
It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of those classifications and of the other procedures used. The purpose of the accuracy assessment was to allow comparison of the cost and accuracy of various classification procedures as applied to various data types.
NASA Technical Reports Server (NTRS)
Botkin, Daniel B.
1987-01-01
The analysis of ground-truth data from the boreal forest plots in the Superior National Forest, Minnesota, was completed. Development of statistical methods was completed for dimension analysis (equations to estimate the biomass of trees from measurements of diameter and height). The dimension-analysis equations were applied to the data obtained from ground-truth plots to estimate the biomass. Classification and analyses of remote sensing images of the Superior National Forest were done as a test of the technique to determine forest biomass and ecological state by remote sensing. Data were archived on diskette and tape and transferred to UCSB for use in subsequent research.
Machine processing of ERTS and ground truth data
NASA Technical Reports Server (NTRS)
Rogers, R. H. (Principal Investigator); Peacock, K.
1973-01-01
The author has identified the following significant results. Results achieved by ERTS Atmospheric Experiment PR303, whose objective is to establish a radiometric calibration technique, are reported. This technique, which determines and removes solar and atmospheric parameters that degrade the radiometric fidelity of ERTS-1 data, transforms the ERTS-1 sensor radiance measurements into absolute target reflectance signatures. A radiant power measuring instrument and its use in determining atmospheric parameters needed for ground truth are discussed. The procedures used and results achieved in machine processing ERTS-1 computer-compatible tapes and atmospheric parameters to obtain target reflectance are reviewed.
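The radiance-to-reflectance transformation described above can be sketched with a simple single-scattering at-sensor radiance model. The model form, symbols, and values below are assumptions for illustration only, not the experiment's actual calibration equations.

```python
import math

def target_reflectance(L_sensor, L_path, E_sun, sun_zenith_deg, tau_up, tau_down):
    """Invert a simple at-sensor radiance model for surface reflectance rho:

        L_sensor = L_path + tau_up * rho * (E_sun * cos(theta_s) * tau_down) / pi

    L_path is the atmospheric path radiance; tau_up/tau_down are the view- and
    sun-path transmittances; E_sun is the exo-atmospheric solar irradiance.
    """
    ground_irradiance = E_sun * math.cos(math.radians(sun_zenith_deg)) * tau_down
    return math.pi * (L_sensor - L_path) / (tau_up * ground_irradiance)

# Round trip: synthesize a sensor radiance for rho = 0.30, then recover it
rho, E_sun, zen, tu, td, Lp = 0.30, 1850.0, 30.0, 0.80, 0.85, 10.0
irr = E_sun * math.cos(math.radians(zen)) * td
L = Lp + tu * rho * irr / math.pi
recovered = target_reflectance(L, Lp, E_sun, zen, tu, td)
```

The point of the calibration experiment is precisely to measure the atmospheric terms (`L_path`, transmittances) so that this inversion yields absolute target reflectance rather than raw sensor radiance.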
1997-09-05
explosions be used as sources of ground truth information? Can these sources be used as surrogates for single-fired explosions in regions where no such ... sources exist? (Stump) 3. Is there a single regional discriminant that will work for all mining explosions, or will it be necessary to apply a suite of ... the Treaty be used to take advantage of mining sources as ground truth information? Is it possible to use such information to "fingerprint" mines
Recommended data sets, corn segments and spring wheat segments, for use in program development
NASA Technical Reports Server (NTRS)
Austin, W. W. (Principal Investigator)
1981-01-01
The sets of Large Area Crop Inventory Experiment sites, crop year 1978, which are recommended for use in the development and evaluation of classification techniques based on LANDSAT spectral data are presented. For each site: (1) accuracy-assessment digitized ground truth exists; (2) a minimum of 5 percent of the scene ground truth is identified as corn or spring wheat; and (3) at least four acquisitions of acceptable data quality were made during the growing season of the crop of interest. The recommended data sets consist of 41 corn/soybean sites and 17 spring wheat sites.
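The three selection criteria above amount to a simple filter over candidate sites. The site IDs and field names below are invented for illustration; only the thresholds come from the abstract.

```python
# Hypothetical candidate segments; "crop_pct" is percent of scene ground truth
# identified as the crop of interest, "acquisitions" counts acceptable-quality passes.
sites = [
    {"id": "1380", "has_gt": True,  "crop_pct": 12.0, "acquisitions": 6},
    {"id": "1457", "has_gt": True,  "crop_pct": 3.5,  "acquisitions": 5},
    {"id": "1520", "has_gt": False, "crop_pct": 22.0, "acquisitions": 7},
    {"id": "1612", "has_gt": True,  "crop_pct": 8.1,  "acquisitions": 3},
    {"id": "1633", "has_gt": True,  "crop_pct": 6.4,  "acquisitions": 4},
]

def qualifies(site):
    """All three LACIE recommendation criteria must hold."""
    return (site["has_gt"]
            and site["crop_pct"] >= 5.0
            and site["acquisitions"] >= 4)

recommended = [s["id"] for s in sites if qualifies(s)]
```

Here only segments 1380 and 1633 pass all three criteria; the others each fail exactly one.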
Explosion Source Location Study Using Collocated Acoustic and Seismic Networks in Israel
NASA Astrophysics Data System (ADS)
Pinsky, V.; Gitterman, Y.; Arrowsmith, S.; Ben-Horin, Y.
2013-12-01
We explore joint analysis of seismic and infrasonic signals to improve automatic monitoring of small local/regional events, such as construction and quarry blasts, military chemical explosions, and sonic booms, using collocated seismic and infrasonic networks recently built in Israel (ISIN) within the framework of a project sponsored by the Binational USA-Israel Science Foundation (BSF). The general target is to create an automatic system that will provide detection, location, and identification of explosions in real time or near real time. At the moment the network comprises 15 stations, each hosting a microphone and a seismometer (or accelerometer), operated by the Geophysical Institute of Israel (GII), plus two infrasonic arrays operated by the National Data Center, Soreq: IOB in the south (Negev desert) and IMA in the north of Israel (Upper Galilee), collocated with the IMS seismic array MMAI. The study utilizes a ground-truth database of numerous Rotem phosphate quarry blasts, a number of controlled explosions for demolition of outdated ammunition, and experimental surface explosions for structure-protection research at the Sayarim Military Range. A special event, comprising four military explosions in a neighboring country, which produced both strong seismic (up to 400 km) and infrasound (up to 300 km) waves, is also analyzed. For all of these events the ground-truth coordinates and/or the results of seismic location by the Israel Seismic Network (ISN) have been provided. For automatic event detection and phase picking we tested a new recursive picker based on a statistically optimal detector. The results were compared to the manual picks.
Several location techniques have been tested using the ground-truth event recordings, and the preliminary results have been compared to the ground-truth locations: 1) a number of events were located from the intersection of azimuths estimated by applying the wide-band F-K analysis technique to the infrasonic phases at the two distant arrays; 2) a standard robust grid-search location procedure based on phase picks and a constant celerity per phase (tropospheric or stratospheric) was applied; 3) a joint coordinate grid-search procedure using array waveforms and phase picks was tested; 4) the Bayesian Infrasonic Source Localization (BISL) method, incorporating semi-empirical model-based prior information, was modified for an array+network configuration and applied to the ground-truth events. For this purpose we accumulated data from former observations of air-to-ground infrasonic phases to compute station-specific ground-truth celerity-range histograms (ssgtCRH) and/or model-based CRH (mbCRH), which substantially improve the location results. To build the mbCRH, local meteorological data and ray-tracing modeling in three available azimuth ranges, accounting for seasonal variations in wind directivity (quadrants North: 315-45, South: 135-225, East: 45-135), were used.
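Technique 2 above, a grid search with a constant celerity per phase, can be sketched as follows. The station geometry, celerity value, and grid are synthetic; a real implementation would also handle pick uncertainties and phase-type ambiguity.

```python
import numpy as np

def grid_search_locate(stations, picks, celerity_km_s, origin_time, grid):
    """Pick the grid node minimizing the RMS residual between observed arrival
    times and predicted times = origin_time + distance / celerity."""
    best, best_rms = None, np.inf
    picks = np.asarray(picks, dtype=float)
    for gx, gy in grid:
        preds = np.array([origin_time + np.hypot(gx - sx, gy - sy) / celerity_km_s
                          for sx, sy in stations])
        rms = float(np.sqrt(np.mean((picks - preds) ** 2)))
        if rms < best_rms:
            best, best_rms = (gx, gy), rms
    return best, best_rms

# Synthetic test: true source at (30, 40) km, stratospheric-like celerity 0.30 km/s
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
true_xy, t0, celerity = (30.0, 40.0), 0.0, 0.30
picks = [t0 + np.hypot(true_xy[0] - sx, true_xy[1] - sy) / celerity
         for sx, sy in stations]
grid = [(x, y) for x in range(0, 101, 10) for y in range(0, 101, 10)]
best, rms = grid_search_locate(stations, picks, celerity, t0, grid)
```

When the true node is on the grid and the celerity assumption matches, the residual at the true node is zero; in practice the celerity-range histograms (ssgtCRH/mbCRH) replace this single constant-celerity assumption.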
Science, Religion, and the Quest for Knowledge and Truth: An Islamic Perspective
ERIC Educational Resources Information Center
Guessoum, Nidhal
2010-01-01
This article consists of two parts. The first one is to a large extent a commentary on John R. Staver's "Skepticism, truth as coherence, and constructivist epistemology: grounds for resolving the discord between science and religion?" The second part is a related overview of Islam's philosophy of knowledge and, to a certain degree, science. In…
Mastin, Mark; Josberger, Edward
2014-01-01
Seasonally frozen ground occurs over approximately one-third of the contiguous United States, causing increased winter runoff. Frozen ground generally rejects potential groundwater recharge. Nearly all recharge from precipitation in semi-arid regions such as the Columbia Plateau and the Snake River Plain in Idaho, Oregon, and Washington occurs between October and March, when precipitation is most abundant and seasonally frozen ground is commonplace. The temporal and spatial distribution of frozen ground is expected to change as the climate warms. It is difficult to predict the distribution of frozen ground, however, because of the complex ways ground freezes and the way that snow cover thermally insulates soil, either keeping it frozen longer than it would be if it were not snow covered or, more commonly, keeping the soil thawed during freezing weather. A combination of satellite remote sensing and ground-truth measurements was used with some success to investigate seasonally frozen ground at local to regional scales. The frozen-ground/snow-cover algorithm from the National Snow and Ice Data Center, combined with the 21-year record of passive microwave observations from the Special Sensor Microwave Imager onboard a Defense Meteorological Satellite Program satellite, provided a unique time series of frozen ground. Periodically repeating this methodology and analyzing for trends can be a means to monitor possible regional changes to frozen ground that could occur with a warming climate. The Precipitation-Runoff Modeling System watershed model constructed for the upper Crab Creek Basin in the Columbia Plateau and the Reynolds Creek basin on the eastern side of the Snake River Plain simulated recharge and frozen ground for several future climate scenarios. Frozen ground was simulated with the Continuous Frozen Ground Index, which is influenced by air temperature and snow cover.
Model simulation results showed a decreased occurrence of frozen ground that coincided with increased temperatures in the future climate scenarios. Snow cover decreased in the future climate scenarios coincident with the temperature increases. Although annual precipitation was greater in future climate scenarios, thereby increasing the amount of water available for recharge over current (baseline) simulations, actual evapotranspiration also increased and reduced the amount of water available for recharge relative to baseline simulations. The upper Crab Creek model showed no significant trend in the rates of recharge in future scenarios; in these scenarios, annual precipitation is greater than the baseline averages, offsetting the effects of greater evapotranspiration. In the Reynolds Creek Basin simulations, precipitation was held constant in future scenarios, and recharge was reduced by 1.0 percent for simulations representing average conditions in 2040 and by 4.3 percent for simulations representing average conditions in 2080. The results of the future scenarios for the Reynolds Creek Basin focused on the spatial components of selected hydrologic variables for this 92-square-mile mountainous basin with 3,600 feet of relief. Simulation results from the watershed model using the Continuous Frozen Ground Index provided a relative measure of change in frozen ground, but could not identify the within-soil processes that allow available water to recharge aquifers or reject it. The model provided a means to estimate what might occur in the future under prescribed climate scenarios, but more detailed energy-balance models of frozen-ground hydrology are needed to accurately simulate recharge under seasonally frozen ground and provide a better understanding of how changes in climate may alter infiltration.
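A Continuous Frozen Ground Index of the kind used above can be sketched as a degree-day-style running index damped by snow cover. This is a generic formulation with hypothetical parameter values, not the exact index the study's watershed model implements.

```python
import math

def cfgi_series(air_temp_c, snow_depth_cm, decay=0.97, k_snow=0.08):
    """Continuous Frozen Ground Index sketch: each day the index decays,
    then freezing days (-T > 0) add to it and thaw days subtract, with the
    contribution damped exponentially by snow depth (thermal insulation)."""
    cfgi, out = 0.0, []
    for t, d in zip(air_temp_c, snow_depth_cm):
        cfgi = decay * cfgi - t * math.exp(-0.4 * k_snow * d)
        cfgi = max(cfgi, 0.0)          # index is non-negative
        out.append(cfgi)
    return out

# A 10-day cold snap over bare ground builds the index much faster than the
# identical cold snap under 30 cm of insulating snow.
bare = cfgi_series([-10.0] * 10, [0.0] * 10)
snowy = cfgi_series([-10.0] * 10, [30.0] * 10)
```

The snow-damping term captures the abstract's point that snow cover can keep soil thawed during freezing weather, which is why the index depends on both air temperature and snow cover.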
Evaluation of motion artifact metrics for coronary CT angiography.
Ma, Hongfeng; Gros, Eric; Szabo, Aniko; Baginski, Scott G; Laste, Zachary R; Kulkarni, Naveen M; Okerlund, Darin; Schmidt, Taly G
2018-02-01
This study quantified the performance of coronary artery motion artifact metrics relative to human observer ratings. Motion artifact metrics have been used as part of motion correction and best-phase selection algorithms for Coronary Computed Tomography Angiography (CCTA). However, the lack of ground truth makes it difficult to validate how well the metrics quantify the level of motion artifact. This study investigated five motion artifact metrics, including two novel metrics, using a dynamic phantom, clinical CCTA images, and an observer study that provided ground-truth motion artifact scores from a series of pairwise comparisons. Five motion artifact metrics were calculated for the coronary artery regions on both phantom and clinical CCTA images: positivity, entropy, normalized circularity, Fold Overlap Ratio (FOR), and Low-Intensity Region Score (LIRS). CT images were acquired of a dynamic cardiac phantom that simulated cardiac motion and contained six iodine-filled vessels of varying diameter and with regions of soft plaque and calcifications. Scans were repeated with different gantry start angles. Images were reconstructed at five phases of the motion cycle. Clinical images were acquired from 14 CCTA exams with patient heart rates ranging from 52 to 82 bpm. The vessel and shading artifacts were manually segmented by three readers and combined to create ground-truth artifact regions. Motion artifact levels were also assessed by readers using a pairwise comparison method to establish a ground-truth reader score. The Kendall's Tau coefficients were calculated to evaluate the statistical agreement in ranking between the motion artifacts metrics and reader scores. Linear regression between the reader scores and the metrics was also performed. 
On phantom images, the Kendall's Tau coefficients of the five motion artifact metrics were 0.50 (normalized circularity), 0.35 (entropy), 0.82 (positivity), 0.77 (FOR), and 0.77 (LIRS), where higher Kendall's Tau signifies higher agreement. The FOR, LIRS, and transformed positivity (the fourth root of the positivity) were further evaluated in the study of clinical images. The Kendall's Tau coefficients of the selected metrics were 0.59 (FOR), 0.53 (LIRS), and 0.21 (transformed positivity). In the study of clinical data, a Motion Artifact Score, defined as the product of the FOR and LIRS metrics, further improved agreement with reader scores, with a Kendall's Tau coefficient of 0.65. The metrics of FOR, LIRS, and the product of the two metrics provided the highest agreement in motion artifact ranking when compared to the readers, and the highest linear correlation to the reader scores. The validated motion artifact metrics may be useful for developing and evaluating methods to reduce motion in Coronary Computed Tomography Angiography (CCTA) images. © 2017 American Association of Physicists in Medicine.
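The agreement statistic used throughout this study can be computed directly. The sketch below implements the basic tau-a variant (a study with tied reader scores would likely use tau-b); the metric values and reader scores are invented for illustration.

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall's rank correlation (tau-a): concordant minus discordant pairs,
    normalized by the total number of pairs. Assumes no ties."""
    n = len(a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical per-image metric values vs. reader motion-artifact scores
metric = [0.91, 0.42, 0.77, 0.15, 0.60]
reader = [5, 2, 4, 1, 3]
tau = kendall_tau(metric, reader)   # rankings agree perfectly here
```

A tau of 1.0 means the metric ranks every pair of images the same way the readers do; the study's reported values (e.g. 0.65 for the FOR x LIRS product) indicate strong but imperfect rank agreement.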
NASA Astrophysics Data System (ADS)
Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart
2015-02-01
This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92) to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
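The volumetric-overlap accuracy figures quoted above can be computed from binary masks. The sketch assumes "percent volumetric overlap of the ground truth" means the intersection volume as a fraction of the ground-truth volume; the mask geometry is synthetic.

```python
import numpy as np

def volumetric_overlap_pct(auto_mask, gt_mask):
    """Percent of the ground-truth volume covered by the auto-segmentation."""
    auto = np.asarray(auto_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    return 100.0 * np.logical_and(auto, gt).sum() / gt.sum()

# Synthetic ground-truth GTV: a 10x10x10 voxel cube (1000 voxels)
gt = np.zeros((20, 20, 20), dtype=bool)
gt[5:15, 5:15, 5:15] = True

# Auto-segmentation that misses one 1x10x10 slab of the cube
auto = np.zeros_like(gt)
auto[6:15, 5:15, 5:15] = True

overlap = volumetric_overlap_pct(auto, gt)
```

Missing a 100-voxel slab of a 1000-voxel target gives 90% overlap, comparable in scale to the per-patient GTV accuracies (81.51% to 97.27%) the study reports.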
Reference-free ground truth metric for metal artifact evaluation in CT images.
Kratz, Bärbel; Ens, Svitlana; Müller, Jan; Buzug, Thorsten M
2011-07-01
In computed tomography (CT), metal objects in the region of interest introduce data inconsistencies during acquisition. Reconstructing these data results in an image with star-shaped artifacts induced by the metal inconsistencies. To enhance image quality, the influence of the metal objects can be reduced by different metal artifact reduction (MAR) strategies. For an adequate evaluation of new MAR approaches, a ground truth reference data set is needed. In technical evaluations, where phantoms can be measured with and without metal inserts, ground truth data can easily be obtained by a second reference acquisition. Obviously, this is not possible for clinical data. Here, an alternative evaluation method is presented that does not require an additionally acquired reference data set. The proposed metric is based on an inherent ground truth for comparing metal artifacts as well as MAR methods, where no reference information in the form of a second acquisition is needed. The method is based on the forward projection of a reconstructed image, which is compared to the actually measured projection data. The new evaluation technique is performed on phantom and clinical CT data with and without MAR. The metric results are then compared with methods using a reference data set as well as with an expert-based classification. It is shown that the new approach is an adequate quantification technique for artifact strength in reconstructed metal or MAR CT images. The presented method works solely on the original projection data itself, which yields some advantages compared to distance measures in the image domain using two data sets. Besides this, no parameters have to be chosen manually. The new metric is a useful evaluation alternative when no reference data are available.
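The reference-free idea above, forward-project the reconstruction and compare with what was actually measured, can be shown with a toy two-view parallel-beam model. A real implementation forward-projects over all acquisition angles with a proper projector; here axis sums stand in for the 0° and 90° views.

```python
import numpy as np

def projection_consistency_error(recon, measured_rows, measured_cols):
    """Reference-free consistency metric sketch: forward-project the
    reconstruction (axis sums as two toy parallel-beam views) and return the
    RMSE against the actually measured projections."""
    p_rows = recon.sum(axis=1)   # 0-degree view
    p_cols = recon.sum(axis=0)   # 90-degree view
    err = np.concatenate([p_rows - measured_rows, p_cols - measured_cols])
    return float(np.sqrt(np.mean(err ** 2)))

# Synthetic phantom and its "measured" projections
phantom = np.zeros((32, 32))
phantom[10:20, 12:22] = 1.0
rows, cols = phantom.sum(axis=1), phantom.sum(axis=0)

# A reconstruction with a horizontal streak artifact is inconsistent with the data
artifact = phantom.copy()
artifact[15, :] += 0.3

clean_err = projection_consistency_error(phantom, rows, cols)
bad_err = projection_consistency_error(artifact, rows, cols)
```

A perfect reconstruction reproduces the measured projections exactly (zero error), while artifact content raises the residual, which is why no second reference acquisition is needed.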
Ground truth seismic events and location capability at Degelen mountain, Kazakhstan
NASA Astrophysics Data System (ADS)
Trabant, Chad; Thurber, Clifford; Leith, William
2002-07-01
We utilized nuclear explosions from the Degelen Mountain sub-region of the Semipalatinsk Test Site (STS), Kazakhstan, to assess seismic location capability directly. Excellent ground truth information for these events was either known or was estimated from maps of the Degelen Mountain adit complex. Origin times were refined for events for which absolute origin time information was unknown using catalog arrival times, our ground truth location estimates, and a time baseline provided by fixing known origin times during a joint hypocenter determination (JHD). Precise arrival time picks were determined using a waveform cross-correlation process applied to the available digital data. These data were used in a JHD analysis. We found that very accurate locations were possible when high-precision, waveform cross-correlation arrival times were combined with JHD. Relocation with our full digital data set resulted in a mean mislocation of 2 km and a mean 95% confidence ellipse (CE) area of 6.6 km² (90% CE: 5.1 km²); however, only 5 of the 18 computed error ellipses actually covered the associated ground truth location estimate. To test a more realistic nuclear test monitoring scenario, we applied our JHD analysis to a set of seven events (one fixed) using data only from seismic stations within 40° epicentral distance. Relocation with these data resulted in a mean mislocation of 7.4 km, with four of the 95% error ellipses covering less than 570 km² (90% CE: 438 km²), and the other two covering 1730 and 8869 km² (90% CE: 1331 and 6822 km²). Location uncertainties calculated using JHD often underestimated the true error, but a circular region with a radius equal to the mislocation covered less than 1000 km² for all events having more than three observations.
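The mislocation figures above are great-circle distances between estimated and ground-truth epicenters, which can be computed with the haversine formula. The coordinates below are hypothetical Degelen-area values chosen only to illustrate the calculation.

```python
import math

def mislocation_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle (haversine) distance between an estimated epicenter and
    the ground-truth location, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# Hypothetical example: JHD estimate vs. ground-truth adit location,
# both in the general Degelen Mountain area
d = mislocation_km(49.78, 77.98, 49.80, 78.00)
```

Averaging such distances over all relocated events gives the mean mislocation statistic (2 km for the full data set, 7.4 km for the restricted-station scenario) reported above.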
Consensus-Based Sorting of Neuronal Spike Waveforms.
Fournier, Julien; Mueller, Christian M; Shein-Idelson, Mark; Hemberger, Mike; Laurent, Gilles
2016-01-01
Optimizing spike-sorting algorithms is difficult because sorted clusters can rarely be checked against independently obtained "ground truth" data. In most spike-sorting algorithms in use today, the optimality of a clustering solution is assessed relative to some assumption on the distribution of the spike shapes associated with a particular single unit (e.g., Gaussianity) and by visual inspection of the clustering solution followed by manual validation. When the spatiotemporal waveforms of spikes from different cells overlap, the decision as to whether two spikes should be assigned to the same source can be quite subjective, if it is not based on reliable quantitative measures. We propose a new approach, whereby spike clusters are identified from the most consensual partition across an ensemble of clustering solutions. Using the variability of the clustering solutions across successive iterations of the same clustering algorithm (template matching based on K-means clusters), we estimate the probability of spikes being clustered together and identify groups of spikes that are not statistically distinguishable from one another. Thus, we identify spikes that are most likely to be clustered together and therefore correspond to consistent spike clusters. This method has the potential advantage that it does not rely on any model of the spike shapes. It also provides estimates of the proportion of misclassified spikes for each of the identified clusters. We tested our algorithm on several datasets for which there exists a ground truth (simultaneous intracellular data), and show that it performs close to the optimum reached by a support vector machine trained on the ground truth. We also show that the estimated rate of misclassification matches the proportion of misclassified spikes measured from the ground truth data.
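The consensus idea above, estimate how often pairs of spikes co-cluster across repeated runs of the same algorithm, can be sketched with a co-association matrix. The paper's method uses template matching based on K-means; this simplified version uses plain k-means on synthetic 2-D features (e.g. PCA scores) to show the co-clustering probability estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans_labels(X, k, rng, iters=20):
    """Plain k-means with random initial centers; returns a label per spike."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def coassociation(X, k, runs, rng):
    """Fraction of runs in which each pair of spikes lands in the same cluster."""
    n = len(X)
    C = np.zeros((n, n))
    for _ in range(runs):
        lab = kmeans_labels(X, k, rng)
        C += (lab[:, None] == lab[None, :])
    return C / runs

# Two well-separated synthetic "units" of 30 spikes each in feature space
unit_a = rng.normal([0.0, 0.0], 0.1, (30, 2))
unit_b = rng.normal([5.0, 5.0], 0.1, (30, 2))
X = np.vstack([unit_a, unit_b])
C = coassociation(X, k=2, runs=20, rng=rng)
```

Pairs within a unit should co-cluster in nearly every run (entries near 1), while cross-unit pairs should almost never co-cluster (entries near 0); thresholding this matrix yields the consensual partition, and intermediate values flag spikes whose assignment is statistically ambiguous.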
NASA Astrophysics Data System (ADS)
Lorsakul, Auranuch; Andersson, Emilia; Vega Harring, Suzana; Sade, Hadassah; Grimm, Oliver; Bredno, Joerg
2017-03-01
Multiplex-brightfield immunohistochemistry (IHC) staining and quantitative measurement of multiple biomarkers can support therapeutic targeting of carcinoma-associated fibroblasts (CAF). This paper presents an automated digital-pathology solution to simultaneously analyze multiple biomarker expressions within a single tissue section stained with an IHC duplex assay. Our method was verified against ground truth provided by expert pathologists. In the first stage, the automated method quantified epithelial-carcinoma cells expressing cytokeratin (CK) using robust nucleus detection and supervised cell-by-cell classification algorithms with a combination of nucleus and contextual features. Using fibroblast activation protein (FAP) as a biomarker for CAFs, the algorithm was trained, based on ground truth obtained from pathologists, to automatically identify tumor-associated stroma using a supervised rule-generation approach. The algorithm reported distance to nearest neighbor in the populations of tumor cells and activated stromal fibroblasts as a whole-slide measure of spatial relationships. A total of 45 slides from six indications (breast, pancreatic, colorectal, lung, ovarian, and head-and-neck cancers) were included for training and verification. CK-positive cells detected by the algorithm were verified by a pathologist with good agreement (R² = 0.98) with the ground-truth count. For the area occupied by FAP-positive cells, the inter-observer agreement between two sets of ground-truth measurements was R² = 0.93, whereas the algorithm reproduced the pathologists' areas with R² = 0.96. The proposed methodology enables automated image analysis to measure spatial relationships of cells stained in an IHC-multiplex assay. Our proof-of-concept results show an automated algorithm can be trained to reproduce the expert assessment and provide quantitative readouts that potentially support a cutoff determination in hypothesis testing related to CAF-targeting-therapy decisions.
Measuring Soil Moisture in Skeletal Soils Using a COSMOS Rover
NASA Astrophysics Data System (ADS)
Medina, C.; Neely, H.; Desilets, D.; Mohanty, B.; Moore, G. W.
2017-12-01
The presence of coarse fragments directly influences the volumetric water content of the soil. Current surface soil moisture sensors often do not account for the presence of coarse fragments, and little research has been done to calibrate these sensors under such conditions. The cosmic-ray soil moisture observation system (COSMOS) rover is a passive, non-invasive surface soil moisture sensor with a footprint greater than 100 m. Despite its potential, the COSMOS rover has yet to be validated in skeletal soils. The goal of this study was to validate measurements of surface soil moisture as taken by a COSMOS rover on a Texas skeletal soil. Data were collected for two soils, a Marfla clay loam and a Chinati-Boracho-Berrend association, in West Texas. Three levels of data were collected: (1) COSMOS surveys at three different soil moistures, (2) electrical conductivity surveys within those COSMOS surveys, and (3) ground-truth measurements. Surveys with the COSMOS rover covered an 8000-ha area and were taken both after large rain events (>2 in.) and after a long dry period. Within the COSMOS surveys, the EM38-MK2 was used to estimate the spatial distribution of coarse fragments in the soil around two COSMOS points. Ground-truth measurements included coarse fragment mass and volume, bulk density, and water content at three locations within each EM38 survey. Ground-truth measurements were weighted using EM38 data, and COSMOS measurements were validated by their distance from the samples. There was a decrease in water content as the percent volume of coarse fragments increased. COSMOS estimations responded to both changes in coarse fragment percent volume and the ground-truth volumetric water content. Further research will focus on creating digital soil maps using landform data and water content estimations from the COSMOS rover.
Integrated Modeling, Mapping, and Simulation (IMMS) Framework for Exercise and Response Planning
NASA Technical Reports Server (NTRS)
Mapar, Jalal; Hoette, Trisha; Mahrous, Karim; Pancerella, Carmen M.; Plantenga, Todd; Yang, Christine; Yang, Lynn; Hopmeier, Michael
2011-01-01
Emergency management personnel at federal, state, and local levels can benefit from the increased situational awareness and operational efficiency afforded by simulation and modeling for emergency preparedness, including planning, training, and exercises. To support this goal, the Department of Homeland Security's Science & Technology Directorate is funding the Integrated Modeling, Mapping, and Simulation (IMMS) program to create an integrating framework that brings together diverse models for use by the emergency response community. SUMMIT, one piece of the IMMS program, is the initial software framework that connects users such as emergency planners and exercise developers with modeling resources, bridging the gap in expertise and technical skills between these two communities. SUMMIT was recently deployed to support exercise planning for National Level Exercise 2010. Threat, casualty, infrastructure, and medical surge models were combined within SUMMIT to estimate health care resource requirements for the exercise ground truth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neylon, J., E-mail: jneylon@mednet.ucla.edu; Qi, X.; Sheng, K.
Purpose: Validating the usage of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. Methods: The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as a rigid body, the muscle and soft tissue structures were modeled as mass-spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft tissue deformation for a given skeletal actuation was performed using an implicit Euler integration with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly.
Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may be caused by the biomechanical deformation process. Accuracy and stability of the model response were validated using ground-truth simulations representing soft tissue behavior under local and global deformations. Numerical accuracy of the HN deformations was analyzed by applying nonrigid skeletal transformations acquired from interfraction kVCT images to the model's skeletal structures and comparing the subsequent soft tissue deformations of the model with the clinical anatomy. Results: The GPU based framework enabled the model deformation to be performed at 60 frames/s, facilitating simulations of posture changes and physiological regressions at interactive speeds. The soft tissue response was accurate with an R^2 value of >0.98 when compared to ground-truth global and local force deformation analysis. The deformation of the HN anatomy by the model agreed with the clinically observed deformations with an average correlation coefficient of 0.956. For a clinically relevant range of posture and physiological changes, the model deformations stabilized with an uncertainty of less than 0.01 mm. Conclusions: Documenting dose delivery for HN radiotherapy requires accounting for posture and physiological changes. The biomechanical model discussed in this paper was able to deform in real time, allowing interactive simulations and visualization of such changes. The model would allow patient-specific validations of the DIR method and has the potential to be a significant aid in adaptive radiotherapy techniques.
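A single voxel's dynamics in a mass-spring-damper soft-tissue model can be sketched as below. For simplicity this uses a semi-implicit (symplectic) Euler substep rather than the fully implicit integration the authors describe, and all parameter values are illustrative:

```python
import numpy as np

def spring_damper_step(x, v, x_rest, k, c, m, dt):
    """One semi-implicit Euler substep for a 1D mass-spring-damper
    element (a simplified stand-in for the paper's implicit solver).
    k: stiffness, c: damping, m: voxel mass, dt: timestep."""
    f = -k * (x - x_rest) - c * v   # elastic restoring force + damping
    v = v + dt * f / m              # update velocity first
    x = x + dt * v                  # then position (semi-implicit order)
    return x, v
```

Iterating this step relaxes a displaced voxel back toward its rest position, which is the basic behavior the posture-change and regression simulations rely on.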
NASA Astrophysics Data System (ADS)
Robinson, D. Q.
2001-05-01
Hampton University, a historically black university, is leading the Education and Public Outreach (EPO) portion of the PICASSO-CENA satellite-based research mission. Currently scheduled for launch in 2004, PICASSO-CENA will use LIDAR (LIght Detection and Ranging) to study earth's atmosphere. The PICASSO-CENA Outreach program works with scientists, teachers, and students to better understand the effects of clouds and aerosols on earth's atmosphere. This program actively involves students nationwide in NASA research by having them obtain sun photometer measurements from their schools and homes for comparison with data collected by the PICASSO-CENA mission. Students collect data from their classroom ground observations and report the data via the Internet. Scientists will use the data from the PICASSO-CENA research and the student ground-truthing observations to improve predictions about climatic change. The two-band passive remote sensing sun photometer is designed for student use as a stand-alone instrument to study atmospheric turbidity or in conjunction with satellite data to provide ground-truthing. The instrument will collect measurements of column optical depth from the ground level. These measurements will not only give the students an appreciation for atmospheric turbidity, but will also provide quantitative correlative information to the PICASSO-CENA mission on ground-level optical depth. Student data obtained in this manner will be sufficiently accurate for scientists to use as ground truthing. Thus, students will have the opportunity to be involved with a NASA satellite-based research mission.
Causal discovery in the geosciences-Using synthetic data to learn how to interpret results
NASA Astrophysics Data System (ADS)
Ebert-Uphoff, Imme; Deng, Yi
2017-02-01
Causal discovery algorithms based on probabilistic graphical models have recently emerged in geoscience applications for the identification and visualization of dynamical processes. The key idea is to learn the structure of a graphical model from observed spatio-temporal data, thus finding pathways of interactions in the observed physical system. Studying those pathways allows geoscientists to learn subtle details about the underlying dynamical mechanisms governing our planet. Initial studies using this approach on real-world atmospheric data have shown great potential for scientific discovery. However, in these initial studies no ground truth was available, so that the resulting graphs have been evaluated only by whether a domain expert thinks they seemed physically plausible. The lack of ground truth is a typical problem when using causal discovery in the geosciences. Furthermore, while most of the connections found by this method match domain knowledge, we encountered one type of connection for which no explanation was found. To address both of these issues we developed a simulation framework that generates synthetic data of typical atmospheric processes (advection and diffusion). Applying the causal discovery algorithm to the synthetic data allowed us (1) to develop a better understanding of how these physical processes appear in the resulting connectivity graphs, and thus how to better interpret such connectivity graphs when obtained from real-world data; (2) to solve the mystery of the previously unexplained connections.
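The kind of synthetic advection-diffusion data described above can be sketched with a simple 1D finite-difference generator (an illustrative stand-in for the authors' simulation framework; all parameter values are assumptions):

```python
import numpy as np

def advect_diffuse(field, u=1.0, kappa=0.1, dt=0.1, dx=1.0, steps=50):
    """Generate a synthetic 1D advection-diffusion time series on a
    periodic domain: upwind advection plus explicit central diffusion.
    Parameter values are illustrative, not the authors' settings."""
    out = [field.copy()]
    for _ in range(steps):
        adv = -u * (field - np.roll(field, 1)) / dx               # upwind advection
        dif = kappa * (np.roll(field, -1) - 2 * field + np.roll(field, 1)) / dx**2
        field = field + dt * (adv + dif)
        out.append(field.copy())
    return np.array(out)                                          # (steps+1, nx)
```

Feeding such spatio-temporal series, whose generating process is known exactly, to a causal discovery algorithm is what allows the resulting connectivity graphs to be checked against ground truth.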
Beyond illusion: Psychoanalysis and the question of religious truth.
Blass, Rachel B
2004-06-01
In this paper the author critically examines the nature of the positive, reconciliatory attitude towards religion that has become increasingly prevalent within psychoanalytic thinking and writing over the past 20 years. She shows how this positive attitude rests on a change in the nature of the prototype of religion and its reassignment to the realm of illusion, thus making irrelevant an issue most central both to psychoanalysis and to traditional Judeo-Christian belief--the passionate search for truth. The author demonstrates how the concern with truth, and specifically with the truth of religious claims, lies at the basis of the opposition between psychoanalysis and religion but, paradoxically, also provides the common ground for dialogue between the two. She argues that, as Freud developed his ideas regarding the origin of conviction in religious claims in his Moses and monotheism (1939), the nature of this common ground was expanded and the dialogue became potentially more meaningful. The author concludes that meaningful dialogue emerges through recognition of fundamental differences rather than through harmonisation within a realm of illusion. In this light, the present study may also be seen as an attempt to recognise fundamental differences that have been evolving within psychoanalysis itself.
Ground-Based Remote Sensing of Water-Stressed Crops: Thermal and Multispectral Imaging
USDA-ARS?s Scientific Manuscript database
Ground-based methods of remote sensing can be used as ground-truthing for satellite-based remote sensing, and in some cases may be a more affordable means of obtaining such data. Plant canopy temperature has been used to indicate and quantify plant water stress. A field research study was conducted ...
Ground-based thermal and multispectral imaging of limited irrigation crops
USDA-ARS?s Scientific Manuscript database
Ground-based methods of remote sensing can be used as ground-truth for satellite-based remote sensing, and in some cases may be a more affordable means of obtaining such data. Plant canopy temperature has been used to indicate and quantify plant water stress. A field research study was conducted in ...
NASA Astrophysics Data System (ADS)
Fonseca, Pablo; Mendoza, Julio; Wainer, Jacques; Ferrer, Jose; Pinto, Joseph; Guerrero, Jorge; Castaneda, Benjamin
2015-03-01
Breast parenchymal density is considered a strong indicator of breast cancer risk and therefore useful for preventive tasks. Measurement of breast density is often qualitative and requires the subjective judgment of radiologists. Here we explore an automatic breast composition classification workflow based on convolutional neural networks for feature extraction in combination with a support vector machines classifier. This is compared to the assessments of seven experienced radiologists. The experiments yielded an average kappa value of 0.58 when using the mode of the radiologists' classifications as ground truth. Individual radiologist performance against this ground truth yielded kappa values between 0.56 and 0.79.
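The evaluation described, chance-corrected agreement against the mode of the radiologists' labels, can be sketched in plain NumPy (illustrative helper names, not the study's code):

```python
import numpy as np
from collections import Counter

def mode_label(votes):
    """Consensus ground truth: the most frequent label among radiologists."""
    return Counter(votes).most_common(1)[0][0]

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    po = np.mean(a == b)                          # observed agreement
    pe = sum(np.mean(a == l) * np.mean(b == l)    # agreement expected by chance
             for l in labels)
    return (po - pe) / (1 - pe)
```

Each rater (or the automatic classifier) is scored with `cohens_kappa` against the per-case `mode_label` consensus, which is how the 0.56-0.79 range for individual radiologists would be produced.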
NASA Technical Reports Server (NTRS)
Dixon, C. M.
1981-01-01
Land cover information derived from LANDSAT is being utilized by the Piedmont Planning District Commission located in the State of Virginia. Progress to date is reported on a level one land cover classification map being produced with nine categories. The nine categories of classification are defined. The computer compatible tape selection is presented. Two unsupervised classifications were done, with 50 and 70 classes respectively. Twenty-eight spectral classes were developed using the supervised technique, employing actual ground truth training sites. The accuracy of the unsupervised classifications is estimated through comparison with local county statistics and with an actual pixel count of LANDSAT information compared to ground truth.
Sweet-spot training for early esophageal cancer detection
NASA Astrophysics Data System (ADS)
van der Sommen, Fons; Zinger, Svitlana; Schoon, Erik J.; de With, Peter H. N.
2016-03-01
Over the past decade, the imaging tools for endoscopists have improved drastically. This has enabled physicians to visually inspect the intestinal tissue for early signs of malignant lesions. Besides this, recent studies show the feasibility of supportive image analysis for endoscopists, but the analysis problem is typically approached as a segmentation task where binary ground truth is employed. In this study, we show that the detection of early cancerous tissue in the gastrointestinal tract cannot be approached as a binary segmentation problem and that it is crucial and clinically relevant to involve multiple experts for annotating early lesions. By employing the so-called sweet spot for training purposes as a metric, a much better detection performance can be achieved. Furthermore, a multi-expert-based ground truth, i.e. a golden standard, enables an improved validation of the resulting delineations. For this purpose, besides the sweet spot we also propose another novel metric, the Jaccard Golden Standard (JIGS), that can handle multiple ground-truth annotations. Our experiments involving these new metrics and based on the golden standard show that the performance of a detection algorithm for early neoplastic lesions in Barrett's esophagus can be increased significantly, demonstrating a 10 percentage-point increase in the resulting F1 detection score.
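A metric that handles multiple ground-truth annotations can be sketched as a Jaccard score against a multi-expert consensus mask (a hypothetical simplification for illustration; it is not the paper's exact JIGS definition):

```python
import numpy as np

def consensus_jaccard(prediction, annotations, quorum=0.5):
    """Jaccard index of a binary prediction against a consensus mask:
    pixels marked by at least `quorum` of the expert annotators.
    Illustrative stand-in for a multi-annotator overlap metric."""
    votes = np.mean(np.stack(annotations), axis=0)   # per-pixel vote fraction
    consensus = votes >= quorum
    inter = np.logical_and(prediction, consensus).sum()
    union = np.logical_or(prediction, consensus).sum()
    return inter / union if union else 1.0
```

Raising or lowering `quorum` trades off between strict unanimity and any-expert agreement, which is the kind of flexibility a multi-expert golden standard requires.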
Truth and opinion in climate change discourse: the Gore-Hansen disagreement.
Russill, Chris
2011-11-01
In this paper, I discuss the "inconvenient truth" strategy of Al Gore. I argue that Gore's notion of truth upholds a conception of science and policy that narrows our understanding of climate change discourse. In one notable exchange, Gore and NASA scientist, James Hansen, disagreed about whether scientific statements based on Hansen's computer simulations were truth or opinion. This exchange is featured in An Inconvenient Truth, yet the disagreement is edited from the film and presented simply as an instance of Hansen speaking "inconvenient truth". In this article, I compare the filmic representation of Hansen's testimony with the congressional record. I place their exchange in a broader historical perspective on climate change disputation in order to discuss the implications of Gore's perspective on truth.
An Example-Based Brain MRI Simulation Framework.
He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L
2015-02-21
The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of the MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on the statistical model of training data. Results show that the example based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.
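The example-based idea, learning intensities from an atlas and replaying them on a new anatomy, can be reduced to a toy sketch: single-voxel "patches" and a label-mean lookup instead of the paper's patch-based regression. All names are illustrative:

```python
import numpy as np

def simulate_from_atlas(atlas_labels, atlas_intensities, new_labels):
    """Toy example-based simulation: each voxel of the new anatomy
    receives the mean atlas intensity of its tissue label. The paper
    uses multi-voxel patches and a learned regression; this is only
    the degenerate one-voxel case for illustration."""
    out = np.zeros_like(new_labels, dtype=float)
    for lab in np.unique(atlas_labels):
        out[new_labels == lab] = atlas_intensities[atlas_labels == lab].mean()
    return out
```

Because the intensity mapping is learned from a real atlas image rather than modeled from acquisition physics, the simulated contrast inherits the look of the training data, which is the core of the example-based approach.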
Collaboration and Conflict Resolution in Education.
ERIC Educational Resources Information Center
Melamed, James C.; Reiman, John W.
2000-01-01
Presents guidelines for resolving conflicts between educators and parents. Participants should seek different perspectives, not "truths," consider the common ground, define an effective problem-solving procedure, adopt ground rules for discussion, address issues, identify interests and positive intentions, develop options, select arrangements, and…
Assessment of COTS IR image simulation tools for ATR development
NASA Astrophysics Data System (ADS)
Seidel, Heiko; Stahl, Christoph; Bjerkeli, Frode; Skaaren-Fystro, Paal
2005-05-01
Following the tendency of increased use of imaging sensors in military aircraft, future fighter pilots will need onboard artificial intelligence e.g. ATR for aiding them in image interpretation and target designation. The European Aeronautic Defence and Space Company (EADS) in Germany has developed an advanced method for automatic target recognition (ATR) which is based on adaptive neural networks. This ATR method can assist the crew of military aircraft like the Eurofighter in sensor image monitoring and thereby reduce the workload in the cockpit and increase the mission efficiency. The EADS ATR approach can be adapted for imagery of visual, infrared and SAR sensors because of the training-based classifiers of the ATR method. For the optimal adaptation of these classifiers they have to be trained with appropriate and sufficient image data. The training images must show the target objects from different aspect angles, ranges, environmental conditions, etc. Incomplete training sets lead to a degradation of classifier performance. Additionally, ground truth information i.e. scenario conditions like class type and position of targets is necessary for the optimal adaptation of the ATR method. In Summer 2003, EADS started a cooperation with Kongsberg Defence & Aerospace (KDA) from Norway. The EADS/KDA approach is to provide additional image data sets for training-based ATR through IR image simulation. The joint study aims to investigate the benefits of enhancing incomplete training sets for classifier adaptation by simulated synthetic imagery. EADS/KDA identified the requirements of a commercial-off-the-shelf IR simulation tool capable of delivering appropriate synthetic imagery for ATR development. A market study of available IR simulation tools and suppliers was performed. After that the most promising tool was benchmarked according to several criteria e.g. 
thermal emission model, sensor model, targets model, non-radiometric image features etc., resulting in a recommendation. The synthetic image data that are used for the investigation are generated using the recommended tool. Within the scope of this study, ATR performance on IR imagery using classifiers trained on real, synthetic and mixed image sets was evaluated. The performance of the adapted classifiers is assessed using recorded IR imagery with known ground-truth and recommendations are given for the use of COTS IR image simulation tools for ATR development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y; Sharp, G; Winey, B
Purpose: An unpredictable movement of a patient can occur during SBRT even when immobilization devices are applied. In SBRT treatments using a conventional linear accelerator, detection of such movements relies heavily on human interaction and monitoring. This study aims to detect such positional abnormalities in real time by assessing intra-fractional gantry-mounted kV projection images of a patient's spine. Methods: We propose a self-CBCT image based spine tracking method consisting of the following steps: (1) Acquire a pre-treatment CBCT image; (2) Transform the CBCT volume according to the couch correction; (3) Acquire kV projections during treatment beam delivery; (4) Simultaneously with each acquisition, generate a DRR from the CBCT volume based on the current projection geometry; (5) Perform an intensity gradient-based 2D registration between spine ROI images of the projection and the DRR images; (6) Report an alarm if the detected 2D displacement is beyond a threshold value. To demonstrate the feasibility, retrospective simulations were performed on 1,896 projections from nine CBCT sessions of three patients who received lung SBRT. The unpredictable movements were simulated by applying random rotations and translations to the reference CBCT prior to each DRR generation. As the ground truth, the 3D translations and/or rotations causing >3 mm displacement of the midpoint of the thoracic spine were regarded as abnormal. In the measurements, different threshold values of 2D displacement were tested to investigate sensitivity and specificity of the proposed method. Results: A linear relationship between the ground truth 3D displacement and the detected 2D displacement was observed (R^2 = 0.44). When the 2D displacement threshold was set to 3.6 mm, the overall sensitivity and specificity were 77.7±5.7% and 77.9±3.5%, respectively.
Conclusion: In this simulation study, it was demonstrated that intrafractional kV projections from an on-board CBCT system have the potential to detect unpredictable patient movement during SBRT. This research is funded by an Interfractional Imaging Research Grant from Elekta.
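The threshold-based evaluation reported above can be sketched as follows (illustrative values, not the study's data):

```python
import numpy as np

def sens_spec(true_abnormal, disp_2d, threshold=3.6):
    """Sensitivity/specificity of flagging projections whose detected
    2D spine displacement exceeds `threshold` (mm). The 3.6 mm default
    mirrors the abstract; the data passed in are illustrative."""
    flagged = np.asarray(disp_2d) > threshold
    truth = np.asarray(true_abnormal, dtype=bool)
    sens = np.mean(flagged[truth]) if truth.any() else float('nan')
    spec = np.mean(~flagged[~truth]) if (~truth).any() else float('nan')
    return sens, spec
```

Sweeping `threshold` over a range of values traces out the sensitivity/specificity trade-off that the study used to select its operating point.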
NASA Astrophysics Data System (ADS)
Park, Junghyun; Hayward, Chris; Stump, Brian W.
2018-06-01
Ground truth sources in Utah during 2003-2013 are used to assess the contribution of temporal atmospheric conditions to infrasound detection and the predictive capabilities of atmospheric models. Ground truth sources consist of 28 long duration static rocket motor burn tests and 28 impulsive rocket body demolitions. Automated infrasound detections from a hybrid of regional seismometers and infrasound arrays use a combination of short-term time average/long-term time average ratios and spectral analyses. These detections are grouped into station triads using a Delaunay triangulation network and then associated to estimate phase velocity and azimuth to filter signals associated with a particular source location. The resulting range and azimuth distribution from sources to detecting stations varies seasonally and is consistent with predictions based on seasonal atmospheric models. Impulsive signals from rocket body detonations are observed at greater distances (>700 km) than the extended duration signals generated by the rocket burn test (up to 600 km). Infrasound energy attenuation associated with the two source types is quantified as a function of range and azimuth from infrasound amplitude measurements. Ray-tracing results using Ground-to-Space atmospheric specifications are compared to these observations and illustrate the degree to which the time variations in characteristics of the observations can be predicted over a multiple year time period.
Field Ground Truthing Data Collector - a Mobile Toolkit for Image Analysis and Processing
NASA Astrophysics Data System (ADS)
Meng, X.
2012-07-01
Field Ground Truthing Data Collector is one of the four key components of the NASA-funded ICCaRS project, being developed in Southeast Michigan. The ICCaRS ground truthing toolkit provides comprehensive functions: 1) Field functions, including determining locations through GPS, gathering and geo-referencing visual data, laying out ground control points for AEROKAT flights, measuring the flight distance and height, and entering observations of land cover (and use) and health conditions of ecosystems and environments in the vicinity of the flight field; 2) Server synchronization functions, such as downloading study-area maps, aerial photos and satellite images, uploading and synchronizing field-collected data with the distributed databases, calling the geospatial web services on the server side to conduct spatial querying, image analysis and processing, and receiving the processed results in the field for near-real-time validation; and 3) Social network communication functions for direct technical assistance and pedagogical support, e.g., having video-conference calls in the field with the supporting educators, scientists, and technologists, participating in Webinars, or engaging in discussions with other e-learning portals. This customized software package is being built on Apple iPhone/iPad and Google Maps/Earth. The technical infrastructures, data models, coupling methods between distributed geospatial data processing and field data collector tools, remote communication interfaces, coding schema, and functional flow charts will be illustrated and explained in the presentation. A pilot case study will also be demonstrated.
Real-time upper-body human pose estimation from depth data using Kalman filter for simulator
NASA Astrophysics Data System (ADS)
Lee, D.; Chi, S.; Park, C.; Yoon, H.; Kim, J.; Park, C. H.
2014-08-01
Recently, many studies have shown that indoor horse riding exercise has a positive effect on promoting health and diet. However, if a rider has an incorrect posture, it can cause back pain. Despite this problem, there has been little research on analyzing a rider's posture. Therefore, the purpose of this study is to estimate a rider's pose from a depth image using the Asus Xtion sensor in real time. In the experiments, we show the performance of our pose estimation algorithm by comparing the results of our joint estimation algorithm with ground-truth data.
Phase unwrapping in three dimensions with application to InSAR time series.
Hooper, Andrew; Zebker, Howard A
2007-09-01
The problem of phase unwrapping in two dimensions has been studied extensively in the past two decades, but the three-dimensional (3D) problem has so far received relatively little attention. We develop here a theoretical framework for 3D phase unwrapping and also describe two algorithms for implementation, both of which can be applied to synthetic aperture radar interferometry (InSAR) time series. We test the algorithms on simulated data and find both give more accurate results than a two-dimensional algorithm. When applied to actual InSAR time series, we find good agreement both between the algorithms and with ground truth.
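The one-dimensional building block that 3D phase-unwrapping algorithms generalize can be sketched in a few lines (a standard textbook construction, not the authors' algorithm):

```python
import numpy as np

def unwrap_1d(phase):
    """Classic 1D phase unwrapping: detect 2*pi jumps between adjacent
    samples and subtract the accumulated cycle slips. Assumes the true
    phase changes by less than pi between samples."""
    phase = np.asarray(phase, dtype=float)
    d = np.diff(phase)
    jumps = np.round(d / (2 * np.pi))          # integer cycle slips per step
    return np.concatenate(([phase[0]],
                           phase[1:] - 2 * np.pi * np.cumsum(jumps)))
```

The difficulty the paper addresses is that in two and especially three dimensions the unwrapping path is no longer unique, so the per-sample correction must be made globally consistent across the whole InSAR time-series volume.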
Development of Autonomous Aerobraking (Phase 1)
NASA Technical Reports Server (NTRS)
Murri, Daniel G.; Powell, Richard W.; Prince, Jill L.
2012-01-01
The NASA Engineering and Safety Center received a request from Mr. Daniel Murri (NASA Technical Fellow for Flight Mechanics) to develop an autonomous aerobraking capability. An initial evaluation for all phases of this assessment was approved to proceed at the NESC Review Board meeting. The purpose of phase 1 of this study was to provide an assessment of the feasibility of autonomous aerobraking. During this phase, atmospheric, aerodynamic, and thermal models for a representative spacecraft were developed for both the onboard algorithm known as Autonomous Aerobraking Development Software, and a ground-based "truth" simulation developed for testing purposes. The results of the phase 1 assessment are included in this report.
Topological Distances Between Brain Networks
Lee, Hyekyoung; Solo, Victor; Davidson, Richard J.; Pollak, Seth D.
2018-01-01
Many existing brain network distances are based on matrix norms. The element-wise differences may fail to capture underlying topological differences. Further, matrix norms are sensitive to outliers. A few extreme edge weights may severely affect the distance. Thus it is necessary to develop network distances that recognize topology. In this paper, we introduce Gromov-Hausdorff (GH) and Kolmogorov-Smirnov (KS) distances. GH-distance is often used in persistent homology based brain network models. The superior performance of KS-distance is contrasted against matrix norms and GH-distance in random network simulations with the ground truths. The KS-distance is then applied in characterizing the multimodal MRI and DTI study of maltreated children.
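A KS-type distance between two weighted brain networks can be sketched as the maximum gap between the empirical CDFs of their edge weights (a simplification for illustration; the paper builds its KS-distance on topological features rather than raw weights):

```python
import numpy as np

def ks_network_distance(A, B):
    """Kolmogorov-Smirnov-style distance between two weighted networks:
    the largest vertical gap between the empirical CDFs of their
    upper-triangular edge weights. Illustrative simplification."""
    iu = np.triu_indices_from(A, k=1)           # off-diagonal edges only
    wa, wb = np.sort(A[iu]), np.sort(B[iu])
    grid = np.union1d(wa, wb)
    cdf = lambda w: np.searchsorted(w, grid, side='right') / len(w)
    return np.max(np.abs(cdf(wa) - cdf(wb)))
```

Unlike an element-wise matrix norm, a distribution-based distance of this kind is insensitive to a single extreme edge weight, which is the robustness property the paper emphasizes.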
Parameter Estimation for a Turbulent Buoyant Jet Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Christopher, Jason D.; Wimer, Nicholas T.; Hayden, Torrey R. S.; Lapointe, Caelan; Grooms, Ian; Rieker, Gregory B.; Hamlington, Peter E.
2016-11-01
Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other "truth" data to be used for the prediction of unknown model parameters in numerical simulations of real-world engineering systems. In this presentation, we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a simulation with known boundary conditions and problem parameters. Using spatially-sparse temperature statistics from the 2D buoyant jet truth simulation, we show that the ABC method provides accurate predictions of the true jet inflow temperature. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for engineering fluid dynamics research.
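Rejection ABC, the simplest variant of the approach described, can be sketched as follows (generic illustration; the study's simulator, summary statistics, and tolerance are problem-specific):

```python
import numpy as np

def abc_rejection(simulate, summary, truth_summary, prior_draws, eps):
    """Rejection ABC sketch: keep the prior parameter draws whose
    simulated summary statistics fall within `eps` of the summaries
    computed from the 'truth' data. All inputs are user-supplied
    assumptions for illustration."""
    accepted = [th for th in prior_draws
                if np.linalg.norm(summary(simulate(th)) - truth_summary) < eps]
    return np.array(accepted)
```

The accepted draws approximate the posterior over the unknown parameter, so sparse temperature statistics from the truth simulation are enough to concentrate the accepted set around the true inflow temperature.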
Bulk Growth of II-VI Crystals in the Microgravity Environment of USML-1
NASA Technical Reports Server (NTRS)
Gillies, Donald C.; Lehoczky, Sandor L.; Szofran, Frank R.; Larson, David J.; Su, Ching-Hua; Sha, Yi-Gao; Alexander, Helga A.
1993-01-01
The first United States Microgravity Laboratory Mission (USML-1) flew in June 1992 on the Space Shuttle Columbia. An important part of this SpaceLab mission was the debut of the Crystal Growth Furnace (CGF). Of the seven samples grown in the furnace, three were bulk-grown II-VI compounds: two of a cadmium zinc telluride alloy and one of a mercury zinc telluride alloy. Ground-based results are presented, together with the results of computer-simulated growths under these experimental conditions. Preliminary characterization results for the three USML-1 growth runs are also presented and the flight sample characteristics are compared to the equivalent ground truth samples. Of particular interest are the effect of the containment vessel on surface features, and especially on the nucleation, and the effect of the gravity vector on radial and axial compositional variations and stress and defect levels.
Exploitation of ERTS-1 imagery utilizing snow enhancement techniques
NASA Technical Reports Server (NTRS)
Wobber, F. J.; Martin, K. R.
1973-01-01
Photogeological analysis of ERTS-simulation and ERTS-1 imagery of snow-covered terrain within the ERAP Feather River site and within the New England (ERTS) test area provided new fracture detail which does not appear on available geological maps. Comparative analysis of snow-free ERTS-1 images has demonstrated that MSS Bands 5 and 7 supply the greatest amount of geological fracture detail. Interpretation of the first snow-covered ERTS-1 images in correlation with ground snow depth data indicates that a heavy blanket of snow (more than 9 inches) accentuates major structural features while a light "dusting" (less than 1 inch) accentuates more subtle topographic expressions. An effective mail-based method for acquiring timely ground-truth (snow depth) information was established and provides a ready correlation of fracture detail with snow depth so as to establish the working limits of the technique. The method is both efficient and inexpensive compared with the cost of similarly scaled direct field observations.
Measuring Small Debris - What You Can't See Can Hurt You
NASA Technical Reports Server (NTRS)
Matney, Mark
2016-01-01
While modeling gives us a tool to better understand the Earth orbit debris environment, it is measurements that give us "ground truth" about what is happening in space. Assets that can detect orbital debris remotely from the surface of the Earth, such as radars and telescopes, give us a statistical view of how debris are distributed in space, how they are being created, and how they are evolving over time. In addition, in situ detectors in space are giving us a better picture of how the small-particle environment is actually damaging spacecraft today. Also, simulation experiments on the ground help us to understand what we are seeing in orbit. This talk will summarize the history of space debris measurements, how it has changed our view of the Earth orbit environment, and how we are designing the experiments of tomorrow.
NASA Astrophysics Data System (ADS)
Köhler, P.; Huth, A.
2010-05-01
The canopy height of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or lidar. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established, this would offer the possibility of global monitoring of the above-ground carbon content on land. In the absence of adequate field data we use simulation results of a tropical rain forest growth model to propose what degree of information might be generated from canopy height and thus to enable ground-truthing of potential future satellite observations. We here analyse the correlation between canopy height in a tropical rain forest and other structural characteristics, such as above-ground biomass (AGB) (and thus carbon content of vegetation) and leaf area index (LAI). The process-based forest growth model FORMIND2.0 was applied to simulate (a) undisturbed forest growth and (b) a wide range of possible disturbance regimes typical of local tree-logging conditions for a tropical rain forest site on Borneo (Sabah, Malaysia) in South-East Asia. It is found that for undisturbed forest and a variety of disturbed forest situations AGB can be expressed as a power-law function of canopy height h (AGB = a·h^b) with an r^2 of ~60% for a spatial resolution of 20 m×20 m (0.04 ha, also called plot size). The regression becomes significantly better for the hectare-wide analysis of the disturbed forest sites (r^2 = 91%). There seems to be no functional dependency between LAI and canopy height, but there is also a linear correlation (r^2 ~ 60%) between AGB and the area fraction in which the canopy is highly disturbed. A reasonable agreement of our results with observations is obtained from a comparison of the simulations with permanent sampling plot data from the same region and with the large-scale forest inventory in Lambir.
We conclude that spaceborne remote sensing techniques have the potential to quantify the carbon contained in the vegetation, although, due to the heterogeneity of the forest landscape, this calculation contains structural uncertainties which restrict future applications to spatial averages of about one hectare in size. The uncertainties in AGB for a given canopy height are here 20-40% (95% confidence level), corresponding to a standard deviation of less than ±10%. This uncertainty on the 1 ha scale is much smaller than in the analysis of the 0.04 ha data. At this small scale (0.04 ha), AGB can only be calculated from canopy height with an uncertainty at least of the magnitude of the signal itself, due to the natural spatial heterogeneity of these forests.
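As a rough illustration of the allometric fit described above, the sketch below fits AGB = a·h^b by linear regression in log-log space. The data and the coefficients a_true and b_true are synthetic assumptions for demonstration, not values from the FORMIND2.0 simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic canopy heights [m] and AGB [t/ha] following an assumed power law
h = rng.uniform(5.0, 45.0, 500)
a_true, b_true = 0.8, 1.6
agb = a_true * h**b_true * rng.lognormal(0.0, 0.25, h.size)  # multiplicative scatter

# Fit log(AGB) = log(a) + b*log(h) with ordinary least squares
b_fit, log_a_fit = np.polyfit(np.log(h), np.log(agb), 1)
a_fit = np.exp(log_a_fit)

# Coefficient of determination in log space
resid = np.log(agb) - (log_a_fit + b_fit * np.log(h))
r2 = 1.0 - resid.var() / np.log(agb).var()
print(a_fit, b_fit, r2)
```

Fitting in log space keeps the relative (percent) errors homogeneous, which matches the paper's percentage-style uncertainty statements.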
Traverse velocity maps for human exploration
NASA Astrophysics Data System (ADS)
Heinicke, Christiane; Johnston, Carmel; Sefton-Nash, Elliot; Foing, Bernard
2017-04-01
It is often proposed that humans are more effective and efficient than rovers in conducting exploratory work during planetary missions. However, even humans are hindered by the restrictions of their suits and by necessary precautions to ensure the astronauts' safety. During the 12-month simulation at the Hawaii Space Exploration Analog and Simulation facility, several members of the six-person crew conducted a large number of exploratory expeditions under conditions similar to those of a Mars crew. Over the course of 145 extra-vehicular activities (EVAs), they traversed several thousand kilometers of various types of terrain. The actual walking speeds of the crew members have been correlated with different properties of the terrain as determined from field excursions and remote sensing. The resulting terrain and velocity maps can be used both for ground-truthing of satellite imagery and for planning potential EVAs on celestial bodies.
Pest measurement and management
USDA-ARS?s Scientific Manuscript database
Pest scouting, whether it is done only with ground scouting methods or using remote sensing with some ground-truthing, is an important tool to aid site-specific crop management. Different pests may be monitored at different times and using different methods. Remote sensing has the potential to provi...
Ground Truth Studies - A hands-on environmental science program for students, grades K-12
NASA Technical Reports Server (NTRS)
Katzenberger, John; Chappell, Charles R.
1992-01-01
The paper discusses the background and the objectives of the Ground Truth Studies (GTSs), an activity-based teaching program which integrates local environmental studies with global change topics, utilizing remotely sensed earth imagery. Special attention is given to the five key concepts around which the GTS programs are organized, the pilot program, the initial pilot study evaluation, and the GTS Handbook. The GTS Handbook contains a primer on global change and remote sensing, aerial and satellite images, student activities, glossary, and an appendix of reference material. Also described is a K-12 teacher training model. International participation in the program is to be initiated during the 1992-1993 school year.
Land use and land cover mapping: City of Palm Bay, Florida
NASA Technical Reports Server (NTRS)
Barile, D. D.; Pierce, R.
1977-01-01
Two different computer systems were compared for use in making land use and land cover maps. The Honeywell 635 with the LANDSAT signature development program (LSDP) produced a map depicting general patterns, but themes were difficult to classify as specific land use, and urban areas were unclassified. The General Electric Image 100 produced a map depicting eight land cover categories, classifying 68 percent of the total area. Ground truth, LSDP, and Image 100 maps were all made to the same scale for comparison. LSDP agreed with the ground truth 60 percent and 64 percent within the two test areas compared, while Image 100 was in agreement 70 percent and 80 percent.
Erosional and depositional history of central Chryse Planitia
NASA Technical Reports Server (NTRS)
Crumpler, L. S.
1992-01-01
This map uses high resolution image data to assess the detailed depositional and erosional history of part of Chryse Planitia. This area is significant to the study of the global geology of Mars because it represents one of only two areas on the martian surface where planetary geologic mapping is assisted with 'ground truth.' In this case the ground truth was provided by Viking Lander 1. Additional questions addressed in this study are concerned with the following: the geologic context of the regional plains surface and the local surface of the Viking Lander 1 site; and the relative influence of volcanic, sedimentary, impact, aeolian, and tectonic processes at the regional and local scales.
NASA Astrophysics Data System (ADS)
Arai, Hiroyuki; Miyagawa, Isao; Koike, Hideki; Haseyama, Miki
We propose a novel technique for estimating the number of people in a video sequence; it has the advantages of being stable even in crowded situations and needing no ground-truth data. By quantitatively analyzing the geometrical relationships between image pixels and their intersection volumes in the real world, the method lets a foreground image directly indicate the number of people. Because foreground detection is possible even in crowded situations, the proposed method can be applied there. Moreover, it estimates the number of people in an a priori manner, so unlike existing feature-based estimation techniques it needs no ground-truth data. Experiments show the validity of the proposed method.
Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines
Tan, Yunhao; Hua, Jing; Qin, Hong
2009-01-01
In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can simultaneously and accurately represent the geometric, material, and other properties of the object. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior, because they unify the geometric and material properties in the simulation. The visualization can be computed directly from the object's geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation, without interpolation or resampling. We have applied the framework to biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636
The Kenya rangeland ecological monitoring unit
NASA Technical Reports Server (NTRS)
Stevens, W. E. (Principal Investigator)
1978-01-01
The author has identified the following significant results. Methodology for aerial surveys and ground truth studies was developed, tested, and revised several times to produce reasonably firm methods of procedure. Computer programs were adapted or developed to analyze, store, and recall data from the ground and air monitoring surveys.
NASA Astrophysics Data System (ADS)
Davies, Jaime S.; Howell, Kerry L.; Stewart, Heather A.; Guinan, Janine; Golding, Neil
2014-06-01
In 2007, the upper part of a submarine canyon system located in water depths between 138 and 1165 m in the South West (SW) Approaches (North East Atlantic Ocean) was surveyed over a 2 week period. High-resolution multibeam echosounder data covering 1106 km2, and 44 ground-truthing video and image transects were acquired to characterise the biological assemblages of the canyons. The SW Approaches is an area of complex terrain, and intensive ground-truthing revealed the canyons to be dominated by soft sediment assemblages. A combination of multivariate analysis of seabed photographs (184-1059 m) and visual assessment of video ground-truthing identified 12 megabenthic assemblages (biotopes) at an appropriate scale to act as mapping units. Of these biotopes, 5 adhered to current definitions of habitats of conservation concern, 4 of which were classed as Vulnerable Marine Ecosystems. Some of the biotopes correspond to descriptions of communities from other megahabitat features (for example the continental shelf and seamounts), although it appears that the canyons host modified versions, possibly due to the inferred high rates of sedimentation in the canyons. Other biotopes described appear to be unique to canyon features, particularly the sea pen biotope consisting of Kophobelemnon stelliferum and cerianthids.
Spatio-temporal evaluation of plant height in corn via unmanned aerial systems
NASA Astrophysics Data System (ADS)
Varela, Sebastian; Assefa, Yared; Vara Prasad, P. V.; Peralta, Nahuel R.; Griffin, Terry W.; Sharda, Ajay; Ferguson, Allison; Ciampitti, Ignacio A.
2017-07-01
Detailed spatial and temporal data on plant growth are critical to guide crop management. Conventional methods to determine field plant traits are intensive, time-consuming, expensive, and limited to small areas. The objective of this study was to examine the integration of data collected via unmanned aerial systems (UAS) at critical corn (Zea mays L.) developmental stages for plant height and its relation to plant biomass. The main steps followed in this research were (1) workflow development for an ultrahigh resolution crop surface model (CSM) with the goal of determining plant height (CSM-estimated plant height) using data gathered from the UAS missions; (2) validation of CSM-estimated plant height with ground-truth plant height (measured plant height); and (3) final estimation of plant biomass via integration of CSM-estimated plant height with ground-truth stem diameter data. Results indicated a correlation between CSM-estimated plant height and ground-truth plant height data at two weeks prior to flowering and at the flowering stage, with higher predictability at the later growth stage. Log-log analysis of the temporal data confirmed that these relationships are stable, presenting equal slopes for both crop stages evaluated. In conclusion, data collected from low altitude with a low-cost sensor could be useful in estimating plant height.
NASA Technical Reports Server (NTRS)
Wright, F. F. (Principal Investigator); Sharma, G. D.; Burns, J. J.
1973-01-01
The author has identified the following significant results. Even though nonsynchronous, the ERTS-1 imagery of November 4, 1972, showed a striking similarity to the ground truth data obtained in late August and September, 1972. The comparison of the images with ground truth data revealed that the general water circulation pattern in Lower Cook Inlet is consistent through the fall season and that ERTS-1 images in MSS bands 4 and 5 are capable of delineating water masses with a suspended load as low as 1 mg/liter. The ERTS-1 data and the ground truth data demonstrate clearly that the Coriolis effect dominates circulation in Lower Cook Inlet. The configuration of plumes in Nushagak and Kuskokwim bays further indicates the influence of the Coriolis effect on the movement of sea water at high latitudes. Comparison of MSS bands 4, 5, 6, and 7 suggests MSS-1 penetration of several meters into the water column. Sea ice analysis of available imagery was exceptionally rewarding. The imagery provided a rapid method to delineate and describe the ice types apparent in the photos. The ice types ranged from newly formed grease ice to heavy floes of disintegrating shore-fast ice. Sea ice maps showing the extent of different ice zones in the Chukchi Sea are being compiled.
Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research.
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif
2016-03-11
Document analysis tasks such as pattern recognition, word spotting, or segmentation require comprehensive databases for training and validation. Not only variations in writing style but also the list of words used is important when training samples should reflect the input of a specific area of application. However, generation of training samples is expensive in terms of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. Moreover, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods, each requiring particular ground truth or samples to enable optimal training and validation, which are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation-based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifiers that we proposed earlier improves the word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction.
Rice Crop Monitoring Using Microwave and Optical Remotely Sensed Image Data
NASA Astrophysics Data System (ADS)
Suga, Y.; Konishi, T.; Takeuchi, S.; Kitano, Y.; Ito, S.
Hiroshima Institute of Technology (HIT) operates the direct down-links of microwave and optical satellite data in Japan. This study focuses on the validation of rice crop monitoring using microwave and optical remotely sensed image data acquired by satellites, referring to ground truth data such as crop height, ratio of crop vegetation cover, and leaf area index in test sites in Japan. ENVISAT-1 ASAR has the capability to capture data regularly and to monitor the rice growing cycle with alternating cross-polarization mode images. However, ASAR data are influenced by several parameters, such as landcover structure and the direction and alignment of rice crop fields in the test sites. In this study, the validation was carried out by combining microwave and optical satellite image data with ground truth data on rice crop fields to investigate the above parameters. Multi-temporal, multi-direction (descending and ascending) and multi-angle ASAR alternating cross-polarization mode images were used to investigate the rice crop growing cycle. LANDSAT data were used to detect landcover structure and the direction and alignment of rice crop fields corresponding to the backscatter of ASAR. As a result of this study, it was indicated that rice crop growth can be precisely monitored using multiple remotely sensed data and ground truth data, considering spatial, spectral, temporal and radiometric resolutions.
Predicted seafloor facies of Central Santa Monica Bay, California
Dartnell, Peter; Gardner, James V.
2004-01-01
Summary -- Mapping surficial seafloor facies (sand, silt, muddy sand, rock, etc.) should be the first step in marine geological studies and is crucial when modeling sediment processes, pollution transport, deciphering tectonics, and defining benthic habitats. This report outlines an empirical technique that predicts the distribution of seafloor facies for a large area offshore Los Angeles, CA using high-resolution bathymetry and co-registered, calibrated backscatter from multibeam echosounders (MBES) correlated to ground-truth sediment samples. The technique uses a series of procedures that involve supervised classification and a hierarchical decision tree classification that are now available in advanced image-analysis software packages. Derivative variance images of both bathymetry and acoustic backscatter are calculated from the MBES data and then used in a hierarchical decision-tree framework to classify the MBES data into areas of rock, gravelly muddy sand, muddy sand, and mud. A quantitative accuracy assessment on the classification results is performed using ground-truth sediment samples. The predicted facies map is also ground-truthed using seafloor photographs and high-resolution sub-bottom seismic-reflection profiles. This Open-File Report contains the predicted seafloor facies map as a georeferenced TIFF image along with the multibeam bathymetry and acoustic backscatter data used in the study as well as an explanation of the empirical classification process.
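The hierarchical decision-tree step described above can be sketched as a cascade of threshold rules on derivative layers. The threshold values and class order below are hypothetical placeholders; the actual rules were derived from the ground-truth sediment samples and image-analysis software.

```python
import numpy as np

# Facies codes (assumed labels matching the classes named in the abstract)
ROCK, GRAVELLY_MUDDY_SAND, MUDDY_SAND, MUD = 0, 1, 2, 3

def classify_facies(bathy_var, backscatter_db):
    """Classify each pixel from bathymetry variance and backscatter (dB).

    Later rules override earlier ones, mimicking a decision-tree cascade:
    default mud -> stronger backscatter -> rugged (high-variance) seafloor.
    """
    facies = np.full(bathy_var.shape, MUD, dtype=int)
    facies[backscatter_db > -25] = MUDDY_SAND           # moderate acoustic return
    facies[backscatter_db > -18] = GRAVELLY_MUDDY_SAND  # strong acoustic return
    facies[bathy_var > 4.0] = ROCK                      # rugged relief wins
    return facies

bathy_var = np.array([[0.1, 6.0], [0.3, 0.2]])
backscatter = np.array([[-30.0, -15.0], [-20.0, -16.0]])
print(classify_facies(bathy_var, backscatter))  # [[3 0] [2 1]]
```

In practice such thresholds would be tuned, and the result assessed, against the ground-truth sediment samples, as the report describes.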
Metric Evaluation Pipeline for 3d Modeling of Urban Scenes
NASA Astrophysics Data System (ADS)
Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.
2017-05-01
Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state-of-the-art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline developed as publicly available open source software to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
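Completeness and correctness for point clouds are commonly defined as symmetric coverage fractions under a distance threshold. The sketch below uses that generic formulation on toy points; it is an assumption for illustration, not the pipeline's exact implementation.

```python
import numpy as np

def nearest_dists(a, b):
    """For each point in a, distance to the nearest point in b (brute force)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1)

def completeness_correctness(model, truth, threshold=1.0):
    # completeness: fraction of ground-truth (e.g. lidar) points matched by the model
    completeness = float(np.mean(nearest_dists(truth, model) <= threshold))
    # correctness: fraction of model points supported by ground truth
    correctness = float(np.mean(nearest_dists(model, truth) <= threshold))
    return completeness, correctness

truth = np.array([[0.0, 0, 0], [10, 0, 0], [20, 0, 0]])
model = np.array([[0.2, 0, 0], [10.1, 0, 0], [50, 0, 0]])
comp, corr = completeness_correctness(model, truth, threshold=1.0)
print(comp, corr)  # 2 of 3 truth points matched; 2 of 3 model points supported
```

A production pipeline would use a spatial index (k-d tree) rather than the O(n·m) brute-force distance matrix shown here.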
Calibration of Smartphone-Based Weather Measurements Using Pairwise Gossip.
Zamora, Jane Louie Fresco; Kashihara, Shigeru; Yamaguchi, Suguru
2015-01-01
Accurate and reliable daily global weather reports are necessary for weather forecasting and climate analysis. However, the availability of these reports continues to decline due to the lack of economic support and policies for maintaining the ground weather measurement systems from which these reports are obtained. Thus, to mitigate data scarcity, it is necessary to utilize weather information from existing sensors and built-in smartphone sensors. However, as smartphone usage often varies according to human activity, it is difficult to obtain accurate measurement data. In this paper, we present a heuristic-based pairwise gossip algorithm that calibrates smartphone-based pressure sensors with respect to fixed weather stations as our referential ground truth. Based on actual measurements, we have verified that smartphone-based readings are unstable when observed during movement. Using our calibration algorithm on actual smartphone-based pressure readings, the updated values were significantly closer to the ground truth values.
The simulators: truth and power in the psychiatry of José Ingenieros.
Caponi, Sandra
2016-01-01
Using Michel Foucault's lectures on "Psychiatric power" as its starting point, this article analyzes the book Simulación de la locura (The simulation of madness), published in 1903 by the Argentine psychiatrist José Ingenieros. Foucault argues that the problem of simulation permeates the entire history of modern psychiatry. After initial analysis of José Ingenieros's references to the question of simulation in the struggle for existence, the issue of simulation in pathological states in general is examined, and lastly the simulation of madness and the problem of degeneration. Ingenieros participates in the epistemological and political struggle that took place between expert psychiatrists and simulators over the question of truth.
NASA Astrophysics Data System (ADS)
Manousaki, D.; Panagiotopoulou, A.; Bizimi, V.; Haynes, M. S.; Love, S.; Kallergi, M.
2017-11-01
The purpose of this study was the generation of ground truth files (GTFs) of the breast ducts from 3D images of the Invenia™ Automated Breast Ultrasound System (ABUS) (GE Healthcare, Little Chalfont, UK) and the application of these GTFs to the optimization of the imaging protocol and the evaluation of a computer aided detection (CADe) algorithm developed for automated duct detection. Six lactating, nursing volunteers were scanned with the ABUS before and right after breastfeeding their infants. An expert in breast ultrasound generated rough outlines of the milk-filled ducts in the transaxial slices of all image volumes, and the final GTFs were created by using thresholding and smoothing tools in ImageJ. In addition, a CADe algorithm automatically segmented duct-like areas and its results were compared to the expert's GTFs by estimating the true positive fraction (TPF), or % overlap. The CADe output differed significantly from the expert's, but both detected a smaller than expected volume of the ducts due to insufficient contrast (ducts were partially filled with milk), discontinuities, and artifacts. GTFs were used to modify the imaging protocol and improve the CADe method. In conclusion, electronic GTFs provide a valuable tool in the optimization of a tomographic imaging system, the imaging protocol, and CADe algorithms. Their generation, however, is an extremely time consuming, strenuous process, particularly for multi-slice examinations, and alternatives based on phantoms or simulations are highly desirable.
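The true positive fraction (% overlap) used above can be computed on binary segmentation masks as the overlap divided by the ground-truth volume. The arrays below are toy stand-ins for ABUS slices, not study data.

```python
import numpy as np

def true_positive_fraction(cade_mask, gtf_mask):
    """Fraction of expert-annotated (GTF) voxels also flagged by the CADe."""
    gtf_voxels = gtf_mask.sum()
    if gtf_voxels == 0:
        return 0.0
    return float(np.logical_and(cade_mask, gtf_mask).sum() / gtf_voxels)

gtf = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0]], dtype=bool)   # expert ground truth file
cade = np.array([[1, 0, 0, 1],
                 [1, 1, 0, 0]], dtype=bool)  # automated segmentation
print(true_positive_fraction(cade, gtf))  # 3 of 4 GTF voxels detected -> 0.75
```

Note that TPF is asymmetric: false positives (like the stray voxel above) do not lower it, which is why studies often report it alongside other overlap measures.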
Ground truth seismic events and location capability at Degelen mountain, Kazakhstan
Trabant, C.; Thurber, C.; Leith, W.
2002-01-01
We utilized nuclear explosions from the Degelen Mountain sub-region of the Semipalatinsk Test Site (STS), Kazakhstan, to assess seismic location capability directly. Excellent ground truth information for these events was either known or was estimated from maps of the Degelen Mountain adit complex. Origin times were refined for events for which absolute origin time information was unknown using catalog arrival times, our ground truth location estimates, and a time baseline provided by fixing known origin times during a joint hypocenter determination (JHD). Precise arrival time picks were determined using a waveform cross-correlation process applied to the available digital data. These data were used in a JHD analysis. We found that very accurate locations were possible when high-precision, waveform cross-correlation arrival times were combined with JHD. Relocation with our full digital data set resulted in a mean mislocation of 2 km and a mean 95% confidence ellipse (CE) area of 6.6 km2 (90% CE: 5.1 km2); however, only 5 of the 18 computed error ellipses actually covered the associated ground truth location estimate. To test a more realistic nuclear test monitoring scenario, we applied our JHD analysis to a set of seven events (one fixed) using data only from seismic stations within 40° epicentral distance. Relocation with these data resulted in a mean mislocation of 7.4 km, with four of the 95% error ellipses covering less than 570 km2 (90% CE: 438 km2) and the other two covering 1730 and 8869 km2 (90% CE: 1331 and 6822 km2). Location uncertainties calculated using JHD often underestimated the true error, but a circular region with a radius equal to the mislocation covered less than 1000 km2 for all events having more than three observations. © 2002 Elsevier Science B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Biondo, Manuela; Bartholomä, Alexander
2017-04-01
One of the burning issues in acoustic seabed classification is the lack of solid, repeatable, statistical procedures that can support the verification of acoustic variability in relation to seabed properties. Acoustic sediment classification schemes often lead to biased and subjective interpretation, as they ultimately aim at an oversimplified categorization of the seabed based on conventionally defined sediment types. However, grain size variability alone cannot account for acoustic diversity, which is ultimately affected by multiple physical processes, scale of heterogeneity, instrument settings, data quality, image processing and segmentation performance. Understanding and assessing the weight of all of these factors on backscatter is a difficult task, due to the spatially limited and fragmentary knowledge of the seabed from direct observations (e.g. grab samples, cores, videos). In particular, large-scale mapping requires an enormous availability of ground-truthing data that is often obtained from heterogeneous and multidisciplinary sources, adding a further chance of misclassification. Independently of all of these limitations, acoustic segments still contain signals of seabed changes that, if appropriate procedures are established, can be translated into meaningful knowledge. In this study we design a simple, repeatable method, based on multivariate procedures, to classify a 100 km2, high-frequency (450 kHz) sidescan sonar mosaic acquired in the year 2012 in the shallow upper-mesotidal inlet of the Jade Bay (German North Sea coast). The tool used for the automated classification of the backscatter mosaic is the QTC SWATHVIEW™ software. The ground-truthing database included grab sample data from multiple sources (2009-2011). The method was designed to extrapolate quantitative descriptors for acoustic backscatter and model their spatial changes in relation to grain size distribution and morphology. 
The modelled relationships were used to: 1) assess the automated segmentation performance, 2) obtain a ranking of the most discriminant seabed attributes responsible for acoustic diversity, 3) select the best-fit ground-truthing information to characterize each acoustic class. Using a supervised Linear Discriminant Analysis (LDA), relationships between seabed parameters and acoustic class discrimination were modelled, and acoustic classes for each data point were predicted. The model predicted a success rate of 63.5%. An unsupervised LDA was used to model relationships between acoustic variables and clustered seabed categories with the scope of identifying misrepresentative ground-truthing data points. The model prediction scored a success rate of 50.8%. Misclassified data points were disregarded for the final classification. These analyses led to a clearer, more accurate appreciation of relationship patterns and improved understanding of site-specific processes affecting the acoustic signal. Value was added to the qualitative classification output by comparing it with a more recent set of acoustic and ground-truthing information (2014). Classification resulted in the first acoustic sediment map ever produced in the area and offered valuable knowledge of detailed sediment variability. The method proved to be a simple, repeatable strategy that may be applied to similar work and environments.
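The supervised discriminant step above can be illustrated with a minimal two-class Fisher discriminant on synthetic seabed attributes. This is a hedged sketch under assumed data: the study used a multi-class LDA on real grab samples, not the toy classes and features below.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "acoustic classes" in a 2-D attribute space
# (e.g. mean grain size vs. sorting; purely illustrative values)
x0 = rng.normal([2.0, 1.0], 0.6, size=(100, 2))  # class 0, e.g. mud-dominated
x1 = rng.normal([4.0, 3.0], 0.6, size=(100, 2))  # class 1, e.g. sand-dominated
X = np.vstack([x0, x1])
y = np.array([0] * 100 + [1] * 100)

# Fisher discriminant: project onto Sw^-1 (m1 - m0), split at the midpoint
m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
Sw = np.cov(x0, rowvar=False) + np.cov(x1, rowvar=False)  # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)
threshold = w @ (m0 + m1) / 2

pred = (X @ w > threshold).astype(int)
success_rate = float((pred == y).mean())  # analogous to the paper's success rate
print(success_rate)
```

The reported 63.5% and 50.8% success rates are exactly this kind of resubstitution accuracy, computed over all ground-truthing points.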
The NRL 2011 Airborne Sea-Ice Thickness Campaign
NASA Astrophysics Data System (ADS)
Brozena, J. M.; Gardner, J. M.; Liang, R.; Ball, D.; Richter-Menge, J.
2011-12-01
In March of 2011, the US Naval Research Laboratory (NRL) performed a study focused on the estimation of sea-ice thickness from airborne radar, laser and photogrammetric sensors. The study was funded by ONR to take advantage of the Navy's ICEX2011 ice-camp/submarine exercise, and to serve as a lead-in year for NRL's five-year basic research program on the measurement and modeling of sea ice scheduled to take place from 2012-2017. Researchers from the Army Cold Regions Research and Engineering Laboratory (CRREL) and NRL worked with the Navy Arctic Submarine Lab (ASL) to emplace a 9 km-long ground-truth line near the ice-camp (see Richter-Menge et al., this session) along which ice and snow thickness were directly measured. Additionally, US Navy submarines collected ice draft measurements under the ground-truth line. Repeat passes directly over the ground-truth line were flown, and a grid surrounding the line was also flown to collect altimeter, LiDAR, and photogrammetry data. Five CRYOSAT-2 satellite tracks were underflown as well, coincident with satellite passage. Estimates of sea ice thickness are calculated assuming local hydrostatic balance, and require the densities of water, ice and snow, snow depth, and freeboard (defined as the elevation of sea ice, plus accumulated snow, above local sea level). Snow thickness is estimated from the difference between LiDAR and radar altimeter profiles, the latter of which is assumed to penetrate any snow cover. The concepts we used to estimate ice thickness are similar to those employed in NASA ICEBRIDGE sea-ice thickness estimation. Airborne sensors used for our experiment were a Riegl Q-560 scanning topographic LiDAR, a pulse-limited (2 nS), 10 GHz radar altimeter and an Applanix DSS-439 digital photogrammetric camera (for lead identification). Flights were conducted on a Twin Otter aircraft from Pt. Barrow, AK, and averaged ~5 hours in duration. 
It is challenging to directly compare results from the swath LiDAR with the pulse-limited radar altimeter that has a footprint that varies from a few meters to a few tens of meters depending on altitude and roughness of the reflective surface. Intercalibration of the two instruments was accomplished at leads in the ice and by multiple over-flights of four radar corner-cubes set ~ 2 m above the snow along the ground-truth line. Direct comparison of successive flights of the ground-truth line to flights done in a grid pattern over and adjacent to the line was complicated by the ~ 20-30 m drift of the ice-floe between successive flight-lines. This rapid ice movement required the laser and radar data be translated into an ice-fixed, rather than a geographic reference frame. This was facilitated by geodetic GPS receiver measurements at the ice-camp and Pt. Barrow. The NRL data set, in combination with the ground-truth line and submarine upward-looking sonar data, will aid in understanding the error budgets of our systems, the ICEBRIDGE airborne measurements (also flown over the ground-truth line), and the CRYOSAT-2 data over a wide range of ice types.
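The hydrostatic-balance calculation described above can be sketched as follows. The density values below are typical textbook assumptions, not the values used by NRL:

```python
# Sketch of sea-ice thickness from freeboard via local hydrostatic balance.
# Densities are illustrative assumptions (kg/m^3), not NRL's values.
RHO_WATER = 1024.0  # sea water
RHO_ICE = 915.0     # sea ice
RHO_SNOW = 320.0    # snow

def ice_thickness(freeboard_m, snow_depth_m):
    """Ice thickness from total freeboard F (ice + snow above sea level,
    from LiDAR) and snow depth h_s (LiDAR minus radar altimeter profiles).

    Hydrostatic balance: rho_i*h_i + rho_s*h_s = rho_w*draft,
    with draft = h_i - (F - h_s), which rearranges to
    h_i = (rho_w*F - (rho_w - rho_s)*h_s) / (rho_w - rho_i).
    """
    return (RHO_WATER * freeboard_m
            - (RHO_WATER - RHO_SNOW) * snow_depth_m) / (RHO_WATER - RHO_ICE)

# Example: 0.5 m total freeboard carrying 0.2 m of snow
print(round(ice_thickness(0.5, 0.2), 2))  # ~3.41 m of ice
```

Note how sensitive the result is to the assumed densities and to snow depth; this is why the LiDAR/radar intercalibration at leads and corner reflectors matters.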
ALP FOPEN Site Description and Ground Truth Summary
1990-02-01
equations describing the distribution of above-ground biomass for the various tree species; and (6) dielectric measurements of the two major tree...does not physically alter the tree layer being sampled by pressing too hard with the dielectric probe. In design of an experiment to collect dielectric
Naval sensor data database (NSDD)
NASA Astrophysics Data System (ADS)
Robertson, Candace J.; Tubridy, Lisa H.
1999-08-01
The Naval Sensor Data database (NSDD) is a multi-year effort to archive, catalogue, and disseminate data from all types of sensors to the mine warfare, signal and image processing, and sensor development communities. The purpose is to improve and accelerate research and technology. Providing performers with the data required to develop and validate improvements in hardware, simulation, and processing will foster advances in sensor and system performance. The NSDD will provide a centralized source of sensor data and its associated ground truth, which will support an improved understanding in the areas of signal processing, computer-aided detection and classification, data compression, data fusion, and geo-referencing, as well as sensor and sensor system design.
Quantitative Morphology Measures in Galaxies: Ground-Truthing from Simulations
NASA Astrophysics Data System (ADS)
Narayanan, Desika T.; Abruzzo, Matthew W.; Dave, Romeel; Thompson, Robert
2017-01-01
The process of galaxy assembly is a central question in astronomy; there are a variety of potentially important effects, including baryonic accretion from the intergalactic medium, as well as major galaxy mergers. Recent years have ushered in the development of quantitative measures of morphology such as the Gini coefficient (G), the second-order moment of the brightest quintile of a galaxy’s light (M20), and the concentration (C), asymmetry (A), and clumpiness (S) of galaxies. To investigate the efficacy of these observational methods at identifying major mergers, we have run a series of very high resolution cosmological zoom simulations, and coupled these with 3D Monte Carlo dust radiative transfer. Our methodology is powerful in that it allows us to “observe” the simulation as an observer would, while maintaining detailed knowledge of the true merger history of the galaxy. In this presentation, we summarize the main results of our analysis of these quantitative morphology measures, with a particular focus on high-redshift (z>2) systems.
Evaluating segmentation error without ground truth.
Kohlberger, Timo; Singh, Vivek; Alvino, Chris; Bahlmann, Claus; Grady, Leo
2012-01-01
The automatic delineation of the boundaries of organs and other anatomical structures is a key component of many medical image processing systems. In this paper we present a generic learning approach based on a novel space of segmentation features, which can be trained to predict the overlap error and Dice coefficient of an arbitrary organ segmentation without knowing the ground truth delineation. We show the regressor to be a much stronger predictor of these error metrics than the responses of probabilistic boosting classifiers trained on the segmentation boundary. The presented approach not only allows us to build reliable confidence measures and fidelity checks, but also to rank several segmentation hypotheses against each other during online usage of the segmentation algorithm in clinical practice.
Generating high precision ionospheric ground-truth measurements
NASA Technical Reports Server (NTRS)
Komjathy, Attila (Inventor); Sparks, Lawrence (Inventor); Mannucci, Anthony J. (Inventor)
2007-01-01
A method, apparatus and article of manufacture provide ionospheric ground-truth measurements for use in a wide-area augmentation system (WAAS). Ionospheric pseudorange/code and carrier phase data, as primary observables, are received by a WAAS receiver. A polynomial fit is performed on the phase data, which is examined to identify any cycle slips. The phase data is then leveled. Satellite and receiver biases are obtained and applied to the leveled phase data to obtain unbiased phase-leveled ionospheric measurements that are used in a WAAS system. In addition, one of several measurements may be selected and data is output that provides information on the quality of the measurements that are used to determine corrective messages as part of the WAAS system.
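As a rough illustration of the cycle-slip screening step, one can fit a low-order polynomial to a carrier-phase arc and flag epochs where the fit residual jumps. The polynomial order, threshold, and synthetic data below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def find_cycle_slips(phase, order=3, threshold=0.5):
    """Flag epochs where the residual of a polynomial fit to the phase
    arc jumps by more than `threshold` (illustrative units/values)."""
    t = np.arange(len(phase), dtype=float)
    fit = np.polyval(np.polyfit(t, phase, order), t)
    resid = phase - fit
    jumps = np.abs(np.diff(resid))
    return list(np.where(jumps > threshold)[0] + 1)  # index after the slip

# Smooth synthetic phase arc with a simulated discontinuity at epoch 30
phase = np.polyval([0.001, -0.02, 5.0], np.arange(50.0))
phase[30:] += 3.0
print(find_cycle_slips(phase))
```

A production implementation would of course work per satellite arc and account for data gaps and noise statistics; this only conveys the polynomial-fit-plus-residual idea.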
Classifying unresolved objects from simulated space data.
NASA Technical Reports Server (NTRS)
Nalepka, R. F.; Hyde, P. D.
1973-01-01
A multispectral scanner data set gathered at a flight altitude of 10,000 ft. over an agricultural area was modified to simulate the spatial resolution of the spacecraft scanners. Signatures were obtained for several major crops and their proportions were estimated over a large area. For each crop, a map was generated to show its approximate proportion in each resolution element, and hence its distribution over the area of interest. A statistical criterion was developed to identify data points that may not represent a mixture of the specified crops. This allows for great reduction in the effect of unknown or alien objects on the estimated proportions. This criterion can be used to locate special features, such as roads or farm houses. Preliminary analysis indicates a high level of consistency between estimated proportions and available ground truth. Large concentrations of major crops show up especially well on the maps.
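The statistical screening criterion described above can be illustrated with a Mahalanobis-distance test: a resolution element whose spectrum lies too far from the known crop-signature model is flagged as a possible alien object (road, farmhouse). The signatures and threshold below are purely illustrative assumptions:

```python
import numpy as np

def is_alien(x, mean, cov_inv, threshold=16.0):
    """Flag a pixel spectrum x as alien if its squared Mahalanobis
    distance from the crop-mixture model exceeds the threshold."""
    d = x - mean
    return float(d @ cov_inv @ d) > threshold

# Illustrative 3-band mixture-model mean and (diagonal) covariance
mean = np.array([0.2, 0.4, 0.3])
cov_inv = np.linalg.inv(np.diag([0.01, 0.01, 0.01]))

print(is_alien(np.array([0.21, 0.39, 0.31]), mean, cov_inv))  # False: fits the model
print(is_alien(np.array([0.90, 0.10, 0.80]), mean, cov_inv))  # True: likely alien
```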
Analysis of thematic mapper simulator data collected over eastern North Dakota
NASA Technical Reports Server (NTRS)
Anderson, J. E. (Principal Investigator)
1982-01-01
The results of the analysis of aircraft-acquired thematic mapper simulator (TMS) data, collected to investigate the utility of thematic mapper data in crop area and land cover estimates, are discussed. Results of the analysis indicate that the seven-channel TMS data are capable of delineating the 13 crop types included in the study to an overall pixel classification accuracy of 80.97% correct, with relative efficiencies for four crop types examined between 1.62 and 26.61. Both supervised and unsupervised spectral signature development techniques were evaluated. The unsupervised methods proved to be inferior (based on analysis of variance) for the majority of crop types considered. Given the ground truth data set used for spectral signature development as well as evaluation of performance, it is possible to demonstrate which signature development technique would produce the highest percent correct classification for each crop type.
Spatiotemporal multivariate mixture models for Bayesian model selection in disease mapping.
Lawson, A B; Carroll, R; Faes, C; Kirby, R S; Aregay, M; Watjou, K
2017-12-01
It is often the case that researchers wish to simultaneously explore the behavior of and estimate overall risk for multiple, related diseases with varying rarity while accounting for potential spatial and/or temporal correlation. In this paper, we propose a flexible class of multivariate spatio-temporal mixture models to fill this role. Further, these models offer flexibility with the potential for model selection as well as the ability to accommodate lifestyle, socio-economic, and physical environmental variables with spatial, temporal, or both structures. Here, we explore the capability of this approach via a large scale simulation study and examine a motivating data example involving three cancers in South Carolina. The results which are focused on four model variants suggest that all models possess the ability to recover simulation ground truth and display improved model fit over two baseline Knorr-Held spatio-temporal interaction model variants in a real data application.
NASA Technical Reports Server (NTRS)
Rust, W. D.; Macgorman, D. R.; Taylor, W.; Arnold, R. T.
1984-01-01
Severe storms and lightning were measured with a NASA U2 and ground based facilities, both fixed base and mobile. Aspects of this program are reported. The following results are presented: (1) ground truth measurements of lightning for comparison with those obtained by the U2. These measurements include flash type identification, electric field changes, optical waveforms, and ground strike location; (2) simultaneous extremely low frequency (ELF) waveforms for cloud to ground (CG) flashes; (3) an assessment of the CG strike location system (LLP) using a combination of mobile laboratory and television video data; (4) continued development of analog-to-digital conversion techniques for processing lightning data from the U2, mobile laboratory, and NSSL sensors; (5) completion of an all azimuth TV system for CG ground truth; (6) a preliminary analysis of both IC and CG lightning in a mesocyclone; and (7) the finding of a bimodal peak in the altitude of lightning activity in some storms in the Great Plains and on the east coast. In the storms on the Great Plains, there was a distinct class of flash that forms the upper mode of the distribution. These flashes are of smaller horizontal extent, but occur more frequently than flashes in the lower mode of the distribution.
Framework for modeling urban restoration resilience time in the aftermath of an extreme event
Ramachandran, Varun; Long, Suzanna K.; Shoberg, Thomas G.; Corns, Steven; Carlo, Héctor
2015-01-01
The impacts of extreme events continue long after the emergency response has terminated. Effective reconstruction of supply-chain strategic infrastructure (SCSI) elements is essential for postevent recovery and the reconnectivity of a region with the outside. This study uses an interdisciplinary approach to develop a comprehensive framework to model resilience time. The framework is tested by comparing resilience time results for a simulated EF-5 tornado with ground truth data from the tornado that devastated Joplin, Missouri, on May 22, 2011. Data for the simulated tornado were derived for Overland Park, Johnson County, Kansas, in the greater Kansas City, Missouri, area. Given the simulated tornado, a combinatorial graph considering the damages in terms of interconnectivity between different SCSI elements is derived. Reconstruction in the aftermath of the simulated tornado is optimized using the proposed framework to promote a rapid recovery of the SCSI. This research shows promising results when compared with the independent quantifiable data obtained from Joplin, Missouri, returning a resilience time of 22 days compared with 25 days reported by city and state officials.
Lagrange constraint neural network for audio varying BSS
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Hsu, Charles C.
2002-03-01
Lagrange Constraint Neural Network (LCNN) is a statistical-mechanical ab-initio model that does not assume the artificial neural network (ANN) model at all, but derives it from the first principles of the Hamilton and Lagrange methodology: H(S,A) = f(S) − λ C(S, A(x,t)), which incorporates the measurement constraint C(S, A(x,t)) = λ([A]S − X) + (λ0 − 1)(Σi si − 1) using the vector Lagrange multiplier λ and the a priori Shannon entropy f(S) = −Σi si log si as the contrast function of an unknown number of independent sources si. Szu et al. first solved, in 1997, the general Blind Source Separation (BSS) problem for a spatially and temporally varying mixing matrix in real-world remote sensing, where a large pixel footprint implies that the mixing matrix [A(x,t)] is necessarily filled with diurnal and seasonal variations. Because the ground truth is difficult to ascertain in remote sensing, we illustrate in this paper each step of the LCNN algorithm for simulated spatial-temporal varying BSS in speech and music audio mixing. We review and compare LCNN with other popular a-posteriori maximum entropy methodologies defined by the ANN weight matrix [W] and sigmoid (σ) post-processing, H(Y = σ([W]X)), by Bell-Sejnowski, Amari and Oja (BSAO), called Independent Component Analysis (ICA). Both are mirror-symmetric MaxEnt methodologies and work for a constant unknown mixing matrix [A], but the major difference is whether the ensemble average is taken over neighborhood pixel data X in BSAO or over the a priori source variables S in LCNN; the latter choice dictates which method works for a spatial-temporal varying [A(x,t)], which would not allow the neighborhood pixel average. We expected the success of sharper de-mixing by the LCNN method in terms of a controlled ground-truth experiment in the simulation of a variant mixture of two pieces of music of similar kurtosis (15 seconds composed of Saint-Saëns' Swan and Rachmaninov's cello concerto).
Exploring the impact of big data in economic geology using cloud-based synthetic sensor networks
NASA Astrophysics Data System (ADS)
Klump, J. F.; Robertson, J.
2015-12-01
In a market demanding lower resource prices and increasing efficiencies, resources companies are increasingly looking to the realm of real-time, high-frequency data streams to better measure and manage their minerals processing chain, from pit to plant to port. Sensor streams can include real-time drilling engineering information, data streams from mining trucks, and on-stream sensors operating in the plant feeding back rich chemical information. There are also many opportunities to deploy new sensor streams - unlike environmental monitoring networks, the mine environment is not energy- or bandwidth-limited. Although the promised efficiency dividends are inviting, the path to achieving these is difficult to see for most companies. As well as knowing where to invest in new sensor technology and how to integrate the new data streams, companies must grapple with risk-laden changes to their established methods of control to achieve maximum gains. What is required is a sandbox data environment for the development of analysis and control strategies at scale, allowing companies to de-risk proposed changes before actually deploying them to a live mine environment. In this presentation we describe our approach to simulating real-time scaleable data streams in a mine environment. Our sandbox consists of three layers: (a) a ground-truth layer that contains geological models, which can be statistically based on historical operations data, (b) a measurement layer - a network of RESTful synthetic sensor microservices which can simulate measurements of ground-truth properties, and (c) a control layer, which integrates the sensor streams and drives the measurement and optimisation strategies. The control layer could be a new machine learner, or simply a company's existing data infrastructure. 
Containerisation allows rapid deployment of large numbers of sensors, as well as service discovery to form a dynamic network of thousands of sensors, at a far lower cost than physically building the network.
NASA Astrophysics Data System (ADS)
Coppersmith, R.; Schultz-Fellenz, E. S.; Sussman, A. J.; Vigil, S.; Dzur, R.; Norskog, K.; Kelley, R.; Miller, L.
2015-12-01
While long-term objectives of monitoring and verification regimes include remote characterization and discrimination of surficial geologic and topographic features at sites of interest, ground truth data is required to advance development of remote sensing techniques. Increasingly, it is desirable for these ground-based or ground-proximal characterization methodologies to be as nimble, efficient, non-invasive, and non-destructive as their higher-altitude airborne counterparts while ideally providing superior resolution. For this study, the area of interest is an alluvial site at the Nevada National Security Site intended for use in the Source Physics Experiment's (Snelson et al., 2013) second phase. Ground-truth surface topographic characterization was performed using a DJI Inspire 1 unmanned aerial system (UAS), at very low altitude (< 5-30m AGL). 2D photographs captured by the standard UAS camera payload were imported into Agisoft Photoscan to create three-dimensional point clouds. Within the area of interest, careful installation of surveyed ground control fiducial markers supplied necessary targets for field collection, and information for model georectification. The resulting model includes a Digital Elevation Model derived from 2D imagery. It is anticipated that this flexible and versatile characterization process will provide point cloud data resolution equivalent to a purely ground-based LiDAR scanning deployment (e.g., 1-2cm horizontal and vertical resolution; e.g., Sussman et al., 2012; Schultz-Fellenz et al., 2013). In addition to drastically increasing time efficiency in the field, the UAS method also allows for more complete coverage of the study area when compared to ground-based LiDAR. 
Comparison and integration of these data with conventionally-acquired airborne LiDAR data from a higher-altitude (~ 450m) platform will aid significantly in the refinement of technologies and detection capabilities of remote optical systems to identify and detect surface geologic and topographic signatures of interest. This work includes a preliminary comparison of surface signatures detected from varying standoff distances to assess current sensor performance and benefits.
NASA Astrophysics Data System (ADS)
Brelsford, Christa; Shepherd, Doug
2013-09-01
In desert cities, securing sufficient water supply to meet the needs of both existing population and future growth is a complex problem with few easy solutions. Grass lawns are a major driver of water consumption and accurate measurements of vegetation area are necessary to understand drivers of changes in household water consumption. Measuring vegetation change in a heterogeneous urban environment requires sub-pixel estimation of vegetation area. Mixture Tuned Matched Filtering (MTMF) has been successfully applied to target detection for materials that only cover small portions of a satellite image pixel. There have been few successful applications of MTMF to fractional area estimation, despite theory that suggests feasibility. We use a ground truth dataset over ten times larger than that available for any previous MTMF application to estimate the bias between ground truth data and matched filter results. We find that the MTMF algorithm underestimates the fractional area of vegetation by 5-10%, and calculate that averaging over 20 to 30 pixels is necessary to correct this bias. We conclude that with a large ground truth dataset, using MTMF for fractional area estimation is possible when results can be estimated at a lower spatial resolution than the base image. When this method is applied to estimating vegetation area in Las Vegas, NV, spatial and temporal trends are consistent with expectations from known population growth and policy goals.
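A minimal sketch of the bias-estimation-and-averaging idea: per-pixel matched-filter fractions are biased low, so the bias is estimated against ground truth and fractions are reported as block averages. The 7% bias, noise level, and 25-pixel blocks below are assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
true_frac = rng.uniform(0.0, 0.5, size=900)               # ground-truth fractions
mtmf_frac = true_frac - 0.07 + rng.normal(0, 0.05, 900)   # biased, noisy MF output

# Estimate the bias from ground truth, then correct 25-pixel block means
bias = np.mean(mtmf_frac - true_frac)
block_means = mtmf_frac.reshape(-1, 25).mean(axis=1) - bias
block_truth = true_frac.reshape(-1, 25).mean(axis=1)

print(round(bias, 3))                                     # close to -0.07
print(round(np.abs(block_means - block_truth).max(), 3))  # small block-level error
```

Averaging shrinks the per-pixel noise by roughly the square root of the block size, which is the intuition behind the paper's 20-30 pixel figure.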
Evaluating Continuous-Time Slam Using a Predefined Trajectory Provided by a Robotic Arm
NASA Astrophysics Data System (ADS)
Koch, B.; Leblebici, R.; Martell, A.; Jörissen, S.; Schilling, K.; Nüchter, A.
2017-09-01
Recently published approaches to SLAM algorithms process laser sensor measurements and output a map as a point cloud of the environment. Often the actual precision of the map remains unclear, since SLAM algorithms apply local improvements to the resulting map. Unfortunately, it is not trivial to compare the performance of SLAM algorithms objectively, especially without an accurate ground truth. This paper presents a novel benchmarking technique that allows a precise map generated with an accurate ground-truth trajectory to be compared to a map with a manipulated trajectory which was distorted by different forms of noise. The accurate ground truth is acquired by mounting a laser scanner on an industrial robotic arm. The robotic arm is moved on a predefined path while the position and orientation of the end-effector tool are monitored. During this process the 2D profile measurements of the laser scanner are recorded in six degrees of freedom and afterwards used to generate a precise point cloud of the test environment. For benchmarking, an offline continuous-time SLAM algorithm is subsequently applied to remove the inserted distortions. Finally, it is shown that the manipulated point cloud is reversible to its previous state and is slightly improved compared to the original version, since small errors introduced by imprecise assumptions, sensor noise, and calibration errors are removed as well.
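The benchmarking idea can be illustrated by distorting a known trajectory with noise and scoring it against the ground truth. A real evaluation would compare the point clouds produced by the continuous-time SLAM algorithm, but the error metric is the same in spirit; all values here are synthetic:

```python
import numpy as np

def rmse(traj_a, traj_b):
    """Root-mean-square point-to-point distance between two trajectories."""
    return float(np.sqrt(np.mean(np.sum((traj_a - traj_b) ** 2, axis=1))))

rng = np.random.default_rng(42)
t = np.linspace(0, 2 * np.pi, 200)
ground_truth = np.column_stack([np.cos(t), np.sin(t)])          # predefined path
distorted = ground_truth + rng.normal(0, 0.02, ground_truth.shape)

print(round(rmse(distorted, ground_truth), 3))  # error at the injected noise level
```

A SLAM run would be judged by how much of this injected error it removes, i.e., how close the corrected trajectory's RMSE gets to zero.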
Wu, Jian; Murphy, Martin J
2010-06-01
To assess the precision and robustness of patient setup corrections computed from 3D/3D rigid registration methods using image intensity, when no ground truth validation is possible. Fifteen pairs of male pelvic CTs were rigidly registered using four different in-house registration methods. Registration results were compared for different resolutions and image content by varying the image down-sampling ratio and by thresholding out soft tissue to isolate bony landmarks. Intrinsic registration precision was investigated by comparing the different methods and by reversing the source and the target roles of the two images being registered. The translational reversibility errors for successful registrations ranged from 0.0 to 1.69 mm. Rotations were less than 1 degree. Mutual information failed in most registrations that used only bony landmarks. The magnitude of the reversibility error was strongly correlated with the success/failure of each algorithm to find the global minimum. Rigid image registrations have an intrinsic uncertainty and robustness that depends on the imaging modality, the registration algorithm, the image resolution, and the image content. In the absence of an absolute ground truth, the variation in the shifts calculated by several different methods provides a useful estimate of that uncertainty. The difference observed by reversing the source and target images can be used as an indication of robust convergence.
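The reversibility check described above composes the forward (source-to-target) and reverse (target-to-source) rigid transforms and measures how far the composition deviates from identity. A 2D sketch with synthetic transforms (the paper works with 3D CT registrations):

```python
import numpy as np

def reversibility_error(R_ab, t_ab, R_ba, t_ba):
    """Compose A->B with B->A; return the residual translation norm
    (same units as t) and the composed rotation matrix."""
    R = R_ba @ R_ab
    t = R_ba @ t_ab + t_ba
    return float(np.linalg.norm(t)), R

theta = np.deg2rad(5.0)
R_ab = np.array([[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]])
t_ab = np.array([2.0, -1.0])

# A perfectly consistent inverse registration: the exact analytic inverse
R_ba = R_ab.T
t_ba = -R_ab.T @ t_ab

err, R = reversibility_error(R_ab, t_ab, R_ba, t_ba)
print(round(err, 6))  # 0.0 for a perfectly reversible pair
```

In practice the two registrations are computed independently, so the composed transform differs from identity; that residual (0.0-1.69 mm in the study) is the reversibility error.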
Evaluating the state of the art in coreference resolution for electronic medical records
Bodnari, Andreea; Shen, Shuying; Forbush, Tyler; Pestian, John; South, Brett R
2012-01-01
Background The fifth i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records conducted a systematic review on resolution of noun phrase coreference in medical records. Informatics for Integrating Biology and the Bedside (i2b2) and the Veterans Affairs (VA) Consortium for Healthcare Informatics Research (CHIR) partnered to organize the coreference challenge. They provided the research community with two corpora of medical records for the development and evaluation of the coreference resolution systems. These corpora contained various record types (ie, discharge summaries, pathology reports) from multiple institutions. Methods The coreference challenge provided the community with two annotated ground truth corpora and evaluated systems on coreference resolution in two ways: first, it evaluated systems for their ability to identify mentions of concepts and to link together those mentions. Second, it evaluated the ability of the systems to link together ground truth mentions that refer to the same entity. Twenty teams representing 29 organizations and nine countries participated in the coreference challenge. Results The teams' system submissions showed that machine-learning and rule-based approaches worked best when augmented with external knowledge sources and coreference clues extracted from document structure. The systems performed better in coreference resolution when provided with ground truth mentions. Overall, the systems struggled in solving coreference resolution for cases that required domain knowledge. PMID:22366294
SAGE to examine Earth's stratosphere
NASA Technical Reports Server (NTRS)
1979-01-01
The SAGE mission is discussed along with the role of the Nimbus 7 experiment. Other topics discussed include: ground truth measurements, data collection and processing, SAGE instrumentation, and launch sequence.
NASA Astrophysics Data System (ADS)
Köhler, P.; Huth, A.
2010-08-01
The canopy height h of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or LIDAR. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established, this would offer the possibility of global monitoring of the above-ground carbon content on land. In the absence of adequate field data we use simulation results of a tropical rain forest growth model to propose what degree of information might be generated from canopy height, and thus to enable ground-truthing of potential future satellite observations. We here analyse the correlation of canopy height in a tropical rain forest with other structural characteristics, such as above-ground live biomass (AGB) (and thus carbon content of vegetation) and leaf area index (LAI), and identify how correlation and uncertainty vary for two different spatial scales. The process-based forest growth model FORMIND2.0 was applied to simulate (a) undisturbed forest growth and (b) a wide range of possible disturbance regimes typical of local tree-logging conditions for a tropical rain forest site on Borneo (Sabah, Malaysia) in South-East Asia. In both undisturbed and disturbed forests AGB can be expressed as a power-law function of canopy height h (AGB = a · h^b) with an r2 of ~60% if data are analysed at a spatial resolution of 20 m × 20 m (0.04 ha, also called plot size). The correlation coefficient of the regression becomes significantly better for the disturbed forest sites (r2 = 91%) if data are analysed at the one-hectare scale. There appears to be no functional dependency between LAI and canopy height, but there is a linear correlation (r2 ~ 60%) between AGB and the area fraction of gaps in which the canopy is highly disturbed.
A reasonable agreement of our results with observations is obtained from a comparison of the simulations with permanent sampling plot (PSP) data from the same region and with the large-scale forest inventory in Lambir. We conclude that spaceborne remote sensing techniques such as LIDAR and radar interferometry have the potential to quantify the carbon contained in the vegetation, although this calculation contains structural uncertainties, due to the heterogeneity of the forest landscape, which restrict future applications to spatial averages of about one hectare in size. The uncertainties in AGB for a given canopy height are here 20-40% (95% confidence level), corresponding to a standard deviation of less than ±10%. This uncertainty on the 1 ha scale is much smaller than in the analysis of 0.04 ha-scale data. At this small scale (0.04 ha), AGB can only be calculated from canopy height with an uncertainty which is at least of the magnitude of the signal itself, due to the natural spatial heterogeneity of these forests.
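The reported allometry AGB = a · h^b is commonly fitted by ordinary least squares in log-log space. A minimal sketch with synthetic data, assuming a = 1.5 and b = 2 purely for illustration (the paper's fitted coefficients are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
h = rng.uniform(5, 40, 300)                            # canopy heights (m)
agb = 1.5 * h ** 2 * np.exp(rng.normal(0, 0.1, 300))   # noisy power law (synthetic)

# log(AGB) = log(a) + b*log(h): a straight line in log-log space
b, log_a = np.polyfit(np.log(h), np.log(agb), 1)
print(round(float(b), 2), round(float(np.exp(log_a)), 2))  # close to 2 and 1.5
```

The scatter term in the synthetic data plays the role of the natural spatial heterogeneity the abstract describes; widening it widens the confidence band on AGB at a given height.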
Classification of microscopy images of Langerhans islets
NASA Astrophysics Data System (ADS)
Švihlík, Jan; Kybic, Jan; Habart, David; Berková, Zuzana; Girman, Peter; Kříž, Jan; Zacharovová, Klára
2014-03-01
Evaluation of images of Langerhans islets is a crucial procedure for planning an islet transplantation, which is a promising diabetes treatment. This paper deals with segmentation of microscopy images of Langerhans islets and evaluation of islet parameters such as area, diameter, or volume (IE). For all the available images, the ground truth and the islet parameters were independently evaluated by four medical experts. We use a pixelwise linear classifier (perceptron algorithm) and SVM (support vector machine) for image segmentation. The volume is estimated based on circle or ellipse fitting to individual islets. The segmentations were compared with the corresponding ground truth. Quantitative islet parameters were also evaluated and compared with parameters given by medical experts. We can conclude that the accuracy of the presented fully automatic algorithm is fully comparable with that of the medical experts.
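The volume step can be sketched as follows: once a circle of diameter d is fitted to a segmented islet, its volume can be modeled as a sphere and expressed in islet equivalents (IE), where one IE is conventionally the volume of an islet 150 µm in diameter. The sphere model and the IE convention are stated here as assumptions, not details taken from the paper:

```python
def islet_ie(diameter_um):
    """Islet equivalents for a spherical islet of the given diameter (um).
    1 IE = volume of a 150-um-diameter islet, so the ratio of volumes
    reduces to (d / 150)^3."""
    return (diameter_um / 150.0) ** 3

print(round(islet_ie(150.0), 3))  # 1.0 by definition
print(round(islet_ie(300.0), 3))  # 8.0: doubling the diameter gives 8x the volume
```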
NASA Technical Reports Server (NTRS)
Donovan, T. J.; Termain, P. A.; Henry, M. E. (Principal Investigator)
1979-01-01
The author has identified the following significant results. The Cement oil field, Oklahoma, was a test site for an experiment designed to evaluate LANDSAT's capability to detect an alteration zone in surface rocks caused by hydrocarbon microseepage. Loss of iron and impregnation of sandstone by carbonate cements and replacement of gypsum by calcite were the major alteration phenomena at Cement. The bedrock alterations were partially masked by unaltered overlying beds, thick soils, and dense natural and cultivated vegetation. Interpreters, biased by detailed ground truth, were able to map the alteration zone subjectively using a magnified, filtered, and sinusoidally stretched LANDSAT composite image; other interpreters, unbiased by ground truth data, could not duplicate that interpretation.
PSNet: prostate segmentation on MRI based on a convolutional neural network.
Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Fei, Baowei
2018-04-01
Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.
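The Dice similarity coefficient used above to score PSNet against the manual ground truth is Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal sketch:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks are a perfect match
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 foreground pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6 foreground pixels, 4 overlap
print(dice(a, b))  # 2*4 / (4+6) = 0.8
```

Dice of 1.0 means perfect overlap, 0.0 means none; it is the same quantity the learned regressor in the Kohlberger et al. record above tries to predict without ground truth.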
Design of the primary pre-TRMM and TRMM ground truth site
NASA Technical Reports Server (NTRS)
Garstang, Michael
1988-01-01
The primary objectives of the Tropical Rainfall Measuring Mission (TRMM) were to: integrate the rain gage measurements with radar measurements of rainfall using the KSFC/Patrick digitized radar and associated rainfall network; delineate the major rain bearing systems over Florida using the Weather Service reported radar/rainfall distributions; combine the integrated measurements with the delineated rain bearing systems; use the results of the combined measurements and delineated rain bearing systems to represent patterns of rainfall which actually exist and contribute significantly to the rainfall, to test sampling strategies, and, based on the results of these analyses, decide upon the ground truth network; and complete the design begun in Phase 1 of a multi-scale (space and time) surface observing precipitation network centered upon KSFC. Work accomplished and in progress is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. BEGNAUD; ET AL
2000-09-01
Obtaining accurate seismic event locations is one of the most important goals in monitoring detonations of underground nuclear tests. This is a particular challenge at small magnitudes, where the number of recording stations may be less than 20. Although many different procedures are being developed to improve seismic location, most procedures suffer from inadequate testing against accurate information about a seismic event. Events with well-defined attributes, such as latitude, longitude, depth and origin time, are commonly referred to as ground truth (GT). Ground truth comes in many forms and with many different levels of accuracy. Interferometric Synthetic Aperture Radar (InSAR) can provide independent and accurate information (ground truth) regarding ground surface deformation and/or rupture. Relating surface deformation to seismic events is trivial when events are large and create a significant surface rupture, such as for the Mw = 7.5 event that occurred in the remote northern region of the Tibetan plateau in 1997. The event, which was a vertical strike-slip event, appeared anomalous in nature due to the lack of large aftershocks and had an associated surface rupture of over 180 km that was identified and modeled using InSAR. The east-west orientation of the fault rupture provides excellent ground truth for latitude, but is of limited use for longitude. However, a secondary rupture occurred 50 km south of the main-shock rupture trace that can provide ground truth with accuracy within 5 km. The smaller, 5-km-long secondary rupture presents a challenge for relating the deformation to a seismic event. The rupture is believed to have a thrust mechanism; the dip of the fault allows for some separation between the secondary rupture trace and its associated event epicenter, although not as much as is currently observed from catalog locations. Few events within the time period of the InSAR analysis are candidates for the secondary rupture.
Of these, we have identified six possible secondary rupture events (mb range = 3.7-4.8, with two magnitudes not reported), based on synthetic tests and residual analysis. All of the candidate events are scattered about the main and secondary ruptures. A Joint Hypocenter Determination (JHD) approach applied to the aftershocks using global picks was not able to identify the secondary event. We added regional data and used propagation path corrections to reduce scatter and remove the 20-km bias seen in the main-shock location. After preliminary analysis using several different velocity models, none of the candidate events relocated onto the surface trace of the secondary rupture. However, one event (mb not reported) moved from a starting distance of approximately 106 km to a relocated distance of approximately 28 km from the secondary rupture, the only candidate event to relocate in relative proximity to the secondary rupture.
Assessment of atmospheric models for tele-infrasonic propagation
NASA Astrophysics Data System (ADS)
McKenna, Mihan; Hayek, Sylvia
2005-04-01
Iron mines in Minnesota are ideally located to assess the accuracy of available atmospheric profiles used in infrasound modeling. These mines are located approximately 400 km to the southeast (azimuth 142 degrees) of the Lac-Du-Bonnet infrasound station, IS-10. Infrasound data from June 1999 to March 2004 were analyzed to assess the effects of explosion size and atmospheric conditions on observations. IS-10 recorded a suite of events over this time period, resulting in well-constrained ground truth. This ground truth allows for the comparison of ray-trace and PE (Parabolic Equation) modeling to the observed arrivals. The tele-infrasonic distance (greater than 250 km) produces ray paths that turn in the upper atmosphere, the thermosphere, at approximately 120 km to 140 km. Modeling based upon MSIS/HWM (Mass Spectrometer Incoherent Scatter/Horizontal Wind Model) and the NOGAPS (Navy Operational Global Atmospheric Prediction System) and NRL-G2S (Naval Research Laboratory Ground to Space) augmented profiles is used to interpret the observed arrivals.
NASA Technical Reports Server (NTRS)
Downs, S. W., Jr.; Sharma, G. C.; Bagwell, C.
1977-01-01
A land use map of a five county area in North Alabama was generated from LANDSAT data using a supervised classification algorithm. There was good overall agreement between the land use designated and known conditions, but there were also obvious discrepancies. In ground checking the map, two types of errors were encountered - shift and misclassification - and a method was developed to eliminate or greatly reduce the errors. Randomly selected study areas containing 2,525 pixels were analyzed. Overall, 76.3 percent of the pixels were correctly classified. A contingency coefficient of correlation was calculated to be 0.7 which is significant at the alpha = 0.01 level. The land use maps generated by computers from LANDSAT data are useful for overall land use by regional agencies. However, care must be used when making detailed analysis of small areas. The procedure used for conducting the ground truth study together with data from representative study areas is presented.
Atmospheric Precorrected Differential Absorption technique to retrieve columnar water vapor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlaepfer, D.; Itten, K.I.; Borel, C.C.
1998-09-01
Differential absorption techniques are suitable for retrieving the total column water vapor content from imaging spectroscopy data. A technique called Atmospheric Precorrected Differential Absorption (APDA) is derived directly from simplified radiative transfer equations. It combines a partial atmospheric correction with a differential absorption technique. The atmospheric path radiance term is iteratively corrected during the retrieval of water vapor. This improves the results, especially over low background albedos. The error of the method for various ground reflectance spectra is below 7% for most of the spectra. The channel combinations for two test cases are then defined using a quantitative procedure, which is based on MODTRAN simulations and the image itself. An error analysis indicates that the influence of aerosols and channel calibration is minimal. The APDA technique is then applied to two AVIRIS images acquired in 1991 and 1995. The accuracy of the measured water vapor columns is within a range of ±5% compared to ground truth radiosonde data.
Vision-based localization for on-orbit servicing of a partially cooperative satellite
NASA Astrophysics Data System (ADS)
Oumer, Nassir W.; Panin, Giorgio; Mülbauer, Quirin; Tseneklidou, Anastasia
2015-12-01
This paper proposes a ground-in-the-loop, model-based visual localization system, based on images transmitted to ground, to aid rendezvous and docking maneuvers between a servicer and a target satellite. In particular, we assume a partially cooperative target, i.e. passive and without fiducial markers, but supposed at least to keep a controlled attitude, up to small fluctuations, so that the approach mainly involves translational motion. For the purpose of localization, video cameras provide an effective and relatively inexpensive solution, working at a wide range of distances with increasing accuracy and robustness during the approach. However, illumination conditions in space are especially challenging, due to direct sunlight exposure and to the glossy surface of a satellite, which creates strong reflections and saturations and therefore a high level of background clutter and missing detections. We employ a monocular camera for mid-range tracking (20-5 m) and a stereo camera at close range (5-0.5 m), with the respective detection and tracking methods, both using intensity edges and robustly dealing with the above issues. Our tracking system has been extensively verified at the European Proximity Operations Simulator (EPOS) facility of DLR, a very realistic ground simulation able to reproduce sunlight conditions through a high-power floodlight source, satellite surface properties using multilayer insulation foils, and orbital motion trajectories with ground-truth data, by means of two 6-DOF industrial robots. Results from this large dataset show the effectiveness and robustness of our method against the above difficulties.
Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra
Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian; ...
2017-01-13
The aim of this study is to compare radioxenon beta-gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and the number of algorithms compared using identical spectra. We generate an estimate of the minimum detectable counts for each isotope using each algorithm. The paper also points out a conceptual model that places the various algorithms into a continuum. Finally, our results show that existing algorithms can be improved and that some newer algorithms can outperform the ones currently used.
Preliminary study of detection of buried landmines using a programmable hyperspectral imager
NASA Astrophysics Data System (ADS)
McFee, John E.; Ripley, Herb T.; Buxton, Roger; Thriscutt, Andrew M.
1996-05-01
Experiments were conducted to determine if buried mines could be detected by measuring the change in reflectance spectra of vegetation above mine burial sites. Mines were laid using hand methods and simulated mechanical methods and spectral images were obtained over a three month period using a casi hyperspectral imager scanned from a personnel lift. Mines were not detectable by measurement of the shift of the red edge of vegetative spectra. By calculating the linear correlation coefficient image, some mines in light vegetative cover (grass, grass/blueberries) were apparently detected, but mines buried in heavy vegetation cover (deep ferns) were not detectable. Due to problems with ground truthing, accurate probabilities of detection and false alarm rates were not obtained.
Use of satellite ocean color observations to refine understanding of global geochemical cycles
NASA Technical Reports Server (NTRS)
Walsh, J. J.; Dieterle, D. A.
1985-01-01
In October 1978, the first satellite-borne color sensor, the Coastal Zone Color Scanner (CZCS), was launched aboard Nimbus-7 with four visible and two infrared bands, permitting a sensitivity about 60 times that of the Landsat-1 multispectral scanner. The CZCS radiance data can be utilized to estimate ocean chlorophyll concentrations by detecting shifts in sea color, particularly in oceanic waters. The obtained data can be used in studies regarding problems of overfishing and, in addition, in investigations concerning the consequences of man's accelerated extraction of nitrogen from the atmosphere and addition of carbon to the atmosphere. The satellite database is considered, along with a simulation analysis and the ships providing ground-truth chlorophyll measurements in the ocean.
Simulating nailfold capillaroscopy sequences to evaluate algorithms for blood flow estimation.
Tresadern, P A; Berks, M; Murray, A K; Dinsdale, G; Taylor, C J; Herrick, A L
2013-01-01
The effects of systemic sclerosis (SSc), a disease of the connective tissue causing blood flow problems that can require amputation of the fingers, can be observed indirectly by imaging the capillaries at the nailfold, though taking quantitative measures such as blood flow to diagnose the disease and monitor its progression is not easy. Optical flow algorithms may be applied, though without ground truth (i.e. known blood flow) it is hard to evaluate their accuracy. We propose an image model that generates realistic capillaroscopy videos with known flow, and use this model to quantify the effect of flow rate, cell density and contrast (among others) on estimated flow. This resource will help researchers to design systems that are robust under real-world conditions.
Verification of target motion effects on SAR imagery using the Gotcha GMTI challenge dataset
NASA Astrophysics Data System (ADS)
Hack, Dan E.; Saville, Michael A.
2010-04-01
This paper investigates the relationship between a ground moving target's kinematic state and its SAR image. While effects such as cross-range offset, defocus, and smearing appear well understood, their derivations in the literature typically employ simplifications of the radar/target geometry and assume point scattering targets. This study adopts a geometrical model for understanding target motion effects in SAR imagery, termed the target migration path, and focuses on experimental verification of predicted motion effects using both simulated and empirical datasets based on the Gotcha GMTI challenge dataset. Specifically, moving target imagery is generated from three data sources: first, simulated phase history for a moving point target; second, simulated phase history for a moving vehicle derived from a simulated Mazda MPV X-band signature; and third, empirical phase history from the Gotcha GMTI challenge dataset. Both simulated target trajectories match the truth GPS target position history from the Gotcha GMTI challenge dataset, allowing direct comparison between all three imagery sets and the predicted target migration path. This paper concludes with a discussion of the parallels between the target migration path and the measurement model within a Kalman filtering framework, followed by conclusions.
The use of the truth and deception in dementia care amongst general hospital staff.
Turner, Alex; Eccles, Fiona; Keady, John; Simpson, Jane; Elvish, Ruth
2017-08-01
Deceptive practice has been shown to be endemic in long-term care settings. However, little is known about the use of deception in dementia care within general hospitals and staff attitudes towards this practice. This study aimed to develop understanding of the experiences of general hospital staff and explore their decision-making processes when choosing whether to tell the truth or deceive a patient with dementia. This qualitative study drew upon a constructivist grounded theory approach to analyse data gathered from semi-structured interviews with a range of hospital staff. A model, grounded in participant experiences, was developed to describe their decision-making processes. Participants identified particular triggers that set in motion the need for a response. Various mediating factors influenced how staff chose to respond to these triggers. Overall, hospital staff were reluctant to either tell the truth or to lie to patients. Instead, 'distracting' or 'passing the buck' to another member of staff were preferred strategies. The issue of how truth and deception are defined was identified. The study adds to the growing research regarding the use of lies in dementia care by considering the decision-making processes for staff in general hospitals. Various factors influence how staff choose to respond to patients with dementia and whether deception is used. Similarities and differences with long-term dementia care settings are discussed. Clinical and research implications include: opening up the topic for further debate, implementing staff training about communication and evaluating the impact of these processes.
Izzaty Horsali, Nurul Amira; Mat Zauki, Nurul Ashikin; Otero, Viviana; Nadzri, Muhammad Izuan; Ibrahim, Sulong; Husain, Mohd-Lokman; Dahdouh-Guebas, Farid
2018-01-01
Brunei Bay, which receives freshwater discharge from four major rivers, namely Limbang, Sundar, Weston and Menumbok, hosts a luxuriant mangrove cover in East Malaysia. However, this relatively undisturbed mangrove forest has been little explored scientifically, especially in terms of vegetation structure, ecosystem services and functioning, and land-use/cover changes. In the present study, mangrove areal extent together with species composition and distribution at the four notified estuaries was evaluated through remote sensing (Advanced Land Observation Satellite—ALOS) and ground-truth (Point-Centred Quarter Method—PCQM) observations. As of 2010, the total mangrove cover was found to be ca. 35,183.74 ha, of which Weston and Menumbok together accounted for more than half (58%), followed by Sundar (27%) and Limbang (15%). The medium-resolution ALOS data were efficient for mapping dominant mangrove species such as Nypa fruticans, Rhizophora apiculata, Sonneratia caseolaris, S. alba and Xylocarpus granatum in the vicinity (accuracy: 80%). The PCQM estimates found a higher basal area at Limbang and Menumbok, suggestive of more mature vegetation, compared to Sundar and Weston. Mangrove stand structural complexity (derived from the complexity index) was also high in the order of Limbang > Menumbok > Sundar > Weston, supporting the perspective of less disturbed/undisturbed vegetation at the two former locations. Both remote sensing and ground-truth observations complementarily represented the distribution of Sonneratia spp. as pioneer vegetation at shallow river mouths, N. fruticans in the areas of strong freshwater discharge, R. apiculata in the areas of strong neritic incursion and X. granatum at interior/elevated grounds. The results from this study can serve as strong baseline data for future mangrove investigations at Brunei Bay, including for local monitoring and management purposes. PMID:29479500
Consensus-Based Sorting of Neuronal Spike Waveforms
Fournier, Julien; Mueller, Christian M.; Shein-Idelson, Mark; Hemberger, Mike
2016-01-01
Optimizing spike-sorting algorithms is difficult because sorted clusters can rarely be checked against independently obtained “ground truth” data. In most spike-sorting algorithms in use today, the optimality of a clustering solution is assessed relative to some assumption on the distribution of the spike shapes associated with a particular single unit (e.g., Gaussianity) and by visual inspection of the clustering solution followed by manual validation. When the spatiotemporal waveforms of spikes from different cells overlap, the decision as to whether two spikes should be assigned to the same source can be quite subjective, if it is not based on reliable quantitative measures. We propose a new approach, whereby spike clusters are identified from the most consensual partition across an ensemble of clustering solutions. Using the variability of the clustering solutions across successive iterations of the same clustering algorithm (template matching based on K-means clusters), we estimate the probability of spikes being clustered together and identify groups of spikes that are not statistically distinguishable from one another. Thus, we identify spikes that are most likely to be clustered together and therefore correspond to consistent spike clusters. This method has the potential advantage that it does not rely on any model of the spike shapes. It also provides estimates of the proportion of misclassified spikes for each of the identified clusters. We tested our algorithm on several datasets for which there exists a ground truth (simultaneous intracellular data), and show that it performs close to the optimum reached by a support vector machine trained on the ground truth. We also show that the estimated rate of misclassification matches the proportion of misclassified spikes measured from the ground truth data. PMID:27536990
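The consensus idea described above, estimating how often pairs of spikes cluster together across repeated runs of the same algorithm, can be sketched roughly as follows. This is a toy example with a hand-rolled k-means and synthetic 2-D "spike features"; the authors' actual pipeline uses template matching on real waveforms:

```python
import numpy as np

def kmeans_labels(x, k, rng, n_iter=50):
    """Minimal Lloyd's k-means with random initialization; returns labels."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(0)
# Synthetic "spike features": two well-separated 2-D clusters, 50 spikes each
waveforms = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
                       rng.normal(3.0, 0.3, (50, 2))])

n_runs = 20
coassign = np.zeros((len(waveforms), len(waveforms)))
for _ in range(n_runs):
    labels = kmeans_labels(waveforms, k=2, rng=rng)
    # Accumulate how often each pair of spikes lands in the same cluster
    coassign += (labels[:, None] == labels[None, :])
coassign /= n_runs  # pairwise co-clustering probability

# Spike pairs co-assigned in (almost) every run form the consensus clusters
consensus_pairs = coassign > 0.9
```

Pairs with co-assignment probability well below 1 flag spikes whose cluster membership is ambiguous, which is the kind of misclassification estimate the abstract describes.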
As-built design specification for MISMAP
NASA Technical Reports Server (NTRS)
Brown, P. M.; Cheng, D. E.; Tompkins, M. A. (Principal Investigator)
1981-01-01
The MISMAP program, which is part of the CLASFYT package, is described. The program is designed to compare classification values with ground truth values for a segment and produce a comparison map and summary table.
Ground truth report 1975 Phoenix microwave experiment. [Joint Soil Moisture Experiment
NASA Technical Reports Server (NTRS)
Blanchard, B. J.
1975-01-01
Direct measurements of soil moisture obtained in conjunction with aircraft data flights near Phoenix, Arizona in March, 1975 are summarized. The data were collected for the Joint Soil Moisture Experiment.
NASA Technical Reports Server (NTRS)
1984-01-01
Three mesoscale sounding data sets from the VISSR Atmospheric Sounder (VAS), produced using different retrieval techniques, were evaluated against corresponding ground truth rawinsonde data for 6-7 March 1982. Means, standard deviations, and RMS differences between the satellite and rawinsonde parameters were calculated over gridded fields in central Texas and Oklahoma. Large differences exist between each satellite data set and the ground truth data. Biases in the satellite temperature and moisture profiles appear extremely dependent upon the three-dimensional structure of the atmosphere and range from 1 to 3 deg C for temperature and 3 to 6 deg C for dewpoint temperature. Atmospheric gradients of basic and derived parameters determined from the VAS data sets produced an adequate representation of the mesoscale environment, but their magnitudes were often reduced by 30 to 50%.
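The comparison statistics named above (mean bias, standard deviation, and RMS difference over gridded fields) reduce to a few lines of array arithmetic. The 3x3 temperature grids below are invented for illustration:

```python
import numpy as np

def compare_fields(satellite: np.ndarray, ground_truth: np.ndarray):
    """Bias, standard deviation, and RMS of satellite-minus-truth differences."""
    diff = satellite - ground_truth
    bias = diff.mean()                 # mean difference (bias)
    std = diff.std(ddof=1)             # sample standard deviation
    rms = np.sqrt((diff ** 2).mean())  # root-mean-square difference
    return bias, std, rms

# Hypothetical 3x3 gridded temperature fields (deg C)
sat = np.array([[20.1, 21.5, 19.8],
                [22.0, 23.1, 21.7],
                [18.9, 20.4, 19.5]])
raw = np.array([[19.0, 20.2, 18.5],
                [20.8, 21.6, 20.3],
                [17.5, 19.1, 18.2]])
bias, std, rms = compare_fields(sat, raw)  # warm bias of about 1.3 deg C
```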
A three-part geometric model to predict the radar backscatter from wheat, corn, and sorghum
NASA Technical Reports Server (NTRS)
Ulaby, F. T. (Principal Investigator); Eger, G. W., III; Kanemasu, E. T.
1982-01-01
A model to predict the radar backscattering coefficient from crops must include the geometry of the canopy. Radar and ground-truth data taken on wheat in 1979 indicate that the model must include contributions from the leaves, from the wheat head, and from the soil moisture. For sorghum and corn, radar and ground-truth data obtained in 1979 and 1980 support the necessity of a soil moisture term and a leaf water term. The Leaf Area Index (LAI) is an appropriate input for the leaf contribution to the radar response for wheat and sorghum; however, the LAI generates less accurate values of the backscattering coefficient for corn. Also, the data for corn and sorghum illustrate the importance of the water contained in the stalks in estimating the radar response.
Using virtual environment for autonomous vehicle algorithm validation
NASA Astrophysics Data System (ADS)
Levinskis, Aleksandrs
2018-04-01
This paper describes the possible use of a modern game engine for validating and proving the concept of an algorithm design. A simple visual odometry algorithm is presented to illustrate the concept and walk through all workflow stages. Some stages involve a Kalman filter used to estimate the optical flow velocity as well as the position of a moving camera mounted on the vehicle body. In particular, the Unreal Engine 4 game engine is used to generate optical flow patterns and a ground truth path. For optical flow determination, the Horn and Schunck method is applied. It is shown that this method can estimate the position of the camera attached to the vehicle with a certain displacement error with respect to ground truth, depending on the optical flow pattern. The displacement RMS error is calculated between the estimated and actual positions.
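A minimal Horn and Schunck iteration, of the kind named in the abstract, might look like the sketch below. This is a bare-bones NumPy version with periodic boundaries and a synthetic moving blob; the paper's Unreal Engine 4 pipeline and Kalman filtering are not reproduced:

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow between two grayscale frames."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    # Spatial and temporal image derivatives
    Ix = np.gradient(im1, axis=1)
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        # 4-neighbour averages (periodic boundaries via np.roll)
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        # Classic Horn-Schunck update with smoothness weight alpha
        common = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * common
        v = v_avg - Iy * common
    return u, v

# Synthetic test: a smooth Gaussian blob shifted one pixel to the right
x, y = np.meshgrid(np.arange(32), np.arange(32))
frame1 = np.exp(-((x - 15)**2 + (y - 15)**2) / 20.0)
frame2 = np.exp(-((x - 16)**2 + (y - 15)**2) / 20.0)
u, v = horn_schunck(frame1, frame2, alpha=0.5, n_iter=200)
# u should be predominantly positive (rightward flow) near the blob
```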
Using Seismic and Infrasonic Data to Identify Persistent Sources
NASA Astrophysics Data System (ADS)
Nava, S.; Brogan, R.
2014-12-01
Data from seismic and infrasound sensors were combined to aid in the identification of persistent sources such as mining-related explosions. It is of interest to operators of seismic networks to identify these signals in their event catalogs. Acoustic signals below the threshold of human hearing, in the frequency range of ~0.01 to 20 Hz are classified as infrasound. Persistent signal sources are useful as ground truth data for the study of atmospheric infrasound signal propagation, identification of manmade versus naturally occurring seismic sources, and other studies. By using signals emanating from the same location, propagation studies, for example, can be conducted using a variety of atmospheric conditions, leading to improvements to the modeling process for eventual use where the source is not known. We present results from several studies to identify ground truth sources using both seismic and infrasound data.
NASA Technical Reports Server (NTRS)
Russell, P. B. (Editor); Cunnold, D. M.; Grams, G. W.; Laver, J.; Mccormick, M. P.; Mcmaster, L. R.; Murcray, D. G.; Pepin, T. J.; Perry, T. W.; Planet, W. G.
1979-01-01
The ground truth plan is outlined for correlative measurements to validate the Stratospheric Aerosol and Gas Experiment (SAGE) sensor data. SAGE will fly aboard the Applications Explorer Mission-B satellite scheduled for launch in early 1979 and measure stratospheric vertical profiles of aerosol, ozone, nitrogen dioxide, and molecular extinction between 79 N and 79 S. latitude. The plan gives details of the location and times for the simultaneous satellite/correlative measurements for the nominal launch time, the rationale and choice of the correlative sensors, their characteristics and expected accuracies, and the conversion of their data to extinction profiles. In addition, an overview of the SAGE expected instrument performance and data inversion results are presented. Various atmospheric models representative of stratospheric aerosols and ozone are used in the SAGE and correlative sensor analyses.
Comparing Eyewitness-Derived Trajectories of Bright Meteors to Ground Truth Data
NASA Technical Reports Server (NTRS)
Moser, D. E.
2016-01-01
The NASA Meteoroid Environment Office is a US government agency tasked with analyzing meteors of public interest. When queried about a meteor observed over the United States, the MEO must respond with a characterization of the trajectory, orbit, and size within a few hours. If the event is outside meteor network coverage and there is no imagery recorded by the public, a timely assessment can be difficult if not impossible. In this situation, visual reports made by eyewitnesses may be the only resource available. This has led to the development of a tool to quickly calculate crude meteor trajectories from eyewitness reports made to the American Meteor Society. A description of the tool, example case studies, and a comparison to ground truth data observed by the NASA All Sky Fireball Network are presented.
A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.
Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang
2009-01-01
This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.
NASA Technical Reports Server (NTRS)
Graves, D. H.
1975-01-01
Research efforts are presented on the use of remote sensing in environmental surveys in Kentucky. Ground truth parameters were established that represent the vegetative cover of disturbed and undisturbed watersheds in the Cumberland Plateau of eastern Kentucky. Several water quality parameters were monitored for the watersheds utilized in establishing the ground truth data. The capabilities of multistage multispectral aerial photography and satellite imagery in detecting various land use practices were evaluated. The use of photographic signatures of known land use areas, obtained with manually operated spot densitometers, was studied. The correlation of imagery signature data to water quality data was examined. Potential water quality predictions were developed for forested and nonforested watersheds based upon the above correlations. The cost effectiveness of predicting water quality values was evaluated using multistage and satellite imagery sampling techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markel, D; Levesque, I R; Research Institute of the McGill University Health Centre, Montreal, QC
Segmentation and registration of medical imaging data are two processes that can be integrated (a process termed regmentation) to iteratively reinforce each other, potentially improving efficiency and overall accuracy. A significant challenge is presented when attempting to validate the joint process, particularly with regard to minimizing geometric uncertainties associated with the ground truth while maintaining anatomical realism. This work demonstrates a 4D MRI-, PET-, and CT-compatible tissue phantom with a known ground truth for evaluating registration and segmentation accuracy. The phantom consists of a preserved swine lung connected to an air pump via a PVC tube for inflation. Mock tumors were constructed from sea sponges contained within two vacuum-sealed compartments, with catheters running into each one for injection of radiotracer solution. The phantom was scanned using a GE Discovery-ST PET/CT scanner and a 0.23 T Philips MRI, and resulted in anatomically realistic images. A bifurcation tracking algorithm was implemented to provide a ground truth for evaluating registration accuracy. This algorithm was validated using known deformations of up to 7.8 cm applied to a separate CT scan of a human thorax. Using the known deformation vectors for comparison, 76 bifurcation points were selected. The tracking accuracy was found to have maximum mean errors of −0.94, 0.79 and −0.57 voxels in the left-right, anterior-posterior and inferior-superior directions, respectively. A pneumatic control system is under development to match the respiratory profile of the lungs to a breathing trace from an individual patient.
The verification of lightning location accuracy in Finland deduced from lightning strikes to trees
NASA Astrophysics Data System (ADS)
Mäkelä, Antti; Mäkelä, Jakke; Haapalainen, Jussi; Porjo, Niko
2016-05-01
We present a new method to determine the ground truth and accuracy of lightning location systems (LLS), using natural lightning strikes to trees. Observations of strikes to trees are being collected with a Web-based survey tool at the Finnish Meteorological Institute. Since the Finnish thunderstorms tend to have on average a low flash rate, it is often possible to identify from the LLS data unambiguously the stroke that caused damage to a given tree. The coordinates of the tree are then the ground truth for that stroke. The technique has clear advantages over other methods used to determine the ground truth. Instrumented towers and rocket launches measure upward-propagating lightning. Video and audio records, even with triangulation, are rarely capable of high accuracy. We present data for 36 quality-controlled tree strikes in the years 2007-2008. We show that the average inaccuracy of the lightning location network for that period was 600 m. In addition, we show that the 50% confidence ellipse calculated by the lightning location network and used operationally for describing the location accuracy is physically meaningful: half of all the strikes were located within the uncertainty ellipse of the nearest recorded stroke. Using tree strike data thus allows not only the accuracy of the LLS to be estimated but also the reliability of the uncertainty ellipse. To our knowledge, this method has not been attempted before for natural lightning.
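Checking whether a ground-truth tree location falls inside a stroke's confidence ellipse, as described above, reduces to projecting the tree-minus-stroke offset onto the ellipse axes. The distances and ellipse parameters below are invented for illustration, and the axis/azimuth convention of a real LLS may differ:

```python
import math

def inside_error_ellipse(dx_east, dy_north, semi_major, semi_minor,
                         major_azimuth_deg):
    """True if the offset (tree minus stroke, in metres) lies inside the
    confidence ellipse whose semi-major axis points at the given azimuth
    (degrees clockwise from north)."""
    az = math.radians(major_azimuth_deg)
    # Project the east/north offset onto the ellipse's major and minor axes
    along = dx_east * math.sin(az) + dy_north * math.cos(az)
    across = dx_east * math.cos(az) - dy_north * math.sin(az)
    return (along / semi_major) ** 2 + (across / semi_minor) ** 2 <= 1.0

# Hypothetical case: located stroke 400 m east and 200 m south of the tree,
# with a 700 m x 300 m ellipse whose major axis points east (azimuth 90 deg)
print(inside_error_ellipse(400.0, -200.0, 700.0, 300.0, 90.0))
```

Counting the fraction of tree strikes for which this test passes gives the reliability check of the 50% confidence ellipse reported in the abstract.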
Evaluation metrics for bone segmentation in ultrasound
NASA Astrophysics Data System (ADS)
Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas
2015-03-01
Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging, as ultrasound has no intensity characteristic specific to bones. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework to aid in the development and comparison of such algorithms by quantitatively measuring segmentation performance in ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. The ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and standard deviation are considered. The metrics provide a means of evaluating the accuracy of frames along the length of a volume. This aids in assessing the accuracy of the volume itself and of the approach to image acquisition (frame positioning and frequency). The framework was implemented as an open-source module of the 3D Slicer platform. The ground truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms, and the implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
NASA Astrophysics Data System (ADS)
Chang, Chia-Hao; Chu, Tzu-How
2017-04-01
To control rice production and farm usage in Taiwan, the Agriculture and Food Agency (AFA) has published a series of policies since 1983 to subsidize farmers to plant different crops or to practice fallow. Because there was no efficient and examinable mechanism to verify the fallow fields surveyed by township offices, illegal fallow fields recurred each year. In this research, we used remote sensing images, GIS field data, and application records of fallow fields to establish a method for detecting illegal fallow fields in Yunlin County in central Taiwan. The method included: 1. collecting multi-temporal images from FS-2 or the SPOT series over 4 time periods; 2. combining the application records and GIS field data to verify the locations of fallow fields; 3. conducting a ground truth survey and classifying images with ISODATA and Maximum Likelihood Classification (MLC); 4. defining the land cover type of fallow fields by zonal statistics; 5. verifying accuracy against the ground truth; 6. developing a survey method for potential illegal fallow fields and estimating its benefit. Using 190 fallow fields (127 legal and 63 illegal) as ground truth, the producer and user accuracies of illegal fallow field interpretation were 71.43% and 38.46%, respectively. If a township office surveyed the 117 fields classified as illegal fallow fields, 45 of the 63 illegal fallow fields would be detected. By using our method, township offices can save 38.42% of the manpower needed to detect illegal fallow fields while achieving an examinable 71.43% producer accuracy.
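The reported accuracies follow directly from the confusion counts in the abstract (63 illegal fields in the ground truth, 117 fields classified as illegal, 45 correctly detected). A minimal sketch of the two accuracy measures:

```python
# Producer and user accuracy for the "illegal fallow field" class,
# using the counts reported above.
def producer_accuracy(true_positives, ground_truth_total):
    """Fraction of ground-truth positives that the classifier found."""
    return true_positives / ground_truth_total

def user_accuracy(true_positives, classified_total):
    """Fraction of classified positives that are actually positive."""
    return true_positives / classified_total

pa = producer_accuracy(45, 63)    # ≈ 0.7143
ua = user_accuracy(45, 117)       # ≈ 0.3846
print(f"producer accuracy: {pa:.2%}, user accuracy: {ua:.2%}")
```

Producer accuracy corresponds to sensitivity from the ground-truth side; user accuracy corresponds to the positive predictive value of the classified map.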
Local involvement in measuring and governing carbon stocks in China, Vietnam, Indonesia and Laos
Michael Køie Poulsen
2013-01-01
An important element of MRV is to ensure accurate measurements of carbon stocks. Measuring trees on the ground may be needed for ground truthing of remote sensing results. It can also provide more accurate carbon stock monitoring than remote sensing alone. Local involvement in measuring trees for monitoring of carbon stocks may be advantageous in several ways....
NASA Astrophysics Data System (ADS)
Näthe, Paul; Becker, Rolf
2014-05-01
Soil moisture and plant-available water are important environmental parameters that affect plant growth and crop yield; hence, they are significant for vegetation monitoring and precision agriculture. However, validation through ground-based soil moisture measurements is necessary for assessing soil moisture, plant canopy temperature, soil temperature, and soil roughness with airborne hyperspectral imaging systems in a corresponding hyperspectral imaging campaign as part of the INTERREG IV A project SMART INSPECTORS. Commercially available sensors for matric potential, plant-available water, and volumetric water content are used for automated measurements with smart sensor nodes developed on the basis of open-source 868 MHz radio modules, featuring a full-scale microcontroller unit that allows self-sufficient operation of the sensor nodes on batteries in the field. The data generated by each sensor node is transferred wirelessly with an open-source protocol to a central node, the so-called "gateway". This gateway collects, interprets, and buffers the sensor readings and eventually pushes the time series onto a server-based database. The entire data processing chain, from the sensor reading to the final storage of time series on a server, is realized with open-source hardware and software in such a way that the recorded data can be accessed from anywhere through the internet. We present how this open-source wireless sensor network was developed and specified for the application of ground truthing, and point out the system's perspectives and potential with respect to usability and applicability for vegetation monitoring and precision agriculture. Regarding the corresponding hyperspectral imaging campaign, results from ground measurements are discussed in terms of how they contribute to the remote sensing system.
Finally, the significance of the wireless sensor network for the application of ground truthing is assessed.
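The gateway's collect-buffer-push pattern described above can be sketched in a few lines; the node IDs, reading format, and flush threshold below are illustrative assumptions, not details from the paper, and a plain dict stands in for the server-side database:

```python
import time
from collections import defaultdict

class Gateway:
    """Collects sensor readings, buffers them per node, and 'pushes' them
    to a database (here a dict standing in for the server-side store)."""

    def __init__(self, flush_threshold=3):
        self.buffer = defaultdict(list)     # node_id -> [(timestamp, value), ...]
        self.database = defaultdict(list)   # server-side time series store
        self.flush_threshold = flush_threshold

    def receive(self, node_id, value, timestamp=None):
        """Buffer one reading from a sensor node (e.g. matric potential)."""
        ts = timestamp if timestamp is not None else time.time()
        self.buffer[node_id].append((ts, value))
        if len(self.buffer[node_id]) >= self.flush_threshold:
            self.flush(node_id)

    def flush(self, node_id):
        """Push the buffered time series for one node to the database."""
        self.database[node_id].extend(self.buffer[node_id])
        self.buffer[node_id].clear()

gw = Gateway()
for i, reading in enumerate([0.21, 0.22, 0.24]):   # volumetric water content
    gw.receive("node-01", reading, timestamp=i)
print(gw.database["node-01"])   # three readings pushed; buffer now empty
```

The real system additionally interprets raw radio packets and exposes the stored time series over the internet, which this sketch omits.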
Moore, Jason H; Shestov, Maksim; Schmitt, Peter; Olson, Randal S
2018-01-01
A central challenge of developing and evaluating artificial intelligence and machine learning methods for regression and classification is access to data that illuminates the strengths and weaknesses of different methods. Open data plays an important role in this process by making it easy for computational researchers to access real data for this purpose. Genomics has in some respects taken a leading role in the open data effort, starting with DNA microarrays. While real data from experimental and observational studies is necessary for developing computational methods, it is not sufficient, because the ground truth in real data cannot be known. Real data must be accompanied by simulated data in which the balance between signal and noise is known and can be directly evaluated. Unfortunately, there is a lack of methods and software for simulating data with the kind of complexity found in real biological and biomedical systems. We present here the Heuristic Identification of Biological Architectures for simulating Complex Hierarchical Interactions (HIBACHI) method and prototype software for simulating complex biological and biomedical data. Further, we introduce new methods for developing simulation models that generate data specifically allowing discrimination between different machine learning methods.
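As a minimal illustration of the idea (not the HIBACHI method itself), one can simulate a classification dataset in which the true signal and the noise level are set explicitly, so the ground truth is known by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dataset(n_samples=200, n_features=5, signal_strength=2.0, noise_sd=1.0):
    """Binary classification data with a known generative model:
    only feature 0 carries signal; the rest are pure noise."""
    y = rng.integers(0, 2, size=n_samples)                # ground-truth labels
    X = rng.normal(0.0, noise_sd, size=(n_samples, n_features))
    X[:, 0] += signal_strength * y                        # inject known signal
    return X, y

X, y = simulate_dataset()
# Because the ground truth is known, we can verify that the signal feature
# separates the classes while a noise feature does not.
gap_signal = X[y == 1, 0].mean() - X[y == 0, 0].mean()
gap_noise = X[y == 1, 1].mean() - X[y == 0, 1].mean()
print(gap_signal, gap_noise)
```

Sweeping `signal_strength` and `noise_sd` then lets one measure how different learning methods degrade as the signal-to-noise ratio falls, which is exactly what real data cannot provide.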
Testing the pyramid truth wavefront sensor for NFIRAOS in the lab
NASA Astrophysics Data System (ADS)
Mieda, Etsuko; Rosensteiner, Matthias; van Kooten, Maaike; Veran, Jean-Pierre; Lardiere, Olivier; Herriot, Glen
2016-07-01
For today's and future adaptive optics observations, sodium laser guide stars (LGSs) are crucial; however, the LGS elongation problem caused by the thickness of the sodium layer has to be compensated, in particular for extremely large telescopes. In this paper, we describe the concept of truth wavefront sensing as a solution and present its design using a pyramid wavefront sensor (PWFS) to improve NFIRAOS (Narrow Field InfraRed Adaptive Optics System), the first-light adaptive optics system for the Thirty Meter Telescope. We simulate and test the truth wavefront sensor function under a controlled environment using the HeNOS (Herzberg NFIRAOS Optical Simulator) bench, a scaled-down NFIRAOS bench at NRC-Herzberg. We also touch on alternative pyramid component options because, despite recent high demand for PWFSs, pyramid optics remain in short supply due to manufacturing difficulties.
Fully Resolved Simulations of 3D Printing
NASA Astrophysics Data System (ADS)
Tryggvason, Gretar; Xia, Huanxiong; Lu, Jiacai
2017-11-01
Numerical simulations of Fused Deposition Modeling (FDM, or Fused Filament Fabrication), in which a filament of hot, viscous polymer is deposited to ``print'' a three-dimensional object layer by layer, are presented. A finite volume/front tracking method is used to follow the injection, cooling, solidification, and shrinking of the filament. The injection of the hot melt is modeled using a volume source, combined with a nozzle, modeled as an immersed boundary, that follows a prescribed trajectory. The viscosity of the melt depends on the temperature and the shear rate, and the polymer becomes immobile as its viscosity increases. As the polymer solidifies, the stress is found by assuming a hyperelastic constitutive equation. The method is described, and its accuracy and convergence properties are tested by grid refinement studies for a simple setup involving two short filaments, one on top of the other. The effects of the various injection parameters, such as nozzle velocity and injection velocity, are briefly examined, and the applicability of the approach to simulating the construction of simple multilayer objects is shown. The role of fully resolved simulations for additive manufacturing and their use for novel processes and as the ``ground truth'' for reduced order models is discussed.
Evaluation of corn/soybeans separability using Thematic Mapper and Thematic Mapper Simulator data
NASA Technical Reports Server (NTRS)
Pitts, D. E.; Badhwar, G. D.; Thompson, D. R.; Henderson, K. E.; Shen, S. S.; Sorensen, C. T.; Carnes, J. G.
1984-01-01
Multitemporal Thematic Mapper (TM), Thematic Mapper Simulator (TMS), and detailed ground truth data were collected for a 9- by 11-km sample segment in Webster County, IA, in the summer of 1982. Three dates each were acquired with the Thematic Mapper Simulator (June 7, June 23, and July 31) and the Thematic Mapper (August 2, September 3, and October 21). The Thematic Mapper Simulator data were converted to equivalent TM count values using TM and TMS calibration data and model-based estimates of atmospheric effects. The July 31 TMS image was compared to the August 2 TM image to verify the conversion process. A quantitative measure of proportion estimation variance (Fisher information) was used to evaluate the corn/soybeans separability for each TM band as a function of time during the growing season. The additional bands in the middle infrared allowed corn and soybeans to be separated much earlier than was possible with the visible and near-infrared bands alone. Using the TM and TMS data, temporal profiles of the TM principal components were developed. The greenness and brightness exhibited behavior similar to MSS greenness and brightness for corn and soybeans.
NASA Astrophysics Data System (ADS)
Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten
2014-03-01
Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, render the majority of standard image analysis methods suboptimal. Furthermore, validation of adapted computer vision methods proves difficult due to missing ground truth information. There is no widely accepted software phantom in the community, and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy of the resulting textures.
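A common starting point for speckle simulation, offered here as one possible model rather than the one used in this work, is multiplicative Rayleigh-distributed noise applied to a noise-free echogenicity map:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_speckle(echo_map, scale=1.0):
    """Multiplicative speckle: each pixel is scaled by an independent
    Rayleigh-distributed factor, a standard fully-developed-speckle model."""
    speckle = rng.rayleigh(scale=scale, size=echo_map.shape)
    return echo_map * speckle

# Synthetic 'tissue' with a brighter circular inclusion.
h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
echo = np.where((yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2, 2.0, 1.0)

noisy = add_speckle(echo)
```

Tissue-specific speckle models, as called for above, would replace the single Rayleigh factor with distributions (and spatial correlations) fitted per tissue type.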
NASA Technical Reports Server (NTRS)
Ormsby, J. P.
1982-01-01
An examination of the possibilities of using Landsat data to simulate NOAA-6 Advanced Very High Resolution Radiometer (AVHRR) data on two channels, as well as using actual NOAA-6 imagery, for large-scale hydrological studies is presented. A running average of 18 consecutive Landsat pixels was obtained to approximate the 1 km AVHRR resolution; the averaged data were scaled to 8-bit values and investigated at different gray levels. AVHRR data comprising five channels of 10-bit, band-interleaved information covering 10 deg of latitude were analyzed, and a suitable pixel grid was chosen for comparison with the Landsat data in a supervised classification format, in an unsupervised mode, and against ground truth. Landcover delineation was explored by removing snow, water, and cloud features from the cluster analysis, resulting in less than a 10% difference. Low-resolution, large-scale data were determined to be useful for characterizing some landcover features if weekly and/or monthly updates are maintained.
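Simulating the coarser AVHRR pixel from fine Landsat pixels is an averaging operation; a minimal sketch of the along-scan running average (the 18-pixel window follows the abstract, the scanline data are synthetic):

```python
import numpy as np

def running_average(scanline, window=18):
    """Average `window` consecutive fine-resolution pixels to approximate
    one coarse-resolution pixel (valid positions only)."""
    kernel = np.ones(window) / window
    return np.convolve(scanline, kernel, mode="valid")

fine = np.linspace(0.0, 1.0, 90)          # synthetic Landsat scanline
coarse = running_average(fine)             # 90 - 18 + 1 = 73 samples
print(coarse.shape)
```

Rescaling the averaged radiances to 8-bit values is then a separate quantization step.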
A new class of methods for functional connectivity estimation
NASA Astrophysics Data System (ADS)
Lin, Wutu
Measuring functional connectivity from neural recordings is important for understanding processing in cortical networks. Covariance-based methods are the current gold standard for functional connectivity estimation. However, the link between pairwise correlations and the physiological connections inside the neural network is unclear; therefore, the power of inferring a physiological basis from functional connectivity estimation is limited. To build a stronger tie and better understand the relationship between functional connectivity and the physiological neural network, we need (1) a realistic model to simulate different types of neural recordings with known ground truth for benchmarking, and (2) a new functional connectivity method that produces estimates closely reflecting the physiological basis. In this thesis, I (1) tune a spiking neural network model to match human sleep EEG data, (2) introduce a new class of methods for estimating connectivity from different kinds of neural signals and provide theoretical proof of its superiority, and (3) apply it to simulated fMRI data as an application.
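The covariance-based baseline referred to above can be sketched on simulated signals with a known ground-truth connection; the shared-drive model below is an illustrative stand-in for a physiological coupling:

```python
import numpy as np

rng = np.random.default_rng(1)

def correlation_connectivity(signals):
    """Covariance-based functional connectivity: the matrix of pairwise
    Pearson correlations between channels (rows = channels)."""
    return np.corrcoef(signals)

# Three simulated 'channels'; channels 0 and 1 share a common drive
# (the known ground-truth connection), channel 2 is independent.
n = 5000
common = rng.normal(size=n)
ch0 = common + 0.5 * rng.normal(size=n)
ch1 = common + 0.5 * rng.normal(size=n)
ch2 = rng.normal(size=n)

fc = correlation_connectivity(np.vstack([ch0, ch1, ch2]))
print(np.round(fc, 2))   # fc[0, 1] is large; fc[0, 2] and fc[1, 2] near zero
```

The limitation motivating the thesis is visible even here: the same large fc[0, 1] would arise from a direct connection or from a common input, so the correlation alone does not identify the physiological wiring.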
An EEG blind source separation algorithm based on a weak exclusion principle.
Lan Ma; Blu, Thierry; Wang, William S-Y
2016-08-01
The question of how to separate individual brain and non-brain signals, mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings, is a significant problem in contemporary neuroscience. This study proposes and evaluates a novel EEG blind source separation (BSS) algorithm based on a weak exclusion principle (WEP). The chief point in which it differs from most previous EEG BSS algorithms is that it is not based upon the hypothesis that the sources are statistically independent. Our first step was to investigate the algorithm's performance on simulated signals for which the ground truth is known, in order to illustrate the proposed algorithm's efficacy. The results show that the proposed algorithm has good separation performance. We then used the proposed algorithm to separate real EEG signals from a memory study using a revised version of the Sternberg task. The results show that the proposed algorithm can effectively separate the non-brain and brain sources.
NASA Astrophysics Data System (ADS)
Wohlfahrt, Georg; Galvagno, Marta
2016-04-01
Ecosystem respiration (ER) and gross primary productivity (GPP) are key carbon cycle concepts. Global estimates of ER and GPP are largely based on measurements of the net ecosystem CO2 exchange by means of the eddy covariance method from which ER and GPP are inferred using so-called flux partitioning algorithms. Using a simple two-source model of ecosystem respiration, consisting of an above-ground respiration source driven by simulated air temperature and a below-ground respiration source driven by simulated soil temperature, we demonstrate that the two most popular flux partitioning algorithms are unable to provide unbiased estimates of daytime ER (ignoring any reduction of leaf mitochondrial respiration) and thus GPP. The bias is demonstrated to be either positive or negative and to depend in a complex fashion on the driving temperature, the ratio of above- to below-ground respiration, the respective temperature sensitivities, the soil depth where the below-ground respiration source originates from (and thus phase and amplitude of soil vs. surface temperature) and day length. The insights from the modeling analysis are subject to a reality check using direct measurements of ER at a grassland where measurements of ER were conducted both during night and day using automated opaque chambers. Consistent with the modeling analysis we find that using air temperature to extrapolate from nighttime to daytime conditions overestimates daytime ER (by 20% or ca. 65 gC m-2 over a 100 day study period), while soil temperature results in an underestimation (by 4% or 12 gC m-2). We conclude with practical recommendations for eddy covariance flux partitioning in the context of the FLUXNET project.
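The two-source setup described above can be sketched with a simple Q10 temperature-response model; the parameter values and diurnal temperature cycles below are illustrative assumptions, not those of the study:

```python
import numpy as np

def q10_respiration(r_ref, q10, temp, t_ref=10.0):
    """Respiration at temperature `temp` given the base rate at `t_ref` (°C)."""
    return r_ref * q10 ** ((temp - t_ref) / 10.0)

hours = np.arange(24)
# Illustrative diurnal cycles: soil temperature is damped and lags the
# air-temperature wave, as in the phase/amplitude argument above.
t_air = 15.0 + 8.0 * np.sin(2 * np.pi * (hours - 9) / 24.0)
t_soil = 15.0 + 3.0 * np.sin(2 * np.pi * (hours - 12) / 24.0)

# 'True' ecosystem respiration: above-ground source driven by air
# temperature plus below-ground source driven by soil temperature.
er_true = q10_respiration(1.0, 2.0, t_air) + q10_respiration(2.0, 2.5, t_soil)

# A single-source model driven by air temperature alone cannot reproduce
# this mixture, which is the origin of the partitioning bias discussed above.
er_air_only = q10_respiration(3.0, 2.0, t_air)
bias = er_air_only - er_true
print(np.round(bias.mean(), 2))
```

Varying the source ratio, the temperature sensitivities, and the phase lag in this sketch reproduces the qualitative result that the bias can take either sign.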
Helping medical students to acquire a deeper understanding of truth-telling.
Hurst, Samia A; Baroffio, Anne; Ummel, Marinette; Burn, Carine Layat
2015-01-01
Truth-telling is an important component of respect for patients' self-determination, but in the context of breaking bad news it is also a distressing and difficult task. We investigated the long-term influence of a simulated patient-based teaching intervention, integrating learning objectives in communication skills and ethics, on students' attitudes and concerns regarding truth-telling. We followed two cohorts of medical students from the preclinical third year to their clinical rotations (fifth year). Open-ended responses were analysed to explore medical students' reported difficulties in breaking bad news. The intervention was implemented during the last preclinical year of a problem-based medical curriculum, in collaboration between the doctor-patient communication and ethics programs. Over time, concerns such as empathy and truthfulness shifted from a personal to a relational focus. Whereas 'truthfulness' was a concern about the content of the message, 'truth-telling' included concerns about how information was communicated and how realistically it was received. Truth-telling required empathy, adaptation to the patient, and appropriate management of emotions, both for the patient's welfare and for a realistic understanding of the situation. Our study confirms that an intervention confronting students with a realistic situation succeeds in making them more aware of the real issues of truth-telling. Medical students deepened their reflection over time, acquiring a deeper understanding of the relational dimension of values such as truth-telling and honing their view of empathy.
Children's Lie-Telling to Conceal a Parent's Transgression: Legal Implications
Talwar, Victoria; Lee, Kang; Bala, Nicholas; Lindsay, R. C. L.
2008-01-01
Children's lie-telling behavior to conceal the transgression of a parent was examined in 2 experiments. In Experiment 1 (N = 137), parents broke a puppet and told their children (3–11-year-olds) not to tell anyone. Children answered questions about the event. Children's moral understanding of truth- and lie-telling was assessed by a second interviewer and the children then promised to tell the truth (simulating court competence examination procedures). Children were again questioned about what happened to the puppet. Regardless of whether the interview was conducted with their parent absent or present, most children told the truth about their parents' transgression. When the likelihood of the child being blamed for the transgression was reduced, significantly more children lied. There was a significant, yet limited, relation between children's lie-telling behavior and their moral understanding of lie- or truth-telling. Further, after children were questioned about issues concerning truth- and lie-telling and asked to promise to tell the truth, significantly more children told the truth about their parents' transgression. Experiment 2 (N = 64) replicated these findings, with children who were questioned about lies and who then promised to tell the truth more likely to tell the truth in a second interview than children who did not participate in this procedure before questioning. Implications for the justice system are discussed. PMID:15499823
Large-scale experimental technology with remote sensing in land surface hydrology and meteorology
NASA Technical Reports Server (NTRS)
Brutsaert, Wilfried; Schmugge, Thomas J.; Sellers, Piers J.; Hall, Forrest G.
1988-01-01
Two field experiments to study atmospheric and land surface processes and their interactions are summarized. The Hydrologic-Atmospheric Pilot Experiment, which tested techniques for measuring evaporation, soil moisture storage, and runoff at scales of about 100 km, was conducted over a 100 X 100 km area in France from mid-1985 to early 1987. The first International Satellite Land Surface Climatology Program field experiment was conducted in 1987 to develop and use relationships between current satellite measurements and hydrologic, climatic, and biophysical variables at the earth's surface and to validate these relationships with ground truth. This experiment also validated surface parameterization methods for simulation models that describe surface processes from the scale of vegetation leaves up to scales appropriate to satellite remote sensing.
Simulating Nailfold Capillaroscopy Sequences to Evaluate Algorithms for Blood Flow Estimation
Tresadern, P. A.; Berks, M.; Murray, A. K.; Dinsdale, G.; Taylor, C. J.; Herrick, A. L.
2016-01-01
The effects of systemic sclerosis (SSc) – a disease of the connective tissue causing blood flow problems that can require amputation of the fingers – can be observed indirectly by imaging the capillaries at the nailfold, though taking quantitative measures such as blood flow to diagnose the disease and monitor its progression is not easy. Optical flow algorithms may be applied, though without ground truth (i.e. known blood flow) it is hard to evaluate their accuracy. We propose an image model that generates realistic capillaroscopy videos with known flow, and use this model to quantify the effect of flow rate, cell density and contrast (among others) on estimated flow. This resource will help researchers to design systems that are robust under real-world conditions. PMID:24110268
Tracking composite material damage evolution using Bayesian filtering and flash thermography data
NASA Astrophysics Data System (ADS)
Gregory, Elizabeth D.; Holland, Steve D.
2016-05-01
We propose a method for tracking the condition of a composite part using Bayesian filtering of flash thermography data over the lifetime of the part. In this demonstration, composite panels were fabricated; impacted to induce subsurface delaminations; and loaded in compression over multiple time steps, causing the delaminations to grow in size. Flash thermography data was collected between each damage event to serve as a time history of the part. The flash thermography indicated some areas of damage but provided little additional information as to the exact nature or depth of the damage. Computed tomography (CT) data was also collected after each damage event and provided a high resolution volume model of damage that acted as truth. After each cycle, the condition estimate, from the flash thermography data and the Bayesian filter, was compared to 'ground truth'. The Bayesian process builds on the lifetime history of flash thermography scans and can give better estimates of material condition as compared to the most recent scan alone, which is common practice in the aerospace industry. Bayesian inference provides probabilistic estimates of damage condition that are updated as each new set of data becomes available. The method was tested on simulated data and then on an experimental data set.
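The Bayesian filtering step at the heart of such a scheme can be sketched for a discretized damage state; the states, transition matrix, and measurement likelihoods below are invented for illustration, not taken from the study:

```python
import numpy as np

def bayes_update(prior, likelihood):
    """One Bayesian filtering step: multiply the prior by the measurement
    likelihood and renormalize to obtain the posterior."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

def predict(belief, growth):
    """Propagate the belief through a damage-growth transition matrix
    (rows of `growth` sum to 1)."""
    return growth.T @ belief

belief = np.array([1/3, 1/3, 1/3])      # states: none / small / large damage

# Damage only grows (upper-triangular transition matrix).
growth = np.array([[0.8, 0.2, 0.0],
                   [0.0, 0.8, 0.2],
                   [0.0, 0.0, 1.0]])

# Likelihood of a 'large damage'-looking thermography scan given each state;
# repeated consistent scans sharpen the estimate beyond any single scan.
like_large = np.array([0.1, 0.3, 0.6])

for _ in range(3):                      # three damage/scan cycles
    belief = predict(belief, growth)
    belief = bayes_update(belief, like_large)

print(np.round(belief, 3))              # belief concentrates on the last state
```

This accumulation over the scan history is exactly why the filter can outperform the most recent scan alone.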
As-built design specification for PARCLS
NASA Technical Reports Server (NTRS)
Tompkins, M. A. (Principal Investigator)
1981-01-01
The PARCLS program, part of the CLASFYG package, reads a parameter file created by the CLASFYG program and a pure pixel ground truth file in order to create a classification file of three separate crop categories in universal format.
NASA Technical Reports Server (NTRS)
Hart, W. G.; Ingle, S. J.; Davis, M. R.
1975-01-01
The detection of insect infestations and the density and distribution of host plants were studied using Skylab data, aerial photography, and ground truth simultaneously. Additional ground truth and aerial photography were acquired between Skylab passes. Three test areas were selected: area 1, of high-density citrus, was located northwest of Mission, Texas; area 2, 20 miles north of Weslaco, Texas, consisted of irrigated pastures and brush-covered land; area 3 covered the entire Lower Rio Grande Valley and adjacent areas of Mexico. A color composite picture of S-190A data showed patterns of vegetation on both sides of the Rio Grande River, clearly delineating the possible avenues of entry of pest insects from Mexico into the United States or from the United States into Mexico. Vegetation that could be identified with conventional color and color IR film included citrus, brush, sugarcane, alfalfa, and irrigated and unimproved pastures.
NASA Technical Reports Server (NTRS)
Norikane, L.; Freeman, A.; Way, J.; Okonek, S.; Casey, R.
1992-01-01
Recent updates to a geographical information system (GIS) called VICAR (Video Image Communication and Retrieval)/IBIS are described. The system is designed to handle data from many different formats (vector, raster, tabular) and many different sources (models, radar images, ground truth surveys, optical images). All the data are referenced to a single georeference plane, and average or typical values for parameters defined within a polygonal region are stored in a tabular file, called an info file. The info file format allows tracking of data in time, maintenance of links between component data sets and the georeference image, conversion of pixel values to `actual' values (e.g., radar cross-section, luminance, temperature), graph plotting, data manipulation, generation of training vectors for classification algorithms, and comparison between actual measurements and model predictions (with ground truth data as input).
Estimation of vegetation cover at subpixel resolution using LANDSAT data
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Eagleson, Peter S.
1986-01-01
The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.
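The two-band idea can be sketched as two-endmember linear unmixing: in the report the bare-soil and full-canopy signatures are read off the limits of the data space, while in the sketch below they are simply given, and the two-band reflectances are synthetic values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def fractional_cover(pixels, soil, canopy):
    """Least-squares fractional canopy cover under a two-endmember linear
    mixture model: pixel = f * canopy + (1 - f) * soil."""
    d = canopy - soil
    f = (pixels - soil) @ d / (d @ d)
    return np.clip(f, 0.0, 1.0)

# Synthetic two-band reflectances (red, NIR) for soil and vegetation.
soil = np.array([0.30, 0.35])
canopy = np.array([0.05, 0.50])

true_f = rng.uniform(0.0, 1.0, size=100)
pixels = true_f[:, None] * canopy + (1 - true_f[:, None]) * soil
pixels += rng.normal(0.0, 0.005, size=pixels.shape)      # sensor noise

est_f = fractional_cover(pixels, soil, canopy)
print(np.abs(est_f - true_f).mean())                     # small mean error
```

Estimating the endmembers from the extremes of the observed pixels, rather than supplying them, is what removes the need for ground truth in the report's two-band method.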
NASA Technical Reports Server (NTRS)
Goldstein, H. W.; Bortner, M. H.; Grenda, R. N.; Dick, R.; Lebel, P. J.; Lamontagne, R. A.
1976-01-01
Two types of experiments were performed with a correlation interferometer on board a Bell Jet Ranger 206 helicopter. The first consisted of simultaneous ground- and air-truth measurements as the instrumented helicopter passed over the Cheverly site. The second consisted of several measurement flights in and around the national capital air quality control region (Washington, D.C.). The correlation interferometer data, the infrared Fourier spectrometer data, and the integrated altitude sampling data showed agreement within the errors of the individual measurements. High values for CO found in the D.C. flight data were reproducible and concentrated in areas of stop-and-go traffic. It is concluded that pollutants at low altitudes are detectable from an airborne platform by remote correlation interferometry and that the correlation interferometer measurements agree with ground- and air-truth data.
Operation of agricultural test fields for study of stressed crops by remote sensing
NASA Technical Reports Server (NTRS)
Toler, R. W.
1974-01-01
A test site for the study of winter wheat development and collection of ERTS data was established in September 1973. The test site is a 10-mile-square area located 12.5 miles west of Amarillo, Texas, on Interstate Hwy. 40, in Randall and Potter counties. The center of the area is the Southwestern Great Plains Research Center at Bushland, Texas. Within the test area, all wheat fields were identified by ground truth and designated irrigated or dryland. Fields in the test area other than wheat were identified as pasture or by the crop grown. A ground truth area of hard red winter wheat was established west of Hale Center, Texas. Maps showing the location of winter wheat fields in excess of 40 acres within a 10-mile radius were supplied to NASA. Satellite data were collected for this test site (ERTS-1).
Spectral reflectance measurements of plant soil combinations
NASA Technical Reports Server (NTRS)
Macleod, N. H.
1972-01-01
Field and laboratory observations of plant and soil reflectance spectra were made to develop an understanding of the reflectance of solar energy by plants and soils. A related objective is the isolation of factors contributing to the image formed by multispectral scanners and return beam vidicons carried by ERTS, or by film-filter combinations used in the field or on aircraft. A set of objective criteria is to be developed for identifying plant and soil types and their changing condition through the seasons, for application of space imagery to resource management. The global scale of Earth observation satellites requires objective rather than subjective techniques, particularly where ground truth is either not available or too costly to acquire. As acquiring ground truth for training sets may be impractical in many cases, attempts have been made to objectively identify standard responses that could be used for image interpretation.
Using remote sensing imagery to monitoring sea surface pollution cause by abandoned gold-copper mine
NASA Astrophysics Data System (ADS)
Kao, H. M.; Ren, H.; Lee, Y. T.
2010-08-01
The Chinkuashih Benshen mine was the largest gold-copper mine in Taiwan before it was abandoned in 1987. However, even though the mine has been closed, the minerals still interact with rain and groundwater and flow into the sea. The polluted sea surface has appeared yellow, green, and even white, and the pollutants have been carried by the coastal current. In this study, we used optical satellite images to monitor the sea surface. Several image processing algorithms are employed, especially the subpixel technique and the linear mixture model, to estimate the concentration of pollutants. A change detection approach is also applied to track the pollutants. We also conducted chemical analysis of the polluted water to provide ground truth validation. Through correlation analysis between the satellite observations and the ground truth chemical analysis, an effective approach to monitoring water pollution could be established.
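Under a linear mixture model, each pixel's spectrum is an abundance-weighted sum of endmember spectra, which can be inverted by least squares; the endmember spectra below are invented for illustration, not measured values:

```python
import numpy as np

# Hypothetical endmember spectra (columns): clean water and pollutant,
# sampled in four spectral bands.
endmembers = np.array([[0.02, 0.30],
                       [0.03, 0.25],
                       [0.05, 0.20],
                       [0.04, 0.10]])

def unmix(pixel, E):
    """Estimate endmember abundances for one pixel by unconstrained
    least squares: pixel ≈ E @ abundances."""
    abundances, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    return abundances

# A pixel that is 70% water and 30% pollutant (the known 'ground truth').
true_abundance = np.array([0.7, 0.3])
pixel = endmembers @ true_abundance

est = unmix(pixel, endmembers)
print(np.round(est, 3))   # → close to [0.7, 0.3]
```

In practice a sum-to-one (and non-negativity) constraint is usually added, and the recovered pollutant abundance is what gets correlated against the chemical ground truth.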
Vehicle detection and orientation estimation using the radon transform
NASA Astrophysics Data System (ADS)
Pelapur, Rengarajan; Bunyak, Filiz; Palaniappan, Kannappan; Seetharaman, Gunasekaran
2013-05-01
Determining the location and orientation of vehicles in satellite and airborne imagery is a challenging task, given the density of cars and other vehicles and the complexity of the environment in urban scenes almost anywhere in the world. We have developed a robust and accurate method for detecting vehicles using template-based directional chamfer matching, combined with vehicle orientation estimation based on a refined segmentation followed by a Radon transform based profile variance peak analysis. The same algorithm was applied to both high resolution satellite imagery and wide area aerial imagery, and initial results show robustness to illumination changes and geometric appearance distortions. Nearly 80% of the orientation angle estimates for 1585 vehicles across both satellite and aerial imagery were accurate to within 15° of the ground truth. In the case of satellite imagery alone, nearly 90% of the objects had an estimated error within +/-1.0° of the ground truth.
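The profile-variance peak idea behind the orientation estimator above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the rotate-and-project loop is a simple stand-in for a true Radon transform, and the angle grid and test shape are arbitrary choices.

```python
import numpy as np
from scipy import ndimage

def peak_variance_angle(mask, angles=np.arange(0.0, 180.0, 1.0)):
    """Return the rotation angle (degrees) whose projection profile has
    maximum variance. Rotating the mask and summing down the columns is
    equivalent to computing the Radon projection along one fixed axis."""
    variances = []
    for theta in angles:
        rotated = ndimage.rotate(mask.astype(float), theta,
                                 reshape=True, order=1)
        profile = rotated.sum(axis=0)  # 1-D projection profile
        variances.append(profile.var())
    return float(angles[int(np.argmax(variances))])

# An elongated horizontal rectangle: the profile variance peaks when the
# long axis is aligned with the projection direction (near 90 deg here,
# since a 90-deg rotation makes the long axis vertical).
vehicle = np.zeros((64, 64))
vehicle[28:36, 10:54] = 1.0
angle = peak_variance_angle(vehicle)
```

For a real vehicle chip the same peak search runs on the refined segmentation mask rather than a synthetic rectangle.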
NASA Astrophysics Data System (ADS)
Marzahn, P.; Ludwig, R.
2016-06-01
In this paper, the potential of multiparametric polarimetric SAR (PolSAR) data for soil surface roughness estimation is investigated and its potential for hydrological modeling is evaluated. The study uses microwave backscatter collected over the Demmin test site in northeastern Germany during the AgriSAR 2006 campaign, using fully polarimetric L-band airborne SAR data. For ground truthing, extensive soil surface roughness measurements, in addition to measurements of various other soil physical properties, were carried out using photogrammetric image matching techniques. The correlation between ground truth roughness indices and three well-established polarimetric roughness estimators showed good results only for Re[ρRRLL] and the RMS height s. Multitemporal roughness maps showed only satisfying results, because the presence and development of particular plants affected the derivation. However, roughness derivation for bare soil surfaces showed promising results.
Application of selected methods of remote sensing for detecting carbonaceous water pollution
NASA Technical Reports Server (NTRS)
Davis, E. M.; Fosbury, W. J.
1973-01-01
A reach of the Houston Ship Channel was investigated during three separate overflights correlated with ground truth sampling on the Channel. Samples were analyzed for such conventional parameters as biochemical oxygen demand, chemical oxygen demand, total organic carbon, total inorganic carbon, turbidity, chlorophyll, pH, temperature, dissolved oxygen, and light penetration. Infrared analyses conducted on each sample included reflectance ATR analysis, carbon tetrachloride extraction of organics and subsequent scanning, and KBr evaporate analysis of CCl4 extract concentrate. Imagery which was correlated with field and laboratory data developed from ground truth sampling included that obtained from aerial KA62 hardware, RC-8 metric camera systems, and the RS-14 infrared scanner. The images were subjected to analysis by three film density gradient interpretation units. Data were then analyzed for correlations between imagery interpretation as derived from the three instruments and laboratory infrared signatures and other pertinent field and laboratory analyses.
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Richardson, W.; Pentland, A. P.
1976-01-01
The author has identified the following significant results. Fourteen different classification algorithms were tested for their ability to estimate the proportion of wheat in an area. For some algorithms, accuracy of classification in field centers was observed. The data base consisted of ground truth and LANDSAT data from 55 sections (1 x 1 mile) from five LACIE intensive test sites in Kansas and Texas. Signatures obtained from training fields selected at random from the ground truth were generally representative of the data distribution patterns. LIMMIX, an algorithm that chooses a pure signature when the data point is close enough to a signature mean and otherwise chooses the best mixture of a pair of signatures, reduced the average absolute error to 6.1% and the bias to 1.0%. QRULE run with a null test achieved a similar reduction.
Deeley, MA; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, EF; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Dawant, BM
2013-01-01
Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration-driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumors in the brain. We tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves, using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: STAPLE and a novel implementation of probability maps. The experts were presented with automatic segmentations, their own segmentations, and their peers' segmentations from our previous study to edit. We found that, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy, and improved efficiency with at least a 60% reduction in contouring time. In areas where raters performed poorly when contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building, such as in developing delineation standards, and that automated methods and perhaps even less sophisticated atlases could improve efficiency, inter-rater variance, and accuracy. PMID:23685866
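The Dice similarity coefficient used as the agreement measure above is straightforward to compute for binary masks; a minimal sketch, not tied to the study's software:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|).
    1.0 means perfect overlap, 0.0 means disjoint masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two 4x4 squares offset by half their width share 8 of 16 voxels each,
# giving DSC = 2*8 / (16 + 16) = 0.5.
m1 = np.zeros((8, 8), dtype=bool); m1[0:4, 0:4] = True
m2 = np.zeros((8, 8), dtype=bool); m2[0:4, 2:6] = True
score = dice(m1, m2)
```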
Portnoy, Sharon; Seed, Mike; Sled, John G; Macgowan, Christopher K
2017-12-01
We propose an analytical method for calculating blood hematocrit (Hct) and oxygen saturation (sO2) from measurements of its T1 and T2 relaxation times. Through algebraic substitution, established two-compartment relationships describing R1 = 1/T1 and R2 = 1/T2 as functions of hematocrit and oxygen saturation were rearranged to solve for Hct and sO2 in terms of R1 and R2. The resulting solutions for Hct and sO2 are the roots of cubic polynomials. Feasibility of the method was established by comparing Hct and sO2 estimates obtained from relaxometry measurements (at 1.5 Tesla) in cord blood specimens to ground-truth values obtained by blood gas analysis. Monte Carlo simulations were also conducted to assess the effect of T1 and T2 measurement uncertainty on the precision of Hct and sO2 estimates. Good agreement was observed between estimated and ground-truth blood properties (bias = 0.01; 95% limits of agreement = ±0.13 for Hct and sO2). Considering the combined effects of biological variability and random measurement noise, we estimate a typical uncertainty of ±0.1 for Hct and sO2 estimates. The results demonstrate accurate quantification of Hct and sO2 from T1 and T2. This method is applicable to noninvasive fetal vessel oximetry, an application where existing oximetry devices are unusable or require risky blood-sampling procedures. Magn Reson Med 78:2352-2359, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
Motion compensation for cone-beam CT using Fourier consistency conditions
NASA Astrophysics Data System (ADS)
Berger, M.; Xia, Y.; Aichinger, W.; Mentl, K.; Unberath, M.; Aichert, A.; Riess, C.; Hornegger, J.; Fahrig, R.; Maier, A.
2017-09-01
In cone-beam CT, involuntary patient motion and inaccurate or irreproducible scanner motion substantially degrade image quality. To avoid artifacts, this motion needs to be estimated and compensated during image reconstruction. In previous work we showed that Fourier consistency conditions (FCC) can be used in fan-beam CT to estimate motion in the sinogram domain. This work extends the FCC to 3D cone-beam CT. We derive an efficient cost function to compensate for 3D motion using 2D detector translations. The extended FCC method has been tested with five translational motion patterns, using a challenging numerical phantom. We evaluated the root-mean-square error and the structural similarity index between motion-corrected and motion-free reconstructions. Additionally, we computed the mean absolute difference (MAD) between the estimated and the ground-truth motion. The practical applicability of the method is demonstrated by application to respiratory motion estimation in rotational angiography, and also to motion correction for weight-bearing imaging of knees; the latter makes use of a specifically modified FCC version which is robust to axial truncation. The results show a great reduction of motion artifacts. Accurate estimation results were achieved, with maximum MAD values of 708 μm and 1184 μm for motion along the vertical and horizontal detector directions, respectively. The image quality of reconstructions obtained with the proposed method is close to that of motion-corrected reconstructions based on the ground-truth motion. Simulations using noise-free and noisy data demonstrate that FCC are robust to noise. Even high-frequency motion was accurately estimated, leading to a considerable reduction of streaking artifacts. The method is purely image-based and therefore independent of any auxiliary data.
NASA Technical Reports Server (NTRS)
Mcnider, Richard T.
1992-01-01
In the spring and summer of 1986, NASA/Marshall Space Flight Center (MSFC) will sponsor the Satellite Precipitation And Cloud Experiment (SPACE) to be conducted in the Central Tennessee, Northern Alabama, and Northeastern Mississippi area. The field program will incorporate high altitude flight experiments associated with meteorological remote sensor development for future space flight, and an investigation of precipitation processes associated with mesoscale and small convective systems. In addition to SPACE, the MIcroburst and Severe Thunderstorm (MIST) program, sponsored by the National Science Foundation (NSF), and the FAA-Lincoln Laboratory Operational Weather Study (FLOWS), sponsored by the Federal Aviation Administration (FAA), will take place concurrently within the SPACE experiment area. All three programs (under the joint acronym COHMEX (COoperative Huntsville Meteorological EXperiment)) will provide a data base for detailed analysis of mesoscale convective systems while providing ground truth comparisons for remote sensor evaluation. The purpose of this document is to outline the experiment design criteria for SPACE, and describe the special observing facilities and data sets that will be available under the COHMEX joint program. In addition to the planning of SPACE-COHMEX, this document covers three other parts of the program. The field program observations' main activity was the operation of an upper air rawinsonde network to provide ground truth for aircraft and spacecraft observations. Another part of the COHMEX program involved using boundary layer mesoscale models to study and simulate the initiation and organization of moist convection due to mesoscale thermal and mechanical circulations. The last part of the program was the collection, archival and distribution of the resulting COHMEX-SPACE data sets.
Towards component-based validation of GATE: aspects of the coincidence processor.
Moraes, Eder R; Poon, Jonathan K; Balakrishnan, Karthikayan; Wang, Wenli; Badawi, Ramsey D
2015-02-01
GATE is public domain software widely used for Monte Carlo simulation in emission tomography. Validations of GATE have primarily been performed on a whole-system basis, leaving the possibility that errors in one sub-system may be offset by errors in others. We assess the accuracy of the GATE PET coincidence generation sub-system in isolation, focusing on the options most closely modeling the majority of commercially available scanners. Independent coincidence generators were coded by teams at Toshiba Medical Research Unit (TMRU) and UC Davis. A model similar to the Siemens mCT scanner was created in GATE. Annihilation photons interacting with the detectors were recorded. Coincidences were generated using GATE, TMRU and UC Davis code and results compared to "ground truth" obtained from the history of the photon interactions. GATE was tested twice, once with every qualified single event opening a time window and initiating a coincidence check (the "multiple window method"), and once where a time window is opened and a coincidence check initiated only by the first single event to occur after the end of the prior time window (the "single window method"). True, scattered and random coincidences were compared. Noise equivalent count rates were also computed and compared. The TMRU and UC Davis coincidence generators agree well with ground truth. With GATE, reasonable accuracy can be obtained if the single window method option is chosen and random coincidences are estimated without use of the delayed coincidence option. However in this GATE version, other parameter combinations can result in significant errors. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
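The noise equivalent count rate compared above is commonly computed from the true (T), scattered (S) and random (R) coincidence rates. A sketch of the standard NECR = T²/(T + S + R) form; note that some protocols use 2R in the denominator when randoms are estimated from a delayed window, and the abstract does not state which variant was used:

```python
def necr(trues, scatters, randoms, k_randoms=1.0):
    """Noise equivalent count rate: NECR = T^2 / (T + S + k*R).
    k_randoms = 1.0 for the basic form; 2.0 is often used when the
    randoms rate comes from a delayed coincidence window."""
    denom = trues + scatters + k_randoms * randoms
    if denom == 0:
        return 0.0
    return trues ** 2 / denom

# Example: T = 100 kcps, S = 50 kcps, R = 50 kcps.
rate = necr(100.0, 50.0, 50.0)
```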
Adopting a constructivist approach to grounded theory: implications for research design.
Mills, Jane; Bonner, Ann; Francis, Karen
2006-02-01
Grounded theory is a popular research methodology that is evolving to account for a range of ontological and epistemological underpinnings. Constructivist grounded theory has its foundations in relativism and an appreciation of the multiple truths and realities of subjectivism. Undertaking a constructivist enquiry requires the adoption of a position of mutuality between researcher and participant in the research process, which necessitates a rethinking of the grounded theorist's traditional role of objective observer. Key issues for constructivist grounded theorists to consider in designing their research studies are discussed in relation to developing a partnership with participants that enables a mutual construction of meaning during interviews and a meaningful reconstruction of their stories into a grounded theory model.
Ground-Truthing a Next Generation Snow Radar
NASA Astrophysics Data System (ADS)
Yan, S.; Brozena, J. M.; Gogineni, P. S.; Abelev, A.; Gardner, J. M.; Ball, D.; Liang, R.; Newman, T.
2016-12-01
During the early spring of 2016 the Naval Research Laboratory (NRL) performed a test of a next-generation airborne snow radar over ground truth data collected on several areas of fast ice near Barrow, AK. The radar was developed by the Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas, and includes several improvements over their previous snow radar. The new unit combines the earlier Ku-band and snow radars into a single unit with an operating frequency spanning the entire 2-18 GHz band, an enormous bandwidth which provides the possibility of snow depth measurements with 1.5 cm range resolution. Additionally, the radar transmits on dual polarizations (H and V), and receives the signal through two orthogonally polarized Vivaldi arrays, each with 128 phase centers. The 8 sets of along-track phase centers are combined in hardware to improve SNR and narrow the beamwidth in the along-track direction, resulting in 8 cross-track effective phase centers which are separately digitized to allow for beam sharpening and forming in post-processing. Tilting the receive arrays 30 degrees from the horizontal also allows the formation of SAR images and the potential for estimating snow water equivalent (SWE). Ground truth data (snow depth, density, salinity and SWE) were collected over several 60 m wide swaths that were subsequently overflown with the snow radar mounted on a Twin Otter. The radar could be operated in nadir mode (by beam steering the receive antennas to point beneath the aircraft) or in side-looking mode. Results from the comparisons will be shown.
Parks, Connie L; Monson, Keith L
2018-05-01
This study employed an automated facial recognition system as a means of objectively evaluating biometric correspondence between a ReFace facial approximation and the computed tomography (CT) derived ground truth skin surface of the same individual. High rates of biometric correspondence were observed, irrespective of rank class (Rk) or demographic cohort examined. Overall, 48% of the test subjects' ReFace approximation probes (n=96) were matched to his or her corresponding ground truth skin surface image at R1, a rank indicating a high degree of biometric correspondence and a potential positive identification. Identification rates improved with each successively broader rank class (R10=85%, R25=96%, and R50=99%), with 100% identification by R57. A sharp increase (39% mean increase) in identification rates was observed between R1 and R10 across most rank classes and demographic cohorts. In contrast, significantly lower (p<0.01) increases in identification rates were observed between R10 and R25 (8% mean increase) and between R25 and R50 (3% mean increase). No significant (p>0.05) performance differences were observed across demographic cohorts or CT scan protocols. Performance measures observed in this research suggest that ReFace approximations are biometrically similar to the actual faces of the approximated individuals and, therefore, may have potential operational utility in contexts in which computerized approximations are utilized as probes in automated facial recognition systems. Copyright © 2018. Published by Elsevier B.V.
Evaluation of dual-loop data accuracy using video ground truth data
DOT National Transportation Integrated Search
2002-01-01
Washington State Department of Transportation (WSDOT) initiated a research project entitled Monitoring Freight on Puget Sound Freeways in September 1999. Dual-loop data from the Seattle area freeway system were selected as the main data s...
NASA Astrophysics Data System (ADS)
Ben-Zikri, Yehuda Kfir; Linte, Cristian A.
2016-03-01
Region of interest detection is a precursor to many medical image processing and analysis applications, including segmentation, registration and other image manipulation techniques. The optimal region of interest is often selected manually, based on empirical knowledge and features of the image dataset. However, if inconsistently identified, the selected region of interest may greatly affect the subsequent image analysis or interpretation steps, in turn leading to incomplete assessment during computer-aided diagnosis, or to incomplete visualization or identification of the surgical targets if employed in the context of pre-procedural planning or image-guided interventions. Therefore, the need for robust, accurate and computationally efficient region of interest localization techniques is prevalent in many modern computer-assisted diagnosis and therapy applications. Here we propose a fully automated, robust, a priori learning-based approach that provides reliable estimates of the left and right ventricle features from cine cardiac MR images. The proposed approach leverages the temporal frame-to-frame motion extracted across a range of short axis left ventricle slice images, with a small training set generated from less than 10% of the population. The approach uses histogram of oriented gradients features, weighted by local intensities, to first identify an initial region of interest depicting the left and right ventricles that exhibits the greatest extent of cardiac motion. This region is correlated with the homologous region belonging to the training dataset that best matches the test image, using feature vector correlation techniques. Lastly, the optimal left ventricle region of interest of the test image is identified based on the correlation of known ground truth segmentations associated with the training dataset deemed closest to the test image.
The proposed approach was tested on a population of 100 patient datasets and was validated against the ground truth region of interest of the test images, manually annotated by experts. The tool successfully identified a mask around the LV and RV, and furthermore the minimal region of interest around the LV that fully enclosed the left ventricle, in all testing datasets, yielding a 98% overlap with the corresponding ground truth. The mean absolute distance error between the two contours, normalized by the radius of the ground truth, was 0.20 +/- 0.09.
VAS demonstration: (VISSR Atmospheric Sounder) description
NASA Technical Reports Server (NTRS)
Montgomery, H. E.; Uccellini, L. W.
1985-01-01
The VAS Demonstration (VISSR Atmospheric Sounder) is a project designed to evaluate the VAS instrument as a remote sensor of the Earth's atmosphere and surface. This report describes the instrument and ground processing system, the instrument performance, and the validation of VAS as a temperature and moisture profiler compared with ground truth and other satellites, and assesses its performance as a valuable meteorological tool. The report also addresses the availability of data for scientific research.
NASA Astrophysics Data System (ADS)
Remy, Charlotte; Lalonde, Arthur; Béliveau-Nadeau, Dominic; Carrier, Jean-François; Bouchard, Hugo
2018-01-01
The purpose of this study is to evaluate the impact of a novel tissue characterization method using dual-energy rather than single-energy computed tomography (DECT and SECT) on Monte Carlo (MC) dose calculations for low-dose-rate (LDR) prostate brachytherapy performed in a patient-like geometry. A virtual patient geometry is created using contours from a real patient pelvis CT scan, where known elemental compositions and varying densities are overwritten in each voxel. A second phantom is made with additional calcifications. Both phantoms are the ground truth with which all results are compared. Simulated CT images are generated from them using attenuation coefficients taken from the XCOM database, with a 100 kVp spectrum for SECT and 80 and 140Sn kVp for DECT. Tissue segmentation for Monte Carlo dose calculation is performed using a stoichiometric calibration method for the simulated SECT images. For the DECT images, Bayesian eigentissue decomposition is used. An LDR prostate brachytherapy plan is defined with 125I sources and calculated using the EGSnrc user code Brachydose for each case. Dose distributions and dose-volume histograms (DVH) are compared to the ground truth to assess the accuracy of tissue segmentation. For noiseless images, DECT-based tissue segmentation outperforms the SECT procedure, with root-mean-square (RMS) relative dose errors of 2.39% versus 7.77%, respectively, and provides DVHs closest to the reference DVHs for all tissues. For a medium level of CT noise, Bayesian eigentissue decomposition still performs better on the overall dose calculation, with an RMS error of 7.83% compared to 9.15% for SECT. Both methods give a similar DVH for the prostate, while the DECT segmentation remains more accurate for organs at risk and in the presence of calcifications, with less than 5% RMS error within the calcifications versus up to 154% for SECT.
In a patient-like geometry, DECT-based tissue segmentation provides dose distributions with the highest accuracy and the least bias compared to SECT. When imaging noise is considered, the benefits of DECT are noticeable if important calcifications are present within the prostate.
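The RMS relative dose error used as the comparison metric above can be computed as follows; a minimal sketch assuming voxel-wise relative errors against the ground-truth dose, not the authors' implementation:

```python
import numpy as np

def rms_relative_error_pct(dose, dose_ref):
    """Root-mean-square of voxel-wise relative dose errors, in percent.
    Voxels with zero reference dose must be masked out beforehand."""
    rel_pct = (dose - dose_ref) / dose_ref * 100.0
    return float(np.sqrt(np.mean(rel_pct ** 2)))

# Two voxels off by +2% and -2% give an RMS relative error of 2%.
err = rms_relative_error_pct(np.array([102.0, 98.0]),
                             np.array([100.0, 100.0]))
```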
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juneja, P; Harris, E; Bamber, J
2014-06-01
Purpose: There is substantial observer variability in the delineation of target volumes for post-surgical partial breast radiotherapy, because the tumour bed has poor x-ray contrast. This variability may result in substantial variations in the planned dose distribution. Ultrasound elastography (USE) has the ability to detect mechanical discontinuities and therefore the potential to image the scar and distortion in breast tissue architecture. The goal of this study was to compare the USE techniques of strain elastography (SE), shear wave elastography (SWE) and acoustic radiation force impulse (ARFI) imaging, using phantoms that simulate features of the tumour bed, for the purpose of incorporating USE in breast radiotherapy planning. Methods: Three gelatine-based phantoms (10% w/v) were constructed and used to investigate the USE techniques, containing: a stiff inclusion (gelatine 16% w/v) with adhered boundaries, a stiff inclusion (gelatine 16% w/v) with mobile boundaries, and a fluid cavity inclusion (to mimic seroma). The accuracy of the elastography techniques was quantified by comparing the imaged inclusion with the modelled ground truth using the Dice similarity coefficient (DSC), which measures the spatial overlap of two regions of interest (ROI). Ground-truth ROIs were modelled using geometrical measurements from B-mode images. Results: The phantoms simulating stiff scar tissue with adhered and mobile boundaries and seroma were successfully developed and imaged using SE and SWE. The edges of the stiff inclusions were more clearly visible in SE than in SWE. For all these phantoms, the measured DSCs were higher for SE (DSCs: 0.91-0.97) than for SWE (DSCs: 0.68-0.79), with an average relative difference of 23%. In the case of the seroma phantom, DSC values for SE and SWE were similar. Conclusion: This study presents a first attempt to identify the most suitable elastography technique for use in breast radiotherapy planning.
Further analysis will include comparison of ARFI with SE and SWE. This work is supported by the EPSRC Platform Grant, reference number EP/H046526/1.
Memory for child sexual abuse information: simulated memory error and individual differences.
McWilliams, Kelly; Goodman, Gail S; Lyons, Kristen E; Newton, Jeremy; Avila-Mora, Elizabeth
2014-01-01
Building on the simulated-amnesia work of Christianson and Bylin (Applied Cognitive Psychology, 13, 495-511, 1999), the present research introduces a new paradigm for the scientific study of memory of childhood sexual abuse information. In Session 1, participants mentally took the part of an abuse victim as they read an account of the sexual assault of a 7-year-old. After reading the narrative, participants were randomly assigned to one of four experimental conditions: They (1) rehearsed the story truthfully (truth group), (2) left out the abuse details of the story (omission group), (3) lied about the abuse details to indicate that no abuse had occurred (commission group), or (4) did not recall the story during Session 1 (no-rehearsal group). One week later, participants returned for Session 2 and were asked to truthfully recall the narrative. The results indicated that, relative to truthful recall, untruthful recall or no rehearsal at Session 1 adversely affected memory performance at Session 2. However, untruthful recall resulted in better memory than did no rehearsal. Moreover, gender, PTSD symptoms, depression, adult attachment, and sexual abuse history significantly predicted memory for the childhood sexual abuse scenario. Implications for theory and application are discussed.
A Truthful Incentive Mechanism for Online Recruitment in Mobile Crowd Sensing System.
Chen, Xiao; Liu, Min; Zhou, Yaqin; Li, Zhongcheng; Chen, Shuang; He, Xiangnan
2017-01-01
We investigate emerging mobile crowd sensing (MCS) systems, in which new cloud-based platforms sequentially allocate homogeneous sensing jobs to dynamically arriving users with uncertain service qualities. Given that human beings are selfish in nature, it is crucial yet challenging to design an efficient and truthful incentive mechanism to encourage users to participate. To address the challenge, we propose a novel truthful online auction mechanism that can efficiently learn to make irreversible online decisions on winner selection for new MCS systems, without requiring prior knowledge of users. Moreover, we theoretically prove that our mechanism possesses truthfulness, individual rationality and computational efficiency. Extensive simulation results under both real and synthetic traces demonstrate that our incentive mechanism can reduce the payment of the platform, increase the utility of the platform, and improve social welfare.
Helping medical students to acquire a deeper understanding of truth-telling.
Hurst, Samia A; Baroffio, Anne; Ummel, Marinette; Layat Burn, Carine
2015-01-01
Problem: Truth-telling is an important component of respect for patients' self-determination, but in the context of breaking bad news it is also a distressing and difficult task. Intervention: We investigated the long-term influence of a simulated patient-based teaching intervention, integrating learning objectives in communication skills and ethics, on students' attitudes and concerns regarding truth-telling. We followed two cohorts of medical students from the preclinical third year to their clinical rotations (fifth year). Open-ended responses were analysed to explore medical students' reported difficulties in breaking bad news. Context: This intervention was implemented during the last preclinical year of a problem-based medical curriculum, in collaboration between the doctor-patient communication and ethics programs. Outcome: Over time, concerns such as empathy and truthfulness shifted from a personal to a relational focus. Whereas 'truthfulness' was a concern for the content of the message, 'truth-telling' included concerns about how information was communicated and how realistically it was received. Truth-telling required empathy, adaptation to the patient, and appropriate management of emotions, both for the patient's welfare and for a realistic understanding of the situation. Lessons learned: Our study confirms that an intervention confronting students with a realistic situation succeeds in making them more aware of the real issues of truth-telling. Medical students deepened their reflection over time, acquiring a deeper understanding of the relational dimension of values such as truth-telling, and honing their view of empathy.
Evaluation of LANDSAT-4 TM and MSS ground geometry performance without ground control
NASA Technical Reports Server (NTRS)
Bryant, N. A.; Zobrist, A.
1983-01-01
LANDSAT thematic mapper P-data of Washington, D.C., Harrisburg, PA, and Salton Sea, CA were analyzed to determine magnitudes and causes of error in the geometric conformity of the data to known earth-surface geometry. Several tests of data geometry were performed. Intra-band and inter-band correlation and registration were investigated, exclusive of map-based ground truth. Specifically, the magnitudes and statistical trends of pixel offsets between a single band's mirror scans (due to processing procedures) were computed, and the inter-band integrity of registration was analyzed.
Composition and assembly of a spectral and agronomic data base for 1980 spring small grain segments
NASA Technical Reports Server (NTRS)
Helmer, D.; Krantz, J.; Kinsler, M.; Tomkins, M.
1983-01-01
A data set was assembled which consolidates the LANDSAT spectral data, ground truth observation data, and analyst cloud screening data for 28 spring small grain segments collected during the 1980 crop year.
GF-7 Imaging Simulation and Dsm Accuracy Estimate
NASA Astrophysics Data System (ADS)
Yue, Q.; Tang, X.; Gao, X.
2017-05-01
GF-7 is a two-line-array stereo imaging satellite for surveying and mapping that will be launched in 2018. Its resolution is about 0.8 m at nadir, corresponding to a swath width of 20 km, and the viewing angles of its forward and backward cameras are 5 and 26 degrees. This paper proposes an imaging simulation method for GF-7 stereo images. WorldView-2 stereo images were used as the basic data for simulation; that is, instead of using a DSM and DOM as basic data (an "ortho-to-stereo" method), we used a "stereo-to-stereo" method, which better reflects the geometric and radiometric differences between viewing angles. Its drawback is that geometric error is introduced by two factors: the difference in viewing angles between the basic and simulated images, and inaccurate or missing ground reference data. We generated a DSM from the WorldView-2 stereo images. This WorldView-2 DSM served both as the reference DSM for estimating the accuracy of the DSM generated from the simulated GF-7 stereo images and as "ground truth" for establishing the relationship between WorldView-2 image points and simulated image points. Static MTF was simulated by filtering the instantaneous focal-plane "image". SNR was simulated in the electronic domain: the digital value of each WorldView-2 image point was converted to radiance and used as the radiance seen by the simulated GF-7 camera. This radiance was converted to an electron count n according to the physical parameters of the GF-7 camera, and the noise electron count n1 was drawn as a random number between -√n and √n. The overall electron count accumulated by the TDI CCD was then converted to the digital value of the simulated GF-7 image. Sinusoidal curves with different amplitudes, frequencies and initial phases were used as attitude curves, and geometric installation errors of the CCD tiles were simulated, including rotation and translation factors. Finally, an accuracy estimate was made for the DSM generated from the simulated images.
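The electronic-domain SNR simulation described above can be sketched as follows; the gain and full-well values are illustrative placeholders, not actual GF-7 camera parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sensor_dn(radiance, gain=500.0, full_well=2**12 - 1):
    """Convert radiance to an electron count n, draw the noise electron
    count uniformly from [-sqrt(n), sqrt(n)] as described in the text,
    then quantize back to a digital value."""
    n = radiance * gain                             # electrons (hypothetical gain)
    noise = rng.uniform(-np.sqrt(n), np.sqrt(n))    # shot-like noise term
    return np.clip(np.round(n + noise), 0, full_well).astype(np.int64)

dn = simulate_sensor_dn(np.array([0.0, 1.0, 4.0]))
```

In a full simulation this per-pixel step would follow the MTF filtering and TDI accumulation stages described in the abstract.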
A novel rumor diffusion model considering the effect of truth in online social media
NASA Astrophysics Data System (ADS)
Sun, Ling; Liu, Yun; Zeng, Qing-An; Xiong, Fei
2015-12-01
In this paper, we propose a model to investigate how truth affects rumor diffusion in online social media. Our model reveals a relation between rumor and truth: when a rumor diffuses, the truth about the rumor diffuses with it. Two patterns by which agents identify the rumor, self-identification and passive learning, are taken into account. Combining theoretical proof and simulation analysis, we find that the threshold value of rumor diffusion is negatively correlated with the connectivity between nodes in the network and with the probability β of agents knowing the truth. Increasing β reduces the maximum density of rumor spreaders and slows down the generation of new rumor spreaders. On the other hand, we conclude that the best rumor diffusion strategy must balance the probability of forwarding the rumor against the probability of agents losing interest in it. A high rumor spread rate λ leads to a surge in truth dissemination, which greatly limits the diffusion of the rumor. Furthermore, when λ is unknown, increasing β can effectively reduce the maximum proportion of agents who do not know the truth, but cannot narrow the rumor diffusion range within a certain interval of β.
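A toy agent-based sketch of rumor/truth co-diffusion of this kind is shown below; the random network, update rule and parameter values are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rumor(n=500, k=8, lam=0.3, beta=0.1, steps=50):
    """Toy co-diffusion: spreaders forward the rumor with probability lam;
    each contacted ignorant agent independently learns the truth with
    probability beta and becomes immune. Returns spreader density per step."""
    p = k / (n - 1)                                   # Erdos-Renyi edge probability
    adj = rng.random((n, n)) < p
    adj = np.triu(adj, 1)
    adj = adj | adj.T                                 # symmetric, no self-loops
    state = np.zeros(n, dtype=int)                    # 0 ignorant, 1 spreader, 2 knows truth
    state[0] = 1                                      # seed spreader
    history = []
    for _ in range(steps):
        for s in np.where(state == 1)[0]:
            for nb in np.where(adj[s])[0]:
                if state[nb] == 0:
                    if rng.random() < beta:
                        state[nb] = 2                 # learns the truth, immune
                    elif rng.random() < lam:
                        state[nb] = 1                 # becomes a spreader
        history.append((state == 1).mean())
    return history

dens = simulate_rumor()
```

Raising `beta` in this sketch lowers the peak spreader density, mirroring the qualitative finding reported above.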
Improved Gridded Aerosol Data for India
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gueymard, C.; Sengupta, M.
Using point data from ground sites in and around India equipped with multiwavelength sunphotometers, as well as gridded data from space measurements or from existing aerosol climatologies, an improved gridded database providing the monthly aerosol optical depth at 550 nm (AOD550) and Angstrom exponent (AE) over India is produced. Data from 83 sunphotometer sites are used here as ground truth to calibrate, optimally combine, and validate monthly gridded data during the period from 2000 to 2012.
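The Angstrom exponent mentioned above follows from the standard power-law relation between AOD at two wavelengths; the wavelengths and AOD values below are illustrative.

```python
import numpy as np

def angstrom_exponent(aod1, lam1, aod2, lam2):
    """Angstrom exponent from aerosol optical depth at two wavelengths
    (lam1, lam2 in the same units), via AOD(lam) ~ lam**(-AE)."""
    return -np.log(aod1 / aod2) / np.log(lam1 / lam2)

# Synthetic check: AOD generated with AE = 1.3 should be recovered exactly.
ae = angstrom_exponent(0.20, 500.0, 0.20 * (870.0 / 500.0) ** -1.3, 870.0)
```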
Dynamic data integration and stochastic inversion of a confined aquifer
NASA Astrophysics Data System (ADS)
Wang, D.; Zhang, Y.; Irsa, J.; Huang, H.; Wang, L.
2013-12-01
Much work has been done in developing and applying inverse methods to aquifer modeling. The scope of this paper is to investigate the applicability of a new direct method for large inversion problems and to incorporate uncertainty measures in the inversion outcomes (Wang et al., 2013). The problem considered is a two-dimensional inverse model (50×50 grid) of steady-state flow for a heterogeneous ground truth model (500×500 grid) with two hydrofacies. From the ground truth model, a decreasing number of wells (12, 6, 3) was sampled for facies types, from which experimental indicator histograms and directional variograms were computed. These parameters and models were used by Sequential Indicator Simulation to generate 100 realizations of hydrofacies patterns on a 100×100 (geostatistical) grid, conditioned to the facies measurements at the wells. These realizations were smoothed with Simulated Annealing and coarsened to the 50×50 inverse grid before they were conditioned with the direct method to the dynamic data, i.e., observed heads and groundwater fluxes at the same sampled wells. A set of realizations of estimated hydraulic conductivities (Ks), flow fields, and boundary conditions was created, centered on the 'true' solutions from solving the ground truth model. Both hydrofacies conductivities were estimated to within ±10% (12 wells), ±20% (6 wells), and ±35% (3 wells) of the true values. Boundary conditions were estimated to within ±15% (12 wells), ±30% (6 wells), and ±50% (3 wells) of the true values. The inversion system of equations was solved with LSQR (Paige and Saunders, 1982), for which a coordinate transform and a matrix scaling preprocessor were used to improve the condition number (CN) of the coefficient matrix. However, when the inverse grid was refined to 100×100, Gaussian Noise Perturbation was used to limit the growth of the CN before the matrix solve.
To scale the inverse problem up (i.e., without the smoothing and coarsening, thereby reducing the associated estimation uncertainty), a parallel LSQR solver was written and verified. For the 50×50 grid, the parallel solver sped up the serial solution by 14× using 4 CPUs (research on parallel performance and scaling is ongoing). A sensitivity analysis was conducted to examine the relation between the observed data and the inversion outcomes, in which measurement errors of increasing magnitude (i.e., ±1, 2, 5, and 10% of the total head variation and up to ±2% of the total flux variation) were imposed on the observed data. Inversion results were stable, but the accuracy of the K and boundary estimates degraded with increasing errors, as expected. In particular, the quality of the observed heads is critical to hydraulic head recovery, while the quality of the observed fluxes plays a dominant role in K estimation. References: Wang, D., Y. Zhang, J. Irsa, H. Huang, and L. Wang (2013), Data integration and stochastic inversion of a confined aquifer with high performance computing, Advances in Water Resources, in preparation. Paige, C. C., and M. A. Saunders (1982), LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Transactions on Mathematical Software, 8(1), 43-71.
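A minimal sketch of the matrix-scaling preprocessor plus LSQR solve is given below, assuming SciPy's `lsqr`; a synthetic ill-scaled random system stands in for the actual inversion equations, which are not reproduced here.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)

# Synthetic stand-in for the inversion system: columns with widely
# varying magnitudes give a poorly conditioned coefficient matrix.
A = csr_matrix(rng.random((200, 100)) * np.logspace(0, 4, 100))
x_true = rng.random(100)
b = A @ x_true

# Matrix-scaling preprocessor: normalize columns to unit 2-norm to
# improve the condition number, solve with LSQR, then undo the scaling.
col_norms = np.asarray(np.sqrt(A.power(2).sum(axis=0))).ravel()
A_scaled = A.multiply(1.0 / col_norms).tocsr()
y = lsqr(A_scaled, b, atol=1e-12, btol=1e-12)[0]
x = y / col_norms
```

After scaling, the columns all have comparable magnitude, so LSQR converges quickly on the consistent system.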
Predicting mining activity with parallel genetic algorithms
Talaie, S.; Leigh, R.; Louis, S.J.; Raines, G.L.; Beyer, H.G.; O'Reilly, U.M.; Banzhaf, Arnold D.; Blum, W.; Bonabeau, C.; Cantu-Paz, E.W.
2005-01-01
We explore several different techniques in our quest to improve the overall model performance of a genetic-algorithm-calibrated probabilistic cellular automaton. We use the Kappa statistic to measure correlation between ground truth data and data predicted by the model. Within the genetic algorithm, we introduce a new evaluation function sensitive to spatial correctness, and we explore the idea of evolving different rule parameters for different subregions of the land. We reduce the time required to run a simulation from 6 hours to 10 minutes by parallelizing the code and employing a 10-node cluster. Our empirical results suggest that using the spatially sensitive evaluation function does indeed improve the performance of the model, and our preliminary results also show that evolving different rule parameters for different regions tends to improve overall model performance. Copyright 2005 ACM.
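The Kappa statistic used above to compare model output against ground truth can be sketched as follows (Cohen's kappa: observed agreement corrected for chance agreement).

```python
import numpy as np

def cohens_kappa(truth, pred):
    """Cohen's kappa between a ground-truth labeling and a predicted
    labeling: (po - pe) / (1 - pe), where po is observed agreement and
    pe is the agreement expected by chance from the marginal label rates."""
    truth, pred = np.asarray(truth), np.asarray(pred)
    po = np.mean(truth == pred)                           # observed agreement
    pe = sum(np.mean(truth == c) * np.mean(pred == c)     # chance agreement
             for c in np.union1d(truth, pred))
    return (po - pe) / (1.0 - pe)
```

Kappa is 1 for perfect agreement, 0 for chance-level agreement, and negative when agreement is worse than chance.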
Ground Truthing the 'Conventional Wisdom' of Lead Corrosion Control Using Mineralogical Analysis
For drinking water distribution systems (DWDS) with lead-bearing plumbing materials some form of corrosion control is typically necessary, with the goal of mitigating lead release by forming adherent, stable corrosion scales composed of low-solubility mineral phases. Conventional...
A Truthful Incentive Mechanism for Online Recruitment in Mobile Crowd Sensing System
Chen, Xiao; Liu, Min; Zhou, Yaqin; Li, Zhongcheng; Chen, Shuang; He, Xiangnan
2017-01-01
We investigate emerging mobile crowd sensing (MCS) systems, in which new cloud-based platforms sequentially allocate homogeneous sensing jobs to dynamically arriving users with uncertain service qualities. Given that human beings are selfish in nature, it is crucial yet challenging to design an efficient and truthful incentive mechanism to encourage users to participate. To address the challenge, we propose a novel truthful online auction mechanism that can efficiently learn to make irreversible online decisions on winner selection for new MCS systems without requiring prior knowledge of users. Moreover, we theoretically prove that our mechanism satisfies truthfulness, individual rationality and computational efficiency. Extensive simulation results under both real and synthetic traces demonstrate that our incentive mechanism can reduce the payment of the platform and increase both the utility of the platform and social welfare. PMID:28045441
A Unified Framework for Brain Segmentation in MR Images
Yazdani, S.; Yusof, R.; Karimian, A.; Riazi, A. H.; Bennamoun, M.
2015-01-01
Brain MRI segmentation is an important issue for discovering brain structure and diagnosing subtle anatomical changes in different brain diseases. However, due to several artifacts, brain tissue segmentation remains a challenging task. The aim of this paper is to improve the automatic segmentation of brain into gray matter, white matter, and cerebrospinal fluid in magnetic resonance images (MRI). We propose an automatic hybrid image segmentation method that integrates a modified statistical expectation-maximization (EM) method with spatial information and a support vector machine (SVM). Experiments on both synthetic and real MRI demonstrate that the combined method is more accurate than its individual techniques. The results of the proposed technique are evaluated against manual segmentations and other methods, using real T1-weighted scans from the Internet Brain Segmentation Repository (IBSR) and simulated images from BrainWeb. The Kappa index is calculated to assess the performance of the proposed framework relative to the ground truth and expert segmentations. The results demonstrate that the proposed combined method performs satisfactorily on both simulated MRI and real brain datasets. PMID:26089978
Daza, Iván G.; Bergasa, Luis M.; Bronte, Sebastián; Yebes, J. Javier; Almazán, Javier; Arroyo, Roberto
2014-01-01
This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from ADAS (Advanced Driver Assistance Systems) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators, proposed in the literature, is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of indicators, derived from trials on a third-generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and of the best combinations of them are included, as well as the future work derived from this study. PMID:24412904
NASA Astrophysics Data System (ADS)
Wismüller, Axel; DSouza, Adora M.; Abidin, Anas Z.; Wang, Xixi; Hobbs, Susan K.; Nagarajan, Mahesh B.
2015-03-01
Echo state networks (ESN) are recurrent neural networks in which the hidden layer is replaced with a fixed reservoir of neurons. Unlike feed-forward networks, training in an ESN is restricted to the output neurons alone, thereby providing a computational advantage. We demonstrate the use of such ESNs in our mutual connectivity analysis (MCA) framework for recovering the primary motor cortex network associated with hand movement from resting-state functional MRI (fMRI) data. The framework consists of two steps: (1) defining a pair-wise affinity matrix between different pixel time series within the brain to characterize network activity, and (2) recovering network components from the affinity matrix with non-metric clustering. Here, ESNs are used to evaluate pair-wise cross-estimation performance between pixel time series to create the affinity matrix, which is subsequently subjected to non-metric clustering with the Louvain method. For comparison, the ground truth of the motor cortex network structure is established with a task-based fMRI sequence. Overlap between the primary motor cortex network recovered with our model-free MCA approach and the ground truth was measured with the Dice coefficient. Our results show that network recovery with the proposed MCA approach is in close agreement with the ground truth. Such network recovery is achieved without requiring low-pass filtering of the time series ensembles prior to analysis, an fMRI preprocessing step that has courted controversy in recent years. Thus, we conclude that our MCA framework enables recovery and visualization of the underlying functionally connected networks of the brain in resting-state fMRI.
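The Dice coefficient used above to quantify overlap between the recovered network and the ground truth can be computed as follows for two binary masks.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks (True = pixel belongs to
    the network component): 2|A intersect B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

Dice ranges from 0 (no overlap) to 1 (identical masks).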
The Parallel Implementation of Algorithms for Finding the Reflection Symmetry of the Binary Images
NASA Astrophysics Data System (ADS)
Fedotova, S.; Seredin, O.; Kushnir, O.
2017-05-01
In this paper, we investigate an exact method of finding the symmetry axis of a binary image, based on a brute-force search among all potential symmetry axes. As a measure of symmetry, we use the set-theoretic Jaccard similarity applied to the two subsets of image pixels separated by a candidate axis. The brute-force search reliably finds the axis of approximate symmetry, which can be considered ground truth, but it requires considerable time per image. As the first step of our contribution, we develop a parallel version of the brute-force algorithm. It allows us to process large image databases and obtain the desired axis of approximate symmetry for each shape in a database. Experimental studies on the "Butterflies" and "Flavia" datasets have shown that the proposed algorithm takes several minutes per image to find a symmetry axis. However, real-world applications demand computational efficiency that allows solving the symmetry-axis search task in real or quasi-real time. So, for fast shape symmetry calculation on a common multicore PC, we elaborated another parallel program, based on the procedure suggested in (Fedotova, 2016). That method takes as an initial axis the axis obtained by a superfast comparison of two skeleton primitive sub-chains. This process takes about 0.5 s on a common PC, considerably faster than any of the optimized brute-force methods, including those implemented on a supercomputer. In our experiments, for 70 percent of cases the found axis coincides exactly with the ground-truth one, and for the rest it is very close to the ground truth.
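A toy version of the Jaccard symmetry measure is sketched below for vertical axes only; the brute-force method searches over arbitrary axis orientations, so this restricted form is an illustrative simplification.

```python
import numpy as np

def jaccard_symmetry(img, col):
    """Jaccard similarity between the foreground pixel set and its
    reflection about the vertical axis through column `col`."""
    pixels = set(map(tuple, np.argwhere(img)))
    mirrored = {(r, 2 * col - c) for r, c in pixels}
    return len(pixels & mirrored) / len(pixels | mirrored)

# Brute-force over candidate columns for a square symmetric about column 2.
img = np.zeros((5, 5), dtype=bool)
img[1:4, 1:4] = True
best = max(range(5), key=lambda c: jaccard_symmetry(img, c))
```

The candidate maximizing the Jaccard score is taken as the symmetry axis; here the true axis (column 2) scores 1.0.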
NASA Astrophysics Data System (ADS)
Jones, K. R.; Arrowsmith, S.
2013-12-01
The Southwest U.S. Seismo-Acoustic Network (SUSSAN) is a collaborative project designed to produce infrasound event detection bulletins for the infrasound community for research purposes. We are aggregating a large, unique, near real-time data set with available ground truth information from seismo-acoustic arrays across New Mexico, Utah, Nevada, California, Texas and Hawaii. The data are processed in near real-time (~ every 20 minutes) with detections being made on individual arrays and locations determined for networks of arrays. The detection and location data are then combined with any available ground truth information and compiled into a bulletin that will be released to the general public directly and eventually through the IRIS infrasound event bulletin. We use the open source Earthworm seismic data aggregation software to acquire waveform data either directly from the station operator or via the Incorporated Research Institutions for Seismology Data Management Center (IRIS DMC), if available. The data are processed using InfraMonitor, a powerful infrasound event detection and localization software program developed by Stephen Arrowsmith at Los Alamos National Laboratory (LANL). Our goal with this program is to provide the infrasound community with an event database that can be used collaboratively to study various natural and man-made sources. We encourage participation in this program directly or by making infrasound array data available through the IRIS DMC or other means. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. R&A 5317326
Breed, Greg A.; Severns, Paul M.
2015-01-01
Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches. PMID:26312190
Rueckauer, Bodo; Delbruck, Tobi
2016-01-01
In this study we compare nine optical flow algorithms that locally measure the flow normal to edges, in terms of accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240 × 180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera. PMID:27199639
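The gyro-based ground truth relies on the fact that pure camera rotation induces a depth-independent flow field; one common form of that relation is sketched below, noting that sign conventions vary between references, so the signs here are an assumption.

```python
def rotational_flow(x, y, wx, wy, wz):
    """Image-plane velocity at normalized image coordinates (x, y)
    induced by pure camera rotation (wx, wy, wz) in rad/s, under the
    standard pinhole model with unit focal length."""
    u = x * y * wx - (1.0 + x * x) * wy + y * wz
    v = (1.0 + y * y) * wx - x * y * wy - x * wz
    return u, v
```

For example, pure yaw produces uniform horizontal flow at the image center, while pure roll produces zero flow there.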
Karim, Rashed; Bhagirath, Pranav; Claus, Piet; James Housden, R; Chen, Zhong; Karimaghaloo, Zahra; Sohn, Hyon-Mok; Lara Rodríguez, Laura; Vera, Sergio; Albà, Xènia; Hennemuth, Anja; Peitgen, Heinz-Otto; Arbel, Tal; Gonzàlez Ballester, Miguel A; Frangi, Alejandro F; Götte, Marco; Razavi, Reza; Schaeffter, Tobias; Rhode, Kawal
2016-05-01
Studies have demonstrated the feasibility of late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae to myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates a reproducible and reliable segmentation of the infarcted regions. It is challenging to compare new algorithms for infarct segmentation in the left ventricle (LV) with existing algorithms. Benchmarking datasets with evaluation strategies are much needed to facilitate comparison. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs that were acquired from two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation. Six widely-used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method. Some of the pitfalls of fixed thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
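The n-SD and FWHM fixed-thresholding baselines evaluated above can be sketched as follows; the intensity arrays are illustrative, not data from the benchmark.

```python
import numpy as np

def nsd_threshold(intensities, remote, n=3):
    """n-SD fixed thresholding: infarct = pixels brighter than the mean
    of remote (healthy) myocardium plus n standard deviations."""
    remote = np.asarray(remote, float)
    return np.asarray(intensities, float) > remote.mean() + n * remote.std()

def fwhm_threshold(intensities):
    """Full-Width-at-Half-Maximum thresholding: infarct = pixels above
    half the maximum enhanced intensity."""
    intensities = np.asarray(intensities, float)
    return intensities > 0.5 * intensities.max()
```

Both return a binary infarct mask that can then be scored against the consensus ground truth with an overlap measure.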
NASA Astrophysics Data System (ADS)
Berglund, J.; Mattila, J.; Rönnberg, O.; Heikkilä, J.; Bonsdorff, E.
2003-04-01
Submerged rooted macrophytes and drift algae were studied in shallow (0-1 m) brackish soft-bottom bays in the Åland Islands, N Baltic Sea, in 1997-2000. The study was performed by aerial photography and ground-truth sampling, and the compatibility of the methods was evaluated. The study provided quantitative results on seasonal and inter-annual variation in growth, distribution and biomass of submerged macrophytes and drift algae. On average, 18 submerged macrophyte species occurred in the studied bays. The most common species, by weight and occurrence, were Chara aspera, Cladophora glomerata, Pilayella littoralis and Potamogeton pectinatus. Filamentous green algae constituted 45-70% of the biomass, charophytes 25-40% and vascular plants 3-18%. A seasonal pattern with a peak in biomass in July-August was found and the mean biomass was negatively correlated with exposure. There were statistically significant differences in coverage among years, and among levels of exposure. The coverage was highest when exposure was low. Both sheltered and exposed bays were influenced by drift algae (30 and 60% occurrence in July-August) and there was a positive correlation between exposure and occurrence of algal accumulations. At exposed sites, most of the algae had drifted in from other areas, while at sheltered ones they were mainly of local origin. Data obtained by aerial photography and ground-truth sampling showed a high concordance, but aerial photography gave a 9% higher estimate than the ground-truth samples. The results can be applied in planning of monitoring and management strategies for shallow soft-bottom areas under potential threat of drift algae.
A new device for acquiring ground truth on the absorption of light by turbid waters
NASA Technical Reports Server (NTRS)
Klemas, V. (Principal Investigator); Srna, R.; Treasure, W.
1974-01-01
The author has identified the following significant results. A new device, called a Spectral Attenuation Board, has been designed and tested, which enables ERTS-1 sea truth collection teams to monitor the attenuation depths of three colors continuously as the board is being towed behind a boat. The device consists of a 1.2 x 1.2 meter flat board held below the surface of the water at a fixed angle to it. A camera mounted above the water takes photographs of the board. The resulting film image is analyzed by a micro-densitometer trace along the descending portion of the board. This yields information on the rate of attenuation of light penetrating the water column and the Secchi depth. Red and green stripes were painted on the white board to approximate band 4 and band 5 of the ERTS MSS, so that the rate of light absorption by the water column in these regions of the visible spectrum could be concurrently measured. It was found that information from a red, green, and white stripe may serve to fingerprint the composition of the water mass. A number of these devices, when automated, could also be distributed over a large region to provide a cheap method of obtaining valuable satellite ground truth data at preset time intervals.
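Recovering an attenuation rate from the densitometer trace along the descending board amounts to fitting Beer-Lambert exponential decay; the depths and trace values below are synthetic stand-ins, not measurements from the device.

```python
import numpy as np

# Assume Beer-Lambert decay I(z) = I0 * exp(-k * z) along the board.
depths = np.array([0.1, 0.3, 0.5, 0.7, 0.9])      # metres below surface
trace = 100.0 * np.exp(-1.8 * depths)             # synthetic densitometer trace

# Fit a line to log-brightness vs depth; the negated slope is the
# diffuse attenuation coefficient k (1/m).
slope, log_i0 = np.polyfit(depths, np.log(trace), 1)
k = -slope
```

Repeating the fit for the red, green, and white stripes gives per-band attenuation rates for characterizing the water mass.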
NASA Astrophysics Data System (ADS)
Tang, G.; Li, C.; Hong, Y.; Long, D.
2017-12-01
Proliferation of satellite and reanalysis precipitation products underscores the need to evaluate their reliability, particularly over ungauged or poorly gauged regions. However, it is challenging to perform such evaluations over regions lacking ground truth data. Here, using the triple collocation (TC) method, which can evaluate relative uncertainties in different products without ground truth, we evaluate five satellite-based precipitation products and comparatively assess uncertainties in three types of independent precipitation products (satellite-based, ground-observed, and model reanalysis) over Mainland China: a ground-based precipitation dataset (the gauge-based daily precipitation analysis, CGDPA), the ERA-Interim reanalysis product from the European Centre for Medium-Range Weather Forecasts (ECMWF), and five satellite-based products (3B42V7 and 3B42RT of TMPA, IMERG, CMORPH-CRT, and PERSIANN-CDR), on a regular 0.25° grid at the daily timescale from 2013 to 2015. First, the effectiveness of the TC method is evaluated by comparison with traditional methods based on ground observations in a densely gauged region. Results show that the TC method is reliable: the correlation coefficient (CC) and root mean square error (RMSE) are close to those from the traditional method, with maximum differences of only 0.08 (CC) and 0.71 mm/day (RMSE). Then, the TC method is applied to Mainland China and the Tibetan Plateau (TP).
Results indicate that: (1) the overall performance of IMERG is better than that of the other satellite products over Mainland China; (2) over grid cells without rain gauges in the TP, IMERG and ERA-Interim perform better than CGDPA, indicating the potential of remote sensing and reanalysis data over these regions and the inherent uncertainty of CGDPA due to interpolation from sparsely gauged data; (3) both TMPA-3B42 and CMORPH-CRT have some unexpected CC values over certain grid cells containing water bodies, reaffirming the overestimation of precipitation over inland water bodies. Overall, the TC method provides not only reliable cross-validation of precipitation estimates over Mainland China but also a new perspective from which to comprehensively assess multi-source precipitation products, particularly over poorly gauged regions.
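The classical covariance form of triple collocation can be sketched as follows, assuming three collocated products with mutually independent errors and common scaling against the truth; the synthetic series below demonstrate the idea, not the paper's datasets.

```python
import numpy as np

rng = np.random.default_rng(3)

def tc_error_variances(x, y, z):
    """Classical triple collocation: error variances of three collocated
    estimates of the same truth, from pairwise sample covariances,
    assuming mutually independent errors."""
    C = np.cov(np.vstack([x, y, z]))
    ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex, ey, ez

# Synthetic check: one truth series plus independent noise of known variance.
t = rng.normal(0.0, 2.0, 200_000)
x = t + rng.normal(0.0, 0.5, t.size)    # true error variance 0.25
y = t + rng.normal(0.0, 1.0, t.size)    # true error variance 1.00
z = t + rng.normal(0.0, 1.5, t.size)    # true error variance 2.25
ex, ey, ez = tc_error_variances(x, y, z)
```

No ground-truth series enters the estimator; only the three products themselves are used, which is what makes TC attractive over poorly gauged regions.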
NASA Technical Reports Server (NTRS)
Atlas, D. (Editor); Thiele, O. W. (Editor)
1981-01-01
Global climate, agricultural uses for precipitation information, hydrological uses for precipitation, severe thunderstorms and local weather, and global weather are addressed. Ground truth measurement, visible and infrared techniques, microwave radiometry, hybrid precipitation measurements, and spaceborne radar are discussed.
A Ranking-Theoretic Approach to Conditionals
ERIC Educational Resources Information Center
Spohn, Wolfgang
2013-01-01
Conditionals somehow express conditional beliefs. However, conditional belief is a bi-propositional attitude that is generally not truth-evaluable, in contrast to unconditional belief. Therefore, this article opts for an expressivistic semantics for conditionals, grounds this semantics in the arguably most adequate account of conditional belief,…
Ground Truth in Building Human Security
2012-11-01
Ground Truthing the ‘Conventional Wisdom’ of Lead Corrosion Control Using Mineralogical Analysis
For drinking water distribution systems (DWDS) with lead-bearing plumbing materials some form of corrosion control is typically necessary, with the goal of mitigating lead release by forming adherent, stable corrosion scales composed of low-solubility mineral phases. Conventional...
Helping medical students to acquire a deeper understanding of truth-telling
Hurst, Samia A.; Baroffio, Anne; Ummel, Marinette; Burn, Carine Layat
2015-01-01
Problem: Truth-telling is an important component of respect for patients' self-determination, but in the context of breaking bad news it is also a distressing and difficult task. Intervention: We investigated the long-term influence of a simulated-patient-based teaching intervention, integrating learning objectives in communication skills and ethics, on students' attitudes and concerns regarding truth-telling. We followed two cohorts of medical students from the preclinical third year to their clinical rotations (fifth year). Open-ended responses were analysed to explore medical students' reported difficulties in breaking bad news. Context: This intervention was implemented during the last preclinical year of a problem-based medical curriculum, as a collaboration between the doctor–patient communication and ethics programs. Outcome: Over time, concerns such as empathy and truthfulness shifted from a personal to a relational focus. Whereas 'truthfulness' was a concern about the content of the message, 'truth-telling' included concerns about how information was communicated and how realistically it was received. Truth-telling required empathy, adaptation to the patient, and appropriate management of emotions, both for the patient's welfare and for a realistic understanding of the situation. Lessons learned: Our study confirms that an intervention confronting students with a realistic situation succeeds in making them more aware of the real issues of truth-telling. Medical students deepened their reflection over time, acquiring a deeper understanding of the relational dimension of values such as truth-telling, and honing their view of empathy. PMID:26563958
Comparisons of Ground Truth and Remote Spectral Measurements of the FORMOSAT and ANDE Spacecrafts
NASA Technical Reports Server (NTRS)
JorgensenAbercromby, Kira; Hamada, Kris; Okada, Jennifer; Guyote, Michael; Barker, Edwin
2006-01-01
The material type of objects in space is determined by comparing laboratory spectral reflectance measurements of common spacecraft materials with remotely acquired spectra. This past year, two different ground-truth studies commenced. The first, FORMOSAT III, is a Taiwanese set of six satellites to be launched in March 2006. The second is ANDE (Atmospheric Neutral Density Experiment), a Naval Research Laboratory set of two satellites set to launch from the Space Shuttle in November 2006. Laboratory spectra were obtained of the spacecraft, and a model of the anticipated spectral response was created for each set of satellites. The model takes into account the phase angle and orientation of the spacecraft relative to the observer. Once launched, the spacecraft are observed once a month to determine the space-aging effects on materials as deduced from the remote spectra. Preliminary results comparing laboratory and remote data will be shown for FORMOSAT III, while only laboratory results will be shown for the ANDE spacecraft.
Object Segmentation and Ground Truth in 3D Embryonic Imaging.
Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C
2016-01-01
Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.
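Pixel-wise comparison of a segmentation against ground-truth labels, as in step two of this procedure, can be sketched with a simple object-matching metric. This is a hedged, illustrative sketch, not the authors' algorithm: the `match_objects` helper and the toy 1-D arrays are hypothetical, and matching is done by intersection-over-union (IoU) against the most-overlapping segmented label.

```python
import numpy as np

def match_objects(seg, gt, iou_thresh=0.5):
    """Count ground-truth objects recovered by a segmentation, matching
    each to its most-overlapping segmented label by IoU."""
    matched = 0
    for g in np.unique(gt):
        if g == 0:                      # 0 = background
            continue
        gmask = gt == g
        labels, counts = np.unique(seg[gmask], return_counts=True)
        s = labels[np.argmax(counts)]   # dominant overlapping label
        if s == 0:                      # object fell on background: a miss
            continue
        smask = seg == s
        iou = np.logical_and(gmask, smask).sum() / np.logical_or(gmask, smask).sum()
        if iou >= iou_thresh:
            matched += 1
    n_gt = len(np.unique(gt)) - (1 if 0 in gt else 0)
    return matched, n_gt

# Toy 1-D "image": two ground-truth nuclei; the segmentation misses one.
gt  = np.array([0, 1, 1, 1, 0, 2, 2, 2, 2, 0])
seg = np.array([0, 1, 1, 1, 0, 0, 0, 0, 0, 0])
print(match_objects(seg, gt))  # (1, 2): one of two objects recovered
```

The same counting applied to the paper's dual-labeled chimeras would give segmentation error rates free of manual-annotation bias.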
The ground-truth problem for satellite estimates of rain rate
NASA Technical Reports Server (NTRS)
North, Gerald R.; Valdes, Juan B.; Ha, Eunho; Shen, Samuel S. P.
1994-01-01
In this paper a scheme is proposed for comparing contemporaneous rain-rate measurements from a point raingage with single-field-of-view (FOV) estimates from a satellite remote sensor such as a microwave radiometer. Even in the ideal case the measurements differ, because one is taken at a point while the other is an area average over the field of view. Moreover, the point gage will be located randomly inside the field of view on different overpasses. A space-time spectral formalism is combined with a simple stochastic rain field to find the mean-square deviations between the two systems. It is found that by combining about 60 visits of the satellite to the ground-truth site, the expected error can be reduced to about 10% of the standard deviation of the fluctuations of the systems alone. This seems to be a useful level of tolerance for isolating and evaluating typical biases that might be contaminating retrieval algorithms.
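The roughly 10% figure at about 60 visits is consistent with the usual square-root-of-N reduction obtained by averaging independent visit-level mismatches; the lines below are only that back-of-envelope check, not the paper's space-time spectral calculation.

```python
import math

# Relative sampling error of the mean point-area mismatch after n visits,
# in units of the single-visit fluctuation standard deviation (assumes
# independent, identically distributed visit-level differences).
for n in (1, 10, 60, 100):
    print(f"{n:3d} visits -> {1 / math.sqrt(n):.3f}")
```

At 60 visits, 1/√60 ≈ 0.13, the same order as the quoted ~10%; correlation between visits would slow this convergence.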
Presentation video retrieval using automatically recovered slide and spoken text
NASA Astrophysics Data System (ADS)
Cooper, Matthew
2013-03-01
Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
Identification of marsh vegetation and coastal land use in ERTS-1 imagery
NASA Technical Reports Server (NTRS)
Klemas, V.; Daiber, F. C.; Bartlett, D. S.
1973-01-01
Coastal vegetation species appearing in the ERTS-1 images of Delaware Bay taken on August 16 and October 10, 1972 have been correlated with ground-truth vegetation maps and imagery obtained from high-altitude RB-57 and U-2 overflights. The vegetation maps of the entire Delaware coast were prepared during the summer of 1972 and checked against ground truth data collected on foot, in small boats, and from low-altitude aircraft. Multispectral analysis of high-altitude RB-57 and U-2 photographs indicated that five vegetation communities could be clearly discriminated from 60,000 feet altitude: (1) salt marsh cord grass, (2) salt marsh hay and spike grass, (3) reed grass, (4) high tide bush and sea myrtle, and (5) a group of fresh water species found in impoundments built to attract water fowl. All of these species are shown in fifteen overlay maps covering all of Delaware's wetlands, prepared to match the USGS 1:24,000 topographic map size.
Toward building a comprehensive data mart
NASA Astrophysics Data System (ADS)
Boulware, Douglas; Salerno, John; Bleich, Richard; Hinman, Michael L.
2004-04-01
To uncover new relationships or patterns, one must first build a corpus of data, or what some call a data mart. How can we make sure we have collected all the pertinent data and have maximized coverage? There are hundreds of search engines available for use on the Internet today. Which one is best? Is one better for one problem and a second better for another? Are meta-search engines better than individual search engines? In this paper we look at one possible approach to developing a methodology for comparing a number of search engines. Before we present this methodology, we first motivate the need for increased coverage. We next investigate how we can obtain ground truth and what insight it can provide into the Internet and search engine capabilities. We then conclude by developing a methodology in which we compare a number of search engines and show how we can increase overall coverage and thus build a more comprehensive data mart.
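Measuring coverage against a ground-truth set of relevant documents, and the gain from merging engines, reduces to simple set arithmetic. A minimal sketch with made-up document identifiers (the engines and their result sets are hypothetical):

```python
# Hypothetical ground truth: 10 documents known to be relevant.
ground_truth = {f"doc{i}" for i in range(1, 11)}
engine_a = {"doc1", "doc2", "doc3", "doc4", "docX"}         # docX: off-topic hit
engine_b = {"doc3", "doc4", "doc5", "doc6", "doc7", "docY"}

def coverage(results, truth):
    """Fraction of ground-truth documents present in a result set."""
    return len(results & truth) / len(truth)

print(coverage(engine_a, ground_truth))             # 0.4
print(coverage(engine_b, ground_truth))             # 0.5
print(coverage(engine_a | engine_b, ground_truth))  # 0.7
```

Merging raises coverage only by the engines' non-shared relevant hits, which is why measuring overlap between engines matters when assembling a comprehensive data mart.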
The use of remote sensing in solving Florida's geological and coastal engineering problems
NASA Technical Reports Server (NTRS)
Brooks, H. K.; Ruth, B. E.; Wang, Y. H.; Ferguson, R. L.
1977-01-01
LANDSAT imagery and NASA high altitude color infrared (CIR) photography were used to select suitable sites for sanitary landfill in Volusia County, Florida and to develop techniques for preventing sand deposits in the Clearwater inlet. Activities described include the acquisition of imagery, its analysis by the IMAGE 100 system, conventional photointerpretation, evaluation of existing data sources (vegetation, soil, and ground water maps), site investigations for ground truth, and preparation of displays for reports.
Localizing Ground Penetrating RADAR: A Step Towards Robust Autonomous Ground Vehicle Localization
2015-05-27
truth reference unit is coupled with a local base station that allows local 2 cm accuracy location measurements. The RT3003 uses a MEMS-based IMU and...of different electromagnetic properties; for example, the interface between soil and pipes, roots, or rocks. However, it is not these discrete...depth is determined by soil losses caused by Joule heating and dipole losses. High-conductivity soils, such as those with high moisture and salinity
Image simulation for automatic license plate recognition
NASA Astrophysics Data System (ADS)
Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José
2012-01-01
Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.
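The second step of the proposed framework, modeling capture transformations and distortions, can be illustrated with a toy degradation model. This is a hedged sketch, not the paper's simulation: a synthetic binary "plate" is blurred with a separable Gaussian kernel and corrupted with additive sensor noise, with all parameter values invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_kernel(sigma, radius=3):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def degrade(img, blur_sigma=1.0, noise_sigma=0.05):
    """Toy capture model: separable Gaussian blur plus additive noise."""
    k = gaussian_kernel(blur_sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return np.clip(out + rng.normal(0, noise_sigma, out.shape), 0.0, 1.0)

# Synthetic "plate": white background with dark vertical strokes.
plate = np.ones((20, 60))
plate[6:14, 10:50:8] = 0.0
degraded = degrade(plate)   # same shape, values still in [0, 1]
```

In the paper's framework the distortion parameters would instead be estimated from measurements of real plate images before generating OCR training data.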
Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models
Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...
AN ASSESSMENT OF GROUND TRUTH VARIABILITY USING A "VIRTUAL FIELD REFERENCE DATABASE"
A "Virtual Field Reference Database (VFRDB)" was developed from field measurement data that included location and time, physical attributes, flora inventory, and digital imagery (camera) documentation for 1,011 sites in the Neuse River basin, North Carolina. The sampling f...
Validation of the Soil Moisture Active Passive mission using USDA-ARS experimental watersheds
USDA-ARS?s Scientific Manuscript database
The calibration and validation program of the Soil Moisture Active Passive mission (SMAP) relies upon an international cooperative of in situ networks to provide ground truth references across a variety of landscapes. The USDA Agricultural Research Service operates several experimental watersheds wh...
NASA Astrophysics Data System (ADS)
Bowen, S. R.; Nyflot, M. J.; Herrmann, C.; Groh, C. M.; Meyer, J.; Wollenweber, S. D.; Stearns, C. W.; Kinahan, P. E.; Sandison, G. A.
2015-05-01
Effective positron emission tomography / computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [18F]FDG. The lung lesion insert was driven by six different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses, and 2%-2 mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10-20%, treatment planning errors were 5-10%, and treatment delivery errors were 5-30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5-10% in PET/CT imaging, <5% in treatment planning, and <2% in treatment delivery. 
We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning, and RT delivery under a dose painting paradigm is feasible within an integrated respiratory motion phantom workflow. For a limited set of cases, the magnitude of errors was comparable during PET/CT imaging and treatment delivery without motion compensation. Errors were moderately mitigated during PET/CT imaging and significantly mitigated during RT delivery with motion compensation. This dynamic motion phantom end-to-end workflow provides a method for quality assurance of 4D PET/CT-guided radiotherapy, including evaluation of respiratory motion compensation methods during imaging and treatment delivery.
Tactical Decision Making under Categorical Uncertainty with Applications to Modeling and Simulation
2008-12-01
Method, Rene Descartes (1637) addressed the importance of discovery and truth through science. To accomplish this, he asked man to "reject all...previous knowledge, opinion, and customs" (Descartes, 1637, p. 21). He writes: The first was never to accept anything as true which I did not clearly know...and distinctly as to exclude all possibility of doubt. Descartes was arguing two points. First, knowledge, and therefore truth, cannot, and must
NASA Technical Reports Server (NTRS)
Henderson, R. G.; Thomas, G. S.; Nalepka, R. F.
1975-01-01
Methods of performing signature extension using LANDSAT-1 data are explored, with emphasis on improving the performance and cost-effectiveness of large-area wheat surveys. Two methods were developed: ASC and MASC. Two other methods, Ratio and RADIFF, previously used with aircraft data, were adapted to and tested on LANDSAT-1 data. An investigation into the sources and nature of between-scene data variations was included. Initial investigations into the selection of training fields without in situ ground truth were undertaken.
Global Ground Truth Data Set with Waveform and Improved Arrival Data
2006-09-29
local network. Figure 4. (a) RCA geometry for the Kilauea Volcano south flank, Hawaii ... Seismic Research Review: Ground-Based Nuclear Explosion Monitoring Technologies. Our next example (Figure 4) is from the south flank of Kilauea Volcano ... status all 56 events, including the two offshore events near the underwater volcano, Loihi, off the coast of Hawaii and more than 20 km outside the
Equator and High-Latitude Ionosphere-to-Magnetosphere Research
2010-12-04
characterizing the plasma velocity profile in the heated region above HAARP has been clearly established. Specification of D-region absorption from Digisonde...Electron density profile, Ground truth, Cal/Val, Doppler skymap, HAARP, Plasma velocity profile, Ionogram autoscaling, D-region absorption
Development of Mine Explosion Ground Truth Smart Sensors
2011-09-01
interest. The two candidates are the GS11-D by Oyo Geospace, which is used extensively in seismic monitoring of geothermal fields, and the Sensor Nederland SM... Figure 4. Our preferred sensors and processor for the GTMS. (a) Sensor Nederland SM-6 geophone with emplacement spike. (b
Initial validation of the Soil Moisture Active Passive mission using USDA-ARS watersheds
USDA-ARS?s Scientific Manuscript database
The Soil Moisture Active Passive (SMAP) Mission was launched in January 2015 to measure global surface soil moisture. The calibration and validation program of SMAP relies upon an international cooperative of in situ networks to provide ground truth references across a variety of landscapes. The U...
Toward a Methodology of Death: Deleuze's "Event" as Method for Critical Ethnography
ERIC Educational Resources Information Center
Rodriguez, Sophia
2016-01-01
This article examines how qualitative researchers, specifically ethnographers, might utilize complex philosophical concepts in order to disrupt the normative truth-telling practices embedded in social science research. Drawing on my own research experiences, I move toward a methodology of death (for researcher/researched alike) grounded in…
Global Ground Truth Data Set with Waveform and Arrival Data
2007-07-30
Redonda, Leeward Islands 15/13 Rowe, C.A., C.H. Thurber and R.A. White, Dome growth behavior at Soufriere Hills Volcano, Montserrat, revealed by relocation of volcanic event swarms, 1995-1996, J. Volc
An integrated approach to mapping forest conditions in the Southern Appalachians (North Carolina)
Weimin Xi; Lei Wang; Andrew G Birt; Maria D. Tchakerian; Robert N. Coulson; Kier D. Klepzig
2009-01-01
Accurate and continuous forest cover information is essential for forest management and restoration (SAMAB 1996, Xi et al. 2007). Ground-truthed, spatially explicit forest data, however, are often limited to federally managed land or large-scale commercial forestry operations where forest inventories are regularly collected. Moreover,...
The Role of Science in Behavioral Disorders.
ERIC Educational Resources Information Center
Kauffman, James M.
1999-01-01
A scientific, rule-governed approach to solving problems suggests the following assumptions: we need different rules for different purposes; rules are grounded in values; the origins and applications of rules are often misunderstood; personal experience and idea popularity are unreliable; and all truths are tentative. Each assumption is related to…
Spatially Explicit West Nile Virus Risk Modeling in Santa Clara County, CA
USDA-ARS?s Scientific Manuscript database
A geographic information systems model designed to identify regions of West Nile virus (WNV) transmission risk was tested and calibrated with data collected in Santa Clara County, California. American Crows that died from WNV infection in 2005 provided spatial and temporal ground truth. When the mo...
Spatially explicit West Nile virus risk modeling in Santa Clara County, California
USDA-ARS?s Scientific Manuscript database
A previously created Geographic Information Systems model designed to identify regions of West Nile virus (WNV) transmission risk is tested and calibrated in Santa Clara County, California. American Crows that died from WNV infection in 2005 provide the spatial and temporal ground truth. Model param...
Near-infrared color aerial photography (~1:7200) of Yaquina Bay, Oregon, flown at minus tides during the summer months of 1997, was used to produce digital stereo ortho-photographs covering tidally exposed eelgrass habitat. GIS analysis coupled with GPS positioning of ground-truth da...
76 FR 36934 - Endangered Species; Marine Mammals; Receipt of Applications for Permit
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-23
... quantitative information or studies; and (2) Those that include citations to, and analyses of, the applicable... bears (Ursus maritimus) by adjusting the video camera equipment and conducting aerial surveys using FLIR (forward looking infrared) and ground-truth surveys with snowmobiles near dens for the purpose of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Icerman, L.; Starkey, A.; Trentman, N.
1981-08-01
Magnetic, gravity, seismic-refraction, and seismic-reflection profiles across the Las Alturas Geothermal Anomaly, New Mexico, are presented. Studies in the Socorro area include the following: seismic measurements of the Tertiary fill in the Rio Grande Depression west of Socorro, geothermal data availability for computer simulation in the Socorro Peak KGRA, and ground water circulation in the Socorro Geothermal Area. Regional geothermal exploration in the Truth or Consequences area includes: geological mapping of the Mud Springs Mountains, hydrogeology of the thermal aquifer, and electrical-resistivity investigation of the geothermal potential. Other studies included are: geothermal exploration with electrical methods near Vado, Chamberino, and Mesquite; a heat-flow study of Dona Ana County; preliminary heat-flow assessment of southeast Luna County; active fault analysis and radiometric dating of young basalts in southern New Mexico; and evaluation of the geothermal potential of the San Juan Basin in northwestern New Mexico.
Polarized skylight navigation.
Hamaoui, Moshe
2017-01-20
Vehicle state estimation is an essential prerequisite for navigation. The present approach seeks to use skylight polarization to facilitate state estimation under autonomous unconstrained flight conditions. Atmospheric scattering polarizes incident sunlight such that solar position is mathematically encoded in the resulting skylight polarization pattern. Indeed, several species of insects are able to sense skylight polarization and are believed to navigate polarimetrically. Sun-finding methodologies for polarized skylight navigation (PSN) have been proposed in the literature but typically rely on calibration updates to account for changing atmospheric conditions and/or are limited to 2D operation. To address this technology gap, a gradient-based PSN solution is developed based upon the Rayleigh sky model. The solution is validated in simulation, and effects of measurement error and changing atmospheric conditions are investigated. Finally, an experimental effort is described wherein polarimetric imagery is collected, ground-truth is established through independent imager-attitude measurement, the gradient-based PSN solution is applied, and results are analyzed.
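The Rayleigh sky model invoked here ties the skylight polarization pattern to solar position through single-scattering geometry. A hedged sketch (illustrative helper functions, not the paper's gradient-based solver): the degree of polarization depends only on the scattering angle between the sun and view directions, peaking 90° from the sun.

```python
import numpy as np

def scattering_angle(sun_az, sun_el, view_az, view_el):
    """Great-circle angle between sun and view directions (radians)."""
    return np.arccos(
        np.sin(sun_el) * np.sin(view_el)
        + np.cos(sun_el) * np.cos(view_el) * np.cos(sun_az - view_az)
    )

def rayleigh_dop(gamma, p_max=1.0):
    """Single-scattering Rayleigh degree of polarization at angle gamma."""
    return p_max * np.sin(gamma) ** 2 / (1.0 + np.cos(gamma) ** 2)

# Sun at azimuth 0°, elevation 30°; viewing azimuth 90°, elevation 30°.
g = scattering_angle(0.0, np.deg2rad(30), np.deg2rad(90), np.deg2rad(30))
print(np.degrees(g))    # ≈ 75.5
print(rayleigh_dop(g))  # ≈ 0.88
```

In a real atmosphere, multiple scattering reduces the effective `p_max` below 1, which is one reason calibration-free operation under changing atmospheric conditions is the hard part of the problem.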
Modeling of spectral signatures of littoral waters
NASA Astrophysics Data System (ADS)
Haltrin, Vladimir I.
1997-12-01
The spectral values of the remotely obtained radiance reflectance coefficient (RRC) are compared with the values of RRC computed from inherent optical properties measured during a shipborne experiment near the West Florida coast. The model calculations are based on an algorithm developed at the Naval Research Laboratory at Stennis Space Center and presented here. The algorithm is based on radiative transfer theory and uses regression relationships derived from experimental data. An overall comparison of derived and measured RRCs shows that this algorithm is suitable for processing ground truth data for the purposes of remote data calibration. The second part of this work consists of an evaluation of the predictive visibility model (PVM). The simulated three-dimensional values of optical properties are compared with the measured ones. Preliminary results of the comparison are encouraging and show that the PVM can qualitatively predict the evolution of inherent optical properties in littoral waters.
Dynamic Bayesian network modeling for longitudinal brain morphometry
Chen, Rong; Resnick, Susan M; Davatzikos, Christos; Herskovits, Edward H
2011-01-01
Identifying interactions among brain regions from structural magnetic-resonance images presents one of the major challenges in computational neuroanatomy. We propose a Bayesian data-mining approach to the detection of longitudinal morphological changes in the human brain. Our method uses a dynamic Bayesian network to represent evolving inter-regional dependencies. The major advantage of dynamic Bayesian network modeling is that it can represent complicated interactions among temporal processes. We validated our approach by analyzing a simulated atrophy study, and found that this approach requires only a small number of samples to detect the ground-truth temporal model. We further applied dynamic Bayesian network modeling to a longitudinal study of normal aging and mild cognitive impairment — the Baltimore Longitudinal Study of Aging. We found that interactions among regional volume-change rates for the mild cognitive impairment group are different from those for the normal-aging group. PMID:21963916
Ghumare, Eshwar; Schrooten, Maarten; Vandenberghe, Rik; Dupont, Patrick
2015-08-01
Kalman filter approaches are widely applied to derive time-varying effective connectivity from electroencephalographic (EEG) data. For multi-trial data, a classical Kalman filter (CKF), designed for the estimation of single-trial data, can be implemented by trial-averaging the data or by averaging single-trial estimates. A general linear Kalman filter (GLKF) provides an extension for multi-trial data. In this work, we studied the performance of the different Kalman filtering approaches for different values of signal-to-noise ratio (SNR), number of trials and number of EEG channels. We used a simulated model from which we calculated scalp recordings. From these recordings, we estimated cortical sources. Multivariate autoregressive model parameters and partial directed coherence were calculated for these estimated sources and compared with the ground truth. The results showed an overall superior performance of GLKF except for low levels of SNR and small numbers of trials.
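The CKF building block compared here can be illustrated with a scalar predict/update cycle; this is a deliberate simplification (the filters in the paper are multivariate and track time-varying autoregressive parameters), with our own variable names:

```python
def kalman_step(x, p, z, a=1.0, h=1.0, q=1e-3, r=1e-1):
    """One predict/update cycle of a scalar Kalman filter.
    x, p: state estimate and its variance; z: new measurement;
    a: state transition, h: observation gain, q/r: process/measurement noise."""
    x_pred = a * x                            # predict state
    p_pred = a * p * a + q                    # predict variance
    k = p_pred * h / (h * p_pred * h + r)     # Kalman gain
    x_new = x_pred + k * (z - h * x_pred)     # correct with the innovation
    p_new = (1.0 - k * h) * p_pred
    return x_new, p_new

# Track a constant signal from repeated measurements
x, p = 0.0, 1.0
for _ in range(50):
    x, p = kalman_step(x, p, z=1.0)
```

Trial-averaging the data before filtering, or averaging per-trial estimates afterwards, are the two CKF variants that the GLKF generalizes.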
Meiburger, Kristen M; Molinari, Filippo; Wong, Justin; Aguilar, Luis; Gallo, Diego; Steinman, David A; Morbiducci, Umberto
2016-07-01
The common carotid artery intima-media thickness (IMT) is widely accepted and used as an indicator of atherosclerosis. Recent studies, however, have found that the irregularity of the IMT along the carotid artery wall has a stronger correlation with atherosclerosis than the IMT itself. We set out to validate IMT variability (IMTV), a parameter defined to assess IMT irregularities along the wall. In particular, we analyzed whether or not manual segmentations of the lumen-intima and media-adventitia interfaces can be considered reliable for calculation of the IMTV parameter. To do this, we used a total of 60 simulated ultrasound images with a priori IMT and IMTV values. The images, simulated using the Fast And Mechanistic Ultrasound Simulation software, presented five different morphologies, four nominal IMT values and three different levels of variability along the carotid artery wall (no variability, small variability and large variability). Three experts traced the lumen-intima (LI) and media-adventitia (MA) profiles, and two automated algorithms were employed to obtain the LI and MA profiles. One expert also re-traced the LI and MA profiles to test intra-reader variability. The average IMTV measurements of the profiles used to simulate the longitudinal B-mode images were 0.002 ± 0.002, 0.149 ± 0.035 and 0.286 ± 0.068 mm for the cases of no variability, small variability and large variability, respectively. The IMTV measurements of one of the automated algorithms were statistically similar to ground truth (p > 0.05, Wilcoxon signed rank) in the small- and large-variability cases, but differed significantly in the no-variability case (p < 0.05, Wilcoxon signed rank). The second automated algorithm produced statistically similar values in the small-variability case. 
Two readers' manual tracings, however, produced IMTV measurements with a statistically significant difference at all three variability levels, whereas the third reader showed a statistically significant difference in both the no-variability and large-variability cases. Moreover, the error range between the reader and automatic IMTV values was approximately 0.15 mm, which is of the same order as the small IMTV values, indicating that manual and automatic IMTV readings should not be used interchangeably in clinical practice. On the basis of our findings, we conclude that expert manual tracings should not be considered reliable in IMTV measurement and, therefore, should not be trusted as ground truth. Our automated algorithm, on the other hand, was found to be more reliable, indicating how automated techniques could foster analysis of carotid artery intima-media thickness irregularity. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
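The abstract does not spell out the IMTV formula. A plausible minimal reading, with IMTV taken as the dispersion of the columnwise IMT along the wall (this definition is our assumption, not necessarily the authors' exact one):

```python
import statistics

def imt_profile(li, ma):
    """Columnwise IMT: vertical distance between the lumen-intima (li)
    and media-adventitia (ma) profiles, in mm."""
    return [m - l for l, m in zip(li, ma)]

def imtv(li, ma):
    """IMT variability: dispersion of the IMT along the wall, here taken as
    the population standard deviation of the columnwise IMT (an assumption)."""
    return statistics.pstdev(imt_profile(li, ma))
```

Under this reading, a perfectly uniform wall gives IMTV = 0 regardless of the IMT value, which is consistent with the near-zero 0.002 mm figure reported for the no-variability images.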
Rettmann, Maryam E.; Holmes, David R.; Kwartowitz, David M.; Gunawan, Mia; Johnson, Susan B.; Camp, Jon J.; Cameron, Bruce M.; Dalegrave, Charles; Kolasa, Mark W.; Packer, Douglas L.; Robb, Richard A.
2014-01-01
Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. 
Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved upon landmark-only registration, provided that the noise in the surface points was not excessively high. Increased variability on the landmark fiducials resulted in increased registration errors; however, refinement of the initial landmark registration by the surface-based algorithm can compensate for small initial misalignments. The surface-based registration algorithm is quite robust to noise on the surface points and continues to improve the landmark registration even at high levels of noise on the surface points. Both the canine and patient studies also demonstrate that combined landmark and surface registration has lower errors than landmark registration alone. Conclusions: In this work, we describe a model for evaluating the impact of noise variability on the input parameters of a registration algorithm in the context of cardiac ablation therapy. The model can be used both to predict registration error and to assess which inputs have the largest effect on registration accuracy. PMID:24506630
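Landmark-based registration of the kind evaluated here can be illustrated with a closed-form 2D rigid (rotation plus translation) least-squares fit and its target registration error. This is a generic sketch with our own function names, not the authors' 3D implementation:

```python
import math

def rigid_register_2d(src, dst):
    """Closed-form least-squares 2D rigid registration of paired landmarks.
    Returns (theta, tx, ty) mapping src onto dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy          # centered source point
        bx, by = dx - cdx, dy - cdy          # centered target point
        num += ax * by - ay * bx             # sum of cross products
        den += ax * bx + ay * by             # sum of dot products
    theta = math.atan2(num, den)             # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)           # translation from centroids
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

def tre(src, dst, theta, tx, ty):
    """Root-mean-square target registration error after applying the transform."""
    c, s = math.cos(theta), math.sin(theta)
    sq = 0.0
    for (x, y), (dx, dy) in zip(src, dst):
        px, py = c * x - s * y + tx, s * x + c * y + ty
        sq += (px - dx) ** 2 + (py - dy) ** 2
    return math.sqrt(sq / len(src))
```

In the paper's simulations, noise added to the landmark and target locations inflates exactly this kind of TRE, which the subsequent surface-based refinement then reduces.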
Dsm Based Orientation of Large Stereo Satellite Image Blocks
NASA Astrophysics Data System (ADS)
d'Angelo, P.; Reinartz, P.
2012-07-01
High resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCP are automatically derived from lower resolution reference datasets (Landsat ETM+ GeoCover and the SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using an RPC correction based on DSM alignment, resulting in improved geolocation of both DSM and ortho images. A scene-based method and a bundle block adjustment based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 CARTOSAT-1 stereo pairs are available. Both methods are tested against independent ground truth, and checks against this ground truth indicate a lateral error of 10 meters.
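The affine RPC correction estimates a mapping in image space between RPC-projected coordinates and GCP-derived coordinates. A zeroth-order sketch of the idea, estimating only the shift term from GCP residuals (the paper fits a full affine model; the function name is ours):

```python
def rpc_bias_correction(projected, gcps):
    """Estimate a shift (bias) correction between RPC-projected image
    coordinates and ground-control-point image coordinates, both given as
    (line, sample) pairs. The shift is the zeroth-order term of the full
    affine correction; higher-order affine terms would be fitted the same
    way by least squares."""
    n = len(gcps)
    d_line = sum(g[0] - p[0] for p, g in zip(projected, gcps)) / n
    d_sample = sum(g[1] - p[1] for p, g in zip(projected, gcps)) / n
    return d_line, d_sample
```

Adding the returned offsets to every RPC projection removes the mean geolocation bias; the DSM-alignment approach in the paper effectively supplies these residuals from the SRTM surface rather than from individually measured GCPs.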
NASA Astrophysics Data System (ADS)
Trunk, Laura; Bernard, Alain
2008-12-01
A two-channel, or split-window, algorithm designed to correct for atmospheric conditions was applied to thermal images taken by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) of Lake Yugama on Kusatsu-Shirane volcano in Japan in order to measure the temperature of its crater lake. The temperature calculations were validated using lake water temperatures collected on the ground. Overall, the agreement between the temperatures calculated using the split-window method and ground truth is quite good, typically ±1.5 °C for cloud-free images. Data from fieldwork undertaken in the summer of 2004 at Kusatsu-Shirane allow a comparison of ground-truth data with the radiant temperatures measured using ASTER imagery. Further images of Ruapehu, Poás, Kawah Ijen, and Copahué volcanoes were analyzed to acquire time series of lake temperatures. A total of 64 images of these 4 volcanoes, covering a wide range of geographical locations and climates, were analyzed. Results of the split-window algorithm applied to ASTER images are reliable for monitoring thermal changes in active volcanic lakes. These temperature data, when considered in conjunction with traditional volcano monitoring techniques, lead to a better understanding of whether and how thermal changes in crater lakes aid in eruption forecasting.
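The split-window idea rests on differential water-vapor absorption between the two thermal channels: the brightness-temperature difference is used to extrapolate away the atmospheric attenuation. A generic one-line sketch with illustrative coefficients (operational c0 and c1 depend on atmosphere and emissivity and are not given in the abstract):

```python
def split_window_temp(t11, t12, c0=0.0, c1=2.0):
    """Generic two-channel (split-window) surface temperature estimate from
    brightness temperatures in the ~11 um (t11) and ~12 um (t12) channels.
    The coefficients here are illustrative placeholders."""
    return t11 + c1 * (t11 - t12) + c0
```

When the two channels agree (a dry atmosphere), the correction vanishes; a larger interchannel difference signals more atmospheric absorption and a larger upward correction.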
Bernatowicz, K; Keall, P; Mishra, P; Knopf, A; Lomax, A; Kipritidis, J
2015-01-01
Prospective respiratory-gated 4D CT has been shown to reduce tumor image artifacts by up to 50% compared to conventional 4D CT. However, to date no studies have quantified the impact of gated 4D CT on normal lung tissue imaging, which is important in performing dose calculations based on accurate estimates of lung volume and structure. To determine the impact of gated 4D CT on thoracic image quality, the authors developed a novel simulation framework incorporating a realistic deformable digital phantom driven by patient tumor motion patterns. Based on this framework, the authors test the hypothesis that respiratory-gated 4D CT can significantly reduce lung imaging artifacts. Our simulation framework synchronizes the 4D extended cardiac torso (XCAT) phantom with tumor motion data in a quasi real-time fashion, allowing simulation of three 4D CT acquisition modes featuring different levels of respiratory feedback: (i) "conventional" 4D CT that uses a constant imaging and couch-shift frequency, (ii) "beam-paused" 4D CT that interrupts imaging to avoid oversampling at a given couch position and respiratory phase, and (iii) "respiratory-gated" 4D CT that triggers acquisition only when the respiratory motion fulfills phase-specific displacement gating windows based on prescan breathing data. Our framework generates a set of ground truth comparators, representing the average XCAT anatomy during beam-on for each of ten respiratory phase bins. Based on this framework, the authors simulated conventional, beam-paused, and respiratory-gated 4D CT images using tumor motion patterns from seven lung cancer patients across 13 treatment fractions, with a simulated 5.5 cm³ spherical lesion. Normal lung tissue image quality was quantified by comparing simulated and ground truth images in terms of overall mean square error (MSE) intensity difference, threshold-based lung volume error, and fractional false positive/false negative rates. 
Averaged across all simulations and phase bins, respiratory gating reduced overall thoracic MSE by 46% compared to conventional 4D CT (p ≈ 10⁻¹⁹). Gating leads to small but significant (p < 0.02) reductions in lung volume errors (from 1.8% to 1.4%), false positives (from 4.0% to 2.6%), and false negatives (from 2.7% to 1.3%). These percentage reductions correspond to gating reducing image artifacts by 24-90 cm³ of lung tissue. Similar to earlier studies, gating reduced patient imaging dose by up to 22%, but with scan time increased by up to 135%. Beam-paused 4D CT did not significantly impact normal lung tissue image quality, but did yield dose reductions similar to those of respiratory gating, without the added cost in scanning time. For a typical 6 L lung, respiratory-gated 4D CT can reduce image artifacts affecting up to 90 cm³ of normal lung tissue compared to conventional acquisition. This image improvement could have important implications for dose calculations based on 4D CT. Where image quality is less critical, beam-paused 4D CT is a simple strategy to reduce imaging dose without sacrificing acquisition time.
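The respiratory-gated mode reduces to a simple trigger rule; a sketch (names are ours) of the phase-specific displacement gating-window check built from prescan breathing data:

```python
def gate_trigger(phase_bin, displacement, windows):
    """Respiratory-gated acquisition rule: image only when the breathing
    displacement lies inside the window prescribed for the current phase bin.
    `windows` maps phase bin -> (low, high) displacement limits derived from
    the prescan breathing trace."""
    low, high = windows[phase_bin]
    return low <= displacement <= high
```

Conventional 4D CT ignores this check entirely, and beam-paused 4D CT only suppresses re-imaging of already-sampled (couch position, phase) pairs, which is why only the gated mode reshapes the anatomy actually captured in each phase bin.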
NASA Astrophysics Data System (ADS)
Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas
2017-04-01
The development and optimization of image processing algorithms requires the availability of datasets depicting every step from the earth's surface to the sensor's detector. The lack of ground truth data obliges developers to work with simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks, such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up consisting of a forward simulator, a backward simulator and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES) or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces were simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, resulting in increased accuracy in temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
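The forward branch of such an end-to-end simulator is typically built around the standard thermal-infrared radiative transfer equation: surface emission plus reflected downwelling radiance, attenuated by the atmospheric transmittance, plus the upwelling path term. A minimal per-wavelength scalar sketch (the actual simulator is spectral and configurable; function names are ours):

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant
C = 2.99792458e8     # speed of light
KB = 1.380649e-23    # Boltzmann constant

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def at_sensor_radiance(wl, temp, emis, tau, l_up, l_down):
    """Forward TIR model: surface emission plus reflected downwelling,
    attenuated by transmittance tau, plus the upwelling path radiance."""
    return tau * (emis * planck(wl, temp) + (1.0 - emis) * l_down) + l_up
```

The backward branch (AC then TES) inverts exactly this equation: AC removes tau, l_up and l_down, and TES separates the remaining emis * B(lambda, T) product, which is why an incorrect water-vapor (hence tau) retrieval propagates directly into the TES errors reported above.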
NASA Technical Reports Server (NTRS)
Rust, W. D.; Macgorman, D. R.
1985-01-01
During FY-85, researchers conducted a field program and analyzed data. The field program incorporated coordinated measurements made with a NASA U-2. Results include the following: (1) ground truth measurements of lightning for comparison with those obtained by the U-2; (2) analysis of dual-Doppler radar and dual-VHF lightning mapping data from a supercell storm; (3) analysis of synoptic conditions during three simultaneous storm systems on 13 May 1983, when unusually large numbers of positive cloud-to-ground (+CG) flashes occurred; (4) analysis of extremely low frequency (ELF) wave forms; and (5) an assessment of a cloud-to-ground strike location system using a combination of mobile laboratory and fixed-base TV video data.
Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A
2018-01-01
The choice of reference for the electroencephalogram (EEG) is a long-standing unsolved issue, resulting in inconsistent usages and endless debates. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes a prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated within this framework, by both simulations and analysis of real resting-state EEGs. Toward this end, we leverage the MRI and EEG data of 89 subjects who participated in the Cuban Human Brain Mapping Project. Generated artificial EEGs, with a known ground truth, show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. It is also revealed that realistic volume conductor models improve the performance of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that the selection of the regularization parameter with generalized cross-validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting-state EEGs, rREST consistently yields the lowest GCV. 
This study provides a novel perspective to the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance.
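Of the two contenders, AR is the simpler to state: each channel is re-referenced by subtracting the instantaneous mean over all channels. A plain-list sketch (rAR additionally regularizes by the noise-to-signal variance ratio, which is omitted here):

```python
def average_reference(eeg):
    """Re-reference a channels-x-time EEG matrix to the average reference:
    subtract the instantaneous mean across channels at every time sample."""
    n_ch = len(eeg)
    n_t = len(eeg[0])
    means = [sum(ch[t] for ch in eeg) / n_ch for t in range(n_t)]
    return [[ch[t] - means[t] for t in range(n_t)] for ch in eeg]
```

After AR the channels sum to zero at every sample; REST instead maps the recording to an estimated infinity reference through a head-model-based lead field, which is exactly the prior-informed special case of the unified framework above.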
A 4D biomechanical lung phantom for joint segmentation/registration evaluation
NASA Astrophysics Data System (ADS)
Markel, Daniel; Levesque, Ives; Larkin, Joe; Léger, Pierre; El Naqa, Issam
2016-10-01
At present, there exist few openly available methods for the evaluation of simultaneous segmentation and registration algorithms. These methods allow a combination of both techniques to track the tumor in complex settings such as adaptive radiotherapy. We have produced a quality assurance platform for evaluating this specific subset of algorithms using a preserved porcine lung, such that it is multi-modality compatible: positron emission tomography (PET), computed tomography (CT) and magnetic resonance imaging (MRI). A computer-controlled respirator was constructed to pneumatically manipulate the lungs in order to replicate human breathing traces. A registration ground truth was provided by an in-house bifurcation tracking pipeline. A segmentation ground truth was provided by synthetic multi-compartment lesions that simulate biologically active tumor, background tissue and a necrotic core. The bifurcation tracking pipeline results were compared to digital deformations and used to evaluate three registration algorithms: diffeomorphic demons, fast symmetric forces demons, and MiMVista's deformable registration tool. Three segmentation algorithms were also evaluated: the Chan-Vese level sets method, a hybrid technique, and the multi-valued level sets algorithm. The respirator was able to replicate three separate breathing traces with a mean accuracy of 2-2.2%. Bifurcation tracking error was found to be sub-voxel when using human CT data for displacements up to 6.5 cm and approximately 1.5 voxel widths for displacements up to 3.5 cm for the porcine lungs. For the fast-symmetric, diffeomorphic and MiMVista registration algorithms, mean geometric errors were found to be 0.430 ± 0.001, 0.416 ± 0.001 and 0.605 ± 0.002 voxel widths, respectively, using the vector field differences, and 0.4 ± 0.2, 0.4 ± 0.2 and 0.6 ± 0.2 voxel widths using the bifurcation tracking pipeline. The proposed phantom was found sufficient for accurate evaluation of registration and segmentation algorithms. 
The use of the automatically generated anatomical landmarks proposed here can eliminate the time cost and potential inaccuracy of manual landmark selection by expert observers.
NASA Astrophysics Data System (ADS)
Cenci, Luca; Boni, Giorgio; Pulvirenti, Luca; Gabellani, Simone; Gardella, Fabio; Squicciarino, Giuseppe; Pierdicca, Nazzareno; Benedetto, Catia
2016-04-01
In a reservoir, water level monitoring is important for emergency management purposes. This information can be used to estimate the degree of filling of the water body, thus helping decision makers in flood control operations. Furthermore, if assimilated into hydrological models and coupled with rainfall forecasts, this information can be used for flood forecasting and early warning. In many cases, the water level is not known (e.g. in data-scarce environments) or is not shared by operators. Remote sensing may allow overcoming these limitations, enabling its estimation. The objective of this work is to present the Shoreline to Height (S2H) algorithm, developed to retrieve the height of the water stored in reservoirs from satellite images. To this aim, some auxiliary data are needed: a DEM and the maximum/minimum height that can be reached by the water. In data-scarce environments, this information can be easily obtained on the Internet (e.g. free, worldwide DEMs and design data for artificial reservoirs). S2H was tested with different satellite data, both optical and SAR (Landsat and COSMO-SkyMed®-CSK®), in order to assess the impact of different sensors on the final estimates. The study area was the Place-Moulin Lake (Valle d'Aosta-VdA, Italy), where a monitoring network is present that can provide reliable ground truth for validating the algorithm and assessing its accuracy. The algorithm was developed under the assumption that no "official" auxiliary data were available. Therefore, two DEMs (SRTM 1 arc-second and ASTER GDEM) were used to evaluate their performance. The maximum/minimum water height values were found on the website of the VdA Region. 
S2H is based on three steps: i) satellite data preprocessing (Landsat: atmospheric correction; CSK®: geocoding and speckle filtering); ii) water mask generation (using a thresholding and region growing algorithm) and shoreline extraction; iii) retrieval of the shoreline height from the reference DEMs (adopting a statistical approach). The algorithm was tested for different water heights, and the results were compared against ground truth. Findings showed that the combination CSK®-SRTM provided the more reliable results. It was also found that the overall quality of the estimates increases as the water height increases, reaching an accuracy of up to a few centimetres. This result is particularly interesting for flood control applications, where it is important to be accurate when the reservoir's degree of filling is high. The potential of S2H for operational hydrology purposes was tested in a real-case simulation, in which prediction of the river discharge downstream of the dam was needed for flood risk management purposes. The water height value retrieved with S2H was assimilated into a semi-distributed, event-based hydrological model (DRiFt) by using a simple direct insertion algorithm. DRiFt is usually run operationally on the reservoir by using ground observations as input data. The result of the data assimilation experiment was compared with the "real", operational run of the model. Findings showed a high agreement between the two simulations, proving the utility and quality of the S2H algorithm. "Project carried out using CSK® Products, © of the Italian Space Agency (ASI), delivered under a license to use by ASI."
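Steps ii) and iii) of S2H can be sketched as a threshold-based water mask followed by a statistical retrieval of the shoreline height from the DEM. This toy version (4-connected shoreline test, median statistic) is our reading of the abstract, not the project code:

```python
import statistics

def water_mask(image, threshold):
    """Binary water mask by simple thresholding (dark pixels = water).
    S2H additionally grows regions from seeds; thresholding is the first step."""
    return [[1 if v <= threshold else 0 for v in row] for row in image]

def shoreline_height(mask, dem):
    """Median DEM height over shoreline pixels, i.e. water pixels with at
    least one 4-connected land neighbour (a statistical retrieval, as the
    abstract describes for step iii)."""
    rows, cols = len(mask), len(mask[0])
    heights = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] != 1:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and mask[rr][cc] == 0:
                    heights.append(dem[r][c])
                    break
    return statistics.median(heights)
```

Using a robust statistic such as the median over all shoreline pixels is what lets centimetre-level accuracy emerge from DEMs whose per-pixel vertical error is metres.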
ERTS-1 imagery and native plant distributions
NASA Technical Reports Server (NTRS)
Musick, H. B.; Mcginnies, W.; Haase, E.; Lepley, L. K.
1974-01-01
A method is developed for using ERTS spectral signature data to determine plant community distribution and phenology without resolving individual plants. An Exotech ERTS radiometer was used near ground level to obtain spectral signatures for a desert plant community, including two shrub species, ground covered with live annuals in April and dead ones in June, and bare ground. It is shown that comparisons of scene types can be made when spectral signatures are expressed as a ratio of red reflectivity to IR reflectivity or when they are plotted as red reflectivity vs. IR reflectivity, in which case the signature clusters of each component are more distinct. A method for correcting and converting the ERTS radiance values to reflectivity values for comparison with ground truth data is appended.
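The red-to-IR ratio used to separate scene components is a one-liner: green vegetation absorbs red and strongly reflects near-IR, so live plants sit at low ratios while dead annuals and bare ground sit near one. A sketch with illustrative reflectivity values:

```python
def red_ir_ratio(red, ir):
    """Ratio of red to near-IR reflectivity; lower values indicate live
    green vegetation, values near one indicate dead material or bare soil."""
    return red / ir
```

Plotting red versus IR reflectivity instead of collapsing them into the ratio is the alternative the abstract notes, since the 2D clusters of each scene component remain more distinct.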
MO-DE-207A-06: ECG-Gated CT Reconstruction for a C-Arm Inverse Geometry X-Ray System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slagowski, JM; Dunkerley, DAP
2016-06-15
Purpose: To obtain ECG-gated CT images from truncated projection data acquired with a C-arm based inverse geometry fluoroscopy system, for the purpose of cardiac chamber mapping in interventional procedures. Methods: Scanning-beam digital x-ray (SBDX) is an inverse geometry fluoroscopy system with a scanned multisource x-ray tube and a photon-counting detector mounted to a C-arm. In the proposed method, SBDX short-scan rotational acquisition is performed followed by inverse geometry CT (IGCT) reconstruction and segmentation of contrast-enhanced objects. The prior image constrained compressed sensing (PICCS) framework was adapted for IGCT reconstruction to mitigate artifacts arising from data truncation and angular undersampling due to cardiac gating. The performance of the reconstruction algorithm was evaluated in numerical simulations of truncated and non-truncated thorax phantoms containing a dynamic ellipsoid to represent a moving cardiac chamber. The eccentricity of the ellipsoid was varied at frequencies from 1–1.5 Hz. Projection data were retrospectively sorted into 13 cardiac phases. Each phase was reconstructed using IGCT-PICCS, with a nongated gridded FBP (gFBP) prior image. Surface accuracy was determined using the Dice similarity coefficient and a histogram of the point distances between the segmented surface and the ground truth surface. Results: The gated IGCT-PICCS algorithm improved surface accuracy and reduced streaking and truncation artifacts when compared to nongated gFBP. For the non-truncated thorax with 1.25 Hz motion, 99% of segmented surface points were within 0.3 mm of the 15 mm diameter ground truth ellipse, versus 1.0 mm for gFBP. For the truncated thorax phantom with a 40 mm diameter ellipse, IGCT-PICCS surface accuracy measured 0.3 mm versus 7.8 mm for gFBP. The Dice similarity coefficient was 0.99–1.00 (IGCT-PICCS) versus 0.63–0.75 (gFBP) for intensity-based segmentation thresholds ranging from 25% to 75% of maximum contrast. 
Conclusions: The PICCS algorithm was successfully applied to reconstruct truncated IGCT projection data with angular undersampling resulting from simulated cardiac gating. Research supported by the National Heart, Lung, and Blood Institute of the NIH under award number R01HL084022. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.« less
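The Dice similarity coefficient used above to score segmented surfaces against ground truth has a compact standard definition; the following is a minimal generic sketch (not the authors' implementation):

```python
import numpy as np

def dice_coefficient(seg, truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    seg = np.asarray(seg, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(seg, truth).sum()
    total = seg.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Identical masks score 1.0; disjoint masks score 0.0.
a = np.array([[1, 1, 0], [0, 1, 0]])
print(dice_coefficient(a, a))  # → 1.0
```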
Parameter Estimation for a Pulsating Turbulent Buoyant Jet Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Christopher, Jason; Wimer, Nicholas; Lapointe, Caelan; Hayden, Torrey; Grooms, Ian; Rieker, Greg; Hamlington, Peter
2017-11-01
Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other "truth" data to be used for the prediction of unknown parameters, such as flow properties and boundary conditions, in numerical simulations of real-world engineering systems. Here we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a direct numerical simulation (DNS) with known boundary conditions and problem parameters, while the ABC procedure utilizes lower fidelity large eddy simulations. Using spatially-sparse statistics from the 2D buoyant jet DNS, we show that the ABC method provides accurate predictions of true jet inflow parameters. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for predicting flow information, such as boundary conditions, that can be difficult to determine experimentally.
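For readers unfamiliar with ABC, its simplest variant is rejection sampling: draw parameters from a prior, simulate, and keep draws whose summary statistic lands within a tolerance of the observed data. The toy problem below (recovering a Gaussian mean from its sample mean) is an illustrative assumption, not the buoyant-jet configuration of the abstract:

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps, n=10000):
    """Rejection ABC: keep parameter draws whose simulated summary
    statistic lies within eps of the observed summary statistic."""
    accepted = []
    for _ in range(n):
        theta = prior_sample()
        if distance(simulate(theta), observed) <= eps:
            accepted.append(theta)
    return accepted

# Toy example: infer the mean of a Gaussian from a sample mean.
random.seed(0)
true_mean = 2.0
observed = sum(random.gauss(true_mean, 1.0) for _ in range(100)) / 100

post = abc_rejection(
    observed,
    simulate=lambda m: sum(random.gauss(m, 1.0) for _ in range(100)) / 100,
    prior_sample=lambda: random.uniform(-5, 5),
    distance=lambda a, b: abs(a - b),
    eps=0.1,
    n=5000,
)
print(sum(post) / len(post))  # posterior mean, concentrated near true_mean
```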
Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao
2014-01-01
The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions is developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for variogram modeling to represent the variability, another for improving the spatial prediction used to evaluate remote sensing products. The reasonableness of the optimized EHWSN is validated in terms of representativeness, variogram modeling and spatial accuracy using 15 types of simulation fields generated with unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from samples have less than 3% mean error relative to the true variograms. Fields at multiple scales are then predicted. As the scale increases, the estimated fields show higher similarity to the simulation fields at block sizes exceeding 240 m. These validations prove that this hybrid sampling method is effective for both objectives even when the characteristics of the optimized variable are unknown. PMID:25317762
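The variogram-modeling criterion above rests on the empirical (Matheron) variogram, half the mean squared difference between point pairs binned by separation distance. A generic sketch of that estimator, not the authors' code:

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Matheron estimator: for each distance bin, half the mean squared
    difference of the variable over all point pairs in that bin."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    dists, sqdiffs = [], []
    for i in range(n):
        for j in range(i + 1, n):  # all unordered point pairs
            dists.append(np.linalg.norm(coords[i] - coords[j]))
            sqdiffs.append((values[i] - values[j]) ** 2)
    dists, sqdiffs = np.array(dists), np.array(sqdiffs)
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dists >= lo) & (dists < hi)
        gamma.append(0.5 * sqdiffs[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Three collinear points one unit apart, linearly increasing value.
print(empirical_variogram([[0, 0], [1, 0], [2, 0]], [0, 1, 2], [0.5, 1.5, 2.5]))
```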
Ligorio, Gabriele; Bergamini, Elena; Pasciuto, Ilaria; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria
2016-01-01
Information from complementary and redundant sensors is often combined within sensor fusion algorithms to obtain a single accurate observation of the system at hand. However, measurements from each sensor are characterized by uncertainties. When multiple data are fused, it is often unclear how all these uncertainties interact and influence the overall performance of the sensor fusion algorithm. To address this issue, a benchmarking procedure is presented, where simulated and real data are combined in different scenarios in order to quantify how each sensor's uncertainties influence the accuracy of the final result. The proposed procedure was applied to the estimation of the pelvis orientation using a waist-worn magnetic-inertial measurement unit. Ground-truth data were obtained from a stereophotogrammetric system and used to obtain simulated data. Two Kalman-based sensor fusion algorithms were submitted to the proposed benchmarking procedure. For the considered application, gyroscope uncertainties proved to be the main error source in orientation estimation accuracy for both tested algorithms. Moreover, although different performances were obtained using simulated data, these differences became negligible when real data were considered. The outcome of this evaluation may be useful both to improve the design of new sensor fusion methods and to drive the algorithm tuning process. PMID:26821027
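A minimal scalar Kalman filter fusing a rate sensor (gyroscope) with a noisy absolute angle measurement illustrates the class of algorithm being benchmarked; this toy sketch, with made-up noise levels, is illustrative only and is neither of the tested algorithms:

```python
import random

def fuse_orientation(gyro_rates, angle_meas, dt, q, r):
    """Scalar Kalman filter: predict the angle by integrating the gyro
    rate, then correct with a noisy absolute angle measurement."""
    angle, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for w, z in zip(gyro_rates, angle_meas):
        angle += w * dt          # predict: integrate gyro rate
        p += q                   # process noise inflates uncertainty
        k = p / (p + r)          # Kalman gain
        angle += k * (z - angle) # update: blend in the measurement
        p *= 1.0 - k
        estimates.append(angle)
    return estimates

# Toy benchmark: constant 0.5 rad/s rotation seen through noisy sensors.
random.seed(1)
dt, n, rate = 0.01, 500, 0.5
truth = [rate * dt * (i + 1) for i in range(n)]
gyro = [rate + random.gauss(0, 0.05) for _ in range(n)]   # noisy rates
meas = [t + random.gauss(0, 0.2) for t in truth]          # noisy angles
est = fuse_orientation(gyro, meas, dt, q=1e-5, r=0.04)
```

Benchmarking in the spirit of the abstract then amounts to sweeping the simulated sensor noise levels and recording the resulting estimation error.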
Cell Membrane Tracking in Living Brain Tissue Using Differential Interference Contrast Microscopy.
Lee, John; Kolb, Ilya; Forest, Craig R; Rozell, Christopher J
2018-04-01
Differential interference contrast (DIC) microscopy is widely used for observing unstained biological samples that are otherwise optically transparent. Combining this optical technique with machine vision could enable the automation of many life science experiments; however, identifying relevant features under DIC is challenging. In particular, precise tracking of cell boundaries in a thick ( ) slice of tissue has not previously been accomplished. We present a novel deconvolution algorithm that achieves state-of-the-art performance at identifying and tracking these membrane locations. Our proposed algorithm is formulated as a regularized least squares optimization that incorporates a filtering mechanism to handle organic tissue interference and a robust edge-sparsity regularizer that integrates dynamic edge tracking capabilities. As a secondary contribution, this paper also describes new community infrastructure in the form of a MATLAB toolbox for accurately simulating DIC microscopy images of in vitro brain slices. Building on existing DIC optics modeling, our simulation framework additionally contributes an accurate representation of interference from organic tissue, neuronal cell shapes, and tissue motion due to the action of the pipette. This simulator allows us to better understand the image statistics (to improve algorithms), as well as quantitatively test cell segmentation and tracking algorithms in scenarios where ground truth data are fully known.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samluk, Jesse P.; Geiger, Cathleen A.; Weiss, Chester J.
2015-10-01
In this article we explore simulated responses of electromagnetic (EM) signals relative to in situ field surveys and quantify the effects that different values of conductivity in sea ice have on the EM fields. We compute EM responses of ice types with a three-dimensional (3-D) finite-volume discretization of Maxwell's equations and present 2-D sliced visualizations of their associated EM fields at discrete frequencies. Several interesting observations result: First, since the simulator computes the fields everywhere, each gridcell acts as a receiver within the model volume, and captures the complete, coupled interactions between air, snow, sea ice and sea water as a function of their conductivity; second, visualizations demonstrate how 1-D approximations near deformed ice features are violated. But the most important new finding is that changes in conductivity affect EM field response by modifying the magnitude and spatial patterns (i.e. footprint size and shape) of current density and magnetic fields. These effects are demonstrated through a visual feature we define as 'null lines'. Null line shape is affected by changes in conductivity near material boundaries as well as transmitter location. Our results encourage the use of null lines as a planning tool for better ground-truth field measurements near deformed ice types.
Pilia, Nicolas; Schulze, Walther H. W.; Dössel, Olaf
2017-01-01
The most important ECG marker for the diagnosis of ischemia or infarction is a change in the ST segment. Baseline wander is a typical artifact that corrupts the recorded ECG and can hinder the correct diagnosis of such diseases. For the purpose of finding the best suited filter for the removal of baseline wander, the ground truth about the ST change prior to the corrupting artifact and the subsequent filtering process is needed. In order to create the desired reference, we used a large simulation study that allowed us to represent the ischemic heart at a multiscale level from the cardiac myocyte to the surface ECG. We also created a realistic model of baseline wander to evaluate five filtering techniques commonly used in literature. In the simulation study, we included a total of 5.5 million signals coming from 765 electrophysiological setups. We found that the best performing method was the wavelet-based baseline cancellation. However, for medical applications, the Butterworth high-pass filter is the better choice because it is computationally cheap and almost as accurate. Even though all methods modify the ST segment up to some extent, they were all proved to be better than leaving baseline wander unfiltered. PMID:28373893
Forest statistics for Arkansas counties - 1979
Renewable Resources Evaluation Research Work Unit
1979-01-01
This report tabulates information from a new forest survey of Arkansas completed in 1979 by the Renewable Resources Evaluation Research Unit of the Southern Forest Experiment Station. Forest area was estimated from aerial photos with an adjustment for ground truth at selected locations. Sample plots were systematically established at three-mile intervals using a grid...
Active microwave water equivalence
NASA Technical Reports Server (NTRS)
Boyne, H. S.; Ellerbruch, D. A.
1980-01-01
Measurements of water equivalence using an active FM-CW microwave system were conducted over the past three years at various sites in Colorado, Wyoming, and California. The measurement method is described. Measurements of water equivalence and stratigraphy are compared with ground truth. A comparison of microwave, federal sampler, and snow pillow measurements at three sites in Colorado is described.
Medium Spatial Resolution Satellite Characterization
NASA Technical Reports Server (NTRS)
Stensaas, Greg
2007-01-01
This project provides characterization and calibration of aerial and satellite systems in support of quality acquisition and understanding of remote sensing data, and verifies and validates the associated data products with respect to ground and atmospheric truth so that accurate value-added science can be performed. The project also provides assessment of new remote sensing technologies.
EREP geothermal. [northern California
NASA Technical Reports Server (NTRS)
Johnston, E. W. (Principal Investigator); Dunklee, A. L.; Wychgram, D. C.
1974-01-01
The author has identified the following significant results. A reasonably good agreement was found for the radiometric temperatures calculated from the ground truth data and the radiometric temperatures measured by the S192 scanner. This study showed that the S192 scanner data could be used to create good thermal images, particularly with the x-5 detector array.
The Use of Narrative Therapy with Clients Diagnosed with Bipolar Disorder
ERIC Educational Resources Information Center
Ngazimbi, Evadne E.; Lambie, Glenn W.; Shillingford, M. Ann
2008-01-01
Clients diagnosed with bipolar disorder often suffer from mood instability, and research suggests that these clients need both counseling services and pharmacotherapy. Narrative therapy is a social constructionist approach grounded on the premise that there is no single "truth"; individuals may create new meanings and retell their stories to…
ERIC Educational Resources Information Center
Staver, John R.
2010-01-01
Science and religion exhibit multiple relationships as ways of knowing. These connections have been characterized as cousinly, mutually respectful, non-overlapping, competitive, proximate-ultimate, dominant-subordinate, and opposing-conflicting. Some of these ties create stress, and tension between science and religion represents a significant…
Over the past three decades, a number of researchers in the fields of environmental justice (EJ) and environmental public health have highlighted the existence of regional and local scale differences in exposure to air pollution, as well as calculated health risk and impacts of a...
Urban vacant land typology: A tool for managing urban vacant land
Gunwoo Kim; Patrick A. Miller; David J. Nowak
2018-01-01
A typology of urban vacant land was developed, using Roanoke, Virginia, as the study area. A comprehensive literature review, field measurements and observations (including photographs), and a quantitative approach to assessing vacant land forest structure and values (i-Tree Eco sampling) were utilized, along with aerial photo interpretation and ground-truthing...
Shakespeare: Finding and Teaching the Comic Vision.
ERIC Educational Resources Information Center
Lasser, Michael L.
1969-01-01
Comedy is the middle ground upon which the absurd and the serious meet. Concerned with illuminating pain, human imperfection, and man's failure to measure up to his own or the world's concept of perfection, comedy provides "an escape, not from truth but from despair." If tragedy says that some ideals are worth dying for, comedy asserts…
USDA-ARS?s Scientific Manuscript database
Soil moisture is an intrinsic state variable that varies considerably in space and time. Although soil moisture is highly variable, repeated measurements of soil moisture at the field or small watershed scale can often reveal certain locations as being temporally stable and representative of the are...
NASA Technical Reports Server (NTRS)
1971-01-01
Revised Skylab spacecraft, experiments, and mission planning information is presented for the Earth Resources Experiment Package (EREP) users. The major hardware elements and the medical, scientific, engineering, technology and earth resources experiments are described. Ground truth measurements and EREP data handling procedures are discussed. The mission profile, flight planning, crew activities, and aircraft support are also outlined.
NASA Astrophysics Data System (ADS)
Willson, D.; Rask, J. C.; George, S. C.; de Leon, P.; Bonaccorsi, R.; Blank, J.; Slocombe, J.; Silburn, K.; Steele, H.; Gargarno, M.; McKay, C. P.
2014-01-01
We conducted simulated Apollo extravehicular activities (EVAs) at the 3.45 Ga Australian 'Pilbara Dawn of Life' trail (Western Australia) with field and non-field scientists using the University of North Dakota's NDX-1 pressurizable space suit, to assess how effectively scientist astronauts can employ their field observation skills while looking for stromatolite fossil evidence. Off-world scientist astronauts face space suit limitations in vision, sense perception, mobility, dexterity, and suit fit, as well as time limits and the psychological fear of fatal accidents; the resulting physical fatigue reduces field science performance. Finding evidence of visible biosignatures of past life, such as stromatolite fossils, on Mars would be a very significant discovery. Our preliminary trials showed that in simulated EVAs, 25% of stromatolite fossil evidence is missed, with more incorrect identifications, compared to ground truth surveys; however, the quality of characterization descriptions becomes less affected by simulated EVA limitations as the science importance of the features increases. Field scientists focused more on capturing high-value characterization detail from the rock features, whereas non-field scientists focused more on finding many features. We identified technologies and training to improve off-world field science performance. The data collected are also useful for the requirements of NASA's "EVA performance and crew health" research program, but further work will be required to confirm the conclusions.
Loudos, George K; Papadimitroulas, Panagiotis G; Kagadis, George C
2014-01-01
Monte Carlo (MC) simulations play a crucial role in nuclear medical imaging, since they can provide the ground truth for clinical acquisitions by integrating and quantifying all physical parameters that affect image quality. Over the last decade, a number of realistic computational anthropomorphic models have been developed to serve imaging, as well as other biomedical engineering applications. The combination of MC techniques with realistic computational phantoms can provide a powerful tool for pre- and post-processing in imaging, data analysis and dosimetry. This work aims to create a global database for simulated Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) exams; the methodology, as well as the first elements, are presented. Simulations are performed using the well-validated GATE open-source toolkit, standard anthropomorphic phantoms and activity distributions of various radiopharmaceuticals derived from the literature. The resulting images, projections and sinograms of each study are provided in the database and can be further exploited to evaluate processing and reconstruction algorithms. Patient studies with different characteristics are included in the database, and different computational phantoms were tested for the same acquisitions. These include the XCAT, Zubal and Virtual Family phantoms, some of which are used for the first time in nuclear imaging. The created database will be freely available, and our current work is towards its extension by simulating additional clinical pathologies.
NASA Astrophysics Data System (ADS)
Sembiring, L.; Van Ormondt, M.; Van Dongeren, A. R.; Roelvink, J. A.
2017-07-01
Rip currents are one of the most dangerous coastal hazards for swimmers. To minimize the risk, an operational, process-based coastal model system can be used to forecast the nearshore waves and currents that may endanger beachgoers. In this paper, an operational model for rip current prediction using nearshore bathymetry obtained from video imagery is demonstrated. For the nearshore-scale model, XBeach is used, with which tidal currents and wave-induced currents (including the effect of wave groups) can be simulated simultaneously. Up-to-date bathymetry is obtained using the video-based technique cBathy. The system is tested for the Egmond aan Zee beach, located in the northern part of the Dutch coastline. This paper tests the applicability of video-derived bathymetry as input to the numerical modelling system by comparing simulation results obtained with surveyed bathymetry against model results obtained with video bathymetry. Results show that the video technique is able to produce bathymetry converging towards the ground truth observations. This bathymetry validation is followed by an example of an operational forecasting type of simulation for predicting rip currents. Rip current flow fields simulated over measured and modeled bathymetries are compared in order to assess the performance of the proposed forecast system.
BLOND, a building-level office environment dataset of typical electrical appliances.
Kriechbaumer, Thomas; Jacobsen, Hans-Arno
2018-03-27
Energy metering has gained popularity as conventional meters are replaced by electronic smart meters that promise energy savings and higher comfort levels for occupants. Achieving these goals requires a deeper understanding of consumption patterns to reduce the energy footprint: load profile forecasting, power disaggregation, appliance identification, startup event detection, etc. Publicly available datasets are used to test, verify, and benchmark possible solutions to these problems. For this purpose, we present the BLOND dataset: continuous energy measurements of a typical office environment at high sampling rates with common appliances and load profiles. We provide voltage and current readings for aggregated circuits and matching fully-labeled ground truth data (individual appliance measurements). The dataset contains 53 appliances (16 classes) in a 3-phase power grid. BLOND-50 contains 213 days of measurements sampled at 50kSps (aggregate) and 6.4kSps (individual appliances). BLOND-250 consists of the same setup: 50 days, 250kSps (aggregate), 50kSps (individual appliances). These are the longest continuous measurements at such high sampling rates and fully-labeled ground truth we are aware of.
Donovan, Terrence J.; Termain, Patricia A.; Henry, Mitchell E.
1979-01-01
The Cement oil field, Oklahoma, was a test site for an experiment designed to evaluate LANDSAT's capability to detect an alteration zone in surface rocks caused by hydrocarbon microseepage. Loss of iron and impregnation of sandstone by carbonate cements and replacement of gypsum by calcite are the major alteration phenomena at Cement. The bedrock alterations are partially masked by unaltered overlying beds, thick soils, and dense natural and cultivated vegetation. Interpreters biased by detailed ground truth were able to map the alteration zone subjectively using a magnified, filtered, and sinusoidally stretched LANDSAT composite image; other interpreters, unbiased by ground truth data, could not duplicate that interpretation. Similar techniques were applied at a secondary test site (Garza oil field, Texas), where similar alterations in surface rocks occur. Enhanced LANDSAT images resolved the alteration zone to a biased interpreter and some individual altered outcrops could be mapped using higher resolution SKYLAB color and conventional black and white aerial photographs suggesting repeat experiments with LANDSAT C and D.
Automatic classification techniques for type of sediment map from multibeam sonar data
NASA Astrophysics Data System (ADS)
Zakariya, R.; Abdullah, M. A.; Che Hasan, R.; Khalil, I.
2018-02-01
Sediment maps provide important information for various applications such as oil drilling and environmental and pollution studies. A study on sediment mapping was conducted at a natural reef (rock) in Pulau Payar using Sound Navigation and Ranging (SONAR) technology, namely an R2-Sonic multibeam echosounder. This study aims to determine sediment type from the backscatter and bathymetry data obtained by the multibeam echosounder. Ground truth data were used to verify the classification produced. Ground truth samples were analyzed by particle size analysis (PSA) and dry sieving; different analyses were required because of the different sizes of the sediment samples obtained. The smaller sizes were analyzed by PSA using a CILAS analyzer, while the larger sizes were analyzed by sieving. The multibeam backscatter strength and bathymetry data were processed using QINSy, Qimera, and ArcGIS. This study shows the capability of multibeam data to differentiate four types of sediments: i) very coarse sand, ii) coarse sand, iii) very coarse silt and iv) coarse silt. The accuracy was reported as 92.31% overall accuracy and 0.88 kappa coefficient.
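Overall accuracy and the kappa coefficient reported above are standard confusion-matrix statistics; a minimal generic computation (the matrix below is illustrative, not the study's data):

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion
    matrix (rows = ground truth class, columns = predicted class)."""
    m = np.asarray(confusion, dtype=float)
    total = m.sum()
    po = np.trace(m) / total                       # observed agreement
    pe = (m.sum(0) * m.sum(1)).sum() / total ** 2  # chance agreement
    return po, (po - pe) / (1.0 - pe)

# Illustrative 2-class matrix: 35 of 50 samples correctly classified.
acc, kappa = overall_accuracy_and_kappa([[20, 5], [10, 15]])
print(acc, kappa)  # → 0.7 0.4 (approximately)
```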
Extraction of Capillary Non-perfusion from Fundus Fluorescein Angiogram
NASA Astrophysics Data System (ADS)
Sivaswamy, Jayanthi; Agarwal, Amit; Chawla, Mayank; Rani, Alka; Das, Taraprasad
Capillary non-perfusion (CNP) is a condition in diabetic retinopathy where blood ceases to flow to certain parts of the retina, potentially leading to blindness. This paper presents a solution for automatically detecting and segmenting CNP regions from fundus fluorescein angiograms (FFAs). CNPs are modelled as valleys, and a novel technique based on an extrema pyramid is presented for trough-based valley detection. The obtained valley points are used to segment the desired CNP regions by employing a variance-based region-growing scheme. The proposed algorithm has been tested on 40 images and validated against expert-marked ground truth. In this paper, we present results of testing and validating our algorithm against ground truth and compare its segmentation performance against two other methods. The performance of the proposed algorithm is presented as a receiver operating characteristic (ROC) curve. The area under this curve is 0.842 and the distance of the ROC from the ideal point (0,1) is 0.31. The proposed method for CNP segmentation was found to outperform the watershed [1] and heat-flow [2] based methods.
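The two ROC summary figures quoted, area under the curve and distance from the ideal point (0, 1), can be computed from sampled (FPR, TPR) operating points. A generic sketch, assuming the distance reported is the minimum over the curve's sampled points:

```python
import math

def roc_summary(fpr, tpr):
    """Area under an ROC curve (trapezoidal rule over sorted FPR) and
    the minimum Euclidean distance of the curve from the point (0, 1)."""
    auc = sum((f2 - f1) * (t1 + t2) / 2.0
              for f1, f2, t1, t2 in zip(fpr, fpr[1:], tpr, tpr[1:]))
    dist = min(math.hypot(f, 1.0 - t) for f, t in zip(fpr, tpr))
    return auc, dist

# Illustrative three-point curve: (0,0) -> (0.5,1) -> (1,1).
auc, dist = roc_summary([0.0, 0.5, 1.0], [0.0, 1.0, 1.0])
print(auc, dist)  # → 0.75 0.5
```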
Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy
Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki
2013-01-01
We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787
Sevenster, M; Buurman, J; Liu, P; Peters, J F; Chang, P J
2015-01-01
Accumulating quantitative outcome parameters may contribute to constructing a healthcare organization in which outcomes of clinical procedures are reproducible and predictable. In imaging studies, measurements are the principal category of quantitative parameters. The purpose of this work is to develop and evaluate two natural language processing engines that extract finding and organ measurements from narrative radiology reports and to categorize extracted measurements by their "temporality". The measurement extraction engine is developed as a set of regular expressions and was evaluated against a manually created ground truth. Automated categorization of measurement temporality is defined as a machine learning problem, with a ground truth manually developed from a corpus of radiology reports. A maximum entropy model was created using features that characterize the measurement itself and its narrative context, and was evaluated in a ten-fold cross-validation protocol. The measurement extraction engine has precision 0.994 and recall 0.991. Accuracy of the measurement classification engine is 0.960. The work contributes to machine understanding of radiology reports and may find application in software that processes medical data.
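A regular-expression measurement extractor of the kind described might look like the following; the pattern and helper names are illustrative assumptions, not the engine evaluated in the abstract:

```python
import re

# Hypothetical pattern: a number, an optional second dimension
# ("2.3 x 1.8"), and a length unit. Real engines use many such rules.
MEASUREMENT = re.compile(
    r"(?P<value>\d+(?:\.\d+)?)\s*"
    r"(?:x\s*(?P<value2>\d+(?:\.\d+)?)\s*)?"
    r"(?P<unit>mm|cm)\b",
    re.IGNORECASE,
)

def extract_measurements(report):
    """Return (value, optional second dimension, unit) tuples found
    in a narrative report string."""
    return [(m.group("value"), m.group("value2"), m.group("unit").lower())
            for m in MEASUREMENT.finditer(report)]

print(extract_measurements("A 2.3 x 1.8 cm nodule; previously 15 mm."))
# [('2.3', '1.8', 'cm'), ('15', None, 'mm')]
```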
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1990-01-01
The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation, for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.
Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.
2017-09-01
It is well known that passive image correction of turbulence distortions often involves using geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive optic correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach is possible to obtain accurate and highly detailed images through turbulent media. The processing algorithm also requires far fewer iterations than conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.
Automatic Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul
2011-01-01
Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment, DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on the MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing. Our algorithm is robust to segmentation uncertainties, does not need ground truth at lesion level, and is very fast, generating a diagnosis on an average of 4.4 seconds per image on a 2.6 GHz platform with an unoptimised Matlab implementation.
A ground truth based comparative study on clustering of gene expression data.
Zhu, Yitan; Wang, Zuyi; Miller, David J; Clarke, Robert; Xuan, Jianhua; Hoffman, Eric P; Wang, Yue
2008-05-01
Given the variety of available clustering methods for gene expression data analysis, it is important to develop an appropriate and rigorous validation scheme to assess the performance and limitations of the most widely used clustering algorithms. In this paper, we present a ground truth based comparative study on the functionality, accuracy, and stability of five data clustering methods, namely hierarchical clustering, K-means clustering, self-organizing maps, standard finite normal mixture fitting, and a caBIG toolkit (VIsual Statistical Data Analyzer--VISDA), tested on sample clustering of seven published microarray gene expression datasets and one synthetic dataset. We examined the performance of these algorithms in both data-sufficient and data-insufficient cases using quantitative performance measures, including cluster number detection accuracy and mean and standard deviation of partition accuracy. The experimental results showed that VISDA, an interactive coarse-to-fine maximum likelihood fitting algorithm, is a solid performer on most of the datasets, while K-means clustering and self-organizing maps optimized by the mean squared compactness criterion generally produce more stable solutions than the other methods.
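The partition-accuracy measure used in comparisons like the one above can be sketched as follows, assuming a majority-vote mapping from clusters to ground-truth classes; the labels, data, and function name are illustrative, not the study's protocol.

```python
from collections import Counter

def partition_accuracy(true_labels, cluster_ids):
    """Map each cluster to its majority ground-truth class, then score
    the fraction of samples whose cluster maps to their true class."""
    majority = {}
    for c in set(cluster_ids):
        members = [t for t, k in zip(true_labels, cluster_ids) if k == c]
        majority[c] = Counter(members).most_common(1)[0][0]
    hits = sum(1 for t, k in zip(true_labels, cluster_ids) if majority[k] == t)
    return hits / len(true_labels)

truth    = ["A", "A", "A", "B", "B", "B"]
clusters = [0, 0, 1, 1, 1, 1]            # output of some clustering run
acc = partition_accuracy(truth, clusters)  # cluster 0 -> A, cluster 1 -> B
```

Repeating this over many runs gives the mean and standard deviation of partition accuracy reported in such studies.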
Application of Landsat Thematic Mapper data for coastal thermal plume analysis at Diablo Canyon
NASA Technical Reports Server (NTRS)
Gibbons, D. E.; Wukelic, G. E.; Leighton, J. P.; Doyle, M. J.
1989-01-01
The possibility of using Landsat Thematic Mapper (TM) thermal data to derive absolute temperature distributions in coastal waters that receive cooling effluent from a power plant is demonstrated. Landsat TM band 6 (thermal) data acquired on June 18, 1986, for the Diablo Canyon power plant in California were compared to ground truth temperatures measured at the same time. Higher-resolution band 5 (reflectance) data were used to locate power plant discharge and intake positions and identify locations of thermal pixels containing only water, no land. Local radiosonde measurements, used in LOWTRAN 6 adjustments for atmospheric effects, produced corrected ocean surface radiances that, when converted to temperatures, gave values within approximately 0.6 C of ground truth. A contour plot was produced that compared power plant plume temperatures with those of the ocean and coastal environment. It is concluded that Landsat can provide good estimates of absolute temperatures of the coastal power plant thermal plume. Moreover, quantitative information on ambient ocean surface temperature conditions (e.g., upwelling) may enhance interpretation of numerical model prediction.
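The radiance-to-temperature conversion described above is conventionally done with the inverse Planck relation T = K2 / ln(K1/L + 1); a hedged sketch using the published Landsat 5 TM band 6 calibration constants, with an illustrative (not measured) sample radiance:

```python
import math

# Published Landsat 5 TM band 6 calibration constants.
K1 = 607.76   # W / (m^2 * sr * um)
K2 = 1260.56  # kelvin

def brightness_temperature(radiance):
    """At-sensor brightness temperature from spectral radiance."""
    return K2 / math.log(K1 / radiance + 1.0)

# Illustrative atmospherically corrected ocean-surface radiance.
t_kelvin = brightness_temperature(9.5)
t_celsius = t_kelvin - 273.15
```

In the study, LOWTRAN 6 atmospheric correction was applied to the radiances before this kind of conversion, which is what brought the derived temperatures to within about 0.6 C of ground truth.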
Spatial Statistics for Segmenting Histological Structures in H&E Stained Tissue Images.
Nguyen, Luong; Tosun, Akif Burak; Fine, Jeffrey L; Lee, Adrian V; Taylor, D Lansing; Chennubhotla, S Chakra
2017-07-01
Segmenting a broad class of histological structures in transmitted light and/or fluorescence-based images is a prerequisite for determining the pathological basis of cancer, elucidating spatial interactions between histological structures in tumor microenvironments (e.g., tumor infiltrating lymphocytes), facilitating precision medicine studies with deep molecular profiling, and providing an exploratory tool for pathologists. This paper focuses on segmenting histological structures in hematoxylin- and eosin-stained images of breast tissues, e.g., invasive carcinoma, carcinoma in situ, atypical and normal ducts, adipose tissue, and lymphocytes. We propose two graph-theoretic segmentation methods based on local spatial color and nuclei neighborhood statistics. For benchmarking, we curated a data set of 232 high-power field breast tissue images together with expertly annotated ground truth. To accurately model the preference for histological structures (ducts, vessels, tumor nets, adipose, etc.) over the remaining connective tissue and non-tissue areas in ground truth annotations, we propose a new region-based score for evaluating segmentation algorithms. We demonstrate the improvement of our proposed methods over the state-of-the-art algorithms in both region- and boundary-based performance measures.
Allen, Y.C.; Wilson, C.A.; Roberts, H.H.; Supan, J.
2005-01-01
Sidescan sonar holds great promise as a tool to quantitatively depict the distribution and extent of benthic habitats in Louisiana's turbid estuaries. In this study, we describe an effective protocol for acoustic sampling in this environment. We also compared three methods of classification in detail: mean-based thresholding, supervised, and unsupervised techniques to classify sidescan imagery into categories of mud and shell. Classification results were compared to ground truth results using quadrat and dredge sampling. Supervised classification gave the best overall result (kappa = 75%) when compared to quadrat results. Classification accuracy was less robust when compared to all dredge samples (kappa = 21-56%), but increased greatly (90-100%) when only dredge samples taken from acoustically homogeneous areas were considered. Sidescan sonar when combined with ground truth sampling at an appropriate scale can be effectively used to establish an accurate substrate base map for both research applications and shellfish management. The sidescan imagery presented here also provides, for the first time, a detailed presentation of oyster habitat patchiness and scale in a productive oyster growing area.
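The kappa statistic used to score the classifications above can be sketched as follows; the mud/shell label vectors are illustrative, not the study's data.

```python
def cohens_kappa(truth, predicted):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(truth)
    observed = sum(1 for t, p in zip(truth, predicted) if t == p) / n
    labels = set(truth) | set(predicted)
    expected = sum(
        (truth.count(c) / n) * (predicted.count(c) / n) for c in labels
    )
    return (observed - expected) / (1.0 - expected)

truth     = ["mud", "mud", "shell", "shell", "mud", "shell"]
predicted = ["mud", "mud", "shell", "mud",   "mud", "shell"]
kappa = cohens_kappa(truth, predicted)
```

A kappa of 75%, as reported for the supervised classifier against quadrat samples, indicates agreement well above what chance alone would produce.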
NASA Technical Reports Server (NTRS)
Dennis, T. B. (Principal Investigator)
1980-01-01
The author has identified the following significant results. The most apparent contributors to the problem of poor temporal extension of LIST are the drastic changes in the brightness keys and an inadequate set of AI responses in Phase 3. The brightness trajectories change drastically from Phase 3 to the transition year (TY). Removing brightness channels from the discriminant does not completely correct the lack of extendability. Removing brightness increases the accuracy of the extension from Phase 3 to TY from 57.7 percent to 64.18 percent. The removal of the AI keys increases accuracy to 65.76 percent. Although the latter increase appears insignificant when compared to the first, the removal of only the AI keys increased accuracy to 63.58 percent. Proper weighting of the responses explains 73.8 percent of the ground truth labels but only 56.7 percent of the AI labels. By contrast, the TY responses, which were weighted to explain the TY ground truth labels, fared equally well, explaining 73.6 percent of those labels and 87.1 percent of the AI labels.
BLOND, a building-level office environment dataset of typical electrical appliances
NASA Astrophysics Data System (ADS)
Kriechbaumer, Thomas; Jacobsen, Hans-Arno
2018-03-01
Energy metering has gained popularity as conventional meters are replaced by electronic smart meters that promise energy savings and higher comfort levels for occupants. Achieving these goals requires a deeper understanding of consumption patterns to reduce the energy footprint: load profile forecasting, power disaggregation, appliance identification, startup event detection, etc. Publicly available datasets are used to test, verify, and benchmark possible solutions to these problems. For this purpose, we present the BLOND dataset: continuous energy measurements of a typical office environment at high sampling rates with common appliances and load profiles. We provide voltage and current readings for aggregated circuits and matching fully-labeled ground truth data (individual appliance measurements). The dataset contains 53 appliances (16 classes) in a 3-phase power grid. BLOND-50 contains 213 days of measurements sampled at 50kSps (aggregate) and 6.4kSps (individual appliances). BLOND-250 consists of the same setup: 50 days, 250kSps (aggregate), 50kSps (individual appliances). These are the longest continuous measurements at such high sampling rates and fully-labeled ground truth we are aware of.
NASA Astrophysics Data System (ADS)
Kalisperakis, I.; Stentoumis, Ch.; Grammatikopoulos, L.; Karantzalos, K.
2015-08-01
The indirect estimation of leaf area index (LAI) at large spatial scales is crucial for several environmental and agricultural applications. To this end, in this paper, we compare and evaluate LAI estimation in vineyards from different UAV imaging datasets. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthophotomosaics and (iii) 3D crop surface models. The computed canopy levels have been used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (r2 > 73%) with the in-situ, ground truth LAI measurements. As expected, the lowest correlations were derived from the calculated greenness levels of the 2D RGB orthomosaics. The highest correlation rates were established with the hyperspectral canopy greenness and the 3D canopy surface models. For the latter, accurate detection of canopy, soil and other materials in between the vine rows is required. All approaches tend to overestimate LAI in cases with sparse, weak, unhealthy plants and canopy.
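A correlation of the kind reported above (r2 between an image-derived canopy level and ground-truth LAI) reduces to an ordinary least-squares r^2; the paired values below are invented for illustration, not the Nemea measurements.

```python
def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

canopy_level = [0.20, 0.35, 0.50, 0.65, 0.80]  # e.g. canopy pixel fraction
measured_lai = [0.90, 1.40, 2.10, 2.60, 3.30]  # in-situ LAI, same vines
r2 = r_squared(canopy_level, measured_lai)
```

An r2 above 0.73, as in the study, means the canopy-level proxy explains most of the variance in the ground-truth LAI.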
NASA Astrophysics Data System (ADS)
Oommen, T.; Chatterjee, S.
2017-12-01
NASA and the Indian Space Research Organization (ISRO) are generating Earth surface features data using Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) within 380 to 2500 nm spectral range. This research focuses on the utilization of such data to better understand the mineral potential in India and to demonstrate the application of spectral data in rock type discrimination and mapping for mineral exploration by using automated mapping techniques. The primary focus area of this research is the Hutti-Maski greenstone belt, located in Karnataka, India. The AVIRIS-NG data was integrated with field analyzed data (laboratory scaled compositional analysis, mineralogy, and spectral library) to characterize minerals and rock types. An expert system was developed to produce mineral maps from AVIRIS-NG data automatically. The ground truth data from the study areas was obtained from the existing literature and collaborators from India. The Bayesian spectral unmixing algorithm was used in AVIRIS-NG data for endmember selection. The classification maps of the minerals and rock types were developed using support vector machine algorithm. The ground truth data was used to verify the mineral maps.
Comparison of thyroid segmentation techniques for 3D ultrasound
NASA Astrophysics Data System (ADS)
Wunderling, T.; Golla, B.; Poudel, P.; Arens, C.; Friebe, M.; Hansen, C.
2017-02-01
The segmentation of the thyroid in ultrasound images is a field of active research. The thyroid is a gland of the endocrine system and regulates several body functions. Measuring the volume of the thyroid is regular practice of diagnosing pathological changes. In this work, we compare three approaches for semi-automatic thyroid segmentation in freehand-tracked three-dimensional ultrasound images. The approaches are based on level set, graph cut and feature classification. For validation, sixteen 3D ultrasound records were created with ground truth segmentations, which we make publicly available. The properties analyzed are the Dice coefficient when compared against the ground truth reference and the effort of required interaction. Our results show that in terms of Dice coefficient, all algorithms perform similarly. For interaction, however, each algorithm has advantages over the other. The graph cut-based approach gives the practitioner direct influence on the final segmentation. Level set and feature classifier require less interaction, but offer less control over the result. All three compared methods show promising results for future work and provide several possible extensions.
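The Dice coefficient used to compare each segmentation against the ground-truth reference can be sketched as follows; the binary masks are illustrative, flattened for brevity.

```python
def dice(mask_a, mask_b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) over binary masks."""
    a = {i for i, v in enumerate(mask_a) if v}
    b = {i for i, v in enumerate(mask_b) if v}
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

ground_truth = [0, 1, 1, 1, 0, 0, 1, 0]
segmented    = [0, 1, 1, 0, 0, 1, 1, 0]
score = dice(ground_truth, segmented)  # 0.75
```

The same formula extends directly to 3D voxel volumes, which is how it would be applied to the freehand-tracked ultrasound records described above.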
Hoffman, R.A.; Kothari, S.; Phan, J.H.; Wang, M.D.
2016-01-01
Computational analysis of histopathological whole slide images (WSIs) has emerged as a potential means for improving cancer diagnosis and prognosis. However, an open issue relating to the automated processing of WSIs is the identification of biological regions such as tumor, stroma, and necrotic tissue on the slide. We develop a method for classifying WSI portions (512x512-pixel tiles) into biological regions by (1) extracting a set of 461 image features from each WSI tile, (2) optimizing tile-level prediction models using nested cross-validation on a small (600 tile) manually annotated tile-level training set, and (3) validating the models against a much larger (1.7x10^6 tile) data set for which ground truth was available on the whole-slide level. We calculated the predicted prevalence of each tissue region and compared this prevalence to the ground truth prevalence for each image in an independent validation set. Results show significant correlation between the predicted (using automated system) and reported biological region prevalences with p < 0.001 for eight of nine cases considered. PMID:27532012
A large dataset of synthetic SEM images of powder materials and their ground truth 3D structures.
DeCost, Brian L; Holm, Elizabeth A
2016-12-01
This data article presents a data set comprising 2048 synthetic scanning electron microscope (SEM) images of powder materials and descriptions of the corresponding 3D structures that they represent. These images were created using open source rendering software, and the generating scripts are included with the data set. Eight particle size distributions are represented with 256 independent images from each. The particle size distributions are relatively similar to each other, so that the dataset offers a useful benchmark to assess the fidelity of image analysis techniques. The characteristics of the PSDs and the resulting images are described and analyzed in more detail in the research article "Characterizing powder materials using keypoint-based computer vision methods" (B.L. DeCost, E.A. Holm, 2016) [1]. These data are freely available in a Mendeley Data archive "A large dataset of synthetic SEM images of powder materials and their ground truth 3D structures" (B.L. DeCost, E.A. Holm, 2016) located at http://dx.doi.org/10.17632/tj4syyj9mr.1 [2] for any academic, educational, or research purposes.
Field calibration and validation of remote-sensing surveys
Pe'eri, Shachak; McLeod, Andy; Lavoie, Paul; Ackerman, Seth D.; Gardner, James; Parrish, Christopher
2013-01-01
The Optical Collection Suite (OCS) is a ground-truth sampling system designed to perform in situ measurements that help calibrate and validate optical remote-sensing and swath-sonar surveys for mapping and monitoring coastal ecosystems and ocean planning. The OCS system enables researchers to collect underwater imagery with real-time feedback, measure the spectral response, and quantify the water clarity with simple and relatively inexpensive instruments that can be hand-deployed from a small vessel. This article reviews the design and performance of the system, based on operational and logistical considerations, as well as the data requirements to support a number of coastal science and management projects. The OCS system has been operational since 2009 and has been used in several ground-truth missions that overlapped with airborne lidar bathymetry (ALB), hyperspectral imagery (HSI), and swath-sonar bathymetric surveys in the Gulf of Maine, southwest Alaska, and the US Virgin Islands (USVI). Research projects that have used the system include a comparison of backscatter intensity derived from acoustic (multibeam/interferometric sonars) versus active optical (ALB) sensors, ALB bottom detection, and seafloor characterization using HSI and ALB.
A technique for estimating 4D-CBCT using prior knowledge and limited-angle projections.
Zhang, You; Yin, Fang-Fang; Segars, W Paul; Ren, Lei
2013-12-01
To develop a technique to estimate onboard 4D-CBCT using prior information and limited-angle projections for potential 4D target verification of lung radiotherapy. Each phase of onboard 4D-CBCT is considered as a deformation from one selected phase (prior volume) of the planning 4D-CT. The deformation field maps (DFMs) are solved using a motion modeling and free-form deformation (MM-FD) technique. In the MM-FD technique, the DFMs are estimated using a motion model which is extracted from planning 4D-CT based on principal component analysis (PCA). The motion model parameters are optimized by matching the digitally reconstructed radiographs of the deformed volumes to the limited-angle onboard projections (data fidelity constraint). Afterward, the estimated DFMs are fine-tuned using a FD model based on the data fidelity constraint and deformation energy minimization. The 4D digital extended-cardiac-torso phantom was used to evaluate the MM-FD technique. A lung patient with a 30 mm diameter lesion was simulated with various anatomical and respirational changes from planning 4D-CT to onboard volume, including changes of respiration amplitude, lesion size and lesion average-position, and phase shift between lesion and body respiratory cycle. The lesions were contoured in both the estimated and "ground-truth" onboard 4D-CBCT for comparison. 3D volume percentage-difference (VPD) and center-of-mass shift (COMS) were calculated to evaluate the estimation accuracy of three techniques: MM-FD, MM-only, and FD-only. Different onboard projection acquisition scenarios and projection noise levels were simulated to investigate their effects on estimation accuracy. For all simulated patient and projection acquisition scenarios, the mean VPD (±S.D.)/COMS (±S.D.) between lesions in prior images and "ground-truth" onboard images were 136.11% (±42.76%)/15.5 mm (±3.9 mm).
Using orthogonal-view 15°-each scan angle, the mean VPD/COMS between the lesion in estimated and "ground-truth" onboard images for MM-only, FD-only, and MM-FD techniques were 60.10% (±27.17%)/4.9 mm (±3.0 mm), 96.07% (±31.48%)/12.1 mm (±3.9 mm) and 11.45% (±9.37%)/1.3 mm (±1.3 mm), respectively. For orthogonal-view 30°-each scan angle, the corresponding results were 59.16% (±26.66%)/4.9 mm (±3.0 mm), 75.98% (±27.21%)/9.9 mm (±4.0 mm), and 5.22% (±2.12%)/0.5 mm (±0.4 mm). For single-view scan angles of 3°, 30°, and 60°, the results for the MM-FD technique were 32.77% (±17.87%)/3.2 mm (±2.2 mm), 24.57% (±18.18%)/2.9 mm (±2.0 mm), and 10.48% (±9.50%)/1.1 mm (±1.3 mm), respectively. For projection angular-sampling-intervals of 0.6°, 1.2°, and 2.5° with the orthogonal-view 30°-each scan angle, the MM-FD technique generated similar VPD (maximum deviation 2.91%) and COMS (maximum deviation 0.6 mm), while sparser sampling yielded larger VPD/COMS. With an equal number of projections, the estimation results using a scattered 360° scan angle were slightly better than those using the orthogonal-view 30°-each scan angle. The estimation accuracy of the MM-FD technique declined as the noise level increased. The MM-FD technique substantially improves the estimation accuracy of onboard 4D-CBCT using prior planning 4D-CT and limited-angle projections, compared to the MM-only and FD-only techniques. It can potentially be used for inter/intrafractional 4D-localization verification.
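A minimal sketch of the two evaluation metrics named above, under the assumption that VPD is the mismatched (non-overlapping) lesion volume relative to the ground-truth lesion and COMS is the Euclidean distance between centers of mass; the voxel sets are illustrative toy lesions, not the phantom data.

```python
def vpd(est_voxels, truth_voxels):
    """Volume percentage difference: non-overlapping volume of the
    estimated vs ground-truth lesion, relative to the ground truth."""
    est, truth = set(est_voxels), set(truth_voxels)
    mismatch = (est | truth) - (est & truth)
    return 100.0 * len(mismatch) / len(truth)

def coms(est_voxels, truth_voxels):
    """Center-of-mass shift: distance between the two lesion centroids."""
    def com(vox):
        n = len(vox)
        return tuple(sum(c[i] for c in vox) / n for i in range(3))
    a, b = com(est_voxels), com(truth_voxels)
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

truth = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]  # 4-voxel toy lesion
est   = [(1, 0, 0), (2, 0, 0), (1, 1, 0), (2, 1, 0)]  # shifted by 1 voxel in x
v = vpd(est, truth)   # 100.0: half the union lies outside the overlap
s = coms(est, truth)  # 1.0 voxel shift along x
```

On real data the voxel sets would come from the contoured lesions in the estimated and "ground-truth" 4D-CBCT volumes.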
Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation
NASA Astrophysics Data System (ADS)
Tobon-Gomez, Catalina; Sukno, Federico M.; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F.
2012-07-01
Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, P; Schreibmann, E; Fox, T
2014-06-15
Purpose: Severe CT artifacts can impair our ability to accurately calculate proton range, thereby resulting in a clinically unacceptable treatment plan. In this work, we investigated a novel CT artifact correction method based on a coregistered MRI and investigated its ability to estimate CT HU and proton range in the presence of severe CT artifacts. Methods: The proposed method corrects corrupted CT data using a coregistered MRI to guide the mapping of CT values from a nearby artifact-free region. First, patient MRI and CT images were registered using 3D deformable image registration software based on B-spline and mutual information. The CT slice with severe artifacts was selected, as well as a nearby slice free of artifacts (e.g., 1 cm away from the artifact). The two sets of paired MRI and CT images at different slice locations were further registered by applying 2D deformable image registration. Based on the artifact-free paired MRI and CT images, a comprehensive geospatial analysis was performed to predict the correct CT HU of the CT image with severe artifacts. For a proof of concept, a known artifact was introduced that changed the ground truth CT HU value by up to 30% and caused up to 5 cm of error in proton range. The ability of the proposed method to recover the ground truth was quantified using a selected head and neck case. Results: A significant improvement in image quality was observed visually. Our proof of concept study showed that 90% of the area that had 30% errors in CT HU was corrected to within 3% of its ground truth value. Furthermore, the maximum proton range error of up to 5 cm was reduced to a 4 mm error. Conclusion: The MRI-based CT artifact correction method can improve CT image quality and proton range calculation for patients with severe CT artifacts.
A Decade Remote Sensing River Bathymetry with the Experimental Advanced Airborne Research LiDAR
NASA Astrophysics Data System (ADS)
Kinzel, P. J.; Legleiter, C. J.; Nelson, J. M.; Skinner, K.
2012-12-01
Since 2002, the first generation of the Experimental Advanced Airborne Research LiDAR (EAARL-A) sensor has been deployed for mapping rivers and streams. We present and summarize the results of comparisons between ground truth surveys and bathymetry collected by the EAARL-A sensor in a suite of rivers across the United States. These comparisons include reaches on the Platte River (NE), Boise and Deadwood Rivers (ID), Blue and Colorado Rivers (CO), Klamath and Trinity Rivers (CA), and the Shenandoah River (VA). In addition to diverse channel morphologies (braided, single thread, and meandering) these rivers possess a variety of substrates (sand, gravel, and bedrock) and a wide range of optical characteristics which influence the attenuation and scattering of laser energy through the water column. Root mean square errors between ground truth elevations and those measured by the EAARL-A ranged from 0.15-m in rivers with relatively low turbidity and highly reflective sandy bottoms to over 0.5-m in turbid rivers with less reflective substrates. Mapping accuracy with the EAARL-A has proved challenging in pools where bottom returns are either absent in waveforms or are of such low intensity that they are treated as noise by waveform processing algorithms. Resolving bathymetry in shallow depths where near surface and bottom returns are typically convolved also presents difficulties for waveform processing routines. The results of these evaluations provide an empirical framework to discuss the capabilities and limitations of the EAARL-A sensor as well as previous generations of post-processing software for extracting bathymetry from complex waveforms. These experiences and field studies not only provide benchmarks for the evaluation of the next generation of bathymetric LiDARs for use in river mapping, but also highlight the importance of developing and standardizing more rigorous methods to characterize substrate reflectance and in-situ optical properties at study sites. 
They also point out the continued necessity of ground truth data for algorithm refinement and survey verification.
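The accuracy figures in the record above rest on a straightforward root-mean-square-error comparison of paired elevations. A minimal sketch, using made-up survey values rather than the study's data:

```python
import math

def rmse(truth, measured):
    """Root mean square error between paired bed-elevation samples (metres)."""
    residuals = [t - m for t, m in zip(truth, measured)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical paired samples: surveyed vs. LiDAR-derived bed elevations (m)
truth = [101.20, 100.85, 100.40, 99.95, 99.60]
measured = [101.05, 101.00, 100.30, 100.10, 99.75]
print(round(rmse(truth, measured), 3))  # -> 0.141
```

An RMSE near 0.15 m would match the study's best case (low turbidity, reflective sandy bottom); turbid reaches with darker substrates pushed the figure past 0.5 m.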
2014-01-01
Background: Recently it was shown that retinal vessel diameters could be measured using spectral domain optical coherence tomography (OCT). It has also been suggested that retinal vessels manifest different features on spectral domain OCT (SD-OCT) depending on whether they are arteries or veins. Our study aimed to present a reliable SD-OCT assisted method of differentiating retinal arteries from veins. Methods: Patients who underwent circular OCT scans centred at the optic disc using a Spectralis OCT (Heidelberg Engineering, Heidelberg, Germany) were retrospectively reviewed. Individual retinal vessels were identified on infrared reflectance (IR) images and given unique labels for subsequent grading. Vessel types (artery, vein or uncertain) assessed by IR and/or fluorescein angiography (FA) were referenced as ground truth. From OCT, the presence/absence of the hyperreflective lower border reflectivity feature was assessed. Presence of this feature was considered indicative of retinal arteries and compared with the ground truth. Results: A total of 452 vessels from 26 eyes of 18 patients were labelled and 398 with documented vessel type (302 by IR and 96 by FA only) were included in the study. Using SD-OCT, 338 vessels were assigned a final grade, of which 86.4% (292 vessels) were classified correctly. Forty-three vessels (15 arteries and 28 veins) that IR failed to differentiate were correctly classified by SD-OCT. When using only IR-based ground truth for vessel type, the SD-OCT based classification approach reached a sensitivity of 0.8758/0.9297, and a specificity of 0.9297/0.8758 for arteries/veins, respectively. Conclusion: Our method was able to classify retinal arteries and veins with a commercially available SD-OCT alone, and achieved high classification performance. Paired with OCT-based vessel measurements, our study has expanded the potential clinical implication of SD-OCT in the evaluation of a variety of retinal and systemic vascular diseases. PMID:24884611
Energy accounting and optimization for mobile systems
NASA Astrophysics Data System (ADS)
Dong, Mian
Energy accounting determines how much a software process contributes to the total system energy consumption. It is the foundation for evaluating software and has been widely used by operating system based energy management. While various energy accounting policies have been tried, there is no known way to evaluate them directly, simply because it is hard to track every hardware use by software in a heterogeneous multi-core system like modern smartphones and tablets. In this thesis, we provide the ground truth for energy accounting based on multi-player game theory and offer the first evaluation of existing energy accounting policies, revealing their important flaws. The proposed ground truth is based on the Shapley value, a single-value solution to multi-player games whose four axiomatic properties are natural and self-evident for energy accounting. To obtain the Shapley value-based ground truth, one only needs to know whether a process is active during the time under question and the system energy consumption during the same time. We further provide a utility optimization formulation of energy management and show, surprisingly, that energy accounting does not matter for existing energy management solutions that control the energy use of a process by giving it an energy budget, or budget-based energy management (BEM). We show that an optimal energy management (OEM) framework can always outperform BEM. While OEM does not require any form of energy accounting, it is related to the Shapley value in that both require the system energy consumption for all possible combinations of processes under question. We provide a novel system solution that meets this requirement by acquiring system energy consumption in situ for an OS scheduler period, i.e., 10 ms. We report a prototype implementation of both Shapley value-based energy accounting and OEM-based scheduling.
Using this prototype and smartphone workloads, we experimentally demonstrate how erroneous existing energy accounting policies can be, and show that existing BEM solutions are unnecessarily complicated yet underperform OEM by 20%.
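The Shapley-value ground truth described in this record can be computed exactly when the number of processes is small. A sketch under stated assumptions: the process names and energy figures below are invented, and `coalition_energy` stands in for measured whole-system energy for each combination of active processes:

```python
from itertools import combinations
from math import factorial

def shapley(players, coalition_energy):
    """Exact Shapley value: each process's fair share of system energy.
    coalition_energy maps a frozenset of active processes to the measured
    whole-system energy (joules) with exactly those processes running."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Weight of this coalition in the Shapley average
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal energy contribution of p to coalition s
                total += weight * (coalition_energy[s | {p}] - coalition_energy[s])
        phi[p] = total
    return phi

# Hypothetical measurements: idle baseline plus every combination of two apps
energy = {
    frozenset(): 1.0,
    frozenset({"radio"}): 3.0,
    frozenset({"gps"}): 2.0,
    frozenset({"radio", "gps"}): 3.5,
}
shares = shapley(["radio", "gps"], energy)  # radio: 1.75 J, gps: 0.75 J
```

By the efficiency axiom the shares sum to the total attributable energy (3.5 - 1.0 = 2.5 J here), which is what makes the value usable as a ground truth against which accounting policies can be scored.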
Ground truth and detection threshold from WWII naval clean-up in Denmark
NASA Astrophysics Data System (ADS)
Larsen, Tine B.; Dahl-Jensen, Trine; Voss, Peter
2013-04-01
The sea bed below the Danish territorial waters is still littered with unexploded mines and other ammunition from World War II. The mines were air dropped by the RAF and the positions of the mines are unknown. As the mines still pose a potential threat to fishery and other marine activities, the Admiral Danish Fleet under the Danish Navy searches for the mines and destroys them by detonation where they are found. The largest mines destroyed in this manner in 2012 are equivalent to 800 kg TNT each. The Seismological Service at the National Geological Survey of Denmark and Greenland is notified by the navy when ammunition in excess of 100 kg TNT is detonated. The notifications include information about position, detonation time and the estimated amount of explosives. The larger explosions are clearly registered not only on the Danish seismographs, but also on seismographs in the neighbouring countries. This includes the large seismograph arrays in Norway, Sweden, and Finland. Until recently the information from the Danish navy was only utilized to rid the Danish earthquake catalogue of explosions. But the high quality information provided by the navy enables us to use these ground truth events to assess the quality of our earthquake catalogue. The mines are scattered throughout the Danish territorial waters; thus we can use the explosions to test the accuracy of the determined epicentres in all parts of the country. For example, a detonation of 135 kg in Begstrup Vig in the central part of Denmark was located, using Danish, Norwegian and Swedish stations, with an accuracy of less than 2 km from ground truth. A systematic study of the explosions will sharpen our understanding of the seismicity in Denmark and result in a more detailed understanding of the detection threshold. Furthermore, the study will shed light on the sensitivity of the network to various seismograph outages.
Stephens, David; Diesing, Markus
2014-01-01
Detailed seabed substrate maps are increasingly in demand for effective planning and management of marine ecosystems and resources. It has become common to use remotely sensed multibeam echosounder data in the form of bathymetry and acoustic backscatter in conjunction with ground-truth sampling data to inform the mapping of seabed substrates. Whilst, until recently, such data sets have typically been classified by expert interpretation, it is now obvious that more objective, faster and repeatable methods of seabed classification are required. This study compares the performances of a range of supervised classification techniques for predicting substrate type from multibeam echosounder data. The study area is located in the North Sea, off the north-east coast of England. A total of 258 ground-truth samples were classified into four substrate classes. Multibeam bathymetry and backscatter data, and a range of secondary features derived from these datasets were used in this study. Six supervised classification techniques were tested: Classification Trees, Support Vector Machines, k-Nearest Neighbour, Neural Networks, Random Forest and Naive Bayes. Each classifier was trained multiple times using different input features, including i) the two primary features of bathymetry and backscatter, ii) a subset of the features chosen by a feature selection process and iii) all of the input features. The predictive performances of the models were validated using a separate test set of ground-truth samples. The statistical significance of model performances relative to a simple baseline model (Nearest Neighbour predictions on bathymetry and backscatter) were tested to assess the benefits of using more sophisticated approaches. The best performing models were tree based methods and Naive Bayes which achieved accuracies of around 0.8 and kappa coefficients of up to 0.5 on the test set. 
The models that used all input features generally did not perform well, highlighting the need for some means of feature selection.
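For reference, the kappa coefficient reported in this record corrects raw accuracy for chance agreement given the class frequencies. A minimal sketch with invented substrate labels (not the study's samples):

```python
from collections import Counter

def cohens_kappa(truth, predicted):
    """Cohen's kappa: chance-corrected agreement between ground-truth
    and predicted class labels."""
    n = len(truth)
    p_observed = sum(t == p for t, p in zip(truth, predicted)) / n
    t_freq, p_freq = Counter(truth), Counter(predicted)
    # Expected agreement if predictions were drawn at random with these frequencies
    p_chance = sum(t_freq[c] * p_freq[c] for c in t_freq) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical test-set labels for four substrate classes
truth = ["mud", "sand", "sand", "gravel", "rock", "sand", "mud", "gravel"]
predicted = ["mud", "sand", "gravel", "gravel", "rock", "sand", "sand", "gravel"]
kappa = cohens_kappa(truth, predicted)  # accuracy 0.75, kappa ~ 0.65
```

A kappa near 0.5 alongside an accuracy near 0.8, as in the study, signals that a fair share of the raw agreement would be expected by chance alone.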
TU-F-17A-03: A 4D Lung Phantom for Coupled Registration/Segmentation Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markel, D; El Naqa, I; Levesque, I
2014-06-15
Purpose: Coupling the processes of segmentation and registration (regmentation) is a recent development that allows improved efficiency and accuracy for both steps and may improve the clinical feasibility of online adaptive radiotherapy. Presented is a multimodality animal tissue model designed specifically to provide a ground truth to simultaneously evaluate segmentation and registration errors during respiratory motion. Methods: Tumor surrogates were constructed from vacuum sealed hydrated natural sea sponges with catheters used for the injection of PET radiotracer. These contained two compartments allowing for two concentrations of radiotracer mimicking both tumor and background signals. The lungs were inflated to different volumes using an air pump and flow valve and scanned using PET/CT and MRI. Anatomical landmarks were used to evaluate the registration accuracy using an automated bifurcation tracking pipeline for reproducibility. The bifurcation tracking accuracy was assessed using virtual deformations of 2.6 cm, 5.2 cm and 7.8 cm of a CT scan of a corresponding human thorax. Bifurcations were detected in the deformed dataset and compared to known deformation coordinates for 76 points. Results: The bifurcation tracking accuracy was found to have a mean error of −0.94, 0.79 and −0.57 voxels in the left-right, anterior-posterior and inferior-superior axes using a 1×1×5 mm3 resolution after the CT volume was deformed 7.8 cm. The tumor surrogates provided a segmentation ground truth after being registered to the phantom image. Conclusion: A swine lung model in conjunction with vacuum sealed sponges and a bifurcation tracking algorithm is presented that is MRI, PET and CT compatible and anatomically and kinetically realistic. Corresponding software for tracking anatomical landmarks within the phantom shows sub-voxel accuracy. Vacuum sealed sponges provide a realistic tumor surrogate with a known boundary.
A ground truth with minimal uncertainty is thus realized that can be used for comparing the performance of registration and segmentation algorithms.
Law, Bradley; Caccamo, Gabriele; Roe, Paul; Truskinger, Anthony; Brassil, Traecey; Gonsalves, Leroy; McConville, Anna; Stanton, Matthew
2017-09-01
Species distribution models have great potential to efficiently guide management for threatened species, especially for those that are rare or cryptic. We used MaxEnt to develop a regional-scale model for the koala Phascolarctos cinereus at a resolution (250 m) that could be used to guide management. To ensure the model was fit for purpose, we placed emphasis on validating the model using independently-collected field data. We reduced substantial spatial clustering of records in coastal urban areas using a 2-km spatial filter and by modeling separately two subregions separated by the 500-m elevational contour. A bias file was prepared that accounted for variable survey effort. Frequency of wildfire, soil type, floristics and elevation had the highest relative contribution to the model, while a number of other variables made minor contributions. The model was effective in discriminating different habitat suitability classes when compared with koala records not used in modeling. We validated the MaxEnt model at 65 ground-truth sites using independent data on koala occupancy (acoustic sampling) and habitat quality (browse tree availability). Koala bellows (n = 276) were analyzed in an occupancy modeling framework, while site habitat quality was indexed based on browse trees. Field validation demonstrated a linear increase in koala occupancy with higher modeled habitat suitability at ground-truth sites. Similarly, a site habitat quality index at ground-truth sites was correlated positively with modeled habitat suitability. The MaxEnt model provided a better fit to estimated koala occupancy than the site-based habitat quality index, probably because many variables were considered simultaneously by the model rather than just browse species. The positive relationship of the model with both site occupancy and habitat quality indicates that the model is fit for application at relevant management scales.
Field-validated models of similar resolution would assist in guiding management of conservation-dependent species.
Identifying retail food stores to evaluate the food environment.
Hosler, Akiko S; Dharssi, Aliza
2010-07-01
The availability of food stores is the most frequently used measure of the food environment, but identifying them poses a technical challenge. This study evaluated eight administrative lists of retailers for identifying food stores in an urban community. Lists of inspected food stores (IFS), cigarette retailers, liquor licenses, lottery retailers, gasoline retailers, farmers' markets, and authorized WIC (Program for Women, Infants, and Children) and Supplemental Nutrition Assistance Program (SNAP) retailers for Albany, NY were obtained from government agencies. Sensitivity and positive predictive value (PPV) were assessed, using ground-truthing as the validation measure. Stores were also grouped by the number of lists they were documented on, and the proportion of food stores in each group was obtained. Data were collected and analyzed in 2009. A total of 166 stores, including four from ground-truthing, were identified. Forty-three stores were disqualified as a result of having no targeted foods (n=17); being in the access-restricted area of a building (n=15); and being out of business (n=11). Sensitivity was highest in IFS (87.0%), followed by the cigarette retailers' list (76.4%). PPV was highest in the WIC and farmers' markets lists (100%), followed by SNAP (97.8%). None of the lists had both sensitivity and PPV greater than 90%. All stores that were listed by four or more lists were food stores. The proportion of food stores was lowest (33.3%) for stores listed by only one list. Individual lists had limited utility for identifying food stores, but when they were combined, the likelihood of a retail store being a food store could be predicted by the number of lists the store was documented on. This information can be used to increase the efficiency of ground-truthing. Copyright 2010 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
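The sensitivity and PPV figures used throughout these list-validation studies reduce to two set operations on store identifiers. A sketch with hypothetical IDs, not the Albany data:

```python
def list_accuracy(listed, ground_truth):
    """Sensitivity and positive predictive value (PPV) of an administrative
    store list, validated against a ground-truthed set of store IDs."""
    true_positives = len(listed & ground_truth)
    sensitivity = true_positives / len(ground_truth)  # share of real stores captured
    ppv = true_positives / len(listed)                # share of listed entries confirmed
    return sensitivity, ppv

# Hypothetical IDs: 10 verified food stores; a list names 9 stores, 7 of which check out
ground_truth = set(range(1, 11))
listed = {1, 2, 3, 4, 5, 6, 7, 98, 99}
sens, ppv = list_accuracy(listed, ground_truth)  # sens = 0.70, ppv ~ 0.78
```

Pooling several lists amounts to taking unions before the comparison, which raises sensitivity at the cost of PPV; that trade-off is what both food-environment studies quantify.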
Ground-truthing AVIRIS mineral mapping at Cuprite, Nevada
NASA Technical Reports Server (NTRS)
Swayze, Gregg; Clark, Roger N.; Kruse, Fred; Sutley, Steve; Gallagher, Andrea
1992-01-01
Mineral abundance maps of 18 minerals were made of the Cuprite Mining District using 1990 AVIRIS data and the Multiple Spectral Feature Mapping Algorithm (MSFMA) as discussed in Clark et al. This technique uses least-squares fitting between a scaled laboratory reference spectrum and ground calibrated AVIRIS data for each pixel. Multiple spectral features can be fitted for each mineral and an unlimited number of minerals can be mapped simultaneously. Quality of fit and depth from continuum numbers for each mineral are calculated for each pixel and the results displayed as a multicolor image.
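The per-pixel least-squares fitting step at the heart of MSFMA can be illustrated with a single scale-factor fit of a continuum-removed reference feature to a pixel spectrum. This is a simplified sketch, not the published algorithm, and the spectra below are invented:

```python
def fit_scaled_reference(reference, pixel):
    """Least-squares scale factor matching a continuum-removed laboratory
    reference spectrum to a calibrated pixel spectrum.
    Returns the scale (a proxy for feature depth) and the RMS fit residual."""
    # Closed-form least-squares solution for a single multiplicative scale
    scale = sum(r * d for r, d in zip(reference, pixel)) / sum(r * r for r in reference)
    residuals = [d - scale * r for r, d in zip(reference, pixel)]
    rms = (sum(e * e for e in residuals) / len(residuals)) ** 0.5
    return scale, rms

# Hypothetical continuum-removed absorption feature (deviation below the continuum)
reference = [0.00, -0.05, -0.12, -0.05, 0.00]   # laboratory mineral spectrum
pixel = [0.00, -0.026, -0.061, -0.024, 0.001]   # calibrated AVIRIS pixel
scale, rms = fit_scaled_reference(reference, pixel)
```

Repeating such a fit for every diagnostic feature of every candidate mineral, in every pixel, yields the quality-of-fit and depth-from-continuum numbers from which the abundance maps are built.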
Progress in utilization of a mobile laboratory for making storm electricity measurements
NASA Technical Reports Server (NTRS)
Rust, W. David
1988-01-01
A mobile atmospheric science laboratory has been used to intercept and track storms on the Great Plains region of the U.S., with the intention of combining the data obtained with those from Doppler and conventional radars, NASA U-2 aircraft overflights, balloon soundings, and fixed-base storm electricity measurements. The mobile lab has proven to be valuable in the gathering of ground truth verifications for the two commercially operated lightning ground-strike locating systems. Data acquisition has recently been expanded by means of mobile ballooning before and during storms.
NASA Technical Reports Server (NTRS)
1981-01-01
General information and administrative instructions are provided for individuals gathering ground truth data to support research and development techniques for estimating crop acreage and production by remote sensing by satellite. Procedures are given for personal safety with regards to organophosphorus insecticides, for conducting interviews for periodic observations, for coding the crops identified and their growth stages, and for selecting sites for placing rain gages. Forms are included for those citizens agreeing to monitor the gages and record the rainfall. Segment selection is also considered.
Knieps, Melanie; Granhag, Pär A.; Vrij, Aldert
2014-01-01
Prospection is thinking about possible future states of the world. Commitment to perform a future action—commonly referred to as intention—is a specific type of prospection. This knowledge is relevant when trying to assess whether a stated intention is a lie or the truth. An important observation is that thinking of, and committing to, future actions often evoke vivid and detailed mental images. One factor that affects how specific a person experiences these simulations is location-familiarity. The purpose of this study was to examine to what extent location-familiarity moderates how liars and truth tellers describe a mental image in an investigative interview. Liars were instructed to plan a criminal act and truth tellers were instructed to plan a non-criminal act. Before they could carry out these acts, the participants were intercepted and interviewed about the mental images they may have had experienced in this planning phase. Truth tellers told the truth whereas liars used a cover story to mask their criminal intentions. As predicted, the results showed that the truth tellers reported a mental image significantly more often than the liars. If a mental image was reported, the content of the descriptions did not differ between liars and truth tellers. In a post interview questionnaire, the participants rated the vividness (i.e., content and clarity) of their mental images. The ratings revealed that the truth tellers had experienced their mental images more vividly during the planning phase than the liars. In conclusion, this study indicates that both prototypical and specific representations play a role in prospection. Although location-familiarity did not moderate how liars and truth tellers describe their mental images of the future, this study allows some interesting insights into human future thinking. How these findings can be helpful for distinguishing between true and false intentions will be discussed. PMID:25071648
Impact of Conifer Forest Litter on Microwave Emission at L-Band
NASA Technical Reports Server (NTRS)
Kurum, Mehmet; O'Neill, Peggy E.; Lang, Roger H.; Cosh, Michael H.; Joseph, Alicia T.; Jackson, Thomas J.
2011-01-01
This study reports on the utilization of microwave modeling, together with ground truth, and L-band (1.4-GHz) brightness temperatures to investigate the passive microwave characteristics of a conifer forest floor. The microwave data were acquired over a natural Virginia Pine forest in Maryland by a ground-based microwave active/passive instrument system in 2008/2009. Ground measurements of the tree biophysical parameters and forest floor characteristics were obtained during the field campaign. The test site consisted of medium-sized evergreen conifers with an average height of 12 m and average diameter at breast height of 12.6 cm. The site is a typical pine forest site in that there is a surface layer of loose debris/needles and an organic transition layer above the mineral soil. In an effort to characterize and model the impact of the surface litter layer, an experiment was conducted on a day with wet soil conditions, which involved removal of the surface litter layer from one half of the test site while keeping the other half undisturbed. The observations showed a detectable decrease in emissivity for both polarizations after the surface litter layer was removed. A first-order radiative transfer model of the forest stands, including the multilayer nature of the forest floor, is used in conjunction with the ground truth data to compute forest emission. The model calculations reproduced the major features of the experimental data over the entire duration, which included the effects of surface litter and ground moisture content on overall emission. Both theory and experimental results confirm that the litter layer increases the observed canopy brightness temperature and obscures the soil emission.
NASA Technical Reports Server (NTRS)
Harman, R.; Blejer, D.
1990-01-01
The requirements and mathematical specifications for the Gamma Ray Observatory (GRO) Dynamics Simulator are presented. The complete simulator system, which consists of the profile subsystem, simulation control and input/output subsystem, truth model subsystem, onboard computer model subsystem, and postprocessor, is described. The simulator will be used to evaluate and test the attitude determination and control models to be used on board GRO under conditions that simulate the expected in-flight environment.
Assessing Confidence in Pliocene Sea Surface Temperatures to Evaluate Predictive Models
NASA Technical Reports Server (NTRS)
Dowsett, Harry J.; Robinson, Marci M.; Haywood, Alan M.; Hill, Daniel J.; Dolan, Aisling M.; Stoll, Danielle K.; Chan, Wing-Le; Abe-Ouchi, Ayako; Chandler, Mark A.; Rosenbloom, Nan A.; Otto-Bliesner, Bette L.; Bragg, Fran J.; Lunt, Daniel J.; Foley, Kevin M.; Riesselman, Christina R.
2012-01-01
In light of mounting empirical evidence that planetary warming is well underway, the climate research community looks to palaeoclimate research for a ground-truthing measure with which to test the accuracy of future climate simulations. Model experiments that attempt to simulate climates of the past serve to identify both similarities and differences between two climate states and, when compared with simulations run by other models and with geological data, to identify model-specific biases. Uncertainties associated with both the data and the models must be considered in such an exercise. The most recent period of sustained global warmth similar to what is projected for the near future occurred about 3.3–3.0 million years ago, during the Pliocene epoch. Here, we present Pliocene sea surface temperature data, newly characterized in terms of level of confidence, along with initial experimental results from four climate models. We conclude that, in terms of sea surface temperature, models are in good agreement with estimates of Pliocene sea surface temperature in most regions except the North Atlantic. Our analysis indicates that the discrepancy between the Pliocene proxy data and model simulations in the mid-latitudes of the North Atlantic, where models underestimate warming shown by our highest-confidence data, may provide a new perspective and insight into the predictive abilities of these models in simulating a past warm interval in Earth history. This is important because the Pliocene has a number of parallels to present predictions of late twenty-first century climate.
Peat Depth Assessment Using Airborne Geophysical Data for Carbon Stock Modelling
NASA Astrophysics Data System (ADS)
Keaney, Antoinette; McKinley, Jennifer; Ruffell, Alastair; Robinson, Martin; Graham, Conor; Hodgson, Jim; Desissa, Mohammednur
2013-04-01
The Kyoto Agreement demands that all signatory countries have an inventory of their carbon stock, plus possible future changes to this store. This is particularly important for Ireland, where some 16% of the surface is covered by peat bog. Estimates of soil carbon stores are a key component of the required annual returns made by the Irish and UK governments to the Intergovernmental Panel on Climate Change. Saturated peat attenuates gamma-radiation from underlying rocks. This effect can be used to estimate the thickness of peat, within certain limits. This project examines this relationship between peat depth and gamma-radiation using airborne geophysical data generated by the Tellus Survey and newly acquired data collected as part of the EU-funded Tellus Border project, together encompassing Northern Ireland and the border area of the Republic of Ireland. Selected peat bog sites are used to ground truth and evaluate the use of airborne geophysical (radiometric and electromagnetic) data and validate modelled estimates of soil carbon, peat volume and depth to bedrock. Data from two test line sites are presented: one in Bundoran, County Donegal and a second line in Sliabh Beagh, County Monaghan. The plane flew over these areas at different times of the year and at a series of different elevations allowing the data to be assessed temporally with different soil/peat saturation levels. On the ground these flight test lines cover varying surface land use zones allowing future extrapolation of data from the sites. This research applies spatial statistical techniques, including uncertainty estimation in geostatistical prediction and simulation, to investigate and model the use of airborne geophysical data to examine the relationship between reduced radioactivity and peat depth. 
Ground truthing at test line locations and selected peat bog sites involves use of ground penetrating radar, terrestrial LiDAR, peat depth probing, magnetometry, resistivity, handheld gamma-ray spectrometry, moisture content and rainfall monitoring combined with a real-time Differential Global Positioning System (DGPS) to monitor temporal and spatial variability of bog elevations. This research will assist in determining the accuracy and limitations of modelling soil carbon and changes in peat stocks by investigating the attenuation of gamma-radiation from underlying rocks. Tellus Border is supported by the EU INTERREG IVA programme, which is managed by the Special EU Programmes Body in Northern Ireland, the Border Region of Ireland and western Scotland. The Tellus project was funded by the Northern Ireland Department of Enterprise, Trade and Investment and by the Rural Development Programme through the Northern Ireland Programme for Building Sustainable Prosperity.
NASA Technical Reports Server (NTRS)
1972-01-01
Results are presented of the analysis of satellite signal characteristics as influenced by ocean surface roughness and an investigation of sea truth data requirements. The first subject treated is that of postflight waveform reconstruction for the Skylab S-193 radar altimeter. Sea state estimation accuracies are derived based on analytical and hybrid computer simulation techniques. An analysis of near-normal-incidence microwave backscattering from the ocean's surface is accomplished in order to obtain the minimum sea truth data necessary for good agreement between theoretical and experimental scattering results. Sea state bias is examined from the point of view of designing an experiment which will lead to a resolution of the problem. A discussion is given of some deficiencies which were found in the theory underlying the Stilwell technique for spectral measurements.
Bootstrapping Methods Applied for Simulating Laboratory Works
ERIC Educational Resources Information Center
Prodan, Augustin; Campean, Remus
2005-01-01
Purpose: The aim of this work is to implement bootstrapping methods into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical methods to infer the truth from sample…
Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT
NASA Astrophysics Data System (ADS)
Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang
2015-03-01
In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, and their effects on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ± 0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (< 2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer than the true value by 4 cm, dose errors of up to 160% were found. The results answer an important question: to what level of accuracy must each input parameter be determined in order to obtain accurate organ dose results?
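The error-propagation exercise described in this abstract can be emulated with a generic one-at-a-time sensitivity sweep. The sketch below is illustrative only: `dose_model` is an invented linear surrogate, not the study's Monte Carlo code, and all names and coefficients are assumptions.

```python
def sensitivity(model, baseline, perturbations):
    """Percent change in model output when one input deviates from its
    reference (true) value, holding the others fixed."""
    ref = model(**baseline)
    errors = {}
    for name, deltas in perturbations.items():
        for d in deltas:
            shifted = dict(baseline, **{name: baseline[name] + d})
            errors[(name, d)] = 100.0 * (model(**shifted) - ref) / ref
    return errors

# Invented surrogate for organ dose per CTDIvol as a function of three inputs.
def dose_model(hvl_mm_al, beam_width_mm, extra_scan_cm):
    return 1.0 + 0.05 * hvl_mm_al + 0.002 * beam_width_mm + 0.2 * extra_scan_cm

errors = sensitivity(
    dose_model,
    baseline={"hvl_mm_al": 7.0, "beam_width_mm": 38.4, "extra_scan_cm": 2.0},
    perturbations={"hvl_mm_al": [-0.5, 0.5], "extra_scan_cm": [4.0]},
)
```

With a real simulation in place of the surrogate, the same loop answers the paper's question of how finely each input must be known.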
NASA Astrophysics Data System (ADS)
Bonaccorsi, R.; Stoker, C. R.
2006-12-01
The subsurface is the key environment for searching for life on planets lacking surface life. This includes the search for past/present life on Mars, where possible subsurface life could exist [1]. The Mars-Analog-Rio-Tinto-Experiment (MARTE) performed a simulation of a Mars robotic drilling at the RT Borehole#7 Site (~6.07 m), atop a massive-pyrite deposit from the Iberian Pyritic Belt. The RT site is considered an important analog of Sinus Meridiani on Mars, an ideal model analog for a subsurface Martian setting [2], and a relevant example of a deep subsurface microbial community including aerobic and anaerobic chemoautotrophs [4-5]. Searching for microbes or bulk organics of biological origin in a subsurface sample from a planet is a key scientific objective of robotic drilling missions. During the 2005 field experiment, 28 minicores were robotically handled and subsampled for life detection experiments under anti-contamination protocols. Ground truth included visual observation of cores and lab-based Elemental Analysis-Isotope Ratio Mass Spectrometry (EA-IRMS) of bulk organics in hematite- and goethite-rich gossanized tuffs, gossan and clay layers within 0-6 m depth. C-org and N-tot vary by up to four orders of magnitude between the litter (~11 Wt%, 0-1 cm) and mineralized (~3 Wt%, 1-3 cm) layers and the first 6 m of depth (C-org = 0.02-0.38 Wt%). Overall, the distribution/preservation of plant- and soil-derived organics (d13C-org = -26 to -24 per mil) is ten times higher in hematite-poor clays or where rootlets are present (C-org = 0.33 Wt%) than in hematite-rich samples (C-org < 0.01 Wt%). This is consistent with the ATP assay (Lightning-MVP, Biocontrol) for total biomass in subsurface (Borehole#7 ~6.07 m, avg. ~153 RLU) vs. surface soil samples (~1,500-81,449 RLU) [5]. However, the in-situ ATP assay failed to detect the presence of roots during the in-situ life detection experiment. Furthermore, cm-sized roots were overlooked during remote observations.
Finally, ATP luminometry provided insights into potential contamination from core handling and environmental dust loading on cleaned/sterilized control surfaces (e.g., 6,782-36,243 RLU/cm2). Cleanliness/sterility can be maintained by applying a simple sterile protocol under field conditions. Science results from this research will support future astrobiology-driven drilling missions planned for Mars. Specifically, ground truth offers relevant insights for assessing the strengths and limits of in-situ/remote observations vs. laboratory measurements. Results from this experiment will also aid the debate on the advantages/disadvantages of manned vs. robotic drilling missions on Mars or other planets. [1] Boston et al., 1997; [2] http://marte.arc.nasa.gov; [3] Stoker, C., et al., 2006 AbSciCon; [4] Stoker et al., submitted; [5] Bonaccorsi et al., 2006 AbSciCon.
Realism without truth: a review of Giere's science without laws and scientific perspectivism.
Hackenberg, Timothy D
2009-05-01
An increasingly popular view among philosophers of science is that of science as action: the collective activity of scientists working in socially-coordinated communities. Scientists are seen not as dispassionate pursuers of Truth, but as active participants in a social enterprise, and science is viewed on a continuum with other human activities. Taken to an extreme, the science-as-social-process view implies that science is no different from any other human activity, and therefore can make no privileged claims about its knowledge of the world. Such extreme views are normally contrasted with equally extreme views of classical science as uncovering Universal Truth. In Science Without Laws and Scientific Perspectivism, Giere outlines an approach to understanding science that finds a middle ground between these extremes. He acknowledges that science occurs in a social and historical context, and that scientific models are constructions designed and created to serve human ends. At the same time, however, scientific models correspond to parts of the world in ways that can legitimately be termed objective. Giere's position, perspectival realism, shares important common ground with Skinner's writings on science, some of which are explored in this review. Perhaps most fundamentally, Giere shares with Skinner the view that science itself is amenable to scientific inquiry: scientific principles can and should be brought to bear on the process of science. The two approaches offer different but complementary perspectives on the nature of science, both of which are needed in a comprehensive understanding of science.
NASA Astrophysics Data System (ADS)
Negraru, Petru; Golden, Paul
2017-04-01
Long-term ground truth observations were collected at two infrasound arrays in Nevada to investigate how seasonal atmospheric variations affect the detection, traveltime and signal characteristics (azimuth, trace velocity, frequency content and amplitudes) of infrasonic arrivals at regional distances. The arrays were located in different azimuthal directions from a munition disposal facility in Nevada. FNIAR, located 154 km north of the source, has a high detection rate throughout the year. Over 90 per cent of the detonations have traveltimes indicative of stratospheric arrivals, while tropospheric waveguides are observed from only 27 per cent of the detonations. The second array, DNIAR, located 293 km southeast of the source, exhibits strong seasonal variations, with high stratospheric detection rates in winter and the virtual absence of stratospheric arrivals in summer. Tropospheric waveguides and thermospheric arrivals are also observed for DNIAR. Modeling with Naval Research Laboratory Ground-to-Space atmospheric sound speeds leads to mixed results: FNIAR arrivals are usually not predicted to be present at all (either stratospheric or tropospheric), while DNIAR arrivals are usually correctly predicted, but summer arrivals show a consistent traveltime bias. Finally, we show the possible improvement in location using empirically calibrated traveltime and azimuth observations. Using Bayesian Infrasound Source Localization, we show that we can decrease the area enclosed by the 90 per cent credibility contours by a factor of 2.5.
Coarse Scale In Situ Albedo Observations over Heterogeneous Land Surfaces and Validation Strategy
NASA Astrophysics Data System (ADS)
Xiao, Q.; Wu, X.; Wen, J.; BAI, J., Sr.
2017-12-01
To evaluate and improve the quality of coarse-pixel land surface albedo products, validation against ground measurements of albedo is crucial over spatially and temporally heterogeneous land surfaces. The performance of albedo validation depends on the quality of ground-based albedo measurements at the corresponding coarse-pixel scale, which can be conceptualized as the "truth" value of albedo at that scale. Wireless sensor network (WSN) technology enables continuous observation at the coarse-pixel scale. Taking albedo products as an example, this paper is dedicated to the validation of coarse-scale albedo products over heterogeneous surfaces based on WSN observations, aiming to narrow the uncertainty caused by the spatial scaling mismatch between satellite and ground measurements. The reference value of albedo at the coarse-pixel scale can be obtained through an upscaling transform function based on all of the observations for that pixel. In future work we will further improve and develop new methods that are better able to account for the spatio-temporal characteristics of surface albedo. Additionally, how to use widely distributed single-site measurements over heterogeneous surfaces remains an open question. Keywords: Remote sensing; Albedo; Validation; Wireless sensor network (WSN); Upscaling; Heterogeneous land surface; Albedo truth at coarse-pixel scale
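A minimal sketch of the upscaling transform function mentioned above, assuming a simple area-weighted average of site measurements; the function name and signature are our own, not the authors' method.

```python
def upscale_albedo(site_albedos, area_fractions):
    """Coarse-pixel reference ("truth") albedo as an area-weighted average of
    WSN site measurements; area_fractions are each site's footprint share of
    the coarse pixel and need not sum to 1."""
    total = sum(area_fractions)
    if total <= 0:
        raise ValueError("area fractions must sum to a positive value")
    return sum(a * w for a, w in zip(site_albedos, area_fractions)) / total
```

For example, two sites with albedos 0.1 and 0.5 covering 75% and 25% of the pixel yield a coarse-pixel reference of 0.2; a real transform would additionally weight for sensor footprint and temporal sampling.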
MAX-91: Polarimetric SAR results on Montespertoli site
NASA Technical Reports Server (NTRS)
Baronti, S.; Luciani, S.; Moretti, S.; Paloscia, S.; Schiavon, G.; Sigismondi, S.
1993-01-01
The polarimetric Synthetic Aperture Radar (SAR) is a powerful sensor for high resolution ocean and land mapping and particularly for monitoring hydrological parameters in large watersheds. There is currently much research in progress to assess the SAR operational capability as well as to estimate the accuracy achievable in the measurements of geophysical parameters with the presently available airborne and spaceborne sensors. An important goal of this research is to improve our understanding of the basic mechanisms that control the interaction of electromagnetic waves with soil and vegetation. This can be done both by developing electromagnetic models and by analyzing statistical relations between backscattering and ground truth data. A systematic investigation, which aims at a better understanding of the information obtainable from the multi-frequency polarimetric SAR to be used in agro-hydrology, is in progress by our groups within the framework of the SIR-C/X-SAR Project and has achieved a most significant milestone with the NASA/JPL Aircraft Campaign named MAC-91. Indeed, this experiment allowed us to collect a large and meaningful data set including multi-temporal multi-frequency polarimetric SAR measurements and ground truth. This paper presents some significant results obtained over an agricultural flat area within the Montespertoli site, where intensive ground measurements were carried out. The results are critically discussed with special regard to the information associated with polarimetric data.
Taxonomy of USA east coast fishing communities in terms of social vulnerability and resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollnac, Richard B., E-mail: pollnac3@gmail.com; Seara, Tarsila, E-mail: tarsila.seara@noaa.gov; Colburn, Lisa L., E-mail: lisa.l.colburn@noaa.gov
Increased concern with the impacts that changing coastal environments can have on coastal fishing communities led to a recent effort by NOAA Fisheries social scientists to develop a set of indicators of social vulnerability and resilience for the U.S. Southeast and Northeast coastal communities. A goal of the NOAA Fisheries social vulnerability and resilience indicator program is to support time- and cost-effective use of readily available data in furtherance of both social impact assessments of proposed changes to fishery management regulations and climate change adaptation planning. The use of the indicators to predict the response to change in coastal communities would be enhanced if community-level analyses could be grouped effectively. This study examines the usefulness of combining 1130 communities into 35 relevant subgroups by comparing results of a numerical taxonomy with data collected by interview methods, a process herein referred to as “ground-truthing.” The validation of the taxonomic method by ground-truthing indicates that the clusters are adequate for selecting communities for in-depth research. Highlights: • We develop a taxonomy of fishing communities based on vulnerability indicators. • We validate the community clusters through the use of surveys (“ground-truthing”). • Clusters differ along important aspects of fishing community vulnerability. • Clustering communities allows for accurate and timely social impact assessments.
Jaton, Florian
2017-01-01
This article documents the practical efforts of a group of scientists designing an image-processing algorithm for saliency detection. By following the actors of this computer science project, the article shows that the problems often considered to be the starting points of computational models are in fact provisional results of time-consuming, collective and highly material processes that engage habits, desires, skills and values. In the project being studied, problematization processes lead to the constitution of referential databases called ‘ground truths’ that enable both the effective shaping of algorithms and the evaluation of their performances. Working as important common touchstones for research communities in image processing, the ground truths are inherited from prior problematization processes and may be imparted to subsequent ones. The ethnographic results of this study suggest two complementary analytical perspectives on algorithms: (1) an ‘axiomatic’ perspective that understands algorithms as sets of instructions designed to solve given problems computationally in the best possible way, and (2) a ‘problem-oriented’ perspective that understands algorithms as sets of instructions designed to computationally retrieve outputs designed and designated during specific problematization processes. If the axiomatic perspective on algorithms puts the emphasis on the numerical transformations of inputs into outputs, the problem-oriented perspective puts the emphasis on the definition of both inputs and outputs. PMID:28950802
Ground Truth Mineralogy vs. Orbital Observations at the Bagnold Dune Field
NASA Technical Reports Server (NTRS)
Achilles, C. N.; Downs, R. T.; Ming, D. W.; Rampe, E. B.; Morris, R. V.; Treiman, A. H.; Morrison, S. M.; Blake, D. F.; Vaniman, D. T.; Bristow, T. F.
2017-01-01
The Mars Science Laboratory (MSL) rover, Curiosity, is analyzing rock and sediments in Gale crater to provide in situ sedimentological, geochemical, and mineralogical assessments of the crater's geologic history. Curiosity's recent traverse through an active, basaltic eolian deposit, informally named the Bagnold Dunes, provided the opportunity for a multi-instrument investigation of the dune field.
Accuracy assessment of percent canopy cover, cover type, and size class
H. T. Schreuder; S. Bain; R. C. Czaplewski
2003-01-01
Truth for vegetation cover percent and type is obtained from very large-scale photography (VLSP), stand structure as measured by size classes, and vegetation types from a combination of VLSP and ground sampling. We recommend using the Kappa statistic with bootstrap confidence intervals for overall accuracy, and similarly bootstrap confidence intervals for percent...
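The recommendation above, the Kappa statistic with bootstrap confidence intervals, can be sketched in a few lines. This is a minimal illustration rather than the authors' software, and the function names are our own.

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Overall accuracy corrected for chance agreement."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    po = np.mean(y_true == y_pred)   # observed agreement
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)  # guard degenerate case

def kappa_bootstrap_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample truth/map label pairs with replacement."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    stats = [cohens_kappa(y_true[idx], y_pred[idx])
             for idx in (rng.integers(0, len(y_true), len(y_true))
                         for _ in range(n_boot))]
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))
```

Here `y_true` would hold the VLSP/ground-sampled classes and `y_pred` the mapped classes at the same plots; resampling plots with replacement yields the percentile interval.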
Man and the Biosphere: Ground Truthing Coral Reefs for the St. John Island Biosphere Reserve.
ERIC Educational Resources Information Center
Brody, Michael J.; And Others
Research on the coral species composition of St. John's reefs in the Virgin Islands was conducted through the School for Field Studies (SFS) Coral Reef Ecology course (winter 1984). A cooperative study program based on the United Nations Educational, Scientific, and Cultural Organization's (Unesco) program, Man and the Biosphere, was undertaken by…
Healing the Past through Story
ERIC Educational Resources Information Center
Mullet, Judy H.; Akerson, Nels M. K.; Turman, Allison
2013-01-01
Stories matter, and the stories we tell ourselves matter most. Truth has many layers, and narrative helps us make sense of our multilayered reality. We live a personal narrative that is grounded in our past experience but embodied in our present. As such, it filters what we see and how we interpret events. Attachment theorists tell us our early…
Data and Network Science for Noisy Heterogeneous Systems
ERIC Educational Resources Information Center
Rider, Andrew Kent
2013-01-01
Data in many growing fields has an underlying network structure that can be taken advantage of. In this dissertation we apply data and network science to problems in the domains of systems biology and healthcare. Data challenges in these fields include noisy, heterogeneous data, and a lack of ground truth. The primary thesis of this work is that…
Cloud Study Investigators: Using NASA's CERES S'COOL in Problem-Based Learning
ERIC Educational Resources Information Center
Moore, Susan; Popiolkowski, Gary
2011-01-01
This article describes how, by incorporating NASA's Students' Cloud Observations On-Line (S'COOL) project into a problem-based learning (PBL) activity, middle school students are engaged in authentic scientific research where they observe and record information about clouds and contribute ground truth data to NASA's Clouds and the Earth's…
Forest statistics for Arkansas' delta counties
Richard T. Quick; Mary S. Hedlund
1979-01-01
These tables were derived from data obtained during a 1978 inventory of 21 counties comprising the North and South Delta Units of Arkansas (fig. 1). Forest area was estimated from aerial photos with an adjustment for ground truth at selected locations. Sample plots were systematically established at three-mile intervals using a grid oriented roughly N-S and E-W. At...
Forest statistics for Arkansas' Ouachita counties
T. Richard Quick; Mary S. Hedlund
1979-01-01
These tables were derived from data obtained during a 1978 inventory of 10 counties comprising the Ouachita Unit of Arkansas (fig. 1). Forest area was estimated from aerial photos with an adjustment for ground truth at selected locations. Sample plots were systematically established at three-mile intervals using a grid oriented roughly N-S and E-W. At each location,...
Forest statistics for Arkansas' Ozark counties
T. Richard Quick; Mary S. Hedlund
1979-01-01
These tables were derived from data obtained during a 1978 inventory of 24 counties comprising the Ozark Unit of Arkansas (fig. 1). Forest area was estimated from aerial photos with an adjustment for ground truth at selected locations. Sample plots were systematically established at three-mile intervals using a grid oriented roughly N-S and E-W. At each location,...
Exploring Meaning in Life in the Tel Hai Gifted Children’s Center
ERIC Educational Resources Information Center
Kasler, Jon; Goldfarb-Rivlin, Sima; Levi, Jossef; Elias, Maurice J.
2013-01-01
While high IQ is likely to be an advantage in moral reasoning, it does not guarantee students' putting those morals into practice. A clearly defined sense of purpose grounded in values of social responsibility, exploration of values, and the search for ultimate truths, both personal and collective, is paramount. The Meaning in Life program in…
Modifications to the accuracy assessment analysis routine SPATL to produce an output file
NASA Technical Reports Server (NTRS)
Carnes, J. G.
1978-01-01
SPATL is an analysis program in the Accuracy Assessment Software System which makes comparisons between ground truth information and dot labeling for an individual segment. In order to facilitate the aggregation of this information, SPATL was modified to produce a disk output file containing the necessary information about each segment.
No Neutral Ground: Standing by the Values We Prize in Higher Education.
ERIC Educational Resources Information Center
Young, Robert B.
This book is a call to those within higher education to remain clear and consistent about the core values--service, truth, freedom, equality, individuation, justice, and community--that play a critical role in American society. It provides suggestions to help administrators and faculty to incorporate these values into their own practice and…
Margaret Brittingham; Patrick Drohan; Joseph Bishop
2013-01-01
Marcellus shale development is occurring rapidly across Pennsylvania. We conducted a geographic information system (GIS) analysis using available Pennsylvania Department of Environmental Protection permit data, before and after photos, ground-truthing, and field measurements to describe landscape change within the first 3 years of active Marcellus exploration and...
Application of LANDSAT TM images to assess circulation and dispersion in coastal lagoons
NASA Technical Reports Server (NTRS)
Kjerfve, B.; Jensen, J. R.; Magill, K. E.
1986-01-01
The main objectives are formulated around a four pronged work approach, consisting of tasks related to: image processing and analysis of LANDSAT thematic mapping; numerical modeling of circulation and dispersion; hydrographic and spectral radiation field sampling/ground truth data collection; and special efforts to focus the investigation on turbid coastal/estuarine fronts.
Regrounding in Place: Paths to Native American Truths at the Margins
ERIC Educational Resources Information Center
Lucas, Michael
2013-01-01
Margin acts as ground to receive the figure of the text. Margin is initially unreadable, but as suggested by gestalt studies, may be reversed, or regrounded. A humanities course, "Native American Architecture and Place," was created for a polytechnic student population, looking to place as an inroad for access to the margins of a better…
SpinSat Mission Ground Truth Characterization
2014-09-01
launch via the SpaceX Falcon 9 CRS-4 mission on 12 Sept 2014 and is to be deployed from the International Space Station (ISS) on 29 Sept 2014. ... ISS as part of the soft-stow cargo allotment on the SpaceX Dragon spacecraft, launched by the SpaceX Falcon 9 two-stage-to-orbit launch vehicle during ...
Stability analysis for a multi-camera photogrammetric system.
Habib, Ayman; Detchev, Ivan; Kwak, Eunju
2014-08-18
Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of the issue of interior orientation parameter variation over time, explains the common ways of coping with it, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experimental results are shown in which a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.
Mammogram registration: a phantom-based evaluation of compressed breast thickness variation effects.
Richard, Frédéric J P; Bakić, Predrag R; Maidment, Andrew D A
2006-02-01
The temporal comparison of mammograms is complex; a wide variety of factors can cause changes in image appearance. Mammogram registration is proposed as a method to reduce the effects of these changes and potentially to emphasize genuine alterations in breast tissue. Evaluation of such registration techniques is difficult since ground truth regarding breast deformations is not available in clinical mammograms. In this paper, we propose a systematic approach to evaluate sensitivity of registration methods to various types of changes in mammograms using synthetic breast images with known deformations. As a first step, images of the same simulated breasts with various amounts of simulated physical compression have been used to evaluate a previously described nonrigid mammogram registration technique. Registration performance is measured by calculating the average displacement error over a set of evaluation points identified in mammogram pairs. Applying appropriate thickness compensation and using a preferred order of the registered images, we obtained an average displacement error of 1.6 mm for mammograms with compression differences of 1-3 cm. The proposed methodology is applicable to analysis of other sources of mammogram differences and can be extended to the registration of multimodality breast data.
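The registration error metric used above, the average displacement over a set of evaluation points, is straightforward to compute; a minimal sketch, with names of our own choosing:

```python
import numpy as np

def mean_displacement_error(registered_pts, truth_pts):
    """Average Euclidean distance (in the coordinates' units, e.g. mm)
    between registered point positions and their known ground-truth
    positions in the synthetic phantom."""
    registered = np.asarray(registered_pts, dtype=float)
    truth = np.asarray(truth_pts, dtype=float)
    return float(np.linalg.norm(registered - truth, axis=1).mean())
```

Because the phantom deformations are known, `truth_pts` are exact, which is precisely what clinical mammograms cannot provide.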
Bio-inspired benchmark generator for extracellular multi-unit recordings
Mondragón-González, Sirenia Lizbeth; Burguière, Eric
2017-01-01
The analysis of multi-unit extracellular recordings of brain activity has led to the development of numerous tools, ranging from signal processing algorithms to electronic devices and applications. Currently, the evaluation and optimisation of these tools are hampered by the lack of ground-truth databases of neural signals. These databases must be parameterisable, easy to generate and bio-inspired, i.e. containing features encountered in real electrophysiological recording sessions. Towards that end, this article introduces an original computational approach to create fully annotated and parameterised benchmark datasets, generated from the summation of three components: neural signals from compartmental models and recorded extracellular spikes, non-stationary slow oscillations, and a variety of artefact types. We present three application examples. (1) We reproduced in-vivo extracellular hippocampal multi-unit recordings from either tetrode or polytrode designs. (2) We simulated recordings in two different experimental conditions: anaesthetised and awake subjects. (3) Last, we conducted a series of simulations to study the impact of different levels of artefacts on extracellular recordings and their influence in the frequency domain. Beyond the results presented here, such a benchmark dataset generator has many applications, such as calibration, evaluation and development of both hardware and software architectures. PMID:28233819
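The three-component summation idea (neural signal + slow oscillation + artefacts) can be illustrated with a toy generator. This is not the authors' benchmark tool; every shape, rate, and amplitude below is an arbitrary assumption.

```python
import numpy as np

def benchmark_trace(n_samples, fs=30000, n_spikes=20, seed=0):
    """Toy annotated trace: noise + slow oscillation + spike train + artefacts.
    Returns the trace and the ground-truth spike onset indices."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples) / fs
    trace = 0.05 * rng.standard_normal(n_samples)   # background noise
    trace += 0.2 * np.sin(2 * np.pi * 2.0 * t)      # slow 2 Hz oscillation
    template = -np.exp(-np.arange(30) / 5.0)        # crude spike waveform
    onsets = np.sort(rng.choice(n_samples - 30, n_spikes, replace=False))
    for s in onsets:                                # add spikes at known times
        trace[s:s + 30] += template
    artefacts = rng.choice(n_samples, 3, replace=False)
    trace[artefacts] += 3.0                         # impulsive artefacts
    return trace, onsets
```

Because the generator returns the onset indices it used, a spike-sorting tool run on `trace` can be scored against an exact ground truth, which is the point of such benchmarks.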
Ensemble stacking mitigates biases in inference of synaptic connectivity.
Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N
2018-01-01
A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
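A linear stacking of per-method connection scores, as described above, can be sketched with ordinary least squares on a labeled (ground-truth) subset; the function names here are assumptions, not the paper's code.

```python
import numpy as np

def fit_stack_weights(method_scores, truth):
    """Least-squares weights (with intercept) combining per-method
    connection-score matrices against known ground-truth connectivity."""
    X = np.column_stack([np.ones(truth.size)] +
                        [s.ravel() for s in method_scores])
    w, *_ = np.linalg.lstsq(X, truth.ravel(), rcond=None)
    return w

def ensemble_scores(method_scores, w):
    """Apply fitted weights to produce a combined connection-score matrix."""
    X = np.column_stack([s.ravel() for s in method_scores])
    return (w[0] + X @ w[1:]).reshape(method_scores[0].shape)
```

In practice the weights would be fit on simulated networks where connectivity is known and then applied to scores inferred from experimental spike trains.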
Stability Analysis for a Multi-Camera Photogrammetric System
Habib, Ayman; Detchev, Ivan; Kwak, Eunju
2014-01-01
Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, it explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction. PMID:25196012
Sakieh, Yousef; Salmanmahiny, Abdolrassoul
2016-03-01
Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape-metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics was employed, including number of patches, edge density, mean Euclidean nearest-neighbor distance, largest patch index, class area, landscape shape index, and splitting index. The model takes advantage of three decision rules: neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting index showed insignificant differences between the spatial patterns of the ground-truth and simulated layers, there was considerable inconsistency between the simulation results and the real dataset in terms of the remaining metrics. Specifically, the simulation outputs were simplistic, and the model tended to underestimate the number of developed patches, producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial-heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as its main characteristic, the proposed method uses landscape metrics to exploit the maximum potential of the observed and simulated layers in a performance evaluation procedure, provides a basis for more robust interpretation of the calibration process, and deepens the modeler's insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.
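Two of the configuration metrics above, number of patches (NP) and largest patch index (LPI), can be computed from a categorical raster with a single connected-component pass. A minimal sketch using 4-connectivity and toy 4×4 rasters standing in for the ground-truth and simulated layers (not the study's data):

```python
import numpy as np
from collections import deque

def patch_sizes(grid, cls):
    """Sizes of 4-connected patches of class `cls` in a categorical raster."""
    mask = grid == cls
    seen = np.zeros_like(mask, dtype=bool)
    sizes = []
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        seen[i, j] = True
        queue, size = deque([(i, j)]), 0
        while queue:                      # breadth-first flood fill
            r, c = queue.popleft()
            size += 1
            for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not seen[rr, cc]):
                    seen[rr, cc] = True
                    queue.append((rr, cc))
        sizes.append(size)
    return sizes

truth_map = np.array([[1, 1, 0, 0],
                      [1, 0, 0, 1],
                      [0, 0, 0, 1],
                      [1, 0, 1, 0]])
sim_map = np.array([[1, 1, 0, 0],
                    [1, 1, 0, 0],
                    [0, 0, 1, 1],
                    [0, 0, 1, 1]])

for name, grid in (("truth", truth_map), ("simulated", sim_map)):
    sizes = patch_sizes(grid, 1)
    lpi = 100.0 * max(sizes) / grid.size  # largest patch index, % of landscape
    print(name, "NP =", len(sizes), "LPI =", lpi)
```

Here the toy simulated layer roughly matches the class area but halves the patch count, the kind of compaction bias the abstract describes NP and LPI exposing.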
Fast image-based mitral valve simulation from individualized geometry.
Villard, Pierre-Frederic; Hammer, Peter E; Perrin, Douglas P; Del Nido, Pedro J; Howe, Robert D
2018-04-01
Common surgical procedures on the mitral valve of the heart include modifications to the chordae tendineae. Such interventions are used when there is extensive leaflet prolapse caused by chordae rupture or elongation. Understanding the role of individual chordae tendineae before operating could be helpful to predict whether the mitral valve will be competent at peak systole. Biomechanical modelling and simulation can achieve this goal. We present a method to semi-automatically build a computational model of a mitral valve from micro CT (computed tomography) scans: after manually picking chordae fiducial points, the leaflets are segmented and the boundary conditions as well as the loading conditions are automatically defined. Fast finite element method (FEM) simulation is carried out using Simulation Open Framework Architecture (SOFA) to reproduce leaflet closure at peak systole. We develop three metrics to evaluate simulation results: (i) point-to-surface error with the ground truth reference extracted from the CT image, (ii) coaptation surface area of the leaflets and (iii) an indication of whether the simulated closed leaflets leak. We validate our method on three explanted porcine hearts and show that our model predicts the closed valve surface with point-to-surface error of approximately 1 mm, a reasonable coaptation surface area, and absence of any leak at peak systole (maximum closed pressure). We also evaluate the sensitivity of our model to changes in various parameters (tissue elasticity, mesh accuracy, and the transformation matrix used for CT scan registration). We also measure the influence of the positions of the chordae tendineae on simulation results and show that marginal chordae have a greater influence on the final shape than intermediate chordae. The mitral valve simulation can help the surgeon understand valve behaviour and anticipate the outcome of a procedure. Copyright © 2018 John Wiley & Sons, Ltd.
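Metric (i), the point-to-surface error, can be approximated by a discrete nearest-neighbor distance between sampled meshes. A minimal sketch in which dense point samples stand in for the leaflet surfaces (an illustration, not the paper's implementation):

```python
import numpy as np

def point_to_surface_error(sim_pts, ref_pts):
    """Mean distance from each simulated point to its nearest ground-truth
    point -- a discrete stand-in for point-to-surface distance."""
    d = np.linalg.norm(sim_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Toy example (units: mm): ground-truth surface sampled on the plane z = 0;
# the "simulated" closed leaflet sits exactly 1 mm above it.
u, v = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
ref = np.column_stack([u.ravel(), v.ravel(), np.zeros(u.size)])
sim = ref + np.array([0.0, 0.0, 1.0])

print(point_to_surface_error(sim, ref))  # → 1.0
```

This form is one-sided; a symmetric variant averages the sim-to-ref and ref-to-sim distances, and for large meshes a k-d tree replaces the brute-force distance matrix.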
Comparison of satellite reflectance algorithms for estimating ...
We analyzed 10 established and 4 new satellite reflectance algorithms for estimating chlorophyll-a (Chl-a) in a temperate reservoir in southwest Ohio using coincident hyperspectral aircraft imagery and dense water truth collected within one hour of image acquisition to develop simple proxies for algal blooms and to facilitate portability between multispectral satellite imagers for regional algal bloom monitoring. Narrow band hyperspectral aircraft images were upscaled spectrally and spatially to simulate 5 current and near future satellite imaging systems. Established and new Chl-a algorithms were then applied to the synthetic satellite images and then compared to calibrated Chl-a water truth measurements collected from 44 sites within one hour of aircraft acquisition of the imagery. Masks based on the spatial resolution of the synthetic satellite imagery were then applied to eliminate mixed pixels including vegetated shorelines. Medium-resolution Landsat and finer resolution data were evaluated against 29 coincident water truth sites. Coarse-resolution MODIS and MERIS-like data were evaluated against 9 coincident water truth sites. Each synthetic satellite data set was then evaluated for the performance of a variety of spectrally appropriate algorithms with regard to the estimation of Chl-a concentrations against the water truth data set. The goal is to inform water resource decisions on the appropriate satellite data acquisition and processing for the es
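One widely used family of Chl-a algorithms is the two-band NIR/red reflectance ratio. The sketch below shows the generic form together with a least-squares calibration against water-truth samples; the coefficients, reflectance values, and sample size are synthetic illustrations, not the values calibrated in this work:

```python
import numpy as np

def two_band_chla(r_nir, r_red, a, b):
    """Generic two-band NIR/red ratio proxy for Chl-a (mg/m^3).
    Coefficients a, b must be calibrated against coincident water truth."""
    return a * (np.asarray(r_nir) / np.asarray(r_red)) + b

# Synthetic "water truth" calibration set (real data would be the field
# measurements collected within one hour of image acquisition).
rng = np.random.default_rng(2)
ratio = rng.uniform(0.8, 2.0, 30)                    # NIR/red band ratios
chla_truth = 50 * ratio - 20 + rng.normal(0, 2, 30)  # synthetic Chl-a, mg/m^3
r_red = rng.uniform(0.02, 0.05, 30)
r_nir = r_red * ratio

# Fit a, b by ordinary least squares
A = np.column_stack([ratio, np.ones_like(ratio)])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, chla_truth, rcond=None)

est = two_band_chla(r_nir, r_red, a_fit, b_fit)
rmse = float(np.sqrt(np.mean((est - chla_truth) ** 2)))
```

Evaluating such fits per synthetic sensor, with only the bands that sensor carries, is essentially the algorithm-comparison procedure the abstract describes.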
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernatowicz, K., E-mail: kingab@student.ethz.ch; Knopf, A.; Lomax, A.
Purpose: Prospective respiratory-gated 4D CT has been shown to reduce tumor image artifacts by up to 50% compared to conventional 4D CT. However, to date no studies have quantified the impact of gated 4D CT on normal lung tissue imaging, which is important in performing dose calculations based on accurate estimates of lung volume and structure. To determine the impact of gated 4D CT on thoracic image quality, the authors developed a novel simulation framework incorporating a realistic deformable digital phantom driven by patient tumor motion patterns. Based on this framework, the authors test the hypothesis that respiratory-gated 4D CT can significantly reduce lung imaging artifacts. Methods: Our simulation framework synchronizes the 4D extended cardiac torso (XCAT) phantom with tumor motion data in a quasi real-time fashion, allowing simulation of three 4D CT acquisition modes featuring different levels of respiratory feedback: (i) “conventional” 4D CT that uses a constant imaging and couch-shift frequency, (ii) “beam paused” 4D CT that interrupts imaging to avoid oversampling at a given couch position and respiratory phase, and (iii) “respiratory-gated” 4D CT that triggers acquisition only when the respiratory motion fulfills phase-specific displacement gating windows based on prescan breathing data. Our framework generates a set of ground truth comparators, representing the average XCAT anatomy during beam-on for each of ten respiratory phase bins. Based on this framework, the authors simulated conventional, beam-paused, and respiratory-gated 4D CT images using tumor motion patterns from seven lung cancer patients across 13 treatment fractions, with a simulated 5.5 cm³ spherical lesion. Normal lung tissue image quality was quantified by comparing simulated and ground truth images in terms of overall mean square error (MSE) intensity difference, threshold-based lung volume error, and fractional false positive/false negative rates.
Results: Averaged across all simulations and phase bins, respiratory gating reduced overall thoracic MSE by 46% compared to conventional 4D CT (p ≈ 10⁻¹⁹). Gating led to small but significant (p < 0.02) reductions in lung volume errors (1.8% to 1.4%), false positives (4.0% to 2.6%), and false negatives (2.7% to 1.3%). These percentage reductions correspond to gating reducing image artifacts by 24–90 cm³ of lung tissue. Similar to earlier studies, gating reduced patient imaging dose by up to 22%, but with scan time increased by up to 135%. Beam-paused 4D CT did not significantly impact normal lung tissue image quality, but did yield dose reductions similar to those of respiratory gating, without the added cost in scanning time. Conclusions: For a typical 6 L lung, respiratory-gated 4D CT can reduce image artifacts affecting up to 90 cm³ of normal lung tissue compared to conventional acquisition. This image improvement could have important implications for dose calculations based on 4D CT. Where image quality is less critical, beam-paused 4D CT is a simple strategy to reduce imaging dose without sacrificing acquisition time.
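The three image-quality measures (MSE intensity difference, threshold-based lung volume error, and fractional false positive/negative rates) can be sketched as below. The -500 HU lung cutoff and the toy volumes are illustrative assumptions, not the study's settings:

```python
import numpy as np

def image_quality_metrics(sim, truth, lung_thresh=-500.0):
    """Compare a simulated CT volume (HU) with its ground-truth comparator.
    Voxels below `lung_thresh` count as lung; -500 HU is an assumed cutoff."""
    mse = float(np.mean((sim - truth) ** 2))
    lung_sim, lung_truth = sim < lung_thresh, truth < lung_thresh
    n_truth = int(lung_truth.sum())
    vol_err = abs(int(lung_sim.sum()) - n_truth) / n_truth   # lung volume error
    fp = (lung_sim & ~lung_truth).sum() / n_truth            # non-lung called lung
    fn = (~lung_sim & lung_truth).sum() / n_truth            # lung voxels lost
    return mse, vol_err, fp, fn

# Toy 4x4x4 volume: uniform lung at -800 HU; the "simulated" scan has one
# artifactual soft-tissue voxel at 0 HU (a stand-in for a motion artifact).
truth = np.full((4, 4, 4), -800.0)
sim = truth.copy()
sim[0, 0, 0] = 0.0
mse, vol_err, fp, fn = image_quality_metrics(sim, truth)
```

Multiplying the fractional errors by voxel volume converts them to the cm³-of-lung-tissue figures reported in the abstract.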
Actionability and Simulation: No Representation without Communication
Feldman, Jerome A.
2016-01-01
There remains considerable controversy about how the brain operates. This review focuses on brain activity rather than just structure and on concepts of action and actionability rather than truth conditions. Neural Communication is reviewed as a crucial aspect of neural encoding. Consequently, logical inference is superseded by neural simulation. Some remaining mysteries are discussed. PMID:27725807
Detection Thresholds of Falling Snow from Satellite-Borne Active and Passive Sensors
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail; Johnson, Benjamin T.; Munchak, S. Joseph
2012-01-01
Precipitation, including rain and snow, is a critical part of the Earth's energy and hydrology cycles. Precipitation impacts latent heating profiles locally, while global circulation patterns distribute precipitation and energy from the equator to the poles. For the hydrological cycle, falling snow is a primary contributor in northern latitudes during the winter seasons. Falling snow is the source of snowpack accumulations that provide freshwater resources for many communities in the world. Furthermore, falling snow impacts society by causing transportation disruptions during severe snow events. In order to collect information on the complete global precipitation cycle, both liquid and frozen precipitation must be measured. The challenges of estimating falling snow from space still exist, though progress is being made. These challenges include: weak falling-snow signatures relative to background (surface, water vapor) signatures for passive sensors over land surfaces; unknowns about the spherical and non-spherical shapes of snowflakes and their particle size distributions (PSDs), and how assumptions about these unknowns affect observed brightness temperatures or radar reflectivities; differences between near-surface snowfall and total-column snow amounts; and limited ground truth to validate against. While these challenges remain, knowledge of their impact on expected retrieval results is an important key for understanding falling snow retrieval estimations. Since falling snow is the next precipitation measurement challenge from space, information must be gathered to guide retrieval algorithm development for current and future missions. This information includes thresholds of detection for various sensor channel configurations, snow event system characteristics, snowflake particle assumptions, and surface types.
For example, can a lake-effect snow system with low (approximately 2.5 km) cloud tops, an ice water content (IWC) at the surface of 0.25 g/m³, and dendrite snowflakes be detected? If this information is known, we can focus retrieval efforts on detectable storms and concentrate advances on achievable results. Here, the focus is to determine thresholds of detection for falling snow under various snow conditions over land and lake surfaces. The results rely on Weather Research and Forecasting (WRF) model simulations of falling snow cases, since the simulations provide all the information needed to determine both the measurements from space and the ground truth. Sensitivity analyses were performed to better ascertain the relationships between multifrequency microwave and millimeter-wave sensor observations and the falling snow/underlying field of view. In addition, thresholds of detection for various sensor channel configurations, snow event system characteristics, snowflake particle assumptions, and surface types were studied. Results will be presented for active radar at Ku, Ka, and W-band and for passive radiometer channels from 10 to 183 GHz.
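As a toy illustration of such a detection-threshold check, the sketch below maps a surface ice water content to an equivalent radar reflectivity through a hypothetical power law and compares it with a sensor's minimum detectable signal. The power-law coefficients and the detection floor are placeholders, not values from this work:

```python
import math

def snow_detectable(iwc_g_m3, a=0.08, b=1.1, min_detectable_dbz=-15.0):
    """Map ice water content (g/m^3) to an equivalent reflectivity via a
    hypothetical power law Z = a * IWC**b (Z in mm^6/m^3), then test it
    against an assumed minimum detectable signal. All constants here are
    illustrative placeholders that would come from sensor and microphysics
    assumptions in a real analysis."""
    dbz = 10.0 * math.log10(a * iwc_g_m3 ** b)
    return dbz, dbz >= min_detectable_dbz

# The lake-effect example above: IWC of 0.25 g/m^3 at the surface
dbz, hit = snow_detectable(0.25)
```

In practice each channel configuration, snowflake habit, and surface type would get its own forward-modeled threshold rather than a single closed-form curve.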
Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D
2014-10-01
We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1 = P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1 − P2 when P1 − P2 = 0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1 = P2). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data.
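A generic way to simulate binary agreement scores with correlation across readers and cases is a latent-Gaussian threshold construction. The sketch below is one such construction under assumed variance components, not necessarily the paper's exact model:

```python
import numpy as np
from statistics import NormalDist

def simulate_binary_mrmc(n_readers, n_cases, p=0.8,
                         rho_case=0.3, rho_reader=0.2, seed=0):
    """Latent-Gaussian sketch of correlated binary agreement data: a shared
    case effect, a shared reader effect, and independent noise sum to a
    unit-variance latent variable, which is thresholded so each score is
    marginally Bernoulli(p). rho_case induces correlation across readers
    of the same case; rho_reader across cases of the same reader. The
    variance split (0.3/0.2) is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    case = rng.standard_normal((1, n_cases))
    reader = rng.standard_normal((n_readers, 1))
    noise = rng.standard_normal((n_readers, n_cases))
    latent = (np.sqrt(rho_case) * case + np.sqrt(rho_reader) * reader
              + np.sqrt(1.0 - rho_case - rho_reader) * noise)
    thr = NormalDist().inv_cdf(1.0 - p)   # unit-variance latent threshold
    return (latent > thr).astype(int)     # 1 = agrees with the truth state

scores = simulate_binary_mrmc(n_readers=5, n_cases=100)
```

Drawing one such array per modality (with a shared latent component between modalities) extends the construction to the two-modality, P1 = P2 setting used for sizing noninferiority tests.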