Validating a Geographical Image Retrieval System.
ERIC Educational Resources Information Center
Zhu, Bin; Chen, Hsinchun
2000-01-01
Summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. Describes an experiment to validate the performance of this image retrieval system against that of human subjects by examining similarity analysis…
Validating Retinal Fundus Image Analysis Algorithms: Issues and a Proposal
Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; al-Diri, Bashir; Cheung, Carol Y.; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M.; Jelinek, Herbert F.; Meriaudeau, Fabrice; Quellec, Gwénolé; MacGillivray, Tom; Dhillon, Bal
2013-01-01
This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433
NASA Technical Reports Server (NTRS)
Ong, Cindy; Mueller, Andreas; Thome, Kurtis; Pierce, Leland E.; Malthus, Timothy
2016-01-01
Calibration is the process of quantitatively defining a system's responses to known, controlled signal inputs, and validation is the process of assessing, by independent means, the quality of the data products derived from those system outputs [1]. Similar to other Earth observation (EO) sensors, the calibration and validation of spaceborne imaging spectroscopy sensors is a fundamental underpinning activity. Calibration and validation determine the quality and integrity of the data provided by spaceborne imaging spectroscopy sensors and have enormous downstream impacts on the accuracy and reliability of products generated from these sensors. At least five imaging spectroscopy satellites are planned to be launched within the next five years, with the two most advanced scheduled to be launched in the next two years [2]. The launch of these sensors requires the establishment of suitable, standardized, and harmonized calibration and validation strategies to ensure that high-quality data are acquired and comparable between these sensor systems. Such activities are extremely important for the community of imaging spectroscopy users. Recognizing the need to focus on this underpinning topic, the Geoscience Spaceborne Imaging Spectroscopy (previously, the International Spaceborne Imaging Spectroscopy) Technical Committee launched a calibration and validation initiative at the 2013 International Geoscience and Remote Sensing Symposium (IGARSS) in Melbourne, Australia, and a post-conference activity of a vicarious calibration field trip at Lake Lefroy in Western Australia.
Position Estimation Using Image Derivative
NASA Technical Reports Server (NTRS)
Mortari, Daniele; deDilectis, Francesco; Zanetti, Renato
2015-01-01
This paper describes an image processing algorithm to process Moon and/or Earth images. The theory presented is based on the fact that Moon hard edge points are characterized by the highest values of the image derivative. Outliers are eliminated by two sequential filters. Moon center and radius are then estimated by nonlinear least-squares using circular sigmoid functions. The proposed image processing has been applied and validated using real and synthetic Moon images.
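A minimal sketch of the limb-finding idea described above, assuming a grayscale Moon image in a NumPy array. The paper fits circular sigmoid functions with nonlinear least squares and applies two sequential outlier filters; here a simple algebraic (Kasa) circle fit stands in for that step, so treat this as an illustration rather than the authors' algorithm.

```python
import numpy as np

def estimate_disk(image, n_edge=500):
    # Image derivative: hard-edge (limb) pixels have the largest gradient.
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # Keep the strongest-gradient pixels as candidate edge points
    # (outlier rejection, as in the paper, is omitted for brevity).
    flat = np.argsort(mag, axis=None)[-n_edge:]
    y, x = np.unravel_index(flat, image.shape)
    # Algebraic circle fit: minimize ||x^2 + y^2 - 2ax - 2by - c||.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx**2 + cy**2)
    return (cx, cy), radius
```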
Dutta, Sayon; Long, William J; Brown, David F M; Reisner, Andrew T
2013-08-01
As use of radiology studies increases, there is a concurrent increase in incidental findings (eg, lung nodules) for which the radiologist issues recommendations for additional imaging for follow-up. Busy emergency physicians may be challenged to carefully communicate recommendations for additional imaging not relevant to the patient's primary evaluation. The emergence of electronic health records and natural language processing algorithms may help address this quality gap. We seek to describe recommendations for additional imaging from our institution and develop and validate an automated natural language processing algorithm to reliably identify recommendations for additional imaging. We developed a natural language processing algorithm to detect recommendations for additional imaging, using 3 iterative cycles of training and validation. The third cycle used 3,235 radiology reports (1,600 for algorithm training and 1,635 for validation) of discharged emergency department (ED) patients from which we determined the incidence of discharge-relevant recommendations for additional imaging and the frequency of appropriate discharge documentation. The test characteristics of the 3 natural language processing algorithm iterations were compared, using blinded chart review as the criterion standard. Discharge-relevant recommendations for additional imaging were found in 4.5% (95% confidence interval [CI] 3.5% to 5.5%) of ED radiology reports, but 51% (95% CI 43% to 59%) of discharge instructions failed to note those findings. The final natural language processing algorithm had 89% (95% CI 82% to 94%) sensitivity and 98% (95% CI 97% to 98%) specificity for detecting recommendations for additional imaging. For discharge-relevant recommendations for additional imaging, sensitivity improved to 97% (95% CI 89% to 100%). Recommendations for additional imaging are common, and failure to document relevant recommendations for additional imaging in ED discharge instructions occurs frequently. The natural language processing algorithm's performance improved with each iteration and offers a promising error-prevention tool. Copyright © 2013 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
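A hedged, minimal sketch of a rule-based detector in the spirit of the natural language processing algorithm described above. The abstract does not publish the actual algorithm or its patterns; the phrases below are illustrative assumptions, not the authors' rules.

```python
import re

# Illustrative pattern: a recommendation verb, a follow-up cue, and an
# imaging modality within a short span of each other.
RECOMMEND = re.compile(
    r"\b(recommend(ed|s)?|suggest(ed|s)?|advis(e|ed))\b.{0,80}"
    r"\b(follow[- ]?up|repeat|dedicated|further)\b.{0,80}"
    r"\b(CT|MRI|ultrasound|imaging|radiograph)\b",
    re.IGNORECASE | re.DOTALL,
)

def flags_recommendation(report_text: str) -> bool:
    """Return True if a radiology report appears to recommend
    additional imaging (illustrative heuristic only)."""
    return bool(RECOMMEND.search(report_text))

print(flags_recommendation(
    "8 mm lung nodule. Recommend follow-up chest CT in 6 months."))
```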
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, A; Little, K; Chung, J
Purpose: To validate the use of a Channelized Hotelling Observer (CHO) model for guiding image processing parameter selection and enable improved nodule detection in digital chest radiography. Methods: In a previous study, an anthropomorphic chest phantom was imaged with and without PMMA simulated nodules using a GE Discovery XR656 digital radiography system. The impact of image processing parameters was then explored using a CHO with 10 Laguerre-Gauss channels. In this work, we validate the CHO’s trend in nodule detectability as a function of two processing parameters by conducting a signal-known-exactly, multi-reader-multi-case (MRMC) ROC observer study. Five naive readers scored confidence of nodule visualization in 384 images with 50% nodule prevalence. The image backgrounds were regions-of-interest extracted from 6 normal patient scans, and the digitally inserted simulated nodules were obtained from phantom data in previous work. Each patient image was processed with both a near-optimal and a worst-case parameter combination, as determined by the CHO for nodule detection. The same 192 ROIs were used for each image processing method, with 32 randomly selected lung ROIs per patient image. Finally, the MRMC data was analyzed using the freely available iMRMC software of Gallas et al. Results: The image processing parameters which were optimized for the CHO led to a statistically significant improvement (p=0.049) in human observer AUC from 0.78 to 0.86, relative to the image processing implementation which produced the lowest CHO performance. Conclusion: Differences in user-selectable image processing methods on a commercially available digital radiography system were shown to have a marked impact on performance of human observers in the task of lung nodule detection. Further, the effect of processing on humans was similar to the effect on CHO performance. Future work will expand this study to include a wider range of detection/classification tasks and more observers, including experienced chest radiologists.
Ship Detection in SAR Image Based on the Alpha-stable Distribution
Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng
2008-01-01
This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm in spaceborne synthetic aperture radar (SAR) image based on Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe statistical characteristics of a SAR image background clutter. However, the Gaussian distribution is only valid for multilook SAR images when several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to the pixel identified as possible target. A RADARSAT-1 image is used to validate this Alpha-stable distribution based algorithm. Meanwhile, known ship location data during the time of RADARSAT-1 SAR image acquisition is used to validate ship detection results. Validation results show improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution. PMID:27873794
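A minimal sketch of the CFAR idea above, assuming a single-look SAR amplitude image in a NumPy array. The classical two-parameter CFAR thresholds on local mean and standard deviation under a Gaussian clutter model; the paper's variant replaces that model with an Alpha-stable fit. SciPy's levy_stable is mentioned as one possible stand-in for that fit, which is an assumption (and slow in practice).

```python
import numpy as np
from scipy import ndimage, stats

def cfar_gaussian(img, window=16, pfa=1e-4):
    # Local clutter statistics from a sliding window; a full
    # implementation excludes guard cells around the pixel under test.
    size = 2 * window + 1
    mean = ndimage.uniform_filter(img, size)
    mean_sq = ndimage.uniform_filter(img ** 2, size)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0))
    t = stats.norm.ppf(1 - pfa)     # Gaussian threshold multiplier
    # For the Alpha-stable variant, fit stats.levy_stable to local
    # clutter samples and take its ppf(1 - pfa) as the threshold instead.
    return img > mean + t * std     # boolean detection mask
```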
NASA Astrophysics Data System (ADS)
Maragos, Petros
The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)
ERIC Educational Resources Information Center
Engeln-Maddox, Renee; Miller, Steven A.
2008-01-01
This article details the development of the Critical Processing of Beauty Images Scale (CPBI) and studies demonstrating the psychometric soundness of this measure. The CPBI measures women's tendency to engage in critical processing of media images featuring idealized female beauty. Three subscales were identified using exploratory factor analysis…
Developing image processing meta-algorithms with data mining of multiple metrics.
Leung, Kelvin; Cunha, Alexandre; Toga, A W; Parker, D Stott
2014-01-01
People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation.
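A minimal sketch of the meta-algorithm idea: score several candidate registration results against the fixed image with a battery of metrics and keep the candidate that ranks best overall. The three metrics below (MSE, negated correlation, negated mutual information, all oriented lower-is-better) are illustrative stand-ins for the paper's battery.

```python
import numpy as np

def mse(a, b):
    return np.mean((a - b) ** 2)

def neg_corr(a, b):  # negate so that lower is better
    return -np.corrcoef(a.ravel(), b.ravel())[0, 1]

def neg_mutual_info(a, b, bins=32):
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(1), p.sum(0)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))

def best_candidate(fixed, candidates,
                   metrics=(mse, neg_corr, neg_mutual_info)):
    scores = np.array([[m(fixed, c) for m in metrics] for c in candidates])
    ranks = scores.argsort(0).argsort(0)   # per-metric rank, low = good
    return int(ranks.sum(1).argmin())      # consensus winner
```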
Remote Sensing Image Quality Assessment Experiment with Post-Processing
NASA Astrophysics Data System (ADS)
Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.
2018-04-01
This paper describes an experiment assessing the influence of post-processing on image quality. The experiment includes three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system under those same parameters. The gathered optically sampled images are processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. Image quality is assessed with just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of the different imaging parameters and of post-processing on image quality can be determined. The six JND subjective assessment data sets cross-validate each other. The main conclusions are: image post-processing can improve image quality; it does so even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and with our post-processing method, image quality is best when the camera MTF lies within a small range.
1992-05-01
ocean color for retrieving ocean k(490) values are examined. The validation of the optical database from the satellite is assessed through comparison...for sharing results of this validation study. We wish to thank J. Mueller for helpful discussions in optics and satellite processing and for sharing his...of these data products are displayable as 512 x 512 8-bit image maps compatible with the PC-SeaPak image format. Valid data ranges are from 1 to 255
Automated inspection of hot steel slabs
Martin, R.J.
1985-12-24
The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes. 5 figs.
Automated inspection of hot steel slabs
Martin, Ronald J.
1985-01-01
The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes.
Developing Image Processing Meta-Algorithms with Data Mining of Multiple Metrics
Cunha, Alexandre; Toga, A. W.; Parker, D. Stott
2014-01-01
People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation. PMID:24653748
NASA Technical Reports Server (NTRS)
Cavalieri, Donald J. (Editor); Swift, Calvin T. (Editor)
1987-01-01
This document addresses the task of developing and executing a plan for validating the algorithm used for initial processing of sea ice data from the Special Sensor Microwave/Imager (SSMI). The document outlines a plan for monitoring the performance of the SSMI, for validating the derived sea ice parameters, and for providing quality data products before distribution to the research community. Because of recent advances in the application of passive microwave remote sensing to snow cover on land, the validation of snow algorithms is also addressed.
A Digital Sensor Simulator of the Pushbroom Offner Hyperspectral Imaging Spectrometer
Tao, Dongxing; Jia, Guorui; Yuan, Yan; Zhao, Huijie
2014-01-01
Sensor simulators can be used in forecasting the imaging quality of a new hyperspectral imaging spectrometer and in generating simulated data for the development and validation of data processing algorithms. This paper presents a novel digital sensor simulator for the pushbroom Offner hyperspectral imaging spectrometer, which is widely used in hyperspectral remote sensing. Based on the imaging process, the sensor simulator consists of a spatial response module, a spectral response module, and a radiometric response module. In order to enhance the simulation accuracy, spatial interpolation-resampling, implemented before the spatial degradation, is developed to balance the direction error against the extra aliasing effect. Instead of using the spectral response function (SRF), the dispersive imaging characteristics of the Offner convex grating optical system are accurately modeled by its configuration parameters. The non-uniformity characteristics, such as keystone and smile effects, are simulated in the corresponding modules. In this work, the spatial, spectral and radiometric calibration processes are simulated to provide the modulation transfer function (MTF), SRF and radiometric calibration parameters of the sensor simulator. Some uncertainty factors (the stability and bandwidth of the monochromator for the spectral calibration, and the integrating sphere uncertainty for the radiometric calibration) are considered in the simulation of the calibration process. With the calibration parameters, several experiments were designed to validate the spatial, spectral and radiometric responses of the sensor simulator. The experimental results indicate that the sensor simulator is valid. PMID:25615727
Quantification of chromatin condensation level by image processing.
Irianto, Jerome; Lee, David A; Knight, Martin M
2014-03-01
The level of chromatin condensation is related to the silencing/activation of chromosomal territories and therefore impacts on gene expression. Chromatin condensation changes during cell cycle progression and differentiation, and is influenced by various physicochemical and epigenetic factors. This study describes a validated experimental technique to quantify chromatin condensation. A novel image processing procedure is developed using Sobel edge detection to quantify the level of chromatin condensation from nuclei images taken by confocal microscopy. The algorithm was developed in MATLAB and used to quantify different levels of chromatin condensation in chondrocyte nuclei achieved through alteration in osmotic pressure. The resulting chromatin condensation parameter (CCP) is in good agreement with independent multi-observer qualitative visual assessment. This image processing technique thereby provides a validated unbiased parameter for rapid and highly reproducible quantification of the level of chromatin condensation. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
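A minimal sketch of a Sobel-based chromatin condensation parameter in the spirit of the method above, assuming a grayscale nucleus image and a binary nuclear mask. Normalizing edge density by nuclear area follows the paper's idea; the edge threshold here is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage

def chromatin_condensation(nucleus, mask, edge_thresh=0.2):
    img = nucleus.astype(float) / nucleus.max()
    sx = ndimage.sobel(img, axis=0)
    sy = ndimage.sobel(img, axis=1)
    edges = (np.hypot(sx, sy) > edge_thresh) & mask
    # CCP-like measure: fraction of nuclear area occupied by intensity
    # edges, which rises as chromatin condenses into dense foci.
    return edges.sum() / mask.sum()
```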
Khazendar, S; Sayasneh, A; Al-Assam, H; Du, H; Kaijser, J; Ferrara, L; Timmerman, D; Jassim, S; Bourne, T
2015-01-01
Preoperative characterisation of ovarian masses into benign or malignant is of paramount importance to optimise patient management. In this study, we developed and validated a computerised model to characterise ovarian masses as benign or malignant. Transvaginal 2D B mode static ultrasound images of 187 ovarian masses with known histological diagnosis were included. Images were first pre-processed and enhanced, and Local Binary Pattern Histograms were then extracted from 2 × 2 blocks of each image. A Support Vector Machine (SVM) was trained using stratified cross validation with randomised sampling. The process was repeated 15 times and in each round 100 images were randomly selected. The SVM classified the original non-treated static images as benign or malignant masses with an average accuracy of 0.62 (95% CI: 0.59-0.65). This performance significantly improved to an average accuracy of 0.77 (95% CI: 0.75-0.79) when images were pre-processed, enhanced and treated with a Local Binary Pattern operator (mean difference 0.15; 95% CI: 0.11-0.19, p < 0.0001, two-tailed t test). We have shown that an SVM can classify static 2D B mode ultrasound images of ovarian masses into benign and malignant categories. The accuracy improves if texture related LBP features extracted from the images are considered.
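A hedged sketch of the image pipeline above: local binary pattern (LBP) histograms from 2 × 2 image blocks feeding a support vector machine. The library choices (scikit-image, scikit-learn) and the LBP settings are assumptions, not the authors' exact configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_block_features(img, P=8, R=1):
    # "uniform" LBP yields P + 2 distinct codes per pixel.
    lbp = local_binary_pattern(img, P, R, method="uniform")
    h, w = lbp.shape
    feats = []
    # One histogram per quadrant of the 2 x 2 block grid.
    for block in (lbp[:h//2, :w//2], lbp[:h//2, w//2:],
                  lbp[h//2:, :w//2], lbp[h//2:, w//2:]):
        hist, _ = np.histogram(block, bins=P + 2, range=(0, P + 2),
                               density=True)
        feats.append(hist)
    return np.concatenate(feats)

# With X as stacked per-image feature vectors and y as labels
# (0 = benign, 1 = malignant):
# clf = SVC(kernel="rbf").fit(X_train, y_train); clf.predict(X_test)
```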
Validation of Western North America Models based on finite-frequency and ray theory imaging methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larmat, Carene; Maceira, Monica; Porritt, Robert W.
2015-02-02
We validate seismic models developed for western North America with a focus on the effect of imaging methods on data fit. We use the DNA09 models, for which our collaborators provide models built with both the body-wave finite-frequency (FF) approach and the ray-theory (RT) approach, with the data selection, processing, and reference models held the same.
Morawski, Markus; Kirilina, Evgeniya; Scherf, Nico; Jäger, Carsten; Reimann, Katja; Trampel, Robert; Gavriilidis, Filippos; Geyer, Stefan; Biedermann, Bernd; Arendt, Thomas; Weiskopf, Nikolaus
2017-11-28
Recent breakthroughs in magnetic resonance imaging (MRI) enabled quantitative relaxometry and diffusion-weighted imaging with sub-millimeter resolution. Combined with biophysical models of MR contrast the emerging methods promise in vivo mapping of cyto- and myelo-architectonics, i.e., in vivo histology using MRI (hMRI) in humans. The hMRI methods require histological reference data for model building and validation. This is currently provided by MRI on post mortem human brain tissue in combination with classical histology on sections. However, this well established approach is limited to qualitative 2D information, while a systematic validation of hMRI requires quantitative 3D information on macroscopic voxels. We present a promising histological method based on optical 3D imaging combined with a tissue clearing method, Clear Lipid-exchanged Acrylamide-hybridized Rigid Imaging compatible Tissue hYdrogel (CLARITY), adapted for hMRI validation. Adapting CLARITY to the needs of hMRI is challenging due to poor antibody penetration into large sample volumes and high opacity of aged post mortem human brain tissue. In a pilot experiment we achieved transparency of up to 8 mm-thick and immunohistochemical staining of up to 5 mm-thick post mortem brain tissue by a combination of active and passive clearing, prolonged clearing and staining times. We combined 3D optical imaging of the cleared samples with tailored image processing methods. We demonstrated the feasibility for quantification of neuron density, fiber orientation distribution and cell type classification within a volume with size similar to a typical MRI voxel. The presented combination of MRI, 3D optical microscopy and image processing is a promising tool for validation of MRI-based microstructure estimates. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Giaddui, Tawfik; Yu, Jialu; Manfredi, Denise; Linnemann, Nancy; Hunter, Joanne; O’Meara, Elizabeth; Galvin, James; Bialecki, Brian; Xiao, Ying
2016-01-01
Transmission of Imaging and Data (TRIAD) is a standard-based system built by the American College of Radiology (ACR) to provide seamless exchange of images and data for accreditation of clinical trials and registries. Scripts of structures’ names validation profiles created in TRIAD are used in the automated submission process. It is essential for users to understand the logistics of these scripts for successful submission of radiotherapy cases with less iteration. PMID:27053498
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reyhan, M; Yue, N
Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5 × 1.3 cm²). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. Validation of the algorithm was determined by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2 to 886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of −0.28 cGy and a 95% confidence interval of (5.5 cGy, −6.1 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical differences between the two techniques, p=0.98. Linear regression with a forced zero intercept demonstrated that Automatic=0.997*Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in Matlab on an Intel Core 2 Duo processor. Conclusion: An automated image processing algorithm has been developed and validated, which will help minimize user interaction and processing time of radiochromic film used for in vivo dosimetry.
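A minimal sketch of the automated ROI selection above: threshold the scanned film image, erode to drop edges and orientation marks, label the remaining film pieces, and convert each ROI's mean pixel value to dose. The calibration function and threshold are placeholder assumptions.

```python
import numpy as np
from scipy import ndimage

def film_doses(scan, thresh, calibrate, erode_px=5):
    film = scan < thresh                    # film is darker than the scanner bed
    film = ndimage.binary_erosion(film, iterations=erode_px)
    labels, n = ndimage.label(film)         # one label per film piece
    doses = []
    for i in range(1, n + 1):
        mean_pv = scan[labels == i].mean()  # mean pixel value in the ROI
        doses.append(calibrate(mean_pv))    # user-supplied calibration curve
    return doses
```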
31 CFR 321.25 - Payment and retention of definitive securities.
Code of Federal Regulations, 2013 CFR
2013-07-01
... prohibited from accepting an image, or other copy or reproduction of the definitive security, for redemption or processing. To ensure that all transactions processed by agents are properly validated, agents... converted to an electronic image. At a minimum, the agent must retain such securities for a period of thirty...
31 CFR 321.25 - Payment and retention of definitive securities.
Code of Federal Regulations, 2012 CFR
2012-07-01
... prohibited from accepting an image, or other copy or reproduction of the definitive security, for redemption or processing. To ensure that all transactions processed by agents are properly validated, agents... converted to an electronic image. At a minimum, the agent must retain such securities for a period of thirty...
Classification of images acquired with colposcopy using artificial neural networks.
Simões, Priscyla W; Izumi, Narjara B; Casagrande, Ramon S; Venson, Ramon; Veronezi, Carlos D; Moretti, Gustavo P; da Rocha, Edroaldo L; Cechinel, Cristian; Ceretta, Luciane B; Comunello, Eros; Martins, Paulo J; Casagrande, Rogério A; Snoeyer, Maria L; Manenti, Sandra A
2014-01-01
To explore the advantages of using artificial neural networks (ANNs) to recognize patterns in colposcopy and classify colposcopy images. Transversal, descriptive, and analytical study of a quantitative approach with an emphasis on diagnosis. The training, test, and validation sets were composed of images collected from patients who underwent colposcopy. These images were provided by a gynecology clinic located in the city of Criciúma (Brazil). The image database (n = 170) was divided as follows: 48 images were used for the training process, 58 images were used for the tests, and 64 images were used for the validation. A hybrid neural network based on Kohonen self-organizing maps and multilayer perceptron (MLP) networks was used. After 126 cycles, the validation was performed. The best results reached an accuracy of 72.15%, a sensitivity of 69.78%, and a specificity of 68%. Although the preliminary results still exhibit only average efficiency, the present approach is an innovative and promising technique that should be explored in depth in the context of the present study.
2018-01-01
Background: Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must be first converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. Objective: The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Methods: Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. Results: All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance imaging images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. Conclusions: To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with keen interest on multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. PMID:29699962
Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.12
2015-09-03
the Geostationary Ocean Color Imager (GOCI) sensor, aboard the Communication Ocean and Meteorological Satellite (COMS) satellite. Additionally, this...this capability works in conjunction with AOPS • Improvements to the AOPS mosaicking capability • Prepare the NRT Geostationary Ocean Color Imager...Warfare (EXW) Geostationary Ocean Color Imager (GOCI) Gulf of Mexico (GOM) Hierarchical Data Format (HDF) Integrated Data Processing System (IDPS
Wen, Yintang; Zhang, Zhenda; Zhang, Yuyan; Sun, Dongtao
2017-01-01
A coplanar electrode array sensor is established for imaging adhesive-layer defects in composite materials. The sensor is based on the capacitive edge effect, which makes the capacitance data considerably weak and susceptible to environmental noise. The inverse problem of coplanar array electrical capacitance tomography (C-ECT) is ill-conditioned, so a small error in the capacitance data can seriously affect the quality of reconstructed images. In order to achieve a stable image reconstruction process, a redundancy analysis method for the capacitance data is proposed. The proposed method is based on contribution rate and anti-interference capability. According to the redundancy analysis, the capacitance data are divided into valid and invalid data. When the image is reconstructed from valid data only, the sensitivity matrix needs to be changed accordingly. In order to evaluate the effectiveness of the sensitivity map, singular value decomposition (SVD) is used. Finally, the two-dimensional (2D) and three-dimensional (3D) images are reconstructed by the Tikhonov regularization method. Compared with images reconstructed from the raw capacitance data, the stability of the image reconstruction process is improved while the quality of the reconstructed images is not degraded. As a result, much invalid data need not be collected, and the data acquisition time can be reduced. PMID:29295537
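A minimal sketch of Tikhonov-regularized reconstruction for the ECT problem above: with sensitivity matrix S (rows restricted to the valid capacitance measurements) and measurement vector c, solve g = (SᵀS + αI)⁻¹Sᵀc. The regularization weight is an illustrative assumption.

```python
import numpy as np

def tikhonov_reconstruct(S, c, alpha=1e-3):
    # S: (n_measurements, n_pixels) sensitivity matrix, valid rows only.
    # c: (n_measurements,) normalized capacitance vector.
    n = S.shape[1]
    g = np.linalg.solve(S.T @ S + alpha * np.eye(n), S.T @ c)
    return g  # permittivity distribution; reshape to the imaging grid

# As the redundancy analysis suggests, dropping invalid rows changes S
# before this solve; np.linalg.svd(S) can then be used to check the
# conditioning of the reduced sensitivity map.
```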
Pavurala, Naresh; Xu, Xiaoming; Krishnaiah, Yellela S R
2017-05-15
Hyperspectral imaging using near infrared spectroscopy (NIRS) integrates spectroscopy and conventional imaging to obtain both spectral and spatial information of materials. The non-invasive and rapid nature of hyperspectral imaging using NIRS makes it a valuable process analytical technology (PAT) tool for in-process monitoring and control of the manufacturing process for transdermal drug delivery systems (TDS). The focus of this investigation was to develop and validate the use of near infrared (NIR) hyperspectral imaging to monitor coat thickness uniformity, a critical quality attribute (CQA) for TDS. Chemometric analysis was used to process the hyperspectral image and a partial least square (PLS) model was developed to predict the coat thickness of the TDS. The goodness of model fit and prediction were 0.9933 and 0.9933, respectively, indicating an excellent fit to the training data and also good predictability. The % Prediction Error (%PE) for internal and external validation samples was less than 5%, confirming the accuracy of the PLS model developed in the present study. The feasibility of hyperspectral imaging as a real-time process analytical tool for continuous processing was also investigated. When the PLS model was applied to detect deliberate variation in coating thickness, it was able to predict both the small and large variations as well as identify coating defects such as non-uniform regions and the presence of air bubbles. Published by Elsevier B.V.
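A hedged sketch of the chemometric step above: a partial least squares (PLS) model mapping NIR spectra to coat thickness. scikit-learn's PLSRegression, the component count, and the %PE computation are assumptions standing in for the paper's chemometric software.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: (n_samples, n_wavelengths) mean spectra from hyperspectral pixels;
# y: (n_samples,) reference coat thickness values.
def fit_thickness_model(X, y, n_components=5):
    pls = PLSRegression(n_components=n_components).fit(X, y)
    pred = pls.predict(X).ravel()
    pe = 100 * np.abs(pred - y) / y   # % prediction error per sample
    return pls, pe
```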
Khazendar, S.; Sayasneh, A.; Al-Assam, H.; Du, H.; Kaijser, J.; Ferrara, L.; Timmerman, D.; Jassim, S.; Bourne, T.
2015-01-01
Introduction: Preoperative characterisation of ovarian masses into benign or malignant is of paramount importance to optimise patient management. Objectives: In this study, we developed and validated a computerised model to characterise ovarian masses as benign or malignant. Materials and methods: Transvaginal 2D B mode static ultrasound images of 187 ovarian masses with known histological diagnosis were included. Images were first pre-processed and enhanced, and Local Binary Pattern Histograms were then extracted from 2 × 2 blocks of each image. A Support Vector Machine (SVM) was trained using stratified cross validation with randomised sampling. The process was repeated 15 times and in each round 100 images were randomly selected. Results: The SVM classified the original non-treated static images as benign or malignant masses with an average accuracy of 0.62 (95% CI: 0.59-0.65). This performance significantly improved to an average accuracy of 0.77 (95% CI: 0.75-0.79) when images were pre-processed, enhanced and treated with a Local Binary Pattern operator (mean difference 0.15; 95% CI: 0.11-0.19, p < 0.0001, two-tailed t test). Conclusion: We have shown that an SVM can classify static 2D B mode ultrasound images of ovarian masses into benign and malignant categories. The accuracy improves if texture related LBP features extracted from the images are considered. PMID:25897367
NASA Astrophysics Data System (ADS)
Liu, Tao; Zhang, Wei; Yan, Shaoze
2015-10-01
In this paper, a multi-scale image enhancement algorithm based on low-pass filtering and nonlinear transformation is proposed for infrared testing images of de-bonding defects in solid propellant rocket motors. Infrared testing images suffer from high noise levels and low contrast, yet they are the foundation for identifying defects and calculating defect size. To improve infrared image quality, and according to the distribution properties of the detection image, the approximation coefficients at a suitable decomposition level of the stationary wavelet transform are low-pass filtered in the Fourier domain; a nonlinear transformation is then applied to further improve image contrast. To verify the validity of the algorithm, it is applied to infrared testing images of two specimens with de-bonding defects: one made of a high-strength steel and the other of a carbon fiber composite. In the processed images, most of the noise is eliminated and the contrast between defect areas and normal areas is greatly improved; in addition, continuous defect edges can be extracted from the binarized processed image, all of which demonstrates the validity of the algorithm. The paper thus provides a well-performing image enhancement algorithm for infrared thermography.
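A minimal sketch of the enhancement chain above: stationary wavelet decomposition, Fourier low-pass filtering of the coarsest approximation band, then a nonlinear (gamma) contrast stretch. PyWavelets ('pywt'), the wavelet choice, the cutoff, and the gamma value are all assumptions.

```python
import numpy as np
import pywt

def enhance(img, level=2, cutoff=0.1, gamma=0.6):
    # Image sides must be divisible by 2**level for swt2.
    coeffs = pywt.swt2(img.astype(float), "db4", level=level)
    cA, details = coeffs[0]          # coarsest approximation band
    # Ideal low-pass filter applied to the approximation coefficients.
    F = np.fft.fftshift(np.fft.fft2(cA))
    h, w = cA.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / min(h, w)
    F[r > cutoff] = 0
    cA = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
    coeffs[0] = (cA, details)
    out = pywt.iswt2(coeffs, "db4")
    out -= out.min()
    return (out / out.max()) ** gamma   # nonlinear contrast stretch
```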
Geometric Calibration and Validation of Ultracam Aerial Sensors
NASA Astrophysics Data System (ADS)
Gruber, Michael; Schachinger, Bernhard; Muick, Marc; Neuner, Christian; Tschemmernegg, Helfried
2016-03-01
We present details of the calibration and validation procedure of UltraCam aerial camera systems. Results from the laboratory calibration and from validation flights are presented for both the large-format nadir cameras and the oblique cameras. Thus in this contribution we show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and the UltraCam Osprey, our oblique camera system, which offers a mapping-grade nadir component together with four oblique camera heads. The geometric processing after the flight mission is covered by the UltraMap software product, and we present details about that workflow as well. The first part consists of the initial post-processing, which combines image information with camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT), is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software, which are designed to better support the operator in analyzing large blocks of aerial images and judging the quality of the photogrammetric set-up.
Cache write generate for parallel image processing on shared memory architectures.
Wittenbrink, C M; Somani, A K; Chen, C H
1996-01-01
We investigate cache write generate, our cache mode invention. We demonstrate that for parallel image processing applications, the new mode improves main memory bandwidth, CPU efficiency, cache hits, and cache latency. We use register level simulations validated by the UW-Proteus system. Many memory, cache, and processor configurations are evaluated.
NASA Astrophysics Data System (ADS)
Benalcazar, Wladimir A.; Jiang, Zhi; Marks, Daniel L.; Geddes, Joseph B.; Boppart, Stephen A.
2009-02-01
We validate a molecular imaging technique called Nonlinear Interferometric Vibrational Imaging (NIVI) by comparing vibrational spectra with those acquired from Raman microscopy. This broadband coherent anti-Stokes Raman scattering (CARS) technique uses heterodyne detection and OCT acquisition and design principles to interfere a CARS signal generated by a sample with a local oscillator signal generated separately by a four-wave mixing process. These are mixed and demodulated by spectral interferometry. Its confocal configuration allows the acquisition of 3D images based on endogenous molecular signatures. Images from both phantom and mammary tissues have been acquired by this instrument and its spectrum is compared with its spontaneous Raman signatures.
NASA Astrophysics Data System (ADS)
Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang
2008-03-01
An unwrapping and correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear interpolation is presented in this paper, with the purpose of processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which is much more consistent with human vision. The algorithm for panoramic image processing is modeled in VHDL and implemented in an FPGA. The experimental results show that the proposed algorithm for unwrapping and distortion correction has low computational complexity and that the architecture for dynamic panoramic image processing has low hardware cost and power consumption, validating the proposed algorithm.
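A minimal sketch of annular-image unwrapping with bilinear interpolation, as described above. The paper computes the polar coordinates with CORDIC in FPGA hardware; NumPy trigonometry stands in for that here, and the center and inner/outer radii are assumed inputs. The annulus is assumed to lie strictly inside the image.

```python
import numpy as np

def unwrap_annulus(img, center, r_in, r_out, out_w=1024):
    cy, cx = center                       # (row, col) of the annulus center
    out_h = int(r_out - r_in)
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    r = r_in + np.arange(out_h)
    R, T = np.meshgrid(r, theta, indexing="ij")
    y, x = cy + R * np.sin(T), cx + R * np.cos(T)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    dy, dx = y - y0, x - x0
    # Bilinear interpolation from the four neighboring source pixels.
    return ((1 - dy) * (1 - dx) * img[y0, x0] +
            (1 - dy) * dx * img[y0, x0 + 1] +
            dy * (1 - dx) * img[y0 + 1, x0] +
            dy * dx * img[y0 + 1, x0 + 1])
```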
Lizarraga, Gabriel; Li, Chunfei; Cabrerizo, Mercedes; Barker, Warren; Loewenstein, David A; Duara, Ranjan; Adjouadi, Malek
2018-04-26
Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must be first converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of Structural magnetic resonance imaging images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with keen interest on multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. ©Gabriel Lizarraga, Chunfei Li, Mercedes Cabrerizo, Warren Barker, David A Loewenstein, Ranjan Duara, Malek Adjouadi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 26.04.2018.
NASA Astrophysics Data System (ADS)
Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten
2014-03-01
Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis non-optimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy regarding the resulting textures.
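A hedged sketch of one common speckle model that a software phantom like the one above can use: fully developed speckle with a Rayleigh-distributed envelope, imposed multiplicatively on an echogenicity map. The paper supports tissue-specific speckle models; this is only the simplest case.

```python
import numpy as np

def simulate_speckle(echogenicity, rng=None):
    rng = np.random.default_rng(rng)
    # Complex circular Gaussian scattering -> Rayleigh envelope.
    re = rng.normal(size=echogenicity.shape)
    im = rng.normal(size=echogenicity.shape)
    envelope = np.hypot(re, im)
    return echogenicity * envelope   # multiplicative speckle texture
```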
NASA Technical Reports Server (NTRS)
1993-01-01
Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the space shuttle orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them, and downlink images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs, and special effects for movies. As of 1/28/98, the company could not be located; therefore, contact/product information is no longer valid.
The laboratory demonstration and signal processing of the inverse synthetic aperture imaging ladar
NASA Astrophysics Data System (ADS)
Gao, Si; Zhang, ZengHui; Xu, XianWen; Yu, WenXian
2017-10-01
This paper presents a coherent inverse synthetic-aperture imaging ladar (ISAL) system for obtaining high-resolution images. A balanced coherent optical system is built in the laboratory with a binary phase-coded modulation transmit waveform, which differs from the conventional chirp. A complete digital signal processing solution is proposed, including both the quality phase gradient autofocus (QPGA) algorithm and the cubic phase function (CPF) algorithm. Several high-resolution, well-focused ISAL images of retro-reflecting targets are shown to validate the concepts. It is shown that high-resolution images can be achieved and that the influences of vibrations of the platform, targets, and radar can be automatically compensated by the distinctive laboratory system and digital signal processing.
NASA Astrophysics Data System (ADS)
Calta, Nicholas P.; Wang, Jenny; Kiss, Andrew M.; Martin, Aiden A.; Depond, Philip J.; Guss, Gabriel M.; Thampy, Vivek; Fong, Anthony Y.; Weker, Johanna Nelson; Stone, Kevin H.; Tassone, Christopher J.; Kramer, Matthew J.; Toney, Michael F.; Van Buuren, Anthony; Matthews, Manyalibo J.
2018-05-01
In situ X-ray-based measurements of the laser powder bed fusion (LPBF) additive manufacturing process produce unique data for model validation and improved process understanding. Synchrotron X-ray imaging and diffraction provide high resolution, bulk sensitive information with sufficient sampling rates to probe melt pool dynamics as well as phase and microstructure evolution. Here, we describe a laboratory-scale LPBF test bed designed to accommodate diffraction and imaging experiments at a synchrotron X-ray source during LPBF operation. We also present experimental results using Ti-6Al-4V, a widely used aerospace alloy, as a model system. Both imaging and diffraction experiments were carried out at the Stanford Synchrotron Radiation Lightsource. Melt pool dynamics were imaged at frame rates up to 4 kHz with a ˜1.1 μm effective pixel size and revealed the formation of keyhole pores along the melt track due to vapor recoil forces. Diffraction experiments at sampling rates of 1 kHz captured phase evolution and lattice contraction during the rapid cooling present in LPBF within a ˜50 × 100 μm area. We also discuss the utility of these measurements for model validation and process improvement.
Calta, Nicholas P; Wang, Jenny; Kiss, Andrew M; Martin, Aiden A; Depond, Philip J; Guss, Gabriel M; Thampy, Vivek; Fong, Anthony Y; Weker, Johanna Nelson; Stone, Kevin H; Tassone, Christopher J; Kramer, Matthew J; Toney, Michael F; Van Buuren, Anthony; Matthews, Manyalibo J
2018-05-01
In situ X-ray-based measurements of the laser powder bed fusion (LPBF) additive manufacturing process produce unique data for model validation and improved process understanding. Synchrotron X-ray imaging and diffraction provide high resolution, bulk sensitive information with sufficient sampling rates to probe melt pool dynamics as well as phase and microstructure evolution. Here, we describe a laboratory-scale LPBF test bed designed to accommodate diffraction and imaging experiments at a synchrotron X-ray source during LPBF operation. We also present experimental results using Ti-6Al-4V, a widely used aerospace alloy, as a model system. Both imaging and diffraction experiments were carried out at the Stanford Synchrotron Radiation Lightsource. Melt pool dynamics were imaged at frame rates up to 4 kHz with a ∼1.1 μm effective pixel size and revealed the formation of keyhole pores along the melt track due to vapor recoil forces. Diffraction experiments at sampling rates of 1 kHz captured phase evolution and lattice contraction during the rapid cooling present in LPBF within a ∼50 × 100 μm area. We also discuss the utility of these measurements for model validation and process improvement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calta, Nicholas P.; Wang, Jenny; Kiss, Andrew M.
In situ X-ray-based measurements of the laser powder bed fusion (LPBF) additive manufacturing process produce unique data for model validation and improved process understanding. Synchrotron X-ray imaging and diffraction provide high resolution, bulk sensitive information with sufficient sampling rates to probe melt pool dynamics as well as phase and microstructure evolution. Here, we describe a laboratory-scale LPBF test bed designed to accommodate diffraction and imaging experiments at a synchrotron X-ray source during LPBF operation. We also present experimental results using Ti-6Al-4V, a widely used aerospace alloy, as a model system. Both imaging and diffraction experiments were carried out at the Stanford Synchrotron Radiation Lightsource. Melt pool dynamics were imaged at frame rates up to 4 kHz with a ~1.1 μm effective pixel size and revealed the formation of keyhole pores along the melt track due to vapor recoil forces. Diffraction experiments at sampling rates of 1 kHz captured phase evolution and lattice contraction during the rapid cooling present in LPBF within a ~50 × 100 μm area. In conclusion, we also discuss the utility of these measurements for model validation and process improvement.
Calta, Nicholas P.; Wang, Jenny; Kiss, Andrew M.; ...
2018-05-01
In situ X-ray-based measurements of the laser powder bed fusion (LPBF) additive manufacturing process produce unique data for model validation and improved process understanding. Synchrotron X-ray imaging and diffraction provide high resolution, bulk sensitive information with sufficient sampling rates to probe melt pool dynamics as well as phase and microstructure evolution. Here, we describe a laboratory-scale LPBF test bed designed to accommodate diffraction and imaging experiments at a synchrotron X-ray source during LPBF operation. We also present experimental results using Ti-6Al-4V, a widely used aerospace alloy, as a model system. Both imaging and diffraction experiments were carried out at the Stanford Synchrotron Radiation Lightsource. Melt pool dynamics were imaged at frame rates up to 4 kHz with a ~1.1 μm effective pixel size and revealed the formation of keyhole pores along the melt track due to vapor recoil forces. Diffraction experiments at sampling rates of 1 kHz captured phase evolution and lattice contraction during the rapid cooling present in LPBF within a ~50 × 100 μm area. In conclusion, we also discuss the utility of these measurements for model validation and process improvement.
Fingerprint image enhancement by differential hysteresis processing.
Blotta, Eduardo; Moler, Emilce
2004-05-10
A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When fingerprints have been taken without care, and are blurred and in some cases mostly illegible, as in the case presented here, their classification and comparison become nearly impossible. A combination of spatial-domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve these kinds of images. This set of filtering methods proved to be satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results.
Giaddui, Tawfik; Yu, Jialu; Manfredi, Denise; Linnemann, Nancy; Hunter, Joanne; O'Meara, Elizabeth; Galvin, James; Bialecki, Brian; Xiao, Ying
2016-01-01
Transmission of Imaging and Data (TRIAD) is a standard-based system built by the American College of Radiology to provide the seamless exchange of images and data for accreditation of clinical trials and registries. Scripts of structures' names validation profiles created in TRIAD are used in the automated submission process. It is essential for users to understand the logistics of these scripts for successful submission of radiation therapy cases with less iteration. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Geng, Xiaonan; Li, Qiang; Tsui, Pohsiang; Wang, Chiaoyin; Liu, Haoli
2013-09-01
To evaluate the reliability of diagnostic ultrasound-based temperature and elasticity imaging during radiofrequency ablation (RFA) through ex vivo experiments. Porcine liver samples (n=7) were used for RFA experiments at two power levels (10 and 50 W). The RFA process was monitored by a diagnostic ultrasound imager, and the data were captured postoperatively for temperature and elasticity image analysis. Infrared thermometry was applied concurrently to calibrate temperature changes during the RFA process. Results demonstrated that temperature imaging was valid under 10 W RF exposure (r=0.95), but the estimated ablation zone was no longer consistent with the reference infrared temperature distribution under high RF exposure. The elasticity change reflected the ablation zone well under a 50 W exposure, whereas under low exposure the thermal lesion could not be detected reliably owing to the limited temperature elevation and incomplete tissue necrosis. Diagnostic ultrasound-based temperature and elasticity imaging is thus valid for monitoring the RFA process: temperature estimation reflects mild-power RF ablation dynamics well, whereas elasticity-change estimation predicts tissue necrosis well. This study provides advances toward using diagnostic ultrasound to monitor RFA and other thermal-based interventions.
In-Line Monitoring of a Pharmaceutical Pan Coating Process by Optical Coherence Tomography.
Markl, Daniel; Hannesschläger, Günther; Sacher, Stephan; Leitner, Michael; Buchsbaum, Andreas; Pescod, Russel; Baele, Thomas; Khinast, Johannes G
2015-08-01
This work demonstrates a new in-line measurement technique for monitoring the coating growth of randomly moving tablets in a pan coating process. In-line quality control is performed by an optical coherence tomography (OCT) sensor allowing nondestructive and contact-free acquisition of cross-section images of film coatings in real time. The coating thickness can be determined directly from these OCT images, and no chemometric calibration models are required for quantification. Coating thickness measurements are extracted from the images by a fully automated algorithm. Results of the in-line measurements are validated using off-line OCT images, thickness calculations from tablet dimension measurements, and weight gain measurements. Validation measurements are performed on sample tablets periodically removed from the process during production. Reproducibility of the results is demonstrated by three batches produced under the same process conditions. OCT enables multiple direct measurements of the coating thickness on individual tablets rather than providing the average coating thickness of a large number of tablets. This gives substantially more information about the coating quality, that is, intra- and intertablet coating variability, than standard quality control methods. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D
2010-08-01
Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models. Copyright 2010 Elsevier B.V. All rights reserved.
A software platform for phase contrast x-ray breast imaging research.
Bliznakova, K; Russo, P; Mettivier, G; Requardt, H; Popov, P; Bravin, A; Buliev, I
2015-06-01
To present and validate a computer-based simulation platform dedicated to phase contrast x-ray breast imaging research. The software platform, developed at the Technical University of Varna on the basis of a previously validated x-ray imaging software simulator, comprises modules for object creation and for x-ray image formation. These modules were updated to take into account the refractive index for phase contrast imaging and to implement the Fresnel-Kirchhoff diffraction theory of the propagating x-ray waves. Projection images are generated in an in-line acquisition geometry. To test and validate the platform, several phantoms differing in complexity were constructed and imaged at 25 keV and 60 keV at beamline ID17 of the European Synchrotron Radiation Facility. The software platform was used to design computational phantoms mimicking those used in the experimental study and to generate x-ray images in absorption and phase contrast modes. The visual and quantitative results of the validation process showed an overall good correlation between simulated and experimental images and demonstrate the potential of this platform for research in phase contrast x-ray imaging of the breast. The application of the platform is demonstrated in a feasibility study of phase contrast images of complex inhomogeneous and anthropomorphic breast phantoms, compared with x-ray images generated in absorption mode. The improved visibility of mammographic structures suggests further investigation and optimisation of phase contrast x-ray breast imaging, especially when abnormalities are present. The software platform can also be exploited for educational purposes. Copyright © 2015 Elsevier Ltd. All rights reserved.
Peter, Silvia; Modregger, Peter; Fix, Michael K.; Volken, Werner; Frei, Daniel; Manser, Peter; Stampanoni, Marco
2014-01-01
Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging of soft tissue matter. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented that takes both particle- and wave-like properties of X-rays into consideration: a split approach combining a Monte Carlo (MC) based sample part with a wave-optics-based propagation part. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner. The framework can be used for the simulation of phase-sensitive X-ray imaging, for instance of grating interferometry or propagation-based imaging. PMID:24763652
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration and resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. For this class of ill-posed inverse problems, we estimate the PSF parameters from the raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory that reduce the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
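The GCV selection step lends itself to a compact illustration. Below is a minimal sketch, not the authors' Lanczos/Gauss-quadrature implementation: it scores candidate regularization parameters for a toy 1D Tikhonov deblurring problem directly from an SVD, with the Gaussian blur matrix and noise level as assumed stand-ins.

```python
import numpy as np

def gcv_score(lam, U, s, b):
    """Score proportional to the GCV function for Tikhonov regularization,
    min ||Ax - b||^2 + lam^2 ||x||^2, computed from the SVD A = U S V^T."""
    beta = U.T @ b
    f = s**2 / (s**2 + lam**2)                 # Tikhonov filter factors
    resid = np.sum(((1.0 - f) * beta)**2)      # ||b - A x_lam||^2
    return resid / (len(b) - np.sum(f))**2     # denominator: trace(I - influence)^2

# Toy 1D deblurring problem with a Gaussian blur matrix as a stand-in PSF.
n = 64
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0)**2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.zeros(n); x_true[20:30] = 1.0
rng = np.random.default_rng(0)
b = A @ x_true + 0.01 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
lams = np.logspace(-4, 0, 60)
lam = lams[int(np.argmin([gcv_score(l, U, s, b) for l in lams]))]
x_hat = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ b))   # regularized solution
print(f"GCV-selected regularization parameter: {lam:.4g}")
```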
Supervised restoration of degraded medical images using multiple-point geostatistics.
Pham, Tuan D
2012-06-01
Reducing noise in medical images has been an important issue of research and development for medical diagnosis, patient treatment, and validation of biomedical hypotheses. Noise inherently exists in medical and biological images because of acquisition and transmission in any imaging device. Unlike image enhancement, image restoration is the process of removing noise from a degraded image in order to recover its original version as closely as possible. This paper presents a statistically supervised approach for medical image restoration using the concept of multiple-point geostatistics. Experimental results have shown the effectiveness of the proposed technique, which has potential as a new methodology for medical and biological image processing. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
A REMOTE SENSING AND GIS-ENABLED HIGHWAY ASSET MANAGEMENT SYSTEM PHASE 2
DOT National Transportation Integrated Search
2018-02-02
The objective of this project is to validate the use of commercial remote sensing and spatial information (CRS&SI) technologies, including emerging 3D line laser imaging technology, mobile light detection and ranging (LiDAR), image processing algorit...
A remote sensing and GIS-enabled highway asset management system : final report.
DOT National Transportation Integrated Search
2016-04-01
The objective of this project is to validate the use of commercial remote sensing and spatial information (CRS&SI) technologies, including emerging 3D line laser imaging technology, mobile LiDAR, image processing algorithms, and GPS/GIS technolog...
Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight
NASA Technical Reports Server (NTRS)
Suorsa, Raymond; Sridhar, Banavar
1991-01-01
A validation facility in use at the NASA Ames Research Center is described, aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6-degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.
Study on the Classification of GAOFEN-3 Polarimetric SAR Images Using Deep Neural Network
NASA Astrophysics Data System (ADS)
Zhang, J.; Zhang, J.; Zhao, Z.
2018-04-01
Polarimetric synthetic aperture radar (POLSAR) imaging principles dictate that image quality is degraded by speckle noise, so the recognition accuracy of traditional image classification methods is reduced by this interference. Deep convolutional neural networks have transformed traditional image processing methods and brought the field of computer vision to a new stage, with a strong ability to learn deep features and to fit large datasets. Based on the basic characteristics of polarimetric SAR images, this paper studies surface cover classification using deep learning. We fused fully polarimetric SAR features at different scales into RGB images, iteratively trained the convolutional-neural-network-based GoogLeNet model on them, and then used the trained model to classify a validation dataset. First, referring to optical imagery, we labeled the surface cover types in a GF-3 POLSAR image with 8-m resolution and collected samples for each category. To meet the GoogLeNet requirement of 256 × 256 pixel input images, and taking into account the limited SAR resolution, the original images were resampled during pre-processing. POLSAR image slices at sampling intervals of 2 m and 1 m were trained separately and validated against the verification dataset. The training accuracy of the GoogLeNet model was 94.89% with the 2-m resampled polarimetric SAR images and 92.65% with the 1-m resampled images.
NASA Technical Reports Server (NTRS)
Browder, Joan A.; May, L. Nelson, Jr.; Rosenthal, Alan; Baumann, Robert H.; Gosselink, James G.
1987-01-01
A stochastic spatial computer model addressing coastal resource problems in Louisiana is being refined and validated using Thematic Mapper (TM) imagery. The TM images of brackish marsh sites were processed, and data were tabulated on spatial parameters from TM images of the salt marsh sites. The Fisheries Image Processing System (FIPS) was used to analyze the TM scene. Activities concentrated on improving the structure of the model and on developing a structure and methodology for calibrating the model with spatial-pattern data from the TM imagery.
NASA Technical Reports Server (NTRS)
Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John
2016-01-01
In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact that limits the accuracy of subpixel image offset estimation using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects offset values at which the pixel boundaries of the two images are nearly aligned. Because a plot of estimated offset against input displacement takes a staircase shape, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite and then scaled to an appropriate pixel size. Minimizing processing time motivates choosing map pixels the same size as the GOES-R pixels: at this pixel size the shift estimate is computed efficiently, but the stair-step artifact is present. If the map pixel is very small, the stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale of truth maps for registering GOES-R ABI images.
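A 1D toy version of the correlation step makes the stair-step effect concrete. The sketch below is a stand-in, not the GOES-R IVV software: it estimates a subpixel shift by parabolic interpolation of the correlation peak, the step whose bias toward whole-pixel alignments produces the staircase shape described above.

```python
import numpy as np

def subpixel_shift(a, b):
    """Estimate the shift of b relative to a by circular cross-correlation,
    refining the integer peak with a three-point parabolic fit."""
    n = len(a)
    corr = np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))
    k = int(np.argmax(corr))
    c0, c1, c2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
    # The parabolic refinement is only approximate, which pulls estimates
    # toward integer offsets -- the stair-step artifact discussed above.
    delta = 0.5 * (c0 - c2) / (c0 - 2.0 * c1 + c2)
    shift = k + delta
    return shift if shift <= n / 2 else shift - n

# Band-limited periodic test signal shifted by a known subpixel amount.
n = 256
x = np.arange(n)
a = np.sin(2 * np.pi * 3 * x / n) + 0.5 * np.sin(2 * np.pi * 7 * x / n)
true_shift = 3.4
b = np.interp(x - true_shift, x, a, period=n)
print(f"true shift {true_shift}, estimated {subpixel_shift(a, b):.3f}")
```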
NASA Astrophysics Data System (ADS)
Paloscia, S.; Pettinato, S.; Santi, E.; Pierdicca, N.; Pulvirenti, L.; Notarnicola, C.; Pace, G.; Reppucci, A.
2011-11-01
The main objective of this research is to develop, test, and validate a soil moisture content (SMC) algorithm for the GMES Sentinel-1 characteristics, within the framework of an ESA project. The SMC product, to be generated from Sentinel-1 data, requires an algorithm able to run operationally in near-real time and deliver the product to the GMES services within 3 hours of observation. Two complementary approaches have been proposed: an artificial neural network (ANN), which represents the best compromise between retrieval accuracy and processing time and thus complies with the timeliness requirement; and a Bayesian multi-temporal approach, which increases retrieval accuracy, especially where little ancillary data are available, at the cost of computational efficiency, taking advantage of the frequent revisits achieved by Sentinel-1. The algorithm was validated in several test areas in Italy, the US, and Australia, and finally in Spain with a 'blind' validation. The multi-temporal Bayesian algorithm was validated in central Italy. The validation results are in all cases very much in line with the requirements, although the blind validation was penalized by the availability of only VV-polarization SAR images and low-resolution MODIS NDVI, with an RMS error slightly above 4%.
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC), that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
Quantum image processing: A review of advances in its security technologies
NASA Astrophysics Data System (ADS)
Yan, Fei; Iliyasu, Abdullah M.; Le, Phuc Q.
In this review, we present an overview of the advances made in quantum image processing (QIP), comprising the image representations, the operations realizable on them, and the likely protocols and algorithms for their applications. In particular, we focus on recent progress in QIP-based security technologies, including quantum watermarking, quantum image encryption, and quantum image steganography. This review is aimed at providing readers with a succinct, yet adequate, compendium of the progress made in the QIP sub-area. Hopefully, this effort will stimulate further interest in the pursuit of more advanced algorithms and experimental validations for available technologies and extensions to other domains.
Steganalysis based on reducing the differences of image statistical characteristics
NASA Astrophysics Data System (ADS)
Wang, Ran; Niu, Shaozhang; Ping, Xijian; Zhang, Tao
2018-04-01
Compared with the embedding process, image content makes a more significant impact on the differences in image statistical characteristics. This makes image steganalysis a classification problem with larger within-class scatter distances and smaller between-class scatter distances; as a result, the steganalysis features become inseparable owing to the differences in image statistical characteristics. In this paper, a new steganalysis framework is proposed that reduces the differences in image statistical characteristics caused by various contents and processing methods. The given images are segmented into several sub-images according to texture complexity. Steganalysis features are extracted separately from each subset with the same or similar texture complexity to build a classifier. The final steganalysis result is obtained through a weighted fusion process. Theoretical analysis and experimental results demonstrate the validity of the framework.
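As a rough illustration of the framework, the sketch below groups sub-images into two texture-complexity buckets, trains one classifier per bucket, and fuses the per-bucket scores with size weights. The complexity measure, difference-histogram features, synthetic cover/stego data, and the ±1 embedding are all stand-ins chosen for brevity, not the paper's actual features or datasets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def complexity(block):
    """Stand-in texture-complexity measure: mean absolute gradient."""
    gy, gx = np.gradient(block.astype(float))
    return float(np.mean(np.abs(gx) + np.abs(gy)))

def blocks(img, size=32):
    return [img[y:y + size, x:x + size]
            for y in range(0, img.shape[0] - size + 1, size)
            for x in range(0, img.shape[1] - size + 1, size)]

def features(block):
    """Stand-in steganalysis features: histogram of horizontal pixel differences."""
    d = np.diff(block.astype(int), axis=1).ravel()
    return np.histogram(d, bins=16, range=(-8, 8), density=True)[0]

# Hypothetical cover/stego data: smooth ramps plus texture; stego adds +/-1 noise.
rng = np.random.default_rng(1)
base = np.add.outer(np.arange(128), np.arange(128)).astype(float)
covers = [np.clip(base + a * rng.standard_normal((128, 128)), 0, 255).astype(int)
          for a in np.linspace(0.5, 6.0, 30)]
stegos = [np.clip(c + rng.choice([-1, 0, 1], c.shape), 0, 255) for c in covers]

# Split sub-images into two complexity buckets and train one classifier each.
all_blocks = [(label, blk) for label, imgs in ((0, covers), (1, stegos))
              for img in imgs for blk in blocks(img)]
thresh = float(np.median([complexity(b) for _, b in all_blocks]))
models = {}
for bucket in (0, 1):
    sel = [(lab, b) for lab, b in all_blocks if (complexity(b) >= thresh) == bucket]
    X = [features(b) for _, b in sel]
    y = [lab for lab, _ in sel]
    models[bucket] = LogisticRegression(max_iter=2000).fit(X, y)

def stego_score(img):
    """Weighted fusion of per-bucket probabilities; weights = bucket sizes."""
    num = den = 0.0
    for bucket, model in models.items():
        fs = [features(b) for b in blocks(img)
              if (complexity(b) >= thresh) == bucket]
        if fs:
            num += len(fs) * model.predict_proba(fs)[:, 1].mean()
            den += len(fs)
    return num / den

print("score on a stego image:", round(stego_score(stegos[0]), 3))
```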
Use of discrete chromatic space to tune the image tone in a color image mosaic
NASA Astrophysics Data System (ADS)
Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li
2003-09-01
Colour image processing is an important problem. The prevailing approach is to transform the RGB colour space into another colour space, such as HIS (hue, intensity, saturation), YIQ, or LUV. However, processing a colour airborne image in a single colour space may not be valid, because the electromagnetic wave is physically altered in every wave band, while the colour image is perceived through psychological vision. It is therefore necessary to propose an approach consistent with both the physical transformation and psychological perception. An analysis of how to use several colour spaces together to process colour airborne photos is presented, along with an application to tuning the image tone in a colour airborne image mosaic. In practice, a complete approach to performing the mosaic of colour airborne images that takes full advantage of the relative colour spaces is discussed in the application.
Ray, Pritha
2011-04-01
The development and marketing of new drugs require stringent validation steps that are expensive and time-consuming. Non-invasive multimodality molecular imaging using reporter genes holds great potential to expedite these processes at reduced cost. New generations of smarter molecular imaging strategies, such as split reporter, bioluminescence resonance energy transfer, and multimodality fusion reporter technologies, will further help to streamline and shorten the drug discovery and development process. This review illustrates the importance and potential of molecular imaging using multimodality reporter genes in drug development at the preclinical phase.
Novel optical scanning cryptography using Fresnel telescope imaging.
Yan, Aimin; Sun, Jianfeng; Hu, Zhijuan; Zhang, Jingtao; Liu, Liren
2015-07-13
We propose a new method, called modified optical scanning cryptography, that uses a Fresnel telescope imaging technique for the encryption and decryption of remote objects. An image or object can be optically encrypted on the fly by the Fresnel telescope scanning system together with an encryption key. For image decryption, the encrypted signals are received and processed with an optical coherent heterodyne detection system. The proposed method achieves strong performance through secure Fresnel telescope scanning with orthogonally polarized beams and efficient all-optical information processing. The validity of the proposed method is demonstrated by numerical simulations and experimental results.
On an image reconstruction method for ECT
NASA Astrophysics Data System (ADS)
Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro
2007-04-01
An image produced by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method is proposed. The method is based on the assumption that the relationship between the measured data and the source is described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method using deconvolution, in which the point spread function (PSF) and line spread function (LSF) play a key role. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. To verify its validity, ECT data for a SUS316 plate (200 x 200 x 10 mm) with an artificial machined hole and a notch flaw were acquired with differential coil-type sensors (produced by ZETEC Inc.) and analyzed by the proposed method. The method restored a sharp image of discrete multiple holes from data in which the responses of the holes interfered, and the estimated width of the line flaw was much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws is demonstrated by the much finer reconstructed images.
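The convolution assumption can be illustrated with a short numerical sketch. The following is not the authors' procedure but a generic regularized (Wiener-style) inverse filter, with a Gaussian stand-in for the measured PSF, recovering two closely spaced holes whose blurred responses interfere:

```python
import numpy as np

def wiener_deconvolve(image, psf, eps=1e-2):
    """Regularized inverse filter: divide out the PSF in the Fourier domain,
    with eps guarding against division by a near-zero frequency response."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    F = np.conj(H) * G / (np.abs(H)**2 + eps)
    return np.real(np.fft.ifft2(F))

n = 128
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
psf = np.exp(-(x**2 + y**2) / (2 * 5.0**2))   # Gaussian stand-in response
psf /= psf.sum()

# Two closely spaced point flaws whose blurred responses interfere.
flaws = np.zeros((n, n))
flaws[60, 55] = flaws[60, 70] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(flaws) *
                               np.fft.fft2(np.fft.ifftshift(psf))))

restored = wiener_deconvolve(blurred, psf)
print("restored peaks at columns:", sorted(np.argsort(restored[60])[-2:]))
```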
Edge enhancement of color images using a digital micromirror device.
Di Martino, J Matías; Flores, Jorge L; Ayubi, Gastón A; Alonso, Julia R; Fernández, Ariel; Ferrari, José A
2012-06-01
A method for orientation-selective enhancement of edges in color images is proposed. The method utilizes the capacity of digital micromirror devices to generate a positive and a negative color replica of the input image. When the two replicas are slightly displaced and imaged together, an image with enhanced edges is obtained. The proposed technique does not require a coherent light source or precise alignment, and it could be potentially useful for processing large image sequences in real time. Validation experiments are presented.
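Numerically, overlaying a positive replica with a slightly displaced negative replica amounts to subtracting a shifted copy of the image from itself. A minimal sketch of that digital analogue, with an assumed synthetic test image:

```python
import numpy as np

def edge_enhance(img, dx=0, dy=1):
    """Optically, the DMD overlays a positive and a displaced negative
    replica; numerically this is the image minus a shifted copy of itself."""
    shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))
    return np.abs(img.astype(float) - shifted)

# Test image with a bright square: a vertical displacement (dy=1) enhances
# horizontal edges; dx=1 would enhance vertical ones instead.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
edges = edge_enhance(img, dx=0, dy=1)
print("edge rows found:", np.unique(np.nonzero(edges)[0]))
```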
Kang, Kyoung-Tak; Kim, Sung-Hwan; Son, Juhyun; Lee, Young Han; Koh, Yong-Gon
2017-01-01
Computational models have been identified as efficient techniques in the clinical decision-making process. However, in most previous studies the computational model was validated using published data, and the kinematic validation of such models remains a challenge. Recently, studies using medical imaging have provided a more accurate visualization of knee joint kinematics. The purpose of the present study was to perform a kinematic validation of a subject-specific computational knee joint model by comparison with the subject's medical imaging under identical laxity conditions. The laxity test applied anterior-posterior drawer at 90° flexion and varus-valgus at 20° flexion with a series of stress radiographs, a Telos device, and computed tomography. The loading condition in the computational subject-specific knee joint model was identical to the laxity test condition in the medical images. Our computational model showed knee laxity kinematic trends consistent with the computed tomography images, with negligible differences attributable to the indirect application of the subject's in vivo material properties. Medical imaging based on computed tomography with the laxity test allowed us to measure not only the precise translation but also the rotation of the knee joint. This methodology will be beneficial in the validation of laxity tests for subject- or patient-specific computational models.
Three validation metrics for automated probabilistic image segmentation of brain tumours
Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.
2005-01-01
The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts' manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions of incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
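Of the three metrics, the Dice similarity coefficient is the most compact to illustrate. Below is a minimal sketch of the threshold-sweep maximization described above, using an assumed synthetic probability map and gold-standard mask rather than the paper's MR data:

```python
import numpy as np

def dice(seg, truth):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(seg, truth).sum()
    return 2.0 * inter / (seg.sum() + truth.sum())

# Stand-in probabilistic tumour map and latent gold-standard mask.
rng = np.random.default_rng(2)
truth = np.zeros((64, 64), bool)
truth[20:45, 15:40] = True
prob = np.clip(0.7 * truth + 0.15 + 0.2 * rng.random(truth.shape), 0, 1)

# Sweep decision thresholds and keep the Dice-optimal one, mirroring the
# per-metric maximization described in the abstract.
thresholds = np.linspace(0.05, 0.95, 19)
scores = [dice(prob >= t, truth) for t in thresholds]
best = thresholds[int(np.argmax(scores))]
print(f"optimal threshold {best:.2f}, Dice {max(scores):.3f}")
```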
Hijazi, Bilal; Cool, Simon; Vangeyte, Jürgen; Mertens, Koen C; Cointault, Frédéric; Paindavoine, Michel; Pieters, Jan G
2014-11-13
A 3D imaging technique using a high-speed binocular stereovision system was developed, in combination with corresponding image processing algorithms, for accurate determination of the parameters of particles leaving the spinning disks of centrifugal fertilizer spreaders. Validation of the stereo-matching algorithm using a virtual 3D stereovision simulator indicated an error of less than 2 pixels for 90% of the particles. The setup was validated using the cylindrical spread pattern of an experimental spreader: a 2D correlation coefficient of 90% and a relative error of 27% were found between the experimental results and the (simulated) spread pattern obtained with the developed setup. In combination with a ballistic flight model, the developed image acquisition and processing algorithms enable fast determination and evaluation of the spread pattern, which can be used as a tool for spreader design and precise machine calibration.
Lahmiri, Salim; Boukadoum, Mounir
2013-01-01
A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image, as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain a new Gabor-filtered image, whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammograms and on retina and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction. PMID:27006906
Robust crop and weed segmentation under uncontrolled outdoor illumination.
Jeon, Hong Y; Tian, Lei F; Zhu, Heping
2011-01-01
An image processing algorithm for detecting individual weeds was developed and evaluated. The weed detection processes included normalized excessive green conversion, statistical threshold value estimation, adaptive image segmentation, median filtering, morphological feature calculation, and an artificial neural network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illumination. A field robot with a machine vision system captured field images under outdoor illumination, and the image processing algorithm processed them automatically without manual adjustment. The errors of the algorithm when processing 666 field images ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants from the identified plants and considered the rest as weeds; however, the ANN identification rates for crop plants improved to 95.1% after the error sources in the algorithm were addressed. The developed weed detection and image processing algorithm provides a novel method to identify plants against a soil background under uncontrolled outdoor illumination and to differentiate weeds from crop plants. Thus, the proposed machine vision and processing algorithm may be useful for outdoor applications, including plant-specific direct applications (PSDA).
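The first two stages of such a pipeline, normalized excess-green conversion and statistical thresholding, can be sketched in a few lines. The index definition (2g - r - b on chromaticity coordinates) is standard; the mean-plus-k-sigma threshold and the synthetic field patch are assumptions for the sketch, not the paper's calibrated values:

```python
import numpy as np

def excess_green(rgb):
    """Normalized excess-green index 2g - r - b on chromaticity coordinates."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2) + 1e-9
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    return 2 * g - r - b

def segment_plants(rgb, k=1.0):
    """Threshold at mean + k*std of the index: a simple stand-in for the
    statistical threshold estimation step of the pipeline."""
    exg = excess_green(rgb)
    return exg > exg.mean() + k * exg.std()

# Synthetic field patch: brown soil background with one green blob.
img = np.full((100, 100, 3), (120, 90, 60), dtype=np.uint8)
img[40:60, 40:60] = (60, 140, 50)
mask = segment_plants(img)
print("plant pixels found:", int(mask.sum()))
```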
Estimation of the Scatterer Distribution of the Cirrhotic Liver using Ultrasonic Image
NASA Astrophysics Data System (ADS)
Yamaguchi, Tadashi; Hachiya, Hiroyuki
1998-05-01
In the B-mode image of the liver obtained with an ultrasonic imaging system, the speckle pattern changes with the progression of diseases such as liver cirrhosis. In this paper, we present the statistical characteristics of the echo envelope of the liver and a technique to extract information on the scatterer distribution from normal and cirrhotic liver images using constant false alarm rate (CFAR) processing. We analyze the relationship between the extracted scatterer distribution and the stage of liver cirrhosis. The ratio of the area in which the amplitude of the processed signal exceeds the threshold to the entire processed image area is related quantitatively to the stage of liver cirrhosis. The proposed technique is found to be valid for the quantitative diagnosis of liver cirrhosis.
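A cell-averaging variant of CFAR conveys the idea. The sketch below, with an assumed Rayleigh-like synthetic envelope and illustrative window sizes, thresholds each pixel against a local background mean and reports the area-ratio statistic described above:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cfar_detect(envelope, win=15, guard=3, factor=1.5):
    """Cell-averaging CFAR: compare each pixel with the mean of a local
    background window (guard cells excluded) scaled by a threshold factor."""
    big = uniform_filter(envelope, size=win)
    small = uniform_filter(envelope, size=2 * guard + 1)
    n_big, n_small = win**2, (2 * guard + 1)**2
    background = (big * n_big - small * n_small) / (n_big - n_small)
    return envelope > factor * background

# Synthetic speckle envelope (Rayleigh-like) with brighter scatterers added.
rng = np.random.default_rng(3)
env = np.abs(rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128)))
idx = rng.integers(10, 118, size=(30, 2))
env[idx[:, 0], idx[:, 1]] += 4.0

detections = cfar_detect(env)
area_ratio = detections.mean()  # the area-ratio statistic described above
print(f"fraction of pixels above CFAR threshold: {area_ratio:.4f}")
```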
Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation
NASA Astrophysics Data System (ADS)
Bila, Z.; Reznicek, J.; Pavelka, K.
2013-07-01
This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused dataset, called a "textured range image", provides conservators and historians with more reliable information about the investigated object than either dataset alone. A simple example of the fusion of range and panoramic images, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. We first describe the data acquisition, then the processing of both datasets into a format suitable for fusion, and finally the fusion itself. The fusion divides into two main parts: transformation and remapping. In the transformation part, the two images are related by matching similar features detected in both with a suitable detector, which yields a transformation matrix for projecting the range image onto the panoramic image. The range data are then remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The image fusion is validated by comparing similar features extracted from both datasets.
StreakDet data processing and analysis pipeline for space debris optical observations
NASA Astrophysics Data System (ADS)
Virtanen, Jenni; Flohrer, Tim; Muinonen, Karri; Granvik, Mikael; Torppa, Johanna; Poikonen, Jonne; Lehti, Jussi; Santti, Tero; Komulainen, Tuomo; Naranen, Jyri
We describe a novel data processing and analysis pipeline for optical observations of space debris. The monitoring of space object populations requires reliable acquisition of observational data to support the development and validation of space debris environment models and the build-up and maintenance of a catalogue of orbital elements. In addition, data are needed for the assessment of conjunction events and for the support of contingency situations or launches. The currently available, mature image processing algorithms for detection and astrometric reduction of optical data cover objects that cross the sensor field-of-view comparatively slowly, and within a rather narrow, predefined range of angular velocities. By applying specific tracking techniques, the objects appear point-like or as short trails in the exposures. However, the general survey scenario is always a "track before detect" problem, resulting in streaks, i.e., object trails of arbitrary lengths, in the images. The scope of the ESA-funded StreakDet (Streak detection and astrometric reduction) project is to investigate solutions for detecting and reducing streaks from optical images, particularly in the low signal-to-noise ratio (SNR) domain, where algorithms are not readily available yet. For long streaks, the challenge is to extract precise position information and related registered epochs with sufficient precision. Although some considerations for low-SNR processing of streak-like features are available in the current image processing and computer vision literature, these approaches need to be discussed and compared for space debris analysis in order to develop and evaluate prototype implementations. In the StreakDet project, we develop algorithms applicable to single images (as opposed to consecutive frames of the same field) obtained with any observing scenario, including space-based surveys and both low- and high-altitude populations. The proposed processing pipeline starts from the segmentation of the acquired image (i.e., the extraction of all sources), followed by the astrometric and photometric characterization of the candidate streaks, and ends with orbital validation of the detected streaks. A central concept of the pipeline is streak classification, which guides the actual characterization process by identifying the interesting sources and filtering out the uninteresting ones, and by allowing algorithms to be tailored to specific streak classes (e.g., point-like vs. long, disintegrated streaks). To validate the single-image detections, the processing is finalized by orbital analysis, resulting in a preliminary orbital classification (Earth-bound vs. non-Earth-bound orbit) for the detected streaks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, K; Lu, Z; MacMahon, H
Purpose: To investigate the effect of varying system image processing parameters on lung nodule detectability in digital radiography. Methods: An anthropomorphic chest phantom was imaged in the posterior-anterior position using a GE Discovery XR656 digital radiography system. To simulate lung nodules, a polystyrene board with 6.35 mm diameter PMMA spheres was placed adjacent to the phantom (into the x-ray path). Due to magnification, the projected simulated nodules had a diameter in the radiographs of approximately 7.5 mm. The images were processed using one of GE's default chest settings (Factory3) and reprocessed by varying the "Edge" and "Tissue Contrast" processing parameters, which were the two user-configurable parameters for a single edge and contrast enhancement algorithm. For each parameter setting, the nodule signals were calculated by subtracting the chest-only image from the image with simulated nodules. Twenty nodule signals were averaged, Gaussian filtered, and radially averaged in order to generate an approximately noiseless signal. For each processing parameter setting, this noise-free signal and 180 background samples from across the lung were used to estimate ideal observer performance in a signal-known-exactly detection task. Performance was estimated using a channelized Hotelling observer with 10 Laguerre-Gauss channel functions. Results: The "Edge" and "Tissue Contrast" parameters each had an effect on the detectability as calculated by the model observer. The CHO-estimated signal detectability ranged from 2.36 to 2.93 and was highest for "Edge" = 4 and "Tissue Contrast" = -0.15. In general, detectability tended to decrease as "Edge" was increased and as "Tissue Contrast" was increased. A human observer study should be performed to validate the relation to human detection performance. Conclusion: Image processing parameters can affect lung nodule detection performance in radiography. While validation with a human observer study is needed, model observer detectability for common tasks could provide a means for optimizing image processing parameters.
Automatic seed picking for brachytherapy postimplant validation with 3D CT images.
Zhang, Guobin; Sun, Qiyuan; Jiang, Shan; Yang, Zhiyong; Ma, Xiaodong; Jiang, Haisong
2017-11-01
Postimplant validation is an indispensable part of the brachytherapy technique: it provides the feedback necessary to ensure the quality of the operation, and the ability to pick implanted seeds relates directly to the accuracy of validation. To address this, an automatic approach is proposed for picking implanted brachytherapy seeds in 3D CT images. In order to pick the seed configuration (location and orientation) efficiently, the approach starts with segmentation of the seeds from the CT images using a thresholding filter based on the gray-level histogram. Through filtering and denoising, touching seeds and single seeds are distinguished. The novelty of this approach lies in the application of Canny edge detection and an improved concave-point-matching algorithm to separate touching seeds. Through the computation of image moments, the seed configuration can then be determined efficiently. Finally, two different experiments were designed to verify the performance of the proposed approach: (1) a physical phantom with 60 model seeds, and (2) patient data with 16 cases. Through assessment of the validated results by a medical physicist, the proposed method exhibited promising results. The phantom experiment demonstrates that the error of seed location and orientation is within ([Formula: see text]) mm and ([Formula: see text])[Formula: see text], respectively; in addition, most seed location and orientation errors were within 0.8 mm and 3.5[Formula: see text], respectively, across all cases. The average processing time for seed picking was 8.7 s per 100 seeds. In this paper, an automatic, efficient, and robust approach, performed on CT images, is proposed to determine implanted seed locations and orientations in a 3D workspace. In experiments with phantom and patient data, the approach exhibited good performance.
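The image-moment step for recovering a seed's pose is standard enough to sketch. The following computes a centroid from first-order moments and an axis angle from second-order central moments on an assumed synthetic elliptical "seed" mask; it is an illustration, not the paper's pipeline:

```python
import numpy as np

def seed_pose(mask):
    """Location and orientation of one segmented seed from image moments:
    centroid from first moments, axis angle from second central moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx)**2).mean()
    mu02 = ((ys - cy)**2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), np.degrees(theta)

# Synthetic elongated "seed": an ellipse rotated 30 degrees.
yy, xx = np.mgrid[0:64, 0:64]
ang = np.deg2rad(30)
u = (xx - 32) * np.cos(ang) + (yy - 32) * np.sin(ang)
v = -(xx - 32) * np.sin(ang) + (yy - 32) * np.cos(ang)
mask = (u / 10)**2 + (v / 2)**2 <= 1.0   # long axis along u
center, angle = seed_pose(mask)
print(f"centroid {center}, orientation {angle:.1f} deg")
```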
Prahs, Philipp; Radeck, Viola; Mayer, Christian; Cvetkov, Yordan; Cvetkova, Nadezhda; Helbig, Horst; Märker, David
2018-01-01
Intravitreal injections with anti-vascular endothelial growth factor (anti-VEGF) medications have become the standard of care for their respective indications. Optical coherence tomography (OCT) scans of the central retina provide detailed anatomical data and are widely used by clinicians in the decision-making process of anti-VEGF indication. In recent years, significant progress has been made in artificial intelligence and computer vision research. We trained a deep convolutional artificial neural network to predict treatment indication based on central retinal OCT scans without human intervention. A total of 183,402 retinal OCT B-scans acquired between 2008 and 2016 were exported from the institutional image archive of a university hospital. OCT images were cross-referenced with the electronic institutional intravitreal injection records. OCT images followed by an intravitreal injection during the first 21 days after image acquisition were assigned to the 'injection' group, while the same number of random OCT images without intravitreal injections was labeled 'no injection'. After image preprocessing, OCT images were split in a 9:1 ratio into training and test datasets. We trained a GoogLeNet inception deep convolutional neural network and assessed its performance on the validation dataset, calculating prediction accuracy, sensitivity, specificity, and receiver operating characteristics. The deep convolutional neural network was successfully trained on the extracted clinical data. The trained neural network classifier reached a prediction accuracy of 95.5% on the images in the validation dataset. For single retinal B-scans in the validation dataset, a sensitivity of 90.1% and a specificity of 96.2% were achieved. The area under the receiver operating characteristic curve was 0.968 on a per-B-scan basis, and 0.988 when averaging over six B-scans per examination on the validation dataset. Deep artificial neural networks show impressive performance in the classification of retinal OCT scans. After training on historical clinical data, machine learning methods can offer the clinician support in the decision-making process. Care should be taken not to mistake neural network output for a treatment recommendation and to ensure a final thorough evaluation by the treating physician.
NASA Astrophysics Data System (ADS)
Lemaitre, P.; Brunel, M.; Rondeau, A.; Porcheron, E.; Gréhan, G.
2015-12-01
Following changes in aircraft certification rules, instrumentation has to be developed to alert flight crews to potential icing conditions. The technique developed needs to measure in real time the amount of ice and liquid water encountered by the aircraft. Interferometric imaging offers an interesting solution: it is currently used to measure the size of regular droplets, and it can further measure the size of irregular particles from the analysis of their speckle-like out-of-focus images. However, conventional image processing needs to be sped up to be compatible with real-time detection of icing conditions. This article presents the development of an optimised algorithm to accelerate image processing. The proposed algorithm is based on the detection of each interferogram using the gradient pair vector method, which is shown to be 13 times faster than the conventional Hough transform. The algorithm is validated on synthetic images of mixed-phase clouds, and finally tested and validated under laboratory conditions. This algorithm should have important applications in the size measurement of droplets and ice particles for aircraft safety, cloud microphysics investigation, and, more generally, the real-time analysis of triphasic flows using interferometric particle imaging.
Edge enhancement and image equalization by unsharp masking using self-adaptive photochromic filters.
Ferrari, José A; Flores, Jorge L; Perciante, César D; Frins, Erna
2009-07-01
A new method for real-time edge enhancement and image equalization using photochromic filters is presented. The reversible self-adaptive capacity of photochromic materials is used to create an unsharp mask of the original image, which produces a kind of self-filtering of that image. Unlike usual Fourier (coherent) image processing, the proposed technique can also be used with incoherent illumination. Validation experiments with bacteriorhodopsin and photochromic glass are presented.
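A digital analogue of the photochromic mask can be sketched briefly: blur the image to form the self-adaptive mask, then divide for equalization and subtract for edge enhancement. The Gaussian blur width and ramp-lit test scene below are assumptions of the sketch, not measured filter properties:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_equalize(img, sigma=8.0, eps=1e-3):
    """Digital analogue of the photochromic filter: the blurred image plays
    the role of the self-adaptive mask; dividing by it equalizes uneven
    illumination, and the difference term enhances edges."""
    mask = gaussian_filter(img.astype(float), sigma)
    equalized = img / (mask + eps)   # transmission through the mask
    edges = img - mask               # classic unsharp difference
    return equalized, edges

# Test scene: ramp illumination with a brighter vertical stripe (step edges).
x = np.linspace(0.2, 1.0, 256)
scene = np.tile(x, (256, 1))
scene[:, 100:140] *= 1.5
eq, ed = unsharp_equalize(scene)
print("edge response peak at column:", int(np.argmax(np.abs(ed).mean(axis=0))))
```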
Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.12
2015-09-03
NPP) with the VIIRS sensor package, as well as data from the Geostationary Ocean Color Imager (GOCI) sensor aboard the Communication Ocean and... capability. • Prepare the NRT Geostationary Ocean Color Imager (GOCI) data stream for integration into operations. • Improvements in sensor...
ERIC Educational Resources Information Center
Vorstenbosch, Marc A. T. M.; Bouter, Shifra T.; van den Hurk, Marianne M.; Kooloos, Jan G. M.; Bolhuis, Sanneke M.; Laan, Roland F. J. M.
2014-01-01
Assessment is an important aspect of medical education because it tests students' competence and motivates them to study. Various assessment methods, with and without images, are used in the study of anatomy. In this study, we investigated the use of extended matching questions (EMQs). To gain insight into the influence of images on the…
Post-image acquisition processing approaches for coherent backscatter validation
NASA Astrophysics Data System (ADS)
Smith, Christopher A.; Belichki, Sara B.; Coffaro, Joseph T.; Panich, Michael G.; Andrews, Larry C.; Phillips, Ronald L.
2014-10-01
Utilizing a retro-reflector at a target point, the reflected irradiance of a laser beam traveling back toward the transmitting point contains a peak of intensity known as the enhanced backscatter (EBS) phenomenon. EBS depends on the strength regime of the turbulence occurring within the atmosphere as the beam propagates across and back. In order to capture and analyze this phenomenon so that it may be compared with theory, an imaging system is integrated into the optical setup. With proper imaging established, we are able to implement various post-image-acquisition techniques to help determine the detection and positioning of EBS, which can then be validated against theory by inspection of certain dependent meteorological parameters such as the refractive index structure parameter, Cn2, and wind speed.
Forward ultrasonic model validation using wavefield imaging methods
NASA Astrophysics Data System (ADS)
Blackshire, James L.
2018-04-01
The validation of forward ultrasonic wave propagation models in a complex titanium polycrystalline material system is accomplished using wavefield imaging methods. An innovative measurement approach is described that permits the visualization and quantitative evaluation of bulk elastic wave propagation and scattering behaviors in the titanium material for a typical focused immersion ultrasound measurement process. Results are provided for the determination and direct comparison of the ultrasonic beam's focal properties, mode-converted shear wave position and angle, and scattering and reflection from millimeter-sized microtexture regions (MTRs) within the titanium material. The approach and results are important with respect to understanding the root-cause backscatter signal responses generated in aerospace engine materials, where model-assisted methods are being used to understand the probabilistic nature of the backscatter signal content. Wavefield imaging methods are shown to be an effective means for corroborating and validating important forward model predictions in a direct manner using time- and spatially-resolved displacement field amplitude measurements.
NASA Astrophysics Data System (ADS)
See, Linda; Perger, Christoph; Dresel, Christopher; Hofer, Martin; Weichselbaum, Juergen; Mondel, Thomas; Steffen, Fritz
2016-04-01
The validation of land cover products is an important step in the workflow of generating a land cover map from remotely sensed imagery, and many students of remote sensing are given exercises on classifying a land cover map followed by the validation process. Many algorithms exist for classification, embedded within proprietary image processing software or, increasingly, as open source tools. However, there is little standardization for land cover validation and no set of open tools available for implementing this process. The LACO-Wiki tool was developed to fill this gap, bringing together standardized land cover validation methods and workflows into a single portal. This includes the storage and management of land cover maps and validation data; step-by-step instructions to guide users through the validation process; sound sampling designs; an easy-to-use environment for validation sample interpretation; and the generation of accuracy reports based on the validation process. The tool was developed for a range of users, including producers of land cover maps, researchers, teachers, and students. Such a tool could be embedded within the curriculum of remote sensing courses at the university level but is simple enough for use by students aged 13-18. A beta version of the tool is available for testing at: http://www.laco-wiki.net.
Application of High Speed Digital Image Correlation in Rocket Engine Hot Fire Testing
NASA Technical Reports Server (NTRS)
Gradl, Paul R.; Schmidt, Tim
2016-01-01
Hot fire testing of rocket engine components and systems is a critical aspect of the development process for understanding performance, reliability, and system interactions. Ground testing provides the opportunity for highly instrumented development testing to validate analytical model predictions and determine necessary design changes and process improvements. To properly obtain discrete measurements for model validation, instrumentation must survive the highly dynamic and extreme-temperature environment of hot fire testing. Digital image correlation has been investigated and is being evaluated as a technique to augment traditional instrumentation during component and engine testing, providing further data for performance improvements and cost savings. The feasibility of digital image correlation techniques was demonstrated in subscale and full-scale hot fire testing: a pair of high-speed cameras, installed and operated under the extreme environments present on the test stand, measured three-dimensional, real-time displacements and strains. The development process, setup and calibration, hot fire test data collection, and post-test analysis and results are presented in this paper.
Saroha, Kartik; Pandey, Anil Kumar; Sharma, Param Dev; Behera, Abhishek; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
The detection of abdomino-pelvic tumors embedded in or near radioactive urine containing 18F-FDG activity is a challenging task on PET/CT scans. In this study, we propose and validate a suprathreshold stochastic resonance-based image processing method for the detection of these tumors. The method consists of adding noise to the input image and then thresholding it, which creates one frame of an intermediate image; one hundred such frames were generated and averaged to obtain the final image. The method was implemented using MATLAB R2013b on a personal computer, with the noisy image generated using random Poisson variates for each pixel of the input image. In order to verify the method, 30 sets of pre-diuretic and corresponding post-diuretic PET/CT scan images (25 tumor images and 5 control images with no tumor) were included. For each pre-diuretic image (input image), 26 images were created (at threshold values equal to the mean counts multiplied by a constant factor ranging from 1.0 to 2.6 in increments of 0.1) and visually inspected, and the image that most closely matched the gold standard (the corresponding post-diuretic image) was selected as the final output image. These images were further evaluated by two nuclear medicine physicians. In 22 of the 25 images, the tumor was successfully detected, and in the five control images no false positives were reported. Thus, the empirical probability of detecting abdomino-pelvic tumors evaluates to 0.88. The proposed method was able to detect abdomino-pelvic tumors on pre-diuretic PET/CT scans with a high probability of success and no false positives.
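The core loop of the described method, Poisson noise plus thresholding averaged over 100 frames, is easy to sketch. The count levels in the stand-in image and the single threshold factor below are illustrative assumptions; the study sweeps factors from 1.0 to 2.6 and picks the best match visually:

```python
import numpy as np

def ssr_enhance(img, threshold, n_frames=100, rng=None):
    """Suprathreshold stochastic resonance as described above: draw a
    Poisson realization of each pixel, binarize at the threshold, and
    average many such frames."""
    rng = rng or np.random.default_rng()
    acc = np.zeros(img.shape, dtype=float)
    for _ in range(n_frames):
        noisy = rng.poisson(img)        # Poisson variate per pixel
        acc += (noisy >= threshold)     # suprathreshold binarization
    return acc / n_frames

# Stand-in PET slice: faint "tumor" (12 counts) near hot urine activity
# (40 counts) on a 5-count background; values are illustrative only.
img = np.full((64, 64), 5.0)
img[30:40, 30:40] = 40.0   # bladder activity
img[20:25, 45:50] = 12.0   # tumor
out = ssr_enhance(img, threshold=1.4 * img.mean())
print("tumor mean:", out[20:25, 45:50].mean().round(3),
      "background mean:", out[:10, :10].mean().round(3))
```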
Optical coherence tomography imaging based on non-harmonic analysis
NASA Astrophysics Data System (ADS)
Cao, Xu; Hirobayashi, Shigeki; Chong, Changho; Morosawa, Atsushi; Totsuka, Koki; Suzuki, Takuya
2009-11-01
A new processing technique called non-harmonic analysis (NHA) is proposed for OCT imaging. Conventional Fourier-domain OCT relies on an FFT calculation that depends on the window function and frame length: axial resolution is inversely proportional to the FFT frame length, which is limited by the swept range of the swept source in SS-OCT or by the pixel count of the CCD in SD-OCT. The NHA process is intrinsically free from this trade-off; NHA can resolve high frequencies without being influenced by the window function or the frame length of the sampled data. In this study, the NHA process is explained, applied to OCT imaging, and compared with OCT images based on the FFT. To validate the benefit of NHA in OCT, we carried out OCT imaging based on NHA on three different samples: onion skin, human skin, and a pig eye. The results show that the NHA process can realize a practical image resolution equivalent to that of a 100-nm swept range while using less than half that wavelength range.
Hyperspectral imaging for food processing automation
NASA Astrophysics Data System (ADS)
Park, Bosoon; Lawrence, Kurt C.; Windham, William R.; Smith, Doug P.; Feldner, Peggy W.
2002-11-01
This paper presents research results demonstrating that hyperspectral imaging can be used effectively for detecting feces (from the duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, with potential application to real-time, on-line processing of poultry for automated safety inspection. The hyperspectral imaging system included a line-scan camera with a prism-grating-prism spectrograph, fiber-optic line lighting, motorized lens control, and hyperspectral image processing software. Hyperspectral image processing algorithms, specifically the band ratio of dual-wavelength (565 nm/517 nm) images followed by thresholding, were effective for identifying fecal and ingesta contamination of poultry carcasses. A multispectral imaging system, comprising a common-aperture camera with three optical trim filters (515.4 nm with 8.6-nm FWHM, 566.4 nm with 8.8-nm FWHM, and 631 nm with 10.2-nm FWHM) that were selected and validated with the hyperspectral imaging system, was developed for real-time, on-line application. The total image processing time for the multispectral images captured by the common-aperture camera was approximately 251 ms, or 3.99 frames/s. A preliminary test showed that the accuracy of the real-time multispectral imaging system in detecting feces and ingesta on corn/soybean-fed poultry carcasses was 96%. However, many false-positive spots that cause system errors were also detected.
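The dual-wavelength band-ratio rule reduces to a few lines. In the sketch below, the 565/517 ratio and thresholding follow the abstract, while the reflectance values of the synthetic "carcass" cube and the threshold of 1.05 are illustrative assumptions:

```python
import numpy as np

def band_ratio_detect(cube, wavelengths, num=565.0, den=517.0, thresh=1.05):
    """Dual-wavelength band ratio as described above: ratio the image planes
    nearest 565 nm and 517 nm and threshold the result."""
    i_num = int(np.argmin(np.abs(wavelengths - num)))
    i_den = int(np.argmin(np.abs(wavelengths - den)))
    ratio = cube[..., i_num] / (cube[..., i_den] + 1e-9)
    return ratio > thresh, ratio

# Synthetic 3-band "hyperspectral" patch: contamination reflects relatively
# more at 565 nm than clean surface does. Values are illustrative only.
wavelengths = np.array([517.0, 565.0, 631.0])
cube = np.ones((50, 50, 3)) * [0.40, 0.38, 0.45]   # clean carcass surface
cube[20:30, 20:30] = [0.30, 0.36, 0.33]            # fecal contamination
mask, ratio = band_ratio_detect(cube, wavelengths)
print("contaminated pixels flagged:", int(mask.sum()))
```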
3D-Printed Tissue-Mimicking Phantoms for Medical Imaging and Computational Validation Applications
Shahmirzadi, Danial; Li, Ronny X.; Doyle, Barry J.; Konofagou, Elisa E.; McGloughlin, Tim M.
2014-01-01
Abdominal aortic aneurysm (AAA) is a permanent, irreversible dilation of the distal region of the aorta. Recent efforts have focused on improved AAA screening and biomechanics-based failure prediction. Idealized and patient-specific AAA phantoms are often employed to validate numerical models and imaging modalities. To produce such phantoms, the investment casting process is frequently used, reconstructing the 3D vessel geometry from computed tomography patient scans. In this study, the alternative use of 3D printing to produce phantoms is investigated. The mechanical properties of flexible 3D-printed materials are benchmarked against proven elastomers. We demonstrate the utility of this process with particular application to the emerging imaging modality of ultrasound-based pulse wave imaging, a noninvasive diagnostic methodology being developed to obtain regional vascular wall stiffness properties, differentiating normal and pathologic tissue in vivo. Phantom wall displacements under pulsatile loading conditions were observed, showing good correlation to fluid–structure interaction simulations and regions of peak wall stress predicted by finite element analysis. 3D-printed phantoms show a strong potential to improve medical imaging and computational analysis, potentially helping bridge the gap between experimental and clinical diagnostic tools. PMID:28804733
NASA Astrophysics Data System (ADS)
Lu, Dajiang; He, Wenqi; Liao, Meihua; Peng, Xiang
2017-02-01
A new method to eliminate the security risk of the well-known interference-based optical cryptosystem is proposed. In this method, which is suitable for security authentication applications, two phase-only masks are placed separately at different distances from the output plane, where a certification image (public image) can be obtained. To further increase the security and flexibility of the authentication system, we employ one additional validation image (secret image), observable at another output plane, for confirming the identity of the user. Only if the two correct masks are placed at their proper positions can the two significant images be obtained. Moreover, even if legitimate users exchange their masks (keys), the authentication process will fail and the authentication results will not reveal any information. Numerical simulations are performed to demonstrate the validity and security of the proposed method.
Automation of Cassini Support Imaging Uplink Command Development
NASA Technical Reports Server (NTRS)
Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert
2010-01-01
"Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.
A new method of SC image processing for confluence estimation.
Soleimani, Sajjad; Mirzaei, Mohsen; Toncu, Dana-Cristina
2017-10-01
Stem cell images are a powerful instrument for estimating confluency during culturing for therapeutic processes. Laboratory conditions, such as lighting, cell-container support, and image acquisition equipment, affect image quality and, consequently, estimation efficiency. This paper describes an efficient image processing method for cell pattern recognition and morphological analysis of images affected by an uneven background. The proposed image enhancement algorithm couples a novel image denoising method based on the BM3D filter with an adaptive thresholding technique for correcting the uneven background. The algorithm provides a faster, easier, and more reliable method than manual measurement for assessing the confluency of stem cell cultures. The scheme proves valid for predicting the confluency and growth of stem cells at early stages for tissue engineering in reparative clinical surgery. The method is capable of processing cell images that already contain various defects due to personnel mishandling or microscope limitations, and therefore provides useful information even from the worst available original images.
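The enhancement pipeline can be sketched as follows. Note that BM3D is not part of OpenCV, so OpenCV's non-local means denoiser stands in for it here; the adaptive-threshold block size and offset are likewise illustrative, not the paper's values.

```python
import cv2

def confluence_fraction(img_gray):
    # Expects an 8-bit grayscale image. Denoise (non-local means standing in
    # for BM3D), then threshold adaptively against the local background to
    # cope with uneven illumination.
    den = cv2.fastNlMeansDenoising(img_gray, None, h=10)
    mask = cv2.adaptiveThreshold(den, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 51, -2)
    return mask.mean() / 255.0   # fraction of the field covered by cells
```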
Novel image processing approach to detect malaria
NASA Astrophysics Data System (ADS)
Mas, David; Ferrer, Belen; Cojoc, Dan; Finaurini, Sara; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev
2015-09-01
In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based on analyzing the temporal variation of each pixel. Changes in dark pixels indicate that intracellular activity has occurred, signaling the presence of the malaria parasite inside the cell. Preliminary experimental results, involving analysis of red blood cells that were either healthy or infected with malaria parasites, validated the potential benefit of the proposed numerical approach.
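A minimal sketch of the temporal-variation idea, assuming a registered frame stack of the same field of view and an illustrative dark-pixel percentile:

```python
import numpy as np

def activity_map(frames, dark_percentile=10):
    # `frames` is a registered (t, rows, cols) stack of the same field of view.
    stack = np.asarray(frames, dtype=float)
    mean_img = stack.mean(axis=0)
    dark = mean_img < np.percentile(mean_img, dark_percentile)  # candidate cells
    return stack.var(axis=0) * dark   # high temporal variance within dark pixels
                                      # suggests intracellular parasite activity
```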
NASA Technical Reports Server (NTRS)
Chien, Steve A.
1996-01-01
A key obstacle hampering the fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must compare favorably in terms of software lifecycle costs with other means of automation, such as scripts or rule-based expert systems. This paper describes a planning application for automated image processing and our overall approach to knowledge acquisition for this application.
Robust Crop and Weed Segmentation under Uncontrolled Outdoor Illumination
Jeon, Hong Y.; Tian, Lei F.; Zhu, Heping
2011-01-01
An image processing algorithm for detecting individual weeds was developed and evaluated. The weed detection process included normalized excess-green conversion, statistical threshold-value estimation, adaptive image segmentation, median filtering, morphological feature calculation, and an artificial neural network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illumination. A machine-vision-equipped field robot captured field images under outdoor illumination, and the image processing algorithm processed them automatically without manual adjustment. The errors of the algorithm when processing 666 field images ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants among the identified plants and considered the rest to be weeds. However, the ANN identification rate for crop plants was improved to as much as 95.1% by addressing the error sources in the algorithm. The developed weed detection and image processing algorithm provides a novel method to identify plants against a soil background under uncontrolled outdoor illumination and to differentiate weeds from crop plants. Thus, the proposed machine vision and processing algorithm may be useful for outdoor applications, including plant-specific direct applications (PSDA). PMID:22163954
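The first stage, excess-green conversion followed by thresholding, can be sketched as below; the fixed threshold is purely illustrative, whereas the paper estimates it statistically per image.

```python
import numpy as np

def excess_green_mask(rgb, thresh=0.05):
    # Normalize to chromaticity coordinates, then apply ExG = 2g - r - b.
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2, keepdims=True) + 1e-6
    r, g, b = np.moveaxis(rgb / s, 2, 0)
    return (2 * g - r - b) > thresh   # True for vegetation (crop or weed) pixels
```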
MOPEX: a software package for astronomical image processing and visualization
NASA Astrophysics Data System (ADS)
Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley
2006-06-01
We present MOPEX, a software package for astronomical image processing and display. The package combines command-line-driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing, with control over processing and display closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Although originally designed for the Spitzer Space Telescope mission, many of the package's functions are of general usefulness and can be used for working with existing astronomical data and for new missions. The software in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot; the visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line-driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years, and has also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. It was developed by a small group of software developers and scientists at the Spitzer Science Center and is available for distribution on the Spitzer Science Center web page.
Bidirectional light-scattering image processing method for high-concentration jet sprays
NASA Astrophysics Data System (ADS)
Shimizu, I.; Emori, Y.; Yang, W.-J.; Shimoda, M.; Suzuki, T.
1985-01-01
In order to study the distributions of droplet size and volume density in high-concentration jet sprays, a new technique is developed that combines forward and backward light scattering with an image processing method. A pulsed ruby laser is used as the light source. Mie scattering theory is applied to the results obtained from image processing of the scattering photographs. The time history of the droplet size and volume density distributions is obtained, and the method is demonstrated on diesel fuel sprays under various injection conditions. The validity of the technique is verified by good agreement between the injected fuel volume distributions obtained by the present method and by injection-rate measurements.
Anthropometric body measurements based on multi-view stereo image reconstruction.
Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui
2013-01-01
Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home based imaging system capable of conducting anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.
Wehde, M. E.
1995-01-01
The common method of digital image comparison by subtraction imposes various constraints on the image contents. Precise registration of images is required to assure proper evaluation of surface locations. The attribute being measured and the calibration and scaling of the sensor are also important to the validity and interpretability of the subtraction result. Influences of sensor gains and offsets complicate the subtraction process. The presence of any uniform systematic transformation component in one of two images to be compared distorts the subtraction results and requires analyst intervention to interpret or remove it. A new technique has been developed to overcome these constraints. Images to be compared are first transformed using the cumulative relative frequency as a transfer function. The transformed images represent the contextual relationship of each surface location with respect to all others within the image. The process of differentiating between the transformed images results in a percentile rank ordered difference. This process produces consistent terrain-change information even when the above requirements necessary for subtraction are relaxed. This technique may be valuable to an appropriately designed hierarchical terrain-monitoring methodology because it does not require human participation in the process.
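A compact sketch of the rank-transform comparison, assuming single-band images of equal size:

```python
import numpy as np

def percentile_rank(img):
    # Transfer function: map each pixel to its cumulative relative frequency.
    ranks = img.ravel().argsort().argsort()
    return (ranks / (img.size - 1)).reshape(img.shape)

def rank_difference(img_a, img_b):
    # Difference of rank-transformed images: a percentile rank ordered difference
    # insensitive to sensor gain/offset and uniform systematic components.
    return percentile_rank(img_a) - percentile_rank(img_b)
```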
Applications of LC-MS in PET Radioligand Development and Metabolic Elucidation
Ma, Ying; Kiesewetter, Dale O.; Lang, Lixin; Gu, Dongyu; Chen, Xiaoyuan
2013-01-01
Positron emission tomography (PET) is a very sensitive molecular imaging technique that, when employed with an appropriate radioligand, can quantitate physiological processes in a non-invasive manner. Since the imaging technique detects all radioactive emissions in the field of view, the presence and biological activity of radiolabeled metabolites must be determined for each radioligand in order to validate the utility of the radiotracer for measuring the desired physiological process. Thus, the identification of metabolic profiles of radiolabeled compounds is an important aspect of the design, development, and validation of new radiopharmaceuticals and their applications in drug development and molecular imaging. Metabolite identification for different chemical classes of radiopharmaceuticals allows rational design to minimize the formation and accumulation of metabolites in the target tissue, either through enhanced excretion or minimized metabolism. This review discusses methods for identifying and quantitating metabolites during the pre-clinical development of radiopharmaceuticals, with special emphasis on the application of LC/MS. PMID:20540692
Discriminability limits in spatio-temporal stereo block matching.
Jain, Ankit K; Nguyen, Truong Q
2014-05-01
Disparity estimation is a fundamental task in stereo imaging and is a well-studied problem. Recently, methods have been adapted to the video domain where motion is used as a matching criterion to help disambiguate spatially similar candidates. In this paper, we analyze the validity of the underlying assumptions of spatio-temporal disparity estimation, and determine the extent to which motion aids the matching process. By analyzing the error signal for spatio-temporal block matching under the sum of squared differences criterion and treating motion as a stochastic process, we determine the probability of a false match as a function of image features, motion distribution, image noise, and number of frames in the spatio-temporal patch. This performance quantification provides insight into when spatio-temporal matching is most beneficial in terms of the scene and motion, and can be used as a guide to select parameters for stereo matching algorithms. We validate our results through simulation and experiments on stereo video.
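A minimal sketch of the spatio-temporal SSD cost analyzed above, under simplifying assumptions (registered frame stacks, one disparity candidate, no occlusion handling); names and the block size are illustrative.

```python
import numpy as np

def st_ssd(left, right, row, col, d, block=8):
    # Sum of squared differences for disparity candidate `d`, accumulated over
    # a spatial block and over every frame in the (t, rows, cols) stacks.
    lb = left[:, row:row + block, col:col + block].astype(float)
    rb = right[:, row:row + block, col - d:col - d + block].astype(float)
    return ((lb - rb) ** 2).sum()

# The estimated disparity is the candidate minimizing this cost, as in block matching.
```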
Cognitive representations of AIDS: a phenomenological study.
Anderson, Elizabeth H; Spencer, Margaret Hull
2002-12-01
Cognitive representations of illness determine behavior. How persons living with AIDS image their disease might be key to understanding medication adherence and other health behaviors. The authors' purpose was to describe AIDS patients' cognitive representations of their illness. A purposive sample of 58 men and women with AIDS was interviewed. Using Colaizzi's (1978) phenomenological method, rigor was established through the application of verification, validation, and validity. From 175 significant statements, 11 themes emerged. Cognitive representations included imaging AIDS as death, bodily destruction, and just a disease. Coping focused on wiping AIDS out of the mind, hoping for the right drug, and caring for oneself. Inquiring about a patient's image of AIDS might help nurses assess coping processes and enhance nurse-patient relationships.
Asymmetry and irregularity border as discrimination factor between melanocytic lesions
NASA Astrophysics Data System (ADS)
Sbrissa, David; Pratavieira, Sebastião.; Salvio, Ana Gabriela; Kurachi, Cristina; Bagnato, Vanderlei Salvadori; Costa, Luciano Da Fontoura; Travieso, Gonzalo
2015-06-01
Image processing tools have been widely used in systems supporting medical diagnosis, and the use of mobile devices for melanoma diagnosis can assist doctors and improve the diagnosis of melanocytic lesions. This study proposes an image analysis method for discriminating melanoma from other types of melanocytic lesions, such as regular and atypical nevi. The process is based on extracting features related to asymmetry and border irregularity. A total of 104 images were collected over two years from a medical database. The images were obtained with standard digital cameras without lighting or scale control. Metrics relating to shape, asymmetry, and contour curvature were extracted from the segmented images. Linear discriminant analysis was performed for dimensionality reduction and data visualization. Segmentation results showed good efficiency, with approximately 88.5% accuracy. Validation results showed a sensitivity of 85% and a specificity of 70% for melanoma detection.
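The dimensionality-reduction step can be sketched with scikit-learn; the random feature matrix below is only a placeholder for the extracted shape, asymmetry, and curvature metrics, and the labels are illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.random((104, 12))        # placeholder for shape/asymmetry/curvature metrics
y = rng.integers(0, 2, 104)      # 0 = nevus, 1 = melanoma (illustrative labels)

lda = LinearDiscriminantAnalysis(n_components=1)  # two classes -> 1D projection
z = lda.fit_transform(X, y)      # projection used for visualization/classification
```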
PyDBS: an automated image processing workflow for deep brain stimulation surgery.
D'Albis, Tiziano; Haegelen, Claire; Essert, Caroline; Fernández-Vidal, Sara; Lalys, Florent; Jannin, Pierre
2015-02-01
Deep brain stimulation (DBS) is a surgical procedure for treating motor-related neurological disorders. DBS clinical efficacy hinges on precise surgical planning and accurate electrode placement, which in turn call upon several image processing and visualization tasks, such as image registration, image segmentation, image fusion, and 3D visualization. These tasks are often performed by a heterogeneous set of software tools, which adopt differing formats and geometrical conventions and require patient-specific parameterization or interactive tuning. To overcome these issues, we introduce in this article PyDBS, a fully integrated and automated image processing workflow for DBS surgery. PyDBS consists of three image processing pipelines and three visualization modules assisting clinicians through the entire DBS surgical workflow, from the preoperative planning of electrode trajectories to the postoperative assessment of electrode placement. The system's robustness, speed, and accuracy were assessed by means of a retrospective validation, based on 92 clinical cases. The complete PyDBS workflow achieved satisfactory results in 92% of tested cases, with a median processing time of 28 min per patient. The results obtained are compatible with the adoption of PyDBS in clinical practice.
Hoffman, R.A.; Kothari, S.; Phan, J.H.; Wang, M.D.
2016-01-01
Computational analysis of histopathological whole slide images (WSIs) has emerged as a potential means for improving cancer diagnosis and prognosis. However, an open issue relating to the automated processing of WSIs is the identification of biological regions such as tumor, stroma, and necrotic tissue on the slide. We develop a method for classifying WSI portions (512×512-pixel tiles) into biological regions by (1) extracting a set of 461 image features from each WSI tile, (2) optimizing tile-level prediction models using nested cross-validation on a small (600-tile) manually annotated tile-level training set, and (3) validating the models against a much larger (1.7×10⁶-tile) data set for which ground truth was available on the whole-slide level. We calculated the predicted prevalence of each tissue region and compared this prevalence to the ground truth prevalence for each image in an independent validation set. Results show significant correlation between the predicted (using the automated system) and reported biological region prevalences, with p < 0.001 for eight of the nine cases considered. PMID:27532012
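Step (2), nested cross-validation for the tile-level models, might look as follows; the classifier, parameter grid, and synthetic stand-in data are assumptions for illustration, not the authors' choices.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Stand-in for the 600 annotated tiles with 461 features and 3 region classes.
X, y = make_classification(n_samples=600, n_features=461,
                           n_informative=20, n_classes=3, random_state=0)

# Inner loop tunes hyperparameters; outer loop estimates tile-level performance.
inner = GridSearchCV(RandomForestClassifier(random_state=0),
                     {"n_estimators": [100, 300], "max_depth": [None, 10]}, cv=3)
scores = cross_val_score(inner, X, y, cv=5)
print(scores.mean())
```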
Prescott, Jeffrey William
2013-02-01
The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve the sensitivity, specificity, accuracy, and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation, and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.
Churilov, Leonid; Liu, Daniel; Ma, Henry; Christensen, Soren; Nagakane, Yoshinari; Campbell, Bruce; Parsons, Mark W; Levi, Christopher R; Davis, Stephen M; Donnan, Geoffrey A
2013-04-01
The appropriateness of a software platform for rapid MRI assessment of the amount of salvageable brain tissue after stroke is critical both for the validity of the Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) clinical trial of stroke thrombolysis beyond 4.5 hours and for stroke patient care outcomes. The objective of this research is to develop and implement a methodology for selecting the acute stroke imaging software platform most appropriate for the setting of a multi-centre clinical trial. A multi-disciplinary decision-making panel formulated the set of preferentially independent evaluation attributes. Alternative multi-attribute value measurement methods were used to identify the best imaging software platform, followed by sensitivity analysis to ensure the validity and robustness of the proposed solution. Four alternative imaging software platforms were identified. RApid processing of PerfusIon and Diffusion (RAPID) software was selected as the most appropriate for the needs of the EXTEND trial. A theoretically grounded generic multi-attribute selection methodology for imaging software was developed and implemented, assuring both a high-quality decision outcome and a rational and transparent decision process. This development contributes to the stroke literature in the area of comprehensive evaluation of MRI clinical software. At the time of evaluation, RAPID presented the most appropriate imaging software platform for use in the EXTEND clinical trial. The proposed multi-attribute imaging software evaluation methodology is based on the sound theoretical foundations of multiple-criteria decision analysis and can be used to choose the most appropriate imaging software while ensuring both a robust decision process and robust outcomes.
32 CFR 286.30 - Collection of fees and fee rates for technical data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...
32 CFR 286.30 - Collection of fees and fee rates for technical data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...
32 CFR 286.30 - Collection of fees and fee rates for technical data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...
32 CFR 286.30 - Collection of fees and fee rates for technical data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...
Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU
NASA Astrophysics Data System (ADS)
Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee
2013-02-01
3D microscopy images contain vast amounts of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To cope with this, many users crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, it has drawbacks at the image processing level: the selected ROI depends strongly on the user, and original image information is lost. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides various efficient automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images, and users can select the algorithm to be applied. Further, the tool provides visualization of the segmented volume data, with scale, translation, and other settings controlled by keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to yield information for biologists, and such analysis requires quantitative data. We therefore label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object, which can be used as a classification feature. A user can select an object to be analyzed; our tool displays the selected object in a new window so that more of its details can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
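A CPU reference for the intensity-based segmentation and labeling steps is sketched below (the tool itself runs these on the GPU); Otsu's method stands in for the tool's selectable thresholding algorithms.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_and_label(volume):
    # Threshold the 3D stack, then label connected objects and measure their sizes.
    mask = volume > threshold_otsu(volume)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))  # voxels per object
    return labels, sizes
```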
NASA Astrophysics Data System (ADS)
Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki
2016-04-01
The newly developed smartphone application, named RINO, measures absolute dynamic displacements and processes them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU) in addition to an already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable to that of a conventional laser displacement sensor.
Design and implementation of non-linear image processing functions for CMOS image sensor
NASA Astrophysics Data System (ADS)
Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel
2012-11-01
Today, solid-state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging, and industrial vision systems. These image sensors require the integration, in or near the focal plane, of complex image processing algorithms. Such devices must meet constraints related to the quality of acquired images, the speed and performance of embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64-pixel image sensor built in a standard 0.35 μm CMOS technology and including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). The MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2-pixel neighbourhood. Each MMU needs 52 transistors, and the pitch of one pixel is 40×40 μm; the total area of the 64×64-pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculation in less than one ms.
PIV Data Validation Software Package
NASA Technical Reports Server (NTRS)
Blackshire, James L.
1997-01-01
A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.
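Capability (1), spurious-vector removal, is commonly implemented with a normalized median test; the sketch below follows the Westerweel and Scarano (2005) formulation applied to one velocity component, and is not necessarily the package's exact criterion.

```python
import numpy as np

def spurious_mask(u, threshold=2.0, eps=0.1):
    # Normalized median test: compare each vector with the median of its
    # 8 neighbours, scaled by the median residual of those neighbours.
    bad = np.zeros(u.shape, dtype=bool)
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            nb = np.delete(u[i - 1:i + 2, j - 1:j + 2].ravel(), 4)  # 8 neighbours
            med = np.median(nb)
            fluct = np.median(np.abs(nb - med))
            bad[i, j] = abs(u[i, j] - med) / (fluct + eps) > threshold
    return bad
```

In practice the test is run on each velocity component, and flagged vectors are then replaced by interpolation, matching the package's smoothing and interpolation step.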
NASA Astrophysics Data System (ADS)
Zhang, Kang
2011-12-01
In this dissertation, real-time Fourier-domain optical coherence tomography (FD-OCT) capable of multi-dimensional micrometer-resolution imaging, targeted specifically at microsurgical intervention applications, was developed and studied. As part of this work, several ultra-high-speed real-time FD-OCT imaging and sensing systems were proposed and developed. A real-time 4D (3D+time) OCT system platform was developed that uses the graphics processing unit (GPU) to accelerate OCT signal processing, image reconstruction, visualization, and volume rendering. Several GPU-based algorithms, such as the non-uniform fast Fourier transform (NUFFT), numerical dispersion compensation, and a multi-GPU implementation, were developed to improve the impulse response, SNR roll-off, and stability of the system. Full-range, complex-conjugate-free FD-OCT was also implemented on the GPU architecture to double the image range and improve the SNR. These technologies overcome the image reconstruction and visualization bottlenecks that widely exist in current ultra-high-speed FD-OCT systems and open the way to interventional OCT imaging for applications in guided microsurgery. A hand-held microsurgical tool based on a common-path optical coherence tomography (CP-OCT) distance sensor was developed and validated. Through real-time signal processing, edge detection, and feedback control, the tool was shown to be capable of tracking a target surface and compensating for motion. A micro-incision test on a phantom was performed using the CP-OCT-sensor-integrated hand-held tool, which showed an incision error of less than ±5 μm, compared to errors of more than 100 μm for free-hand incision. The CP-OCT distance sensor has also been utilized to enhance the accuracy and safety of optical nerve stimulation. Finally, several experiments were conducted to validate the system for surgical applications. One of them involved 4D OCT-guided micro-manipulation using a phantom: multiple volume renderings of one 3D data set were performed with different view angles to allow accurate monitoring of the micro-manipulation and to let the user clearly monitor the tool-to-target spatial relation in real time. The system was also validated by imaging multiple biological samples, such as a human fingerprint, a human cadaver head, and small animals. Compared to conventional surgical microscopes, GPU-based real-time FD-OCT can provide surgeons with a real-time comprehensive spatial view of the microsurgical region and accurate depth perception.
Validating a new methodology for strain estimation from cardiac cine MRI
NASA Astrophysics Data System (ADS)
Elnakib, Ahmed; Beache, Garth M.; Gimel'farb, Georgy; Inanc, Tamer; El-Baz, Ayman
2013-10-01
This paper focuses on validating a novel framework for estimating the functional strain from cine cardiac magnetic resonance imaging (CMRI). The framework consists of three processing steps. First, the left ventricle (LV) wall borders are segmented using a level-set based deformable model. Second, the points on the wall borders are tracked during the cardiac cycle based on solving the Laplace equation between the LV edges. Finally, the circumferential and radial strains are estimated at the inner, mid-wall, and outer borders of the LV wall. The proposed framework is validated using synthetic phantoms of the material strains that account for the physiological features and the LV response during the cardiac cycle. Experimental results on simulated phantom images confirm the accuracy and robustness of our method.
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Quality control and assurance for validation of DOS/I measurements
NASA Astrophysics Data System (ADS)
Cerussi, Albert; Durkin, Amanda; Kwong, Richard; Quang, Timothy; Hill, Brian; Tromberg, Bruce J.; MacKinnon, Nick; Mantulin, William W.
2010-02-01
Ongoing multi-center clinical trials are crucial for Biophotonics to gain acceptance in medical imaging. In these trials, quality control (QC) and assurance (QA) are key to success and provide "data insurance". Quality control and assurance deal with standardization, validation, and compliance of procedures, materials and instrumentation. Specifically, QC/QA involves systematic assessment of testing materials, instrumentation performance, standard operating procedures, data logging, analysis, and reporting. QC and QA are important for FDA accreditation and acceptance by the clinical community. Our Biophotonics research in the Network for Translational Research in Optical Imaging (NTROI) program for breast cancer characterization focuses on QA/QC issues primarily related to the broadband Diffuse Optical Spectroscopy and Imaging (DOS/I) instrumentation, because this is an emerging technology with limited standardized QC/QA in place. In the multi-center trial environment, we implement QA/QC procedures: 1. Standardize and validate calibration standards and procedures. (DOS/I technology requires both frequency domain and spectral calibration procedures using tissue simulating phantoms and reflectance standards, respectively.) 2. Standardize and validate data acquisition, processing and visualization (optimize instrument software-EZDOS; centralize data processing) 3. Monitor, catalog and maintain instrument performance (document performance; modularize maintenance; integrate new technology) 4. Standardize and coordinate trial data entry (from individual sites) into centralized database 5. Monitor, audit and communicate all research procedures (database, teleconferences, training sessions) between participants ensuring "calibration". This manuscript describes our ongoing efforts, successes and challenges implementing these strategies.
Wójcicki, Tomasz; Nowicki, Michał
2016-01-01
The article presents a selected area of research and development concerning methods of material analysis based on automatic image recognition of the investigated metallographic sections. The objectives of material analyses for gas nitriding technology are described. The methods of preparing nitrided layers, the steps of the process, and the construction and operation of devices for gas nitriding are given. We discuss the possibility of using digital image processing methods in material analysis, as well as their essential task groups: improving image quality, segmentation, morphological transformations, and image recognition. The developed analysis model of nitrided layer formation, covering image processing and analysis techniques as well as selected methods of artificial intelligence, is presented. The model is divided into stages, which are formalized in order to better reproduce their actions. The validation of the presented method is performed. The advantages and limitations of the developed solution, as well as the possibilities for its practical use, are listed. PMID:28773389
Applied photo interpretation for airbrush cartography
NASA Technical Reports Server (NTRS)
Inge, J. L.; Bridges, P. M.
1976-01-01
New techniques of cartographic portrayal have been developed for the compilation of maps of lunar and planetary surfaces. Conventional photo interpretation methods utilizing size, shape, shadow, tone, pattern, and texture are applied to computer processed satellite television images. The variety of the image data allows the illustrator to interpret image details by inter-comparison and intra-comparison of photographs. Comparative judgements are affected by illumination, resolution, variations in surface coloration, and transmission or processing artifacts. The validity of the interpretation process is tested by making a representational drawing by an airbrush portrayal technique. Production controls insure the consistency of a map series. Photo interpretive cartographic portrayal skills are used to prepare two kinds of map series and are adaptable to map products of different kinds and purposes.
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
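For reference, the centroid whose error the paper models analytically is the ordinary intensity-weighted estimate over the star-spot window; the sketch below is that baseline estimator, not the authors' analytical model.

```python
import numpy as np

def centroid(spot):
    # Intensity-weighted centroid of a (possibly motion-smeared) star-spot window.
    total = spot.sum()
    rows, cols = np.indices(spot.shape)
    return (rows * spot).sum() / total, (cols * spot).sum() / total
```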
Buckler, Andrew J; Bresolin, Linda; Dunnick, N Reed; Sullivan, Daniel C; Aerts, Hugo J W L; Bendriem, Bernard; Bendtsen, Claus; Boellaard, Ronald; Boone, John M; Cole, Patricia E; Conklin, James J; Dorfman, Gary S; Douglas, Pamela S; Eidsaunet, Willy; Elsinger, Cathy; Frank, Richard A; Gatsonis, Constantine; Giger, Maryellen L; Gupta, Sandeep N; Gustafson, David; Hoekstra, Otto S; Jackson, Edward F; Karam, Lisa; Kelloff, Gary J; Kinahan, Paul E; McLennan, Geoffrey; Miller, Colin G; Mozley, P David; Muller, Keith E; Patt, Rick; Raunig, David; Rosen, Mark; Rupani, Haren; Schwartz, Lawrence H; Siegel, Barry A; Sorensen, A Gregory; Wahl, Richard L; Waterton, John C; Wolf, Walter; Zahlmann, Gudrun; Zimmerman, Brian
2011-06-01
Quantitative imaging biomarkers could speed the development of new treatments for unmet medical needs and improve routine clinical care. However, it is not clear how the various regulatory and nonregulatory (eg, reimbursement) processes (often referred to as pathways) relate, nor is it clear which data need to be collected to support these different pathways most efficiently, given the time- and cost-intensive nature of doing so. The purpose of this article is to describe current thinking regarding these pathways emerging from diverse stakeholders interested and active in the definition, validation, and qualification of quantitative imaging biomarkers and to propose processes to facilitate the development and use of quantitative imaging biomarkers. A flexible framework is described that may be adapted for each imaging application, providing mechanisms that can be used to develop, assess, and evaluate relevant biomarkers. From this framework, processes can be mapped that would be applicable to both imaging product development and to quantitative imaging biomarker development aimed at increasing the effectiveness and availability of quantitative imaging. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.10100800/-/DC1. RSNA, 2011
Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina
2016-05-01
Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
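The validation metric can be sketched as follows; the greedy distance-tolerance pairing is an illustrative simplification of cone-to-cone matching, and the tolerance value is an assumption.

```python
import numpy as np

def dice_cones(auto_pts, manual_pts, tol=2.0):
    # Pair detections within `tol` pixels, then Dice = 2*TP / (|auto| + |manual|).
    a = np.asarray(auto_pts, dtype=float)
    b = np.asarray(manual_pts, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    tp = min(int((d.min(axis=1) <= tol).sum()), int((d.min(axis=0) <= tol).sum()))
    return 2 * tp / (len(a) + len(b))
```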
Garment Counting in a Textile Warehouse by Means of a Laser Imaging System
Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban
2013-01-01
Textile logistic warehouses are highly automated mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low cost and small size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks generated using an array of phototransistors sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested on two logistic warehouses with a mean error in the estimated number of hangers of 0.13%. PMID:23628760
ERIC Educational Resources Information Center
Vogelaar, Robert J.
2005-01-01
In this project a product to aid educational leaders in the process of communicating in crisis situations is presented. The product was created and received a formative evaluation using an educational research and development methodology. Ultimately, an administrative training course that utilized an Image Repair Situational Theory was developed.…
"Seeing is believing": perspectives of applying imaging technology in discovery toxicology.
Xu, Jinghai James; Dunn, Margaret Condon; Smith, Arthur Russell
2009-11-01
Efficiency and accuracy in addressing drug safety issues proactively are critical in minimizing late-stage drug attritions. Discovery toxicology has become a specialty subdivision of toxicology seeking to effectively provide early predictions and safety assessment in the drug discovery process. Among the many technologies utilized to select safer compounds for further development, in vitro imaging technology is one of the best characterized and validated to provide translatable biomarkers towards clinically-relevant outcomes of drug safety. By carefully applying imaging technologies in genetic, hepatic, and cardiac toxicology, and integrating them with the rest of the drug discovery processes, it was possible to demonstrate significant impact of imaging technology on drug research and development and substantial returns on investment.
A maximum entropy reconstruction technique for tomographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.
2013-04-01
This paper studies a novel approach for reducing the computational complexity of tomographic PIV. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory, while the reconstruction quality remains nearly the same as with SMART. This paper presents a theoretical comparison of the computational performance of MENT, SMART, and MART, followed by validation using synthetic particle images. Both the theoretical assessment and the validation on synthetic images demonstrate a significant reduction in computational time. The data processing accuracy of MENT was compared to that of SMART in a slot-jet experiment; a comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.
Dos Santos, Denise Takehana; Costa e Silva, Adriana Paula Andrade; Vannier, Michael Walter; Cavalcanti, Marcelo Gusmão Paraiso
2004-12-01
The purpose of this study was to demonstrate the sensitivity and specificity of multislice computerized tomography (CT) for the diagnosis of maxillofacial fractures, following specific protocols on an independent workstation. The study population consisted of 56 patients with maxillofacial fractures who underwent multislice CT. The original data were transferred to an independent workstation using volumetric imaging software to generate axial images and, simultaneously, multiplanar (MPR) and 3-dimensional (3D-CT) volume-rendered reconstructed images. The images were then processed and interpreted by 2 examiners, each using the following protocols independently of the other: axial images, MPR/axial images, 3D-CT images, and the association of axial/MPR/3D images. The clinical/surgical findings were considered the gold standard corroborating the diagnosis of the fractures and their anatomic localization. The statistical analysis was carried out using validity and chi-squared tests. The association of axial/MPR/3D images showed higher sensitivity (95.8%) and specificity (99%) than the other methods for the analysis of all regions. CT imaging demonstrated high specificity and sensitivity for maxillofacial fractures, and the association of axial/MPR/3D-CT images added important information relative to the other CT protocols.
Veronezi, Carlos Cassiano Denipotti; de Azevedo Simões, Priscyla Waleska Targino; Dos Santos, Robson Luiz; da Rocha, Edroaldo Lummertz; Meláo, Suelen; de Mattos, Merisandra Côrtes; Cechinel, Cristian
2011-01-01
To ascertain the advantages of applying artificial neural networks to recognize patterns in lumbar spine radiographies in order to aid the diagnosis of primary osteoarthritis. This was a cross-sectional descriptive analytical study with a quantitative approach and an emphasis on diagnosis. The training set was composed of images collected between January and July 2009 from patients who had undergone lateral-view digital radiography of the lumbar spine, provided by a radiology clinic located in the municipality of Criciúma (SC). Of the 260 images gathered, those with distortions, those presenting pathological conditions that altered the architecture of the lumbar spine, and those with patterns that were difficult to characterize were discarded, resulting in 206 images. The image database (n = 206) was then subdivided into 68 radiographies for the training stage, 68 images for testing, and 70 for validation. A hybrid neural network based on Kohonen self-organizing maps and multilayer perceptron networks was used. After 90 cycles, validation was carried out on the best results, achieving an accuracy of 62.85%, a sensitivity of 65.71%, and a specificity of 60%. Even though the effectiveness shown was moderate, this study is still innovative. The values show that the technique used has a promising future, pointing towards further studies on image and cycle processing methodology with a larger quantity of radiographies.
An Improved Algorithm of Congruent Matching Cells (CMC) Method for Firearm Evidence Identifications
Tong, Mingsi; Song, John; Chu, Wei
2015-01-01
The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for firearm evidence identifications. The CMC method divides the measured image of a surface area, such as a breech face impression from a fired cartridge case, into small correlation cells and uses four identification parameters to identify correlated cell pairs originating from the same firearm. The CMC method was validated by identification tests using both 3D topography images and optical images captured from breech face impressions of 40 cartridge cases fired from a pistol with 10 consecutively manufactured slides. In this paper, we discuss the processing of the cell correlations and propose an improved algorithm of the CMC method which takes advantage of the cell correlations at a common initial phase angle and combines the forward and backward correlations to improve the identification capability. The improved algorithm is tested by 780 pairwise correlations using the same optical images and 3D topography images as the initial validation. PMID:26958441
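The similarity score at the heart of each cell-pair comparison is a normalized cross-correlation; the sketch below shows only that score, omitting the registration search and the other identification parameters (rotation angle and x-y offsets) that the CMC method also checks.

```python
import numpy as np

def cell_correlation(cell_a, cell_b):
    # Zero-mean normalized cross-correlation of two registered correlation cells.
    a = cell_a - cell_a.mean()
    b = cell_b - cell_b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())
```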
X-ray CT analysis of pore structure in sand
NASA Astrophysics Data System (ADS)
Mukunoki, Toshifumi; Miyata, Yoshihisa; Mikami, Kazuaki; Shiota, Erika
2016-06-01
The development of microfocused X-ray computed tomography (CT) devices enables digital imaging analysis at the pore scale. These devices have diverse applications in soil mechanics, geotechnical and geoenvironmental engineering, petroleum engineering, and agricultural engineering. In particular, imaging of the pore space in porous media has contributed three-dimensional image data to numerical simulations of single-phase and multiphase flows and of contaminant transport through the pore structure. Because the results obtained are affected by the pore diameter, it is necessary to verify the image preprocessing used in the analysis and to validate the pore diameters obtained from the CT image data. Moreover, it is meaningful to derive physical parameters within a representative element volume (REV) and important to define the dimension of the REV. This paper describes the underlying method of image processing and analysis and discusses the physical properties of Toyoura sand for the verification of the image analysis based on the definition of the REV. On the basis of the verification results, a pore-diameter analysis can be conducted and validated by comparison with experimental work and image analysis. The pore diameter is deduced from the Young-Laplace law and from a water retention test for the drainage process. Results from a previous study are compared with the perforated-pore diameter originally proposed in this study, obtained by the so-called voxel-percolation method (VPM). In addition, the limitations of the REV, the definition of the pore diameter, and the effectiveness of the VPM for assessing the pore diameter are discussed.
Research on simulated infrared image utility evaluation using deep representation
NASA Astrophysics Data System (ADS)
Zhang, Ruiheng; Mu, Chengpo; Yang, Yu; Xu, Lixin
2018-01-01
Infrared (IR) image simulation is an important data source for various target recognition systems. However, whether simulated IR images can be used as training data for classifiers depends on their fidelity and authenticity. For the evaluation of IR image features, a deep-representation-based algorithm is proposed. Unlike conventional methods, which usually rely on a priori knowledge or manually designed features, the proposed method can extract essential features and quantitatively evaluate the utility of simulated IR images. First, for data preparation, we employ our IR image simulation system to generate large numbers of IR images. Then, we present an evaluation model for simulated IR images, for which an end-to-end IR feature extraction and target detection model based on a deep convolutional neural network is designed. Finally, experiments illustrate that the proposed method outperforms other verification algorithms in evaluating simulated IR images. Cross-validation, variable-proportion mixed-data validation, and simulation process contrast experiments are carried out to evaluate the utility and objectivity of the images generated by our simulation system. The optimum mixing ratio between simulated and real data is 0.2≤γ≤0.3, which makes mixing an effective data augmentation method for real IR images.
Estimation of Characteristics of Echo Envelope Using RF Echo Signal from the Liver
NASA Astrophysics Data System (ADS)
Yamaguchi, Tadashi; Hachiya, Hiroyuki; Kamiyama, Naohisa; Ikeda, Kazuki; Moriyasu, Norifumi
2001-05-01
To realize quantitative diagnosis of liver cirrhosis, we have been analyzing the probability density function (PDF) of echo amplitude using B-mode images. However, the B-mode image is affected by the various signal and image processing techniques used in the diagnostic equipment, so a detailed and quantitative analysis is very difficult. In this paper, we analyze the PDF of echo amplitude using RF echo signals and B-mode images of normal and cirrhotic livers, and compare both results to examine the validity of using the RF echo signal.
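As an illustration of amplitude-statistics analysis of this kind, the following is a minimal Python sketch, assuming a single measured RF line rf; the Rayleigh distribution is used here as one simple model of the envelope PDF, not as the authors' specific model.

    import numpy as np
    from scipy.signal import hilbert
    from scipy.stats import rayleigh

    rf = np.random.randn(4096)              # placeholder for a measured RF line
    env = np.abs(hilbert(rf))               # echo envelope (analytic signal)
    loc, scale = rayleigh.fit(env, floc=0)  # fit a Rayleigh model to amplitudes
    hist, edges = np.histogram(env, bins=64, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    model = rayleigh.pdf(centers, loc=loc, scale=scale)  # compare with hist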
Zhang, Xintong; Bi, Anyao; Gao, Quansheng; Zhang, Shuai; Huang, Kunzhu; Liu, Zhiguo; Gao, Tang; Zeng, Wenbin
2016-01-20
The olfactory system serves as a genetically and anatomically well-characterized model for studying how sensory input is translated into behavioral output. Some neurologic diseases, especially Alzheimer's disease, Parkinson's disease and multiple sclerosis, are considered to be related to olfactory disturbance. However, it is still unclear how the olfactory system is involved in disease generation and in the delivery of olfactory signals. Molecular imaging, a modern multidisciplinary technology, provides valid tools for the early detection and characterization of diseases, evaluation of treatment, and study of biological processes in living subjects, since it applies specific molecular probes to produce data on biological processes at the cellular and subcellular levels. Recently, molecular imaging has played a key role in studying the activation of the olfactory system and could thus help to prevent or delay some diseases. Herein, we present a comprehensive review of research progress on imaging probes for visualizing the olfactory system, classified by imaging modality, including PET, MRI, and optical imaging. Additionally, the probes' design, sensing mechanisms, and biological applications are discussed. Finally, we provide an outlook for future studies in this field.
Functional Imaging Biomarkers: Potential to Guide an Individualised Approach to Radiotherapy.
Prestwich, R J D; Vaidyanathan, S; Scarsbrook, A F
2015-10-01
The identification of robust prognostic and predictive biomarkers would transform the ability to implement an individualised approach to radiotherapy. In this regard, there has been a surge of interest in the use of functional imaging to assess key underlying biological processes within tumours and their response to therapy. Importantly, functional imaging biomarkers hold the potential to evaluate tumour heterogeneity/biology both spatially and temporally. An ever-increasing range of functional imaging techniques is now available primarily involving positron emission tomography and magnetic resonance imaging. Small-scale studies across multiple tumour types have consistently been able to correlate changes in functional imaging parameters during radiotherapy with disease outcomes. Considerable challenges remain before the implementation of functional imaging biomarkers into routine clinical practice, including the inherent temporal variability of biological processes within tumours, reproducibility of imaging, determination of optimal imaging technique/combinations, timing during treatment and design of appropriate validation studies. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Restoration of color in a remote sensing image and its quality evaluation
NASA Astrophysics Data System (ADS)
Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe
2003-09-01
This paper focuses on the restoration of color remote sensing images (including airborne photographs). A complete approach is recommended, in which two main aspects are addressed: restoration of spatial information and restoration of photometric information. The restoration of spatial information is performed by using the modulation transfer function (MTF) as the degradation function, where the MTF is obtained by measuring the edge curve of the original image. The restoration of photometric information is performed by an improved local maximum entropy algorithm. Moreover, a practical approach to processing color remote sensing images is recommended: the color image is split into three monochromatic images corresponding to the three visible light bands, which are processed separately and then resynthesized under psychological color vision constraints. Finally, three novel evaluation variables based on image restoration are defined to evaluate restoration quality in terms of both spatial and photometric restoration. An evaluation is provided at the end.
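As a rough illustration of the channel-wise restoration strategy described above, the following Python sketch splits an RGB image into channels and deconvolves each with a measured PSF; scikit-image's Wiener filter stands in for the paper's MTF-based restoration and maximum-entropy step, and the PSF here is a placeholder rather than one derived from an edge curve.

    import numpy as np
    from skimage.restoration import wiener

    psf = np.ones((5, 5)) / 25.0           # placeholder blur kernel (from MTF)
    img = np.random.rand(128, 128, 3)      # placeholder color image in [0, 1]
    restored = np.stack(
        [wiener(img[..., c], psf, balance=0.1) for c in range(3)], axis=-1)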
NASA Astrophysics Data System (ADS)
Larson, David J., Jr.; Casagrande, Louis G.; Di Marzio, Don; Levy, Alan; Carlson, Frederick M.; Lee, Taipao; Black, David R.; Wu, Jun; Dudley, Michael
1994-07-01
We have successfully validated theoretical models of seeded vertical Bridgman-Stockbarger CdZnTe crystal growth and post-solidification processing, using in-situ thermal monitoring and innovative material characterization techniques. The models predict the thermal gradients, interface shape, fluid flow and solute redistribution during solidification, as well as the distributions of accumulated excess stress that causes defect generation and redistribution. Data from the furnace and ampoule wall have validated predictions from the thermal model. Results are compared to predictions of the thermal and thermo-solutal models. We explain the measured initial, change-of-rate, and terminal compositional transients as well as the macrosegregation. Macro and micro-defect distributions have been imaged on CdZnTe wafers from 40 mm diameter boules. Superposition of topographic defect images and predicted excess stress patterns suggests the origin of some frequently encountered defects, particularly on a macro scale, to result from the applied and accumulated stress fields and the anisotropic nature of the CdZnTe crystal. Implications of these findings with respect to producibility are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Improved patch-based learning for image deblurring
NASA Astrophysics Data System (ADS)
Dong, Bo; Jiang, Zhiguo; Zhang, Haopeng
2015-05-01
Most recent image deblurring methods use only the valid information found in the input image as the clue to restore the degraded regions. These methods usually suffer from insufficient prior information and relatively poor adaptiveness. Patch-based methods use not only the valid information of the input image itself but also the prior information of sample images to improve adaptiveness. However, the cost function of this approach is quite time-consuming, and the method may also produce ringing artifacts. In this paper, we propose an improved non-blind deblurring algorithm based on learning patch likelihoods. On one hand, we consider the effect of Gaussian mixture model components with different weights and normalize the weight values, which optimizes the cost function and reduces running time. On the other hand, a post-processing method is proposed to suppress the ringing artifacts produced by the traditional patch-based method. Extensive experiments are performed. The experimental results verify that our method effectively reduces execution time, suppresses ringing artifacts, and preserves the quality of the deblurred image.
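The patch-likelihood idea the paper builds on can be sketched briefly. The Python fragment below scores image patches under a Gaussian mixture prior; the mixture here is fit to placeholder patches rather than loaded from a learned natural-image prior, and the weight normalization mirrors, but does not reproduce, the paper's optimization.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    patches = np.random.randn(5000, 64)            # 8x8 patches, flattened
    gmm = GaussianMixture(n_components=10, covariance_type='full').fit(patches)
    gmm.weights_ /= gmm.weights_.sum()             # keep mixture weights normalized
    log_likelihood = gmm.score_samples(patches)    # per-patch log p(x)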
Validation of Inertial and Optical Navigation Techniques for Space Applications with UAVS
NASA Astrophysics Data System (ADS)
Montaño, J.; Wis, M.; Pulido, J. A.; Latorre, A.; Molina, P.; Fernández, E.; Angelats, E.; Colomina, I.
2015-09-01
PERIGEO is an R&D project, funded by the INNPRONTA 2011-2014 programme of the Spanish CDTI, which investigates the use of UAV technologies and processes for the validation of space-oriented technologies. For this purpose, among different space missions and technologies, a set of activities for absolute and relative navigation is being carried out to address the attitude and position estimation problem using a temporal image sequence from a visible-spectrum camera and/or a Light Detection and Ranging (LiDAR) sensor. The process is covered entirely: sensor measurements and data acquisition (images, LiDAR ranges and angles), data pre-processing (calibration and co-registration of camera and LiDAR data), feature and landmark extraction from the images, and image/LiDAR-based state estimation. In addition to the image processing area, a classical navigation system based on inertial sensors is also included in the research. The reason for combining both approaches is to retain navigation capability in environments or missions where a radio beacon or reference signal such as GNSS is not available (for example, an atmospheric flight at Titan). The rationale behind the combination is that the two systems complement each other. The INS is capable of providing accurate position, velocity and full attitude estimates at high data rates, but it needs an absolute reference observation to compensate the time-accumulated errors caused by inertial sensor inaccuracies. Imaging observables, on the other hand, can provide absolute and relative position and attitude estimates, but they require the sensor head to point toward the ground (which may not be possible while the carrying platform is maneuvering), and they cannot deliver the rates of some hundreds of Hz that an INS can. This mutual complementarity has been exploited in PERIGEO, where the two are combined into one system. The inertial navigation system implemented in PERIGEO is based on a classical loosely coupled INS/GNSS approach that is very similar to the INS/imaging navigation system mentioned above. The activities envisaged in PERIGEO cover algorithm development and validation and technology testing on UAVs under representative conditions. Past activities have covered the design and development of the algorithms and systems. This paper presents the most recent activities and results in the area of image processing for robust estimation within PERIGEO, related to the definition of the hardware platforms (including sensors) and their integration in UAVs. Results from the tests performed during flight campaigns in representative outdoor environments are also presented and analyzed, together with a roadmap for future developments.
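The loosely coupled idea can be reduced to a toy example: the inertial solution is propagated at high rate, and an absolute position fix (GNSS or image-based) corrects it at low rate. The 1D Kalman filter below is a minimal sketch under that assumption; all noise values are illustrative and none of it reflects the actual PERIGEO implementation.

    import numpy as np

    x = np.array([0.0, 0.0])       # state: [position, velocity]
    P = np.eye(2)                  # state covariance
    Q = np.diag([1e-4, 1e-3])      # process noise (inertial drift)
    R = np.array([[4.0]])          # position-fix variance (m^2)
    H = np.array([[1.0, 0.0]])     # the fix observes position only

    def predict(x, P, accel, dt):
        # high-rate propagation from the accelerometer reading
        F = np.array([[1.0, dt], [0.0, 1.0]])
        x = F @ x + np.array([0.5 * dt**2, dt]) * accel
        return x, F @ P @ F.T + Q

    def update(x, P, z_fix):
        # low-rate correction from an absolute position fix
        y = z_fix - H @ x
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        return x + (K @ y).ravel(), (np.eye(2) - K @ H) @ P

    for _ in range(100):                       # 100 inertial steps
        x, P = predict(x, P, accel=0.0, dt=0.01)
    x, P = update(x, P, np.array([1.2]))       # one position fix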
IMAGESEER - IMAGEs for Education and Research
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Grubb, Thomas; Milner, Barbara
2012-01-01
IMAGESEER is a new Web portal that brings easy access to NASA image data for non-NASA researchers, educators, and students. The IMAGESEER Web site and database are specifically designed to be utilized by the university community, to enable teaching image processing (IP) techniques on NASA data, as well as to provide reference benchmark data to validate new IP algorithms. Along with the data and a Web user interface front-end, basic knowledge of the application domains, benchmark information, and specific NASA IP challenges (or case studies) are provided.
Automatic classification of tissue malignancy for breast carcinoma diagnosis.
Fondón, Irene; Sarmiento, Auxiliadora; García, Ana Isabel; Silvestre, María; Eloy, Catarina; Polónia, António; Aguiar, Paulo
2018-05-01
Breast cancer is the second leading cause of cancer death among women. Its early diagnosis is extremely important to prevent avoidable deaths. However, malignancy assessment of tissue biopsies is complex and dependent on observer subjectivity. Moreover, hematoxylin and eosin (H&E)-stained histological images exhibit a highly variable appearance, even within the same malignancy level. In this paper, we propose a computer-aided diagnosis (CAD) tool for automated malignancy assessment of breast tissue samples based on the processing of histological images. We provide four malignancy levels as the output of the system: normal, benign, in situ and invasive. The method is based on the calculation of three sets of features related to nuclei, colour regions and textures considering local characteristics and global image properties. By taking advantage of well-established image processing techniques, we build a feature vector for each image that serves as an input to an SVM (Support Vector Machine) classifier with a quadratic kernel. The method has been rigorously evaluated, first with a 5-fold cross-validation within an initial set of 120 images, second with an external set of 30 different images and third with images with artefacts included. Accuracy levels range from 75.8% when the 5-fold cross-validation was performed to 75% with the external set of new images and 61.11% when the extremely difficult images were added to the classification experiment. The experimental results indicate that the proposed method is capable of distinguishing between four malignancy levels with high accuracy. Our results are close to those obtained with recent deep learning-based methods. Moreover, it performs better than other state-of-the-art methods based on feature extraction, and it can help improve the CAD of breast cancer. Copyright © 2018 Elsevier Ltd. All rights reserved.
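For readers who want to reproduce the classification stage, the following is a minimal Python sketch, assuming a feature matrix X (one row per image, built from the nuclei, colour-region and texture descriptors) and labels y for the four malignancy levels; the data here are random placeholders.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    X = np.random.rand(120, 40)                   # placeholder feature vectors
    y = np.random.randint(0, 4, 120)              # normal/benign/in situ/invasive
    clf = make_pipeline(StandardScaler(), SVC(kernel='poly', degree=2))
    scores = cross_val_score(clf, X, y, cv=5)     # 5-fold cross-validation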
Fuzzy Logic Enhanced Digital PIV Processing Software
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1999-01-01
Digital Particle Image Velocimetry (DPIV) is an instantaneous, planar velocity measurement technique that is ideally suited for studying transient flow phenomena in high speed turbomachinery. DPIV is being actively used at the NASA Glenn Research Center to study both stable and unstable operating conditions in a high speed centrifugal compressor. Commercial PIV systems are readily available which provide near real time feedback of the PIV image data quality. These commercial systems are well designed to facilitate the expedient acquisition of PIV image data. However, as with any general purpose system, these commercial PIV systems do not meet all of the data processing needs required for PIV image data reduction in our compressor research program. An in-house PIV PROCessing (PIVPROC) code has been developed for reducing PIV data. The PIVPROC software incorporates fuzzy logic data validation for maximum information recovery from PIV image data. PIVPROC enables combined cross-correlation/particle tracking wherein the highest possible spatial resolution velocity measurements are obtained.
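The core correlation step of DPIV is simple to sketch. The Python fragment below estimates the pixel displacement of one interrogation window pair from the cross-correlation peak; the fuzzy-logic validation layer of PIVPROC is not reproduced here.

    import numpy as np

    def window_displacement(w1, w2):
        # displacement of w2 relative to w1 from the FFT cross-correlation peak
        w1 = w1 - w1.mean()
        w2 = w2 - w2.mean()
        corr = np.fft.ifft2(np.fft.fft2(w1).conj() * np.fft.fft2(w2)).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # map circular peak indices to signed pixel shifts
        return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]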
Application of ultrasound processed images in space: Quantitative assessment of diffuse affectations
NASA Astrophysics Data System (ADS)
Pérez-Poch, A.; Bru, C.; Nicolau, C.
The purpose of this study was to evaluate diffuse affectations of the liver using texture image processing techniques. Ultrasound diagnostic equipment is the modality of choice for use in space environments, as it is free from hazardous health effects. However, because highly trained radiologists are needed to assess the images, this imaging method is mainly applied to focal lesions rather than non-focal ones. We conducted a clinical study on 72 patients with different degrees of chronic hepatopathies and a control group of 18 individuals. All subjects' clinical reports and biopsy results were compared with the degree of affectation calculated by our computer system, thus validating the method. Full statistical results are given in the present paper, showing a good correlation (r=0.61) between the pathologist's report and the analysis of the heterogeneity of the processed liver images. This computer system for analyzing diffuse affectations may be used in situ or remotely via telemedicine links to the ground.
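Texture-based heterogeneity measures of the kind used here are commonly built from grey-level co-occurrence statistics. The following Python sketch computes two such features on a liver region of interest; it is a generic stand-in for the paper's texture analysis, with a random array in place of a real 8-bit ultrasound ROI.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    roi = (np.random.rand(64, 64) * 255).astype(np.uint8)  # placeholder ROI
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    contrast = graycoprops(glcm, 'contrast')[0, 0]
    homogeneity = graycoprops(glcm, 'homogeneity')[0, 0]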
Edge detection for optical synthetic aperture based on deep neural network
NASA Astrophysics Data System (ADS)
Tan, Wenjie; Hui, Mei; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin
2017-09-01
Synthetic aperture optics systems can meet the demands of next-generation space telescopes: lighter, larger, and foldable. However, the boundaries of segmented-aperture systems are much more complex than those of a whole aperture. More edge regions mean more edge pixels in the image, which are often mixed and discretized. In order to achieve high-resolution imaging, it is necessary to identify the gaps between the sub-apertures and the edges of the projected fringes. In this work, we introduce a deep neural network algorithm for edge detection in optical synthetic aperture imaging. According to the detection needs, we constructed image sets from experiments and simulations. Based on MatConvNet, a MATLAB toolbox, we ran the neural network, trained it on the training image set, and tested its performance on the validation set. Training was stopped when the test error on the validation set stopped declining. Given an input image, the neighborhood around each pixel is fed into the network, which scans the image pixel by pixel through the trained hidden layers, and the network outputs a judgment on whether the center of the input block lies on a fringe edge. We experimented with various pre-processing and post-processing techniques to reveal their influence on edge detection performance. Compared with traditional algorithms and their improvements, our method makes its decision over a much larger neighborhood and is more global and comprehensive. Experiments on more than 2,000 images are also presented to show that our method outperforms classical algorithms in edge detection on optical images.
Validation of On-board Cloud Cover Assessment Using EO-1
NASA Technical Reports Server (NTRS)
Mandl, Dan; Miller, Jerry; Griffin, Michael; Burke, Hsiao-hua
2003-01-01
The purpose of this NASA Earth Science Technology Office funded effort was to flight-validate an on-board cloud detection algorithm and to determine the performance achievable with a Mongoose V flight computer. The validation was performed on the operational EO-1 satellite by uploading new flight code to perform the cloud detection. The algorithm, developed by MIT Lincoln Laboratory, is based on the Hyperion hyperspectral instrument, using selected spectral bands from 0.4 to 2.5 microns. The Technology Readiness Level (TRL) of this technology was 5 at the beginning of the task and 6 upon completion. In the final validation, an 8-second (0.75 Gbyte) Hyperion image was processed on board and assessed for percentage cloud cover within 30 minutes; this had been expected to take many hours, perhaps a day, considering that the Mongoose V delivers only 6-8 MIPS. To accomplish this test, level 0 and level 1 processing had to be performed on board before the cloud algorithm was applied. For almost all of the ground test cases and all of the flight cases, the cloud assessment was within 5% of the correct value, and in most cases within 1-2%.
Discriminative feature representation: an effective postprocessing solution to low dose CT imaging
NASA Astrophysics Data System (ADS)
Chen, Yang; Liu, Jin; Hu, Yining; Yang, Jian; Shi, Luyao; Shu, Huazhong; Gui, Zhiguo; Coatrieux, Gouenou; Luo, Limin
2017-03-01
This paper proposes a concise and effective approach termed discriminative feature representation (DFR) for low-dose computed tomography (LDCT) image processing, which is currently a challenging problem in the medical imaging field. The DFR method models LDCT images as the superposition of desirable high-dose CT (HDCT) 3D features and undesirable noise-artifact 3D features (the combined noise and artifact features induced by low-dose scan protocols); the decomposed HDCT features are then used to provide processed LDCT images of higher quality. The target HDCT features are solved for via the DFR algorithm using a featured dictionary composed of atoms representing HDCT features and noise-artifact features. In this study, the featured dictionary is built efficiently using physical phantom images collected from the same CT scanner as the target clinical LDCT images. The proposed DFR method is also robust to parameter settings across different CT scanner types, can be applied directly to DICOM-formatted LDCT images, and has good applicability to current CT systems. Comparative experiments with abdominal LDCT data validate the good performance of the proposed approach. This research was supported by National Natural Science Foundation grants (81370040, 81530060), the Fundamental Research Funds for the Central Universities, and the Qing Lan Project in Jiangsu Province.
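The decomposition idea can be sketched as sparse coding over a two-part dictionary, keeping only the contributions of the HDCT-like atoms. In the Python fragment below the dictionaries are random placeholders rather than the phantom-trained atoms used in the paper.

    import numpy as np
    from sklearn.decomposition import sparse_encode

    n_hdct = 128
    D_hdct = np.random.randn(n_hdct, 64)       # HDCT feature atoms
    D_noise = np.random.randn(64, 64)          # noise-artifact atoms
    D = np.vstack([D_hdct, D_noise])
    D /= np.linalg.norm(D, axis=1, keepdims=True)

    patches = np.random.randn(1000, 64)        # LDCT patches, flattened
    codes = sparse_encode(patches, D, algorithm='omp', n_nonzero_coefs=8)
    restored = codes[:, :n_hdct] @ D_hdct      # keep only HDCT contributions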
TESS Data Processing and Quick-look Pipeline
NASA Astrophysics Data System (ADS)
Fausnaugh, Michael; Huang, Xu; Glidden, Ana; Guerrero, Natalia; TESS Science Office
2018-01-01
We describe the data analysis procedures and pipelines for the Transiting Exoplanet Survey Satellite (TESS). We briefly review the processing pipeline developed and implemented by the Science Processing Operations Center (SPOC) at NASA Ames, including pixel/full-frame image calibration, photometric analysis, pre-search data conditioning, transiting planet search, and data validation. We also describe data-quality diagnostic analyses and photometric performance assessment tests. Finally, we detail a "quick-look pipeline" (QLP) that has been developed by the MIT branch of the TESS Science Office (TSO) to provide a fast and adaptable routine to search for planet candidates in the 30 minute full-frame images.
A Quality Sorting of Fruit Using a New Automatic Image Processing Method
NASA Astrophysics Data System (ADS)
Amenomori, Michihiro; Yokomizu, Nobuyuki
This paper presents an innovative approach to quality sorting of objects, such as apples in an agricultural factory, using an image processing algorithm. The objectives of our approach are, first, to sort the objects precisely by color and, second, to efficiently detect any color irregularity on the surface of the apples. An experiment was conducted, and the results were compared with those obtained by a human sorting process and by color-sensor sorting devices. The results demonstrate that our approach can sort the objects rapidly, with a valid classification rate of 100%.
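A colour-based grading rule of this kind can be sketched in a few lines. The Python fragment below computes the fraction of red-hued pixels in HSV space as an illustrative stand-in for the paper's sorting criterion; the image and the thresholds are placeholders.

    import numpy as np
    from skimage.color import rgb2hsv

    img = np.random.rand(128, 128, 3)          # placeholder segmented fruit image
    hue = rgb2hsv(img)[..., 0]
    red_fraction = np.mean((hue < 0.05) | (hue > 0.95))
    grade = 'A' if red_fraction > 0.8 else 'B' # illustrative grading rule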
NASA Astrophysics Data System (ADS)
Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-03-01
A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, based on a row-scanning compressive ghost imaging scheme. In the encryption process, a scrambling operation is applied to the sparse images obtained by LWT, the XOR operation is then performed on the scrambled images, and the resulting XOR images are compressed by row-scanning compressive ghost imaging, so that the ciphertext images can be detected by bucket detector arrays. During decryption, a participant who possesses the correct key group can successfully reconstruct the corresponding plaintext image by measurement-key regeneration, compression-algorithm reconstruction, XOR operation, sparse image recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.
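The scramble-then-XOR stage can be illustrated with a short sketch. In the Python fragment below a single Haar DWT level (via PyWavelets) stands in for the lifting wavelet transform, and the keys are simple NumPy seeds; the ghost-imaging measurement itself is not modeled.

    import numpy as np
    import pywt

    imgs = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(2)]
    rng = np.random.default_rng(seed=42)                 # scrambling/mask key

    def scramble(coeffs, rng):
        flat = coeffs.ravel().copy()
        perm = rng.permutation(flat.size)                # permutation key
        return flat[perm].reshape(coeffs.shape)

    sparse = [pywt.dwt2(im.astype(float), 'haar')[0] for im in imgs]  # cA bands
    scrambled = [scramble(s, rng) for s in sparse]
    quantized = [np.round(s).astype(np.int64) for s in scrambled]
    mask = rng.integers(0, 256, quantized[0].shape)
    xored = [q ^ mask for q in quantized]                # XOR stage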
Real-time motion artifacts compensation of ToF sensors data on GPU
NASA Astrophysics Data System (ADS)
Lefloch, Damien; Hoegg, Thomas; Kolb, Andreas
2013-05-01
Over the last decade, ToF sensors have attracted many computer vision and graphics researchers. Nevertheless, ToF devices suffer from severe motion artifacts in dynamic scenes, as well as low-resolution depth data, which strongly motivates a valid correction. To counterbalance this effect, a pre-processing approach is introduced that greatly improves range image data for dynamic scenes. We first demonstrate the robustness of our approach on simulated data and then validate the method on sensor range data. Our GPU-based processing pipeline enhances range data reliability in real time.
Analysis Of The IJCNN 2011 UTL Challenge
2012-01-13
The challenge used large datasets from various application domains: handwriting recognition, image recognition, video processing, text processing, and ecology. The validation and final evaluation sets consist of 4096 examples each. [Dataset table fragment — Dataset / Domain / Features / Sparsity / Devel. / Transf.: AVICENNA, handwriting, 120, 0%, 150205] Transfer learning methods could accelerate the application of handwriting recognizers to historical manuscripts by reducing the need for labeled training data.
Onboard FPGA-based SAR processing for future spaceborne systems
NASA Technical Reports Server (NTRS)
Le, Charles; Chan, Samuel; Cheng, Frank; Fang, Winston; Fischman, Mark; Hensley, Scott; Johnson, Robert; Jourdan, Michael; Marina, Miguel; Parham, Bruce;
2004-01-01
We present a real-time high-performance and fault-tolerant FPGA-based hardware architecture for the processing of synthetic aperture radar (SAR) images in future spaceborne system. In particular, we will discuss the integrated design approach, from top-level algorithm specifications and system requirements, design methodology, functional verification and performance validation, down to hardware design and implementation.
Towards automatic patient selection for chemotherapy in colorectal cancer trials
NASA Astrophysics Data System (ADS)
Wright, Alexander; Magee, Derek; Quirke, Philip; Treanor, Darren E.
2014-03-01
A key factor in the prognosis of colorectal cancer, and its response to chemoradiotherapy, is the ratio of cancer cells to surrounding tissue (the so-called tumour:stroma ratio). Currently the tumour:stroma ratio is calculated manually, by examining H&E-stained slides and estimating the proportion of area occupied by each component. Virtual slides facilitate this analysis by allowing pathologists to annotate areas of tumour on a given digital slide image, and in-house developed stereometry tools mark systematic random points on the slide, known as spots. These spots are examined and classified by the pathologist. Typical analyses require a pathologist to score at least 300 spots per tumour. This is a time-consuming (10-60 minutes per case) and laborious task, and automating the process is highly desirable. Using an existing dataset of expert-classified spots from one colorectal cancer clinical trial, an automated tumour:stroma detection algorithm has been trained and validated. Each spot is extracted as an image patch and then processed for feature extraction, identifying colour, texture, stain intensity and object characteristics. These features are used as training data for a random forest classification algorithm and validated against unseen image patches. This process was repeated for multiple patch sizes. Over 82,000 such patches have been used, and results show an accuracy of 79%, depending on image patch size. A second study examining the contextual requirements of pathologist scoring indicates that further analysis of structures within each image patch is required to improve algorithm accuracy.
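The classification stage maps naturally onto a standard implementation. The Python sketch below trains a random forest on per-patch feature vectors; the features and labels are random placeholders for the colour, texture, stain-intensity and object descriptors and pathologist spot labels described above.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X = np.random.rand(82000, 32)                 # placeholder patch features
    y = np.random.randint(0, 2, 82000)            # 0 = stroma, 1 = tumour
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    rf = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
    accuracy = rf.score(X_te, y_te)               # the study reports ~79%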
Validation of alternative methods for toxicity testing.
Bruner, L H; Carr, G J; Curren, R D; Chamberlain, M
1998-01-01
Before nonanimal toxicity tests may be officially accepted by regulatory agencies, it is generally agreed that the validity of the new methods must be demonstrated in an independent, scientifically sound validation program. Validation has been defined as the demonstration of the reliability and relevance of a test method for a particular purpose. This paper provides a brief review of the development of the theoretical aspects of the validation process and updates current thinking about objectively testing the performance of an alternative method in a validation study. Validation of alternative methods for eye irritation testing is a specific example illustrating important concepts. Although discussion focuses on the validation of alternative methods intended to replace current in vivo toxicity tests, the procedures can be used to assess the performance of alternative methods intended for other uses. PMID:9599695
Viirs Land Science Investigator-Led Processing System
NASA Astrophysics Data System (ADS)
Devadiga, S.; Mauoka, E.; Roman, M. O.; Wolfe, R. E.; Kalb, V.; Davidson, C. C.; Ye, G.
2015-12-01
The objective of the NASA's Suomi National Polar Orbiting Partnership (S-NPP) Land Science Investigator-led Processing System (Land SIPS), housed at the NASA Goddard Space Flight Center (GSFC), is to produce high quality land products from the Visible Infrared Imaging Radiometer Suite (VIIRS) to extend the Earth System Data Records (ESDRs) developed from NASA's heritage Earth Observing System (EOS) Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the EOS Terra and Aqua satellites. In this paper we will present the functional description and capabilities of the S-NPP Land SIPS, including system development phases and production schedules, timeline for processing, and delivery of land science products based on coordination with the S-NPP Land science team members. The Land SIPS processing stream is expected to be operational by December 2016, generating land products either using the NASA science team delivered algorithms, or the "best-of" science algorithms currently in operation at NASA's Land Product Evaluation and Algorithm Testing Element (PEATE). In addition to generating the standard land science products through processing of the NASA's VIIRS Level 0 data record, the Land SIPS processing system is also used to produce a suite of near-real time products for NASA's application community. Land SIPS will also deliver the standard products, ancillary data sets, software and supporting documentation (ATBDs) to the assigned Distributed Active Archive Centers (DAACs) for archival and distribution. Quality assessment and validation will be an integral part of the Land SIPS processing system; the former being performed at Land Data Operational Product Evaluation (LDOPE) facility, while the latter under the auspices of the CEOS Working Group on Calibration & Validation (WGCV) Land Product Validation (LPV) Subgroup; adopting the best-practices and tools used to assess the quality of heritage EOS-MODIS products generated at the MODIS Adaptive Processing System (MODAPS).
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation rates on the order of 10⁷ x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments.
This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and "sparse sampling" will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
Measurement of smaller colon polyp in CT colonography images using morphological image processing.
Manjunath, K N; Siddalingaswamy, P C; Prabhu, G K
2017-11-01
Automated measurement of the size and shape of colon polyps is one of the challenges in computed tomography colonography (CTC). The objective of this retrospective study was to improve the sensitivity and specificity of smaller-polyp measurement in CTC using image processing techniques. A domain-knowledge-based method has been implemented, combining a hybrid method of colon segmentation, morphological image processing operators for detecting the colonic structures, and a decision-making system for delineating smaller polyps based on a priori knowledge. The method was applied to 45 CTC datasets. The key finding was that the smaller polyps were accurately measured; in addition to the 6-9 mm range, polyps even <5 mm were detected. The results were validated qualitatively and quantitatively using both 2D MPR and 3D views. Implementation was done on a high-performance computer with parallel processing. It takes [Formula: see text] min to measure the smaller polyps in a dataset of 500 CTC images. With this method, [Formula: see text] and [Formula: see text] were achieved. The domain-based approach with morphological image processing has given good results. The smaller polyps were measured accurately, which helps in making the right clinical decisions. Qualitatively and quantitatively, the results were acceptable when compared with the ground truth at [Formula: see text].
Veronezi, Carlos Cassiano Denipotti; de Azevedo Simões, Priscyla Waleska Targino; dos Santos, Robson Luiz; da Rocha, Edroaldo Lummertz; Meláo, Suelen; de Mattos, Merisandra Côrtes; Cechinel, Cristian
2015-01-01
Objective: To ascertain the advantages of applying artificial neural networks to recognize patterns on lumbar spine radiographies in order to aid in the process of diagnosing primary osteoarthritis. Methods: This was a cross-sectional descriptive analytical study with a quantitative approach and an emphasis on diagnosis. The training set was composed of images collected between January and July 2009 from patients who had undergone lateral-view digital radiographies of the lumbar spine, which were provided by a radiology clinic located in the municipality of Criciúma (SC). Out of the total of 260 images gathered, those with distortions, those presenting pathological conditions that altered the architecture of the lumbar spine and those with patterns that were difficult to characterize were discarded, resulting in 206 images. The image data base (n = 206) was then subdivided, resulting in 68 radiographies for the training stage, 68 images for tests and 70 for validation. A hybrid neural network based on Kohonen self-organizing maps and on Multilayer Perceptron networks was used. Results: After 90 cycles, the validation was carried out on the best results, achieving accuracy of 62.85%, sensitivity of 65.71% and specificity of 60%. Conclusions: Even though the effectiveness shown was moderate, this study is still innovative. The values show that the technique used has a promising future, pointing towards further studies on image and cycle processing methodology with a larger quantity of radiographies. PMID:27027010
Celi, Simona; Berti, Sergio
2014-10-01
Optical coherence tomography (OCT) is a catheter-based medical imaging technique that produces cross-sectional images of blood vessels. This technique is particularly useful for studying coronary atherosclerosis. In this paper, we present a new framework that allows a segmentation and quantification of OCT images of coronary arteries to define the plaque type and stenosis grading. These analyses are usually carried out on-line on the OCT-workstation where measuring is mainly operator-dependent and mouse-based. The aim of this program is to simplify and improve the processing of OCT images for morphometric investigations and to present a fast procedure to obtain 3D geometrical models that can also be used for external purposes such as for finite element simulations. The main phases of our toolbox are the lumen segmentation and the identification of the main tissues in the artery wall. We validated the proposed method with identification and segmentation manually performed by expert OCT readers. The method was evaluated on ten datasets from clinical routine and the validation was performed on 210 images randomly extracted from the pullbacks. Our results show that automated segmentation of the vessel and of the tissue components are possible off-line with a precision that is comparable to manual segmentation for the tissue component and to the proprietary-OCT-console for the lumen segmentation. Several OCT sections have been processed to provide clinical outcome. Copyright © 2014 Elsevier B.V. All rights reserved.
The 3D scanner prototype utilize object profile imaging using line laser and octave software
NASA Astrophysics Data System (ADS)
Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus
2016-11-01
A three-dimensional scanner, or 3D scanner, is a device that reconstructs a real object into digital form on a computer. 3D scanning is a developing technology, especially in developed countries, where current advanced 3D scanner devices come at very high prices. This study presents a simple prototype of a 3D scanner with very low investment cost. The prototype consists of a webcam, a rotating desk system controlled by a stepper motor and an Arduino UNO, and a line laser. The research is limited to objects with a constant radius from the center point (the object pivot). Scanning is performed by imaging the object profile highlighted by the line laser, which is captured by the camera and processed on a computer (image processing) using Octave software. At each image acquisition, the scanned object on the rotating desk is rotated by a certain angle, so that for one full turn multiple images covering all sides are obtained. The profiles of all the images are then extracted to obtain the digital object dimensions. The digital dimensions are calibrated against a length standard, called a gage block. The overall dimensions are then digitally reconstructed into a three-dimensional object. Validation of the reconstructed object against the original object dimensions is expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about ±3%.
Yeo, Inhwan Jason; Jung, Jae Won; Yi, Byong Yong; Kim, Jong Oh
2013-01-01
Purpose: When an intensity-modulated radiation beam is delivered to a moving target, the interplay effect between dynamic beam delivery and the target motion due to miss-synchronization can cause unpredictable dose delivery. The portal dose image in electronic portal imaging device (EPID) represents radiation attenuated and scattered through target media. Thus, it may possess information about delivered radiation to the target. Using a continuous scan (cine) mode of EPID, which provides temporal dose images related to target and beam movements, the authors’ goal is to perform four-dimensional (4D) dose reconstruction. Methods: To evaluate this hypothesis, first, the authors have derived and subsequently validated a fast method of dose reconstruction based on virtual beamlet calculations of dose responses using a test intensity-modulated beam. This method was necessary for processing a large number of EPID images pertinent for four-dimensional reconstruction. Second, cine mode acquisition after summation over all images was validated through comparison with integration mode acquisition on EPID (IAS3 and aS1000) for the test beam. This was to confirm the agreement of the cine mode with the integrated mode, specifically for the test beam, which is an accepted mode of image acquisition for dosimetry with EPID. Third, in-phantom film and exit EPID dosimetry was performed on a moving platform using the same beam. Heterogeneous as well as homogeneous phantoms were used. The cine images were temporally sorted at 10% interval. The authors have performed dose reconstruction to the in-phantom plane from the sorted cine images using the above validated method of dose reconstruction. The reconstructed dose from each cine image was summed to compose a total reconstructed dose from the test beam delivery, and was compared with film measurements. Results: The new method of dose reconstruction was validated showing greater than 95.3% pass rates of the gamma test with the criteria of dose difference of 3% and distance to agreement of 3 mm. The dose comparison of the reconstructed dose with the measured dose for the two phantoms showed pass rates higher than 96.4% given the same criteria. Conclusions: Feasibility of 4D dose reconstruction was successfully demonstrated in this study. The 4D dose reconstruction demonstrated in this study can be a promising dose validation method for radiation delivery on moving organs. PMID:23635250
A CANDLE for a deeper in vivo insight
Coupé, Pierrick; Munz, Martin; Manjón, Jose V; Ruthazer, Edward S; Louis Collins, D.
2012-01-01
A new Collaborative Approach for eNhanced Denoising under Low-light Excitation (CANDLE) is introduced for the processing of 3D laser scanning multiphoton microscopy images. CANDLE is designed to be robust for low signal-to-noise ratio (SNR) conditions typically encountered when imaging deep in scattering biological specimens. Based on an optimized non-local means filter involving the comparison of filtered patches, CANDLE locally adapts the amount of smoothing in order to deal with the noise inhomogeneity inherent to laser scanning fluorescence microscopy images. An extensive validation on synthetic data, images acquired on microspheres and in vivo images is presented. These experiments show that the CANDLE filter obtained competitive results compared to a state-of-the-art method and a locally adaptive optimized nonlocal means filter, especially under low SNR conditions (PSNR<8dB). Finally, the deeper imaging capabilities enabled by the proposed filter are demonstrated on deep tissue in vivo images of neurons and fine axonal processes in the Xenopus tadpole brain. PMID:22341767
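A generic non-local means filter, without CANDLE's local adaptation and optimized patch comparison, can be run in a few lines of Python as a baseline; frame here is a placeholder for a noisy 2D slice of a multiphoton volume scaled to [0, 1].

    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    frame = np.random.rand(128, 128)       # placeholder noisy slice
    sigma = estimate_sigma(frame)          # rough noise level estimate
    denoised = denoise_nl_means(frame, patch_size=5, patch_distance=6,
                                h=0.8 * sigma)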
Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing
Zhang, Qianghui; Wu, Junjie; Li, Wenchao; Huang, Yulin; Yang, Jianyu; Yang, Haiguang
2016-01-01
Free of the constraints of orbit mechanics, weather conditions and minimum antenna area, synthetic aperture radar (SAR) carried on a near-space platform is more suitable for sustained large-scene imaging than its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), a novel wide-swath imaging mode that allows the SAR beam to scan along the azimuth, can reduce the echo acquisition time for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, two-step processing (TSP) is first adopted to eliminate the Doppler aliasing of the echo. The data is then focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications. PMID:27472341
Multimodal imaging of ischemic wounds
NASA Astrophysics Data System (ADS)
Zhang, Shiwu; Gnyawali, Surya; Huang, Jiwei; Liu, Peng; Gordillo, Gayle; Sen, Chandan K.; Xu, Ronald
2012-12-01
The wound healing process involves the reparative phases of inflammation, proliferation, and remodeling. Interrupting any of these phases may result in chronically unhealed wounds, amputation, or even patient death. Quantitative assessment of wound tissue ischemia, perfusion, and inflammation provides critical information for appropriate detection, staging, and treatment of chronic wounds. However, no method is available for noninvasive, simultaneous, and quantitative imaging of these tissue parameters. We integrated hyperspectral, laser speckle, and thermographic imaging modalities into a single setup for multimodal assessment of tissue oxygenation, perfusion, and inflammation characteristics. Advanced algorithms were developed for accurate reconstruction of wound oxygenation and appropriate co-registration between the different imaging modalities. The multimodal wound imaging system was validated in an ongoing clinical trial approved by the OSU IRB, in which a wound of 3 mm in diameter was introduced on a healthy subject's lower extremity and the healing process was serially monitored by the multimodal imaging setup. Our experiments demonstrated the clinical usability of multimodal wound imaging.
Non-Cooperative Target Imaging and Parameter Estimation with Narrowband Radar Echoes.
Yeh, Chun-mao; Zhou, Wei; Lu, Yao-bing; Yang, Jian
2016-01-20
This study focuses on the rotating target imaging and parameter estimation with narrowband radar echoes, which is essential for radar target recognition. First, a two-dimensional (2D) imaging model with narrowband echoes is established in this paper, and two images of the target are formed on the velocity-acceleration plane at two neighboring coherent processing intervals (CPIs). Then, the rotating velocity (RV) is proposed to be estimated by utilizing the relationship between the positions of the scattering centers among two images. Finally, the target image is rescaled to the range-cross-range plane with the estimated rotational parameter. The validity of the proposed approach is confirmed using numerical simulations.
NASA IMAGESEER: NASA IMAGEs for Science, Education, Experimentation and Research
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Grubb, Thomas G.; Milner, Barbara C.
2012-01-01
A number of web-accessible databases, including medical, military or other image data, offer universities and other users the ability to teach or research new Image Processing techniques on relevant and well-documented data. However, NASA images have traditionally been difficult for researchers to find, are often only available in hard-to-use formats, and do not always provide sufficient context and background for a non-NASA Scientist user to understand their content. The new IMAGESEER (IMAGEs for Science, Education, Experimentation and Research) database seeks to address these issues. Through a graphically-rich web site for browsing and downloading all of the selected datasets, benchmarks, and tutorials, IMAGESEER provides a widely accessible database of NASA-centric, easy to read, image data for teaching or validating new Image Processing algorithms. As such, IMAGESEER fosters collaboration between NASA and research organizations while simultaneously encouraging development of new and enhanced Image Processing algorithms. The first prototype includes a representative sampling of NASA multispectral and hyperspectral images from several Earth Science instruments, along with a few small tutorials. Image processing techniques are currently represented with cloud detection, image registration, and map cover/classification. For each technique, corresponding data are selected from four different geographic regions, i.e., mountains, urban, water coastal, and agriculture areas. Satellite images have been collected from several instruments - Landsat-5 and -7 Thematic Mappers, Earth Observing-1 (EO-1) Advanced Land Imager (ALI) and Hyperion, and the Moderate Resolution Imaging Spectroradiometer (MODIS). After geo-registration, these images are available in simple common formats such as GeoTIFF and raw formats, along with associated benchmark data.
Smart CMOS image sensor for lightning detection and imaging.
Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor
2013-03-01
We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potential of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed within the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel frame-to-frame difference comparison with an adjustable threshold and on-chip digital processing, allowing efficient localization of a faint lightning pulse on the entire large-format array at a frequency of 1 kHz. A CMOS prototype sensor with a 256×256 pixel array and a 60 μm pixel pitch has been fabricated using a 0.35 μm 2P 5M technology and tested to validate the selected detection approach.
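The in-pixel frame-to-frame difference comparison lends itself to a compact software analogue. Below is a minimal sketch in Python/NumPy of the detection logic described above; the threshold value, frame size, and simulated pulse are illustrative assumptions, not the sensor's actual on-chip implementation.

```python
import numpy as np

def detect_lightning(prev_frame, curr_frame, threshold):
    """Frame-to-frame difference detection with an adjustable threshold.

    A pixel is flagged as a candidate lightning event when the brightness
    increase between consecutive frames exceeds `threshold`.
    """
    diff = curr_frame.astype(np.int32) - prev_frame.astype(np.int32)
    return diff > threshold

# At a 1 kHz frame rate, each new frame is compared against the previous one.
rng = np.random.default_rng(0)
prev = rng.integers(0, 50, size=(256, 256))
curr = prev.copy()
curr[100:103, 120:124] += 200          # simulated faint lightning pulse
events = detect_lightning(prev, curr, threshold=100)
print(np.argwhere(events))             # pixel coordinates of flagged events
```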
FibrilJ: ImageJ plugin for fibrils' diameter and persistence length determination
NASA Astrophysics Data System (ADS)
Sokolov, P. A.; Belousov, M. V.; Bondarev, S. A.; Zhouravleva, G. A.; Kasyanenko, N. A.
2017-05-01
Application of microscopy to evaluate the morphology and size of filamentous proteins and amyloids requires new and creative approaches to simplify and automate the image processing. The estimation of mean values of fibril diameter, length and bending stiffness from micrographs is a major challenge. For this purpose we developed an open-source FibrilJ plugin for the ImageJ/FiJi program. It automatically recognizes fibrils on the surface of a mica, silicon, gold or formvar film and analyzes them to calculate the distributions of fibril diameters, lengths and persistence lengths. The plugin has been validated by processing TEM images of fibrils formed by the Sup35NM yeast protein and artificially created images of rod-shaped objects with predefined parameters. Novel data obtained by SEM for Sup35NM protein fibrils immobilized on silicon and gold substrates are also presented and analyzed.
The development of a digitising service centre for natural history collections
Tegelberg, Riitta; Haapala, Jaana; Mononen, Tero; Pajari, Mika; Saarenmaa, Hannu
2012-01-01
Abstract Digitarium is a joint initiative of the Finnish Museum of Natural History and the University of Eastern Finland. It was established in 2010 as a dedicated shop for the large-scale digitisation of natural history collections. Digitarium offers service packages based on the digitisation process, including tagging, imaging, data entry, georeferencing, filtering, and validation. During the process, all specimens are imaged, and distance workers take care of the data entry from the images. The customer receives the data in Darwin Core Archive format, as well as images of the specimens and their labels. Digitarium also offers the option of publishing images through Morphbank, sharing data through GBIF, and archiving data for long-term storage. Service packages can also be designed on demand to respond to the specific needs of the customer. The paper also discusses logistics, costs, and intellectual property rights (IPR) issues related to the work that Digitarium undertakes. PMID:22859879
Efficient HIK SVM learning for image classification.
Wu, Jianxin
2012-10-01
Histograms are used in almost every aspect of image processing and computer vision, from visual descriptors to image representations. Histogram intersection kernel (HIK) and support vector machine (SVM) classifiers are shown to be very effective in dealing with histograms. This paper presents contributions concerning HIK SVM for image classification. First, we propose intersection coordinate descent (ICD), a deterministic and scalable HIK SVM solver. ICD is much faster than, and has similar accuracies to, general purpose SVM solvers and other fast HIK SVM training methods. We also extend ICD to the efficient training of a broader family of kernels. Second, we show an important empirical observation that ICD is not sensitive to the C parameter in SVM, and we provide some theoretical analyses to explain this observation. ICD achieves high accuracies in many problems, using its default parameters. This is an attractive property for practitioners, because many image processing tasks are too large to choose SVM parameters using cross-validation.
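For readers unfamiliar with the histogram intersection kernel itself, the following sketch shows HIK used with a generic precomputed-kernel SVM. It illustrates the kernel only, not the ICD solver proposed in the paper; the feature dimension and toy data are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def hik_gram(X, Z):
    """Histogram intersection kernel K(x, z) = sum_i min(x_i, z_i),
    computed for all pairs of rows in X and Z."""
    # Broadcasting: (n, 1, d) vs (1, m, d) -> (n, m, d), then sum over d.
    return np.minimum(X[:, None, :], Z[None, :, :]).sum(axis=2)

# Toy histogram features (rows sum to 1, as with normalized visual-word counts).
rng = np.random.default_rng(0)
X_train = rng.dirichlet(np.ones(32), size=100)
y_train = rng.integers(0, 2, size=100)
X_test = rng.dirichlet(np.ones(32), size=10)

clf = SVC(kernel="precomputed", C=1.0)   # ICD itself is a specialized solver
clf.fit(hik_gram(X_train, X_train), y_train)
pred = clf.predict(hik_gram(X_test, X_train))
```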
NASA Astrophysics Data System (ADS)
Retheesh, R.; Ansari, Md. Zaheer; Radhakrishnan, P.; Mujeeb, A.
2018-03-01
This study demonstrates the feasibility of a view-based method, the motion history image (MHI), for mapping biospeckle activity around the scar region in a green orange fruit. Comparison of MHI with routine intensity-based methods validated the effectiveness of the proposed method. The results show that MHI can be implemented as an alternative online image processing tool in biospeckle analysis.
Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.10
2015-08-25
…Geostationary Ocean Color Imager (GOCI) sensors. AOPS enables exploitation of multiple space-borne ocean color satellite sensors to provide optical… package as well as from the Geostationary Ocean Color Imager (GOCI) sensor aboard the Communication Ocean and Meteorological Satellite (COMS)… GEOstationary Coastal and Air Pollution Events (GEO-CAPE) mission and provided to NRL courtesy of Mike Ondrusek and Zhongping Lee. AOP and IOP data were…
Treder, Maximilian; Lauermann, Jost Lennart; Eter, Nicole
2018-02-01
Our purpose was to use deep learning for the automated detection of age-related macular degeneration (AMD) in spectral domain optical coherence tomography (SD-OCT). A total of 1112 cross-section SD-OCT images of patients with exudative AMD and a healthy control group were used for this study. In the first step, an open-source multi-layer deep convolutional neural network (DCNN), pretrained with 1.2 million images from ImageNet, was trained and validated with 1012 cross-section SD-OCT scans (AMD: 701; healthy: 311). During this procedure, training accuracy, validation accuracy and cross-entropy were computed. The open-source deep learning framework TensorFlow™ (Google Inc., Mountain View, CA, USA) was used to accelerate the deep learning process. In the last step, the resulting DCNN classifier was tested on 100 untrained cross-section SD-OCT images (AMD: 50; healthy: 50). For this purpose, an AMD testing score was computed: a score of 0.98 or higher was taken to indicate AMD. After 500 training steps, the training and validation accuracies were 100%, and the cross-entropy was 0.005. The average AMD scores were 0.997 ± 0.003 in the AMD testing group and 0.9203 ± 0.085 in the healthy comparison group. The difference between the two groups was highly significant (p < 0.001). With a deep learning-based approach using TensorFlow™, it is possible to detect AMD in SD-OCT with high sensitivity and specificity. With more image data, an expansion of this classifier to other macular diseases or further details of AMD is possible, suggesting an application for this model as a support for clinical decisions. Another possible future application would involve the individual prediction of the progress and success of therapy for different diseases by automatically detecting hidden image information.
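A minimal transfer-learning sketch in the spirit of the study is given below, using tf.keras with an ImageNet-pretrained backbone and a binary AMD/healthy head. The backbone choice (InceptionV3), the input size, and the train_ds/val_ds pipelines are assumptions for illustration; the study's exact network and training configuration are not reproduced here.

```python
import tensorflow as tf

# Generic transfer learning: an ImageNet-pretrained backbone with a new
# binary head (AMD vs. healthy).
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False                      # retrain only the classifier head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",    # the reported cross-entropy
              metrics=["accuracy"])

# `train_ds` / `val_ds` are assumed tf.data pipelines of (image, label) batches:
# model.fit(train_ds, validation_data=val_ds, epochs=...)
# A score above a fixed cutoff (0.98 in the study) is then read as AMD.
```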
Audigier, Chloé; Mansi, Tommaso; Delingette, Hervé; Rapaka, Saikiran; Passerini, Tiziano; Mihalef, Viorel; Jolly, Marie-Pierre; Pop, Raoul; Diana, Michele; Soler, Luc; Kamen, Ali; Comaniciu, Dorin; Ayache, Nicholas
2017-09-01
We aim to develop a framework for the validation of a subject-specific multi-physics model of liver tumor radiofrequency ablation (RFA). The RFA computation becomes subject-specific after several levels of personalization: geometrical and biophysical (hemodynamics, heat transfer and an extended cellular necrosis model). We present a comprehensive experimental setup combining multimodal, pre- and postoperative anatomical and functional images, as well as interventional monitoring of intra-operative signals: the temperature and delivered power. To exploit this dataset, an efficient processing pipeline is introduced, which copes with image noise, variable resolution and anisotropy. The validation study includes twelve ablations from five healthy pig livers: a mean point-to-mesh error between predicted and actual ablation extent of 5.3 ± 3.6 mm is achieved. This enables an end-to-end preclinical validation framework that considers the available dataset.
Automated measurement of pressure injury through image processing.
Li, Dan; Mathews, Carol
2017-11-01
To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, manual measurement of pressure injuries is time-consuming, challenging and subject to intra/inter-reader variability given the complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from the RGB (i.e. red, green and blue) colour space to the YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using a Support Vector Machine classifier. Third, after segmentation, the reference ruler included in each of the images enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure injuries. © 2017 John Wiley & Sons Ltd.
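The colour-space transform and skin-colour Gaussian model steps can be sketched compactly. The conversion below uses the standard ITU-R BT.601 YCbCr formulas; the Gaussian mean and covariance would in practice be fitted to labeled skin pixels (here they are left as inputs). This is an illustrative sketch, not the authors' exact pipeline.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """ITU-R BT.601 RGB -> YCbCr conversion (img is float RGB in [0, 255])."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_probability(img, mean, cov):
    """Per-pixel Gaussian likelihood of skin in the (Cb, Cr) plane.
    `mean` (2,) and `cov` (2, 2) are assumed fitted from labeled skin pixels."""
    cbcr = rgb_to_ycbcr(img)[..., 1:]               # chrominance only
    d = cbcr - mean
    inv = np.linalg.inv(cov)
    m = np.einsum("...i,ij,...j->...", d, inv, d)   # squared Mahalanobis distance
    norm = 2.0 * np.pi * np.sqrt(np.linalg.det(cov))
    return np.exp(-0.5 * m) / norm
```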
High-throughput neuroimaging-genetics computational infrastructure
Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.
2014-01-01
Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations that are not readily visible through human exploration of the raw dataset. Results interpretation includes scientific visualization, community validation of findings, and reproducibility of findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web services. These pipeline workflows are represented as portable XML objects, which transfer the execution instructions and user specifications from the client machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications using this infrastructure. PMID:24795619
Use of a vision model to quantify the significance of factors affecting target conspicuity
NASA Astrophysics Data System (ADS)
Gilmore, M. A.; Jones, C. K.; Haynes, A. W.; Tolhurst, D. J.; To, M.; Troscianko, T.; Lovell, P. G.; Parraga, C. A.; Pickavance, K.
2006-05-01
When designing camouflage it is important to understand how the human visual system processes the information to discriminate the target from the background scene. A vision model has been developed to compare two images and detect differences in local contrast in each spatial frequency channel. Observer experiments are being undertaken to validate this vision model so that the model can be used to quantify the relative significance of different factors affecting target conspicuity. Synthetic imagery can be used to design improved camouflage systems. The vision model is being used to compare different synthetic images to understand what features in the image are important to reproduce accurately and to identify the optimum way to render synthetic imagery for camouflage effectiveness assessment. This paper will describe the vision model and summarise the results obtained from the initial validation tests. The paper will also show how the model is being used to compare different synthetic images and discuss future work plans.
Local figure-ground cues are valid for natural images.
Fowlkes, Charless C; Martin, David R; Malik, Jitendra
2007-06-08
Figure-ground organization refers to the visual perception that a contour separating two regions belongs to one of the regions. Recent studies have found neural correlates of figure-ground assignment in V2 as early as 10-25 ms after response onset, providing strong support for the role of local bottom-up processing. How much information about figure-ground assignment is available from locally computed cues? Using a large collection of natural images, in which neighboring regions were assigned a figure-ground relation by human observers, we quantified the extent to which figural regions locally tend to be smaller, more convex, and lie below ground regions. Our results suggest that these Gestalt cues are ecologically valid, and we quantify their relative power. We have also developed a simple bottom-up computational model of figure-ground assignment that takes image contours as input. Using parameters fit to natural image statistics, the model is capable of matching human-level performance when scene context is limited.
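A rough illustration of how the three local cues (size, lower region, convexity) can be computed for a region mask follows; the cue definitions here are plausible stand-ins, not necessarily the exact measurements used in the study.

```python
import numpy as np
from scipy.spatial import ConvexHull

def figure_cues(mask):
    """Local figure cues for a binary region mask (True = region pixels).

    Returns the region's area, the row of its centroid (a larger row means
    lower in the image), and a convexity score: area / convex-hull area.
    """
    ys, xs = np.nonzero(mask)
    area = ys.size
    centroid_row = ys.mean()
    hull = ConvexHull(np.column_stack([xs, ys]))
    convexity = area / hull.volume    # for 2D point sets, .volume is the area
    return area, centroid_row, convexity

# The figural side tends to be smaller, lower, and more convex: compare the
# cue triplets of the two regions on either side of a contour.
```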
NASA Astrophysics Data System (ADS)
Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.
2016-10-01
We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed first using synthetic data and afterwards using real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.
Validation results of satellite mock-up capturing experiment using nets
NASA Astrophysics Data System (ADS)
Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil
2017-05-01
The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of different experiments under microgravity conditions where a net was launched to capture and wrap a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment has been performed over thirty parabolas offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launch angles using a dedicated pneumatic mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to accurately determine the initial conditions and generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been properly configured according to the parabolic flight scenario and executed to generate the validation data. Both datasets have been compared according to different metrics in order to perform the validation of the PATENDER simulator.
ERIC Educational Resources Information Center
Plum, Terry; Smalley, Topsy N.
1994-01-01
Discussion of humanities research focuses on the humanist patron as author of the text. Highlights include the research process; style of expression; interpretation; multivocality; reflexivity; social validation; repatriation; the image of the library for the author; patterns of searching behavior; and reference librarian responses. (37…
WHOLE BODY NONRIGID CT-PET REGISTRATION USING WEIGHTED DEMONS.
Suh, J W; Kwon, Oh-K; Scheinost, D; Sinusas, A J; Cline, Gary W; Papademetris, X
2011-03-30
We present a new registration method for whole-body rat computed tomography (CT) and positron emission tomography (PET) images using a weighted demons algorithm. The CT and PET images are acquired on separate scanners at different times, and the inherent differences in the imaging protocols produce significant nonrigid changes between the two acquisitions, in addition to heterogeneous image characteristics. In this situation, we utilized both the transmission-PET and the emission-PET images in the deformable registration process, emphasizing particular regions of the moving transmission-PET image using the emission-PET image. We validated our results on nine rat image sets using the M-Hausdorff distance similarity measure. We demonstrate improved performance compared to standard methods such as Demons and normalized mutual information-based nonrigid FFD registration.
MOSAIC - A space-multiplexing technique for optical processing of large images
NASA Technical Reports Server (NTRS)
Athale, Ravindra A.; Astor, Michael E.; Yu, Jeffrey
1993-01-01
A technique for Fourier processing of images larger than the space-bandwidth products of conventional or smart spatial light modulators and two-dimensional detector arrays is described. The technique involves a spatial combination of subimages displayed on individual spatial light modulators to form a phase-coherent image, which is subsequently processed with Fourier optical techniques. Because of the technique's similarity with the mosaic technique used in art, the processor used is termed an optical MOSAIC processor. The phase accuracy requirements of this system were studied by computer simulation. It was found that phase errors of less than lambda/8 did not degrade the performance of the system and that the system was relatively insensitive to amplitude nonuniformities. Several schemes for implementing the subimage combination are described. Initial experimental results demonstrating the validity of the mosaic concept are also presented.
Ship Speed Retrieval From Single Channel TerraSAR-X Data
NASA Astrophysics Data System (ADS)
Soccorsi, Matteo; Lehner, Susanne
2010-04-01
A method to estimate the speed of a moving ship is presented. The technique, introduced in Kirscht (1998), is extended to marine application and validated on TerraSAR-X High-Resolution (HR) data. The generation of a sequence of single-look SAR images from a single-channel image corresponds to an image time series with reduced resolution. This allows change detection techniques to be applied to the time series to evaluate the range and azimuth velocity components of the ship. The evaluation of the displacement vector of a moving target in consecutive images of the sequence allows the estimation of the azimuth velocity component. The range velocity component is estimated by evaluating the variation of the signal amplitude during the sequence. In order to apply the technique to TerraSAR-X Spot Light (SL) data, a further processing step is needed: the phase has to be corrected as presented in Eineder et al. (2009) due to the SL acquisition mode; otherwise the image sequence cannot be generated. The analysis, validated where possible by the Automatic Identification System (AIS), was performed in the framework of the ESA project MARISS.
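The displacement of the target between consecutive images of the sequence can be estimated, for example, by phase correlation. The sketch below shows this generic estimator; it is one possible implementation of the displacement step, not necessarily the one used in the paper.

```python
import numpy as np

def displacement(img1, img2):
    """Estimate the integer-pixel shift between two sub-aperture images by
    phase correlation; with a known inter-look time this yields the azimuth
    velocity component of the target."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Wrap shifts larger than half the image size to negative displacements.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return shift  # (rows, cols) = (range, azimuth) displacement in pixels
```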
NASA Astrophysics Data System (ADS)
Coubard, F.; Brédif, M.; Paparoditis, N.; Briottet, X.
2011-04-01
Terrestrial geolocalized images are nowadays widely used on the Internet, mainly in urban areas, through immersion services such as Google Street View. In the long run, we seek to enhance the visualization of these images; for that purpose, radiometric corrections must be performed to free them from the illumination conditions at the time of acquisition. Given the simultaneously acquired 3D geometric model of the scene, obtained with LIDAR or vision techniques, we face an inverse problem where the illumination and the geometry of the scene are known and the reflectance of the scene is to be estimated. Our main contribution is the introduction of a symbolic ray-tracing rendering to generate parametric images, for quick evaluation and comparison with the acquired images. The proposed approach is then based on an iterative estimation of the reflectance parameters of the materials, using a single rendering pre-processing step. We validate the method on synthetic data with linear BRDF models and discuss the limitations of the proposed approach with more general non-linear BRDF models.
Rasmussen, Trine Bernholdt; Berg, Selina Kikkenborg; Dixon, Jane; Moons, Philip; Konradsen, Hanne
2016-12-01
Negative body perception has been reported in a number of patient populations. No instrument in Danish for measuring body image-related concerns has been available. Without such an instrument, understanding of the phenomenon in Danish-speaking populations is limited. The purpose of the study was thus to translate and validate a Danish version of the Body Image Quality of Life Inventory (BIQLI), in order to obtain a valid instrument applicable to healthcare research. The study consisted of two phases: (i) instrument adaptation, including forward and back translation, expert committee comparisons and cognitive interviewing, and (ii) empirical testing of the Danish version (BIQLI-DA) with subsequent psychometric evaluation. Hypothesised correlations with other measures, including body mass index (BMI), the Medical Outcome Short Form-8 (SF-8), the Patient Health Questionnaire-9 (PHQ-9), the General Anxiety Disorder-7 and the Symptom Check List-90-Revised (SCL-90-R®), were tested. In addition, exploratory factor analysis (EFA) and internal consistency at item and scale level were performed. The adapted instrument was found to be semantically sound, yet concerns about face validity did arise through cognitive interviews. Danish college students (n = 189, 65 men, mean age = 21.1 years) participated in the piloting of the BIQLI-DA. Convergent construct validity was demonstrated through associations with related constructs. Exploratory factor analysis revealed a potential subscale structure. Finally, results showed high internal consistency (Cronbach's alpha = 0.92). Support for the validity of the BIQLI-DA might have been strengthened by repeating cognitive interviews after layout alterations and by piloting the instrument on a larger sample. This study demonstrated tentative support for the validity of the Danish Body Image Quality of Life Inventory (BIQLI-DA) and found the measure to be reliable in terms of internal consistency. Further exploration of response processes and construct validity is needed. © 2016 Nordic College of Caring Science.
Development and validation of an open source quantification tool for DSC-MRI studies.
Gordaliza, P M; Mateos-Pérez, J M; Montesinos, P; Guzmán-de-Villoria, J A; Desco, M; Vaquero, J J
2015-03-01
This work presents the development of an open source tool for the quantification of dynamic susceptibility-weighted contrast-enhanced (DSC) perfusion studies. The development of this tool is motivated by the lack of open source tools implemented on open platforms to allow external developers to implement their own quantification methods easily and without the need of paying for a development license. This quantification tool was developed as a plugin for the ImageJ image analysis platform using the Java programming language. A modular approach was used in the implementation of the components, in such a way that the addition of new methods can be done without breaking any of the existing functionalities. For the validation process, images from seven patients with brain tumors were acquired and quantified with the presented tool and with a widely used clinical software package. The resulting perfusion parameters were then compared. Perfusion parameters and the corresponding parametric images were obtained. When no gamma-fitting is used, an excellent agreement with the tool used as a gold standard was obtained (R² > 0.8, and values are within the 95% CI limits in Bland-Altman plots). An open source tool that performs quantification of perfusion studies using magnetic resonance imaging has been developed and validated using a clinical software package. It works as an ImageJ plugin and the source code has been published with an open source license. Copyright © 2015 Elsevier Ltd. All rights reserved.
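As an illustration of the gamma-fitting option mentioned above, the following sketch fits a gamma-variate bolus model to a voxel's concentration-time curve with SciPy. The sampling times, initial guesses and noise level are illustrative assumptions, and the plugin itself is written in Java for ImageJ.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, t0, k, alpha, beta):
    """Gamma-variate bolus model commonly used in DSC-MRI quantification."""
    c = np.zeros_like(t)
    m = t > t0
    c[m] = k * (t[m] - t0) ** alpha * np.exp(-(t[m] - t0) / beta)
    return c

# Fit a voxel's concentration-time curve (assumed 1.5 s sampling, toy noise).
t = np.arange(0.0, 60.0, 1.5)
truth = gamma_variate(t, 10.0, 1.0, 3.0, 1.5)
conc = truth + 0.01 * np.random.default_rng(0).normal(size=t.size)
popt, _ = curve_fit(gamma_variate, t, conc, p0=(8.0, 0.5, 2.0, 2.0), maxfev=5000)

# A perfusion parameter such as rCBV is proportional to the area under the
# fitted curve (rectangle rule used here for simplicity).
rcbv = gamma_variate(t, *popt).sum() * (t[1] - t[0])
```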
In-flight edge response measurements for high-spatial-resolution remote sensing systems
NASA Astrophysics Data System (ADS)
Blonski, Slawomir; Pagnutti, Mary A.; Ryan, Robert; Zanoni, Vickie
2002-09-01
In-flight measurements of spatial resolution were conducted as part of the NASA Scientific Data Purchase Verification and Validation process. Characterization included remote sensing image products with ground sample distance of 1 meter or less, such as those acquired with the panchromatic imager onboard the IKONOS satellite and the airborne ADAR System 5500 multispectral instrument. Final image products were used to evaluate the effects of both the image acquisition system and image post-processing. Spatial resolution was characterized by full width at half maximum of an edge-response-derived line spread function. The edge responses were analyzed using the tilted-edge technique that overcomes the spatial sampling limitations of the digital imaging systems. As an enhancement to existing algorithms, the slope of the edge response and the orientation of the edge target were determined by a single computational process. Adjacent black and white square panels, either painted on a flat surface or deployed as tarps, formed the ground-based edge targets used in the tests. Orientation of the deployable tarps was optimized beforehand, based on simulations of the imaging system. The effects of such factors as acquisition geometry, temporal variability, Modulation Transfer Function compensation, and ground sample distance on spatial resolution were investigated.
Image re-sampling detection through a novel interpolation kernel.
Hilal, Alaa
2018-06-01
Image re-sampling, involved in re-size and rotation transformations, is an essential building block in typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present two original contributions in this paper. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process includes minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
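The abstract lists the properties the five kernel parameters control without giving the expression. A plausible windowed-oscillation form with those five degrees of freedom is sketched below; the phase parameter and the exact functional form are assumptions, not the paper's kernel.

```python
import numpy as np

def interp_kernel(t, A, omega, phi, sigma, T):
    """A five-parameter windowed-oscillation kernel: amplitude A, angular
    frequency omega, phase phi, standard deviation sigma, duration T.
    One plausible form matching the parameters listed in the abstract;
    the paper's exact expression is not reproduced here."""
    h = A * np.cos(omega * t + phi) * np.exp(-t ** 2 / (2.0 * sigma ** 2))
    return np.where(np.abs(t) <= T / 2.0, h, 0.0)

# Fitting the kernel parameters to observed re-sampling correlations would
# then proceed by gradient descent on an error function, as described above.
t = np.linspace(-3.0, 3.0, 601)
h = interp_kernel(t, A=1.0, omega=np.pi, phi=0.0, sigma=1.0, T=4.0)
```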
Summary of KOMPSAT-5 Calibration and Validation
NASA Astrophysics Data System (ADS)
Yang, D.; Jeong, H.; Lee, S.; Kim, B.
2013-12-01
Korean Multi-Purpose Satellite 5 (KOMPSAT-5), equipped with a high resolution X-band (9.66 GHz) Synthetic Aperture Radar (SAR), is planned for launch on August 22, 2013. With the satellite's primary mission objective being to provide Geographical Information System (GIS), Ocean monitoring and Land management, and Disaster and ENvironment monitoring (GOLDEN), it is expected that its applications for scientific research on geographical processes will be extensive. In order to meet its mission objective, KOMPSAT-5 will provide three different SAR imaging modes: High Resolution Mode (1 m resolution, 5 km swath), Standard Mode (3 m resolution, 30 km swath), and Wide Swath Mode (20 m resolution, 100 km swath). KOMPSAT-5 will be operated in a 550 km sun-synchronous, dawn-dusk orbit with a 28-day ground repeat cycle, providing valuable image information on the Earth's surface day or night and even in bad weather conditions. After a successful launch, the satellite will go through a Launch and Early Operation (LEOP) and In-Orbit Testing (IOT) period of about 6 months to carry out various tests on the satellite bus and payload systems. The satellite bus system will be tested during the first 3 weeks after launch, focusing on the Attitude and Orbit Control Subsystem (AOCS) and Integrated GPS Occultation Receiver (IGOR) calibration. With the completion of the bus system test, the SAR payload system will be calibrated during an initial In-Flight check period (11 weeks) through the joint effort of Thales Alenia Space Italy (TAS-I) and the Korea Aerospace Research Institute (KARI). Pointing and relative calibration will be carried out during this period by analyzing the Doppler frequency and antenna beam pattern of the microwave signal reflected from selected regions with uniform backscattering coefficients (e.g. the Amazon rainforest). A dedicated SAR calibration, called primary calibration, is allocated at the end of LEOP for 12 weeks to perform thorough calibration activities, including pointing, relative and absolute calibration as well as geolocation accuracy determination. Absolute calibration will be accomplished by determining the absolute radiometric accuracy using trihedral corner reflectors already deployed on calibration and validation sites located southeast of Ulaanbaatar, Mongolia. To establish a measure for assessing the final image products, the geolocation accuracies of image products with different imaging modes will be determined using deployed point targets and an available Digital Terrain Model (DTM), on different image processing levels. In summary, this paper presents the calibration and validation activities performed during the LEOP and IOT of KOMPSAT-5. The methodology and procedure of calibration and validation are explained, together with the results. Based on the results, applications of SAR image products to geophysical processes are also discussed.
Funding for the 2ND IAEA technical meeting on fusion data processing, validation and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenwald, Martin
The International Atomic Energy Agency (IAEA) will organize the second Technical Meeting on Fusion Data Processing, Validation and Analysis from 30 May to 02 June, 2017, in Cambridge, MA, USA. The meeting will be hosted by the MIT Plasma Science and Fusion Center (PSFC). The objective of the meeting is to provide a platform where a set of topics relevant to fusion data processing, validation and analysis are discussed with a view to the extrapolation needs of next-step fusion devices such as ITER. The validation and analysis of experimental data obtained from diagnostics used to characterize fusion plasmas are crucial for a knowledge-based understanding of the physical processes governing the dynamics of these plasmas. The meeting will aim at fostering, in particular, discussions of research and development results that set out or underline trends observed in the current major fusion confinement devices. General information on the IAEA, including its mission and organization, can be found at the IAEA website. Topics include: uncertainty quantification (UQ); model selection, validation, and verification (V&V); probability theory and statistical analysis; inverse problems and equilibrium reconstruction; integrated data analysis; real-time data analysis; machine learning; signal/image processing and pattern recognition; experimental design and synthetic diagnostics; and data management.
NASA Astrophysics Data System (ADS)
Miccoli, M.; Usai, A.; Tafuto, A.; Albertoni, A.; Togna, F.
2016-10-01
The propagation environment around airborne platforms may significantly degrade the performance of Electro-Optical (EO) self-protection systems installed onboard. To ensure a sufficient level of protection, it is necessary to understand which sensor/effector installation positions best guarantee that the aeromechanical turbulence, generated by the engine exhausts and the rotor downwash, does not interfere with the normal operation of the imaging systems. Since radiation propagation in turbulence is a hardly predictable process, a high-level approach was proposed in which, instead of studying the medium under turbulence, the turbulence effects on the imaging systems' processing are assessed by means of an equivalent statistical model representation, allowing the definition of a Turbulence index to classify different levels of turbulence intensity. Hence, a general measurement methodology for the degradation of imaging system performance in turbulence conditions was developed. The analysis of the performance degradation starts by evaluating the effects of turbulences with a given index on the image processing chain (i.e., thresholding, blob analysis). The processing-in-turbulence (PIT) index is then derived by combining the effects of the given turbulence on the different image processing primitive functions. By evaluating the corresponding PIT index for a sufficient number of testing directions, it is possible to map the performance degradation around the aircraft installation for a generic imaging system, and to identify the best installation positions for the sensors/effectors composing the EO self-protection suite.
High-order statistics of Weber local descriptors for image representation.
Han, Xian-Hua; Chen, Yen-Wei; Xu, Gang
2015-06-01
Highly discriminant visual features play a key role in different image classification applications. This study aims to realize a method for extracting highly discriminant features from images by exploring a robust local descriptor inspired by Weber's law. The investigated local descriptor is based on the fact that human perception of a pattern depends not only on the absolute intensity of the stimulus but also on the relative variance of the stimulus. Therefore, we first transform the original stimulus (the images in our study) into a differential excitation domain according to Weber's law, and then explore a local patch, called a micro-Texton, in the transformed domain as the Weber local descriptor (WLD). Furthermore, we propose to employ a parametric probability process to model the Weber local descriptors, and to extract the higher-order statistics of the model parameters for image representation. The proposed strategy can adaptively characterize the WLD space using a generative probability model, and then learn the parameters for better fitting the training space, which leads to a more discriminant representation of images. In order to validate the efficiency of the proposed strategy, we apply it to three different image classification applications, namely texture, food image and HEp-2 cell pattern recognition, which validates that our proposed strategy has advantages over state-of-the-art approaches.
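The differential excitation transform underlying WLD has a standard closed form, xi = arctan(sum_i (x_i - x_c) / x_c) over a 3x3 neighborhood, where x_c is the center pixel and x_i its eight neighbors. A sketch of this first step follows; the downstream parametric modeling of micro-Textons is not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve

def differential_excitation(img, eps=1e-6):
    """Weber-law differential excitation over a 3x3 neighborhood."""
    img = img.astype(np.float64)
    kernel = np.array([[1.0,  1.0, 1.0],
                       [1.0, -8.0, 1.0],
                       [1.0,  1.0, 1.0]])      # computes sum of (x_i - x_c)
    num = convolve(img, kernel, mode="reflect")
    return np.arctan(num / (img + eps))

# WLD "micro-Textons" are local patches taken in this transformed domain;
# their distribution is then modeled by a parametric probability process
# whose higher-order statistics form the image representation.
```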
Zhang, Zhiqing; Kuzmin, Nikolay V; Groot, Marie Louise; de Munck, Jan C
2017-06-01
The morphologies contained in 3D third harmonic generation (THG) images of human brain tissue can report on the pathological state of the tissue. However, the complexity of THG brain images makes the use of modern image processing tools, especially those for image filtering, segmentation and validation, to extract this information challenging. We developed a salient edge-enhancing model of anisotropic diffusion for image filtering, based on higher-order statistics. We split the intrinsic 3-phase segmentation problem into two 2-phase segmentation problems, each of which we solved with a dedicated model, an active contour weighted by prior extreme. We applied the proposed algorithms to THG images of structurally normal ex-vivo human brain tissue, revealing key tissue components (brain cells, microvessels and neuropil) and enabling statistical characterization of these components. Comprehensive comparison to manually delineated ground truth validated the proposed algorithms. Quantitative comparison to second harmonic generation/auto-fluorescence images, acquired simultaneously from the same tissue area, confirmed the correctness of the main THG features detected. The software and test datasets are available from the authors (z.zhang@vu.nl). Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
NASA Astrophysics Data System (ADS)
Guha, Arindam; Singh, Vivek Kr.; Parveen, Reshma; Kumar, K. Vinod; Jeyaseelan, A. T.; Dhanamjaya Rao, E. N.
2013-04-01
Bauxite deposits of Jharkhand in India result from the lateritization process and are therefore often associated with laterites. In the present study, an ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) image is processed to delineate bauxite-rich pockets within the laterites. In this regard, spectral signatures of lateritic bauxite samples are analyzed in the laboratory with reference to the spectral features of gibbsite (the main mineral constituent of bauxite) and goethite (the main mineral constituent of laterite) in the VNIR-SWIR (visible-near infrared and shortwave infrared) electromagnetic domain. The analysis of the spectral signatures of lateritic bauxite samples helps in understanding the differences between the spectral features of bauxites and laterites. Based on these differences, ASTER-based relative band depth and simple ratio images are derived for spatial mapping of the bauxites developed within the lateritic province. In order to integrate the complementary information of the different index images, an index-based principal component (IPC) image is derived to incorporate the correlative information of these indices and delineate bauxite-rich pockets. The occurrences of bauxite-rich pockets derived from the density-sliced IPC image are further delimited by topographic controls, as it has been observed that the major bauxite occurrences of the area are controlled by slope and altitude. In addition, the IPC image is draped over a digital elevation model (DEM) to illustrate how bauxite-rich pockets are distributed with reference to the topographic variability of the terrain. Bauxite-rich pockets delineated in the IPC image are validated against known mine occurrences and the existing geological map of the bauxite. They are also conceptually validated based on the spectral similarity of the bauxite pixels delineated in the IPC image with the ASTER-convolved laboratory spectra of bauxite samples.
Research on Synthetic Aperture Radar Processing for the Spaceborne Sliding Spotlight Mode.
Shen, Shijian; Nie, Xin; Zhang, Xinggan
2018-02-03
Gaofen-3 (GF-3) is China's first C-band multi-polarization synthetic aperture radar (SAR) satellite, and it also provides a sliding spotlight mode for the first time. Sliding spotlight mode is a novel mode that realizes imaging with not only high resolution but also wide swath. Several key technologies for the sliding spotlight mode in high-resolution spaceborne SAR are investigated in this paper, mainly including the imaging parameters, the methods of velocity estimation and ambiguity elimination, and the imaging algorithms. Based on the chosen Convolution BackProjection (CBP) and Polar Format Algorithm (PFA) imaging algorithms, a fast implementation method of CBP and a modified PFA method suitable for the sliding spotlight mode are proposed, and the processing flows are derived in detail. Finally, the algorithms are validated with simulations and measured data.
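For orientation, a minimal convolution backprojection (CBP) sketch is given below. It uses nearest-neighbor range interpolation and is a generic textbook formulation, not the fast implementation or the GF-3 processing chain proposed in the paper.

```python
import numpy as np

def backproject(rc_data, range_axis, platform_pos, grid_xyz, wavelength):
    """Minimal convolution backprojection sketch.

    rc_data:      (n_pulses, n_range) range-compressed complex echoes
    range_axis:   (n_range,) range axis of the compressed data, in meters
    platform_pos: (n_pulses, 3) antenna phase-center position per pulse
    grid_xyz:     (n_pix, 3) imaging-grid point coordinates
    """
    image = np.zeros(grid_xyz.shape[0], dtype=complex)
    dr = range_axis[1] - range_axis[0]
    for p in range(rc_data.shape[0]):
        # Instantaneous range from the antenna to every grid point.
        r = np.linalg.norm(grid_xyz - platform_pos[p], axis=1)
        idx = np.clip(((r - range_axis[0]) / dr).astype(int),
                      0, range_axis.size - 1)
        # Coherent accumulation with the matched phase term.
        image += rc_data[p, idx] * np.exp(1j * 4.0 * np.pi * r / wavelength)
    return image
```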
Visual Attention and Applications in Multimedia Technologies
Le Callet, Patrick; Niebur, Ernst
2013-01-01
Making technological advances in the field of human-machine interactions requires that the capabilities and limitations of the human perceptual system are taken into account. The focus of this report is an important mechanism of perception, visual selective attention, which is becoming more and more important for multimedia applications. We introduce the concept of visual attention and describe its underlying mechanisms. In particular, we introduce the concepts of overt and covert visual attention, and of bottom-up and top-down processing. Challenges related to modeling visual attention and their validation using ad hoc ground truth are also discussed. Examples of the usage of visual attention models in image and video processing are presented. We emphasize multimedia delivery, retargeting and quality assessment of image and video, medical imaging, and the field of stereoscopic 3D images applications. PMID:24489403
Barker, Matthew D; Purdy, Suzanne C
2016-01-01
This research investigates a novel method for identifying school-aged children with poor auditory processing and measuring their abilities using a tablet computer. Feasibility and test-retest reliability were investigated by examining the percentage of Group 1 participants able to complete the tasks and developmental effects on performance. Concurrent validity was investigated against traditional tests of auditory processing using Group 2. There were 847 students aged 5 to 13 years in Group 1, and 46 aged 5 to 14 years in Group 2. Some tasks could not be completed by the youngest participants. Significant correlations were found between the results of most auditory processing areas assessed by the Feather Squadron test and traditional auditory processing tests. Test-retest comparisons indicated good reliability for most of the Feather Squadron assessments and some of the traditional tests. The results indicate that the Feather Squadron assessment is a time-efficient, feasible, concurrently valid, and reliable approach for measuring auditory processing in school-aged children. Clinically, this may be a useful option for audiologists performing auditory processing assessments, as it is a relatively fast, engaging, and easy way to assess auditory processing abilities. Further research is needed to investigate the construct validity of this new assessment by examining the association between performance on Feather Squadron and objective evoked potential, lesion study, and/or functional imaging measures of auditory function.
SkySat-1: very high-resolution imagery from a small satellite
NASA Astrophysics Data System (ADS)
Murthy, Kiran; Shearn, Michael; Smiley, Byron D.; Chau, Alexandra H.; Levine, Josh; Robinson, M. Dirk
2014-10-01
This paper presents details of the SkySat-1 mission, which is the first microsatellite-class commercial earth-observation system to generate sub-meter resolution panchromatic imagery, in addition to sub-meter resolution 4-band pan-sharpened imagery. SkySat-1 was built and launched for an order of magnitude lower cost than similarly performing missions. The low-cost design enables the deployment of a large imaging constellation that can provide imagery with both high temporal resolution and high spatial resolution. One key enabler of the SkySat-1 mission was simplifying the spacecraft design and instead relying on ground-based image processing to achieve high performance at the system level. The imaging instrument consists of a custom-designed high-quality optical telescope and commercially available high frame rate CMOS image sensors. While each individually captured raw image frame shows moderate quality, ground-based image processing algorithms improve the raw data by combining data from multiple frames to boost the image signal-to-noise ratio (SNR) and decrease the ground sample distance (GSD) in a process Skybox calls "digital TDI". Careful quality assessment and tuning of the spacecraft, payload, and algorithms was necessary to generate high-quality panchromatic, multispectral, and pan-sharpened imagery. Furthermore, the framing sensor configuration enabled the first commercial High-Definition full-frame rate panchromatic video to be captured from space, with approximately 1 meter ground sample distance. Details of the SkySat-1 imaging instrument and ground-based image processing system are presented, as well as an overview of the work involved in calibrating and validating the system. Examples of raw and processed imagery are shown, and the raw imagery is compared to pre-launch simulated imagery used to tune the image processing algorithms.
Segmentation of the spinous process and its acoustic shadow in vertebral ultrasound images.
Berton, Florian; Cheriet, Farida; Miron, Marie-Claude; Laporte, Catherine
2016-05-01
Spinal ultrasound imaging is emerging as a low-cost, radiation-free alternative to conventional X-ray imaging for the clinical follow-up of patients with scoliosis. Currently, deformity measurement relies almost entirely on manual identification of key vertebral landmarks. However, the interpretation of vertebral ultrasound images is challenging, primarily because acoustic waves are entirely reflected by bone. To alleviate this problem, we propose an algorithm to segment these images into three regions: the spinous process, its acoustic shadow and other tissues. This method consists, first, of the extraction of several image features and the selection of those most relevant for discriminating the three regions. Then, using this feature set and linear discriminant analysis, each pixel of the image is classified as belonging to one of the three regions. Finally, the image is segmented by regularizing the pixel-wise classification results to account for some geometrical properties of vertebrae. The feature set was first validated by analyzing the classification results across a learning database. The database contained 107 vertebral ultrasound images acquired with convex and linear probes. Classification rates of 84%, 92% and 91% were achieved for the spinous process, the acoustic shadow and other tissues, respectively. Dice similarity coefficients of 0.72 and 0.88 were obtained for the spinous process and acoustic shadow, respectively, confirming that the proposed method accurately segments the spinous process and its acoustic shadow in vertebral ultrasound images. Furthermore, the centroid of the automatically segmented spinous process was located at an average distance of 0.38 mm from that of the manually labeled spinous process, which is on the order of the image resolution. This suggests that the proposed method is a promising tool for the measurement of the Spinous Process Angle and, more generally, for assisting ultrasound-based assessment of scoliosis progression. Copyright © 2016 Elsevier Ltd. All rights reserved.
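The pixel-wise classification stage can be sketched with scikit-learn's linear discriminant analysis. The feature matrix and labels below are random placeholders standing in for the image features and manual annotations of the learning database.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Pixel-wise three-class labeling sketch (spinous process / acoustic shadow /
# other tissues). In practice, `features` would hold one row of image
# features per pixel and `labels` the manually annotated class.
rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 12))      # placeholder feature vectors
labels = rng.integers(0, 3, size=5000)      # 0/1/2 = process/shadow/other

lda = LinearDiscriminantAnalysis()
lda.fit(features, labels)
pixel_classes = lda.predict(features)       # then regularized spatially
```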
Companion diagnostics and molecular imaging-enhanced approaches for oncology clinical trials.
Van Heertum, Ronald L; Scarimbolo, Robert; Ford, Robert; Berdougo, Eli; O'Neal, Michael
2015-01-01
In the era of personalized medicine, diagnostic approaches are helping pharmaceutical and biotechnology sponsors streamline the clinical trial process. Molecular assays and diagnostic imaging are routinely being used to stratify patients for treatment, monitor disease, and provide reliable early clinical phase assessments. The importance of diagnostic approaches in drug development is highlighted by the rapidly expanding global cancer diagnostics market and the emergent attention of regulatory agencies worldwide, who are beginning to offer more structured platforms and guidance for this area. In this paper, we highlight the key benefits of using companion diagnostics and diagnostic imaging with a focus on oncology clinical trials. Nuclear imaging using widely available radiopharmaceuticals in conjunction with molecular imaging of oncology targets has opened the door to more accurate disease assessment and the modernization of standard criteria for the evaluation, staging, and treatment responses of cancer patients. Furthermore, the introduction and validation of quantitative molecular imaging continues to drive and optimize the field of oncology diagnostics. Given their pivotal role in disease assessment and treatment, the validation and commercialization of diagnostic tools will continue to advance oncology clinical trials, support new oncology drugs, and promote better patient outcomes.
Venkatasubramanian, Ganesan; Puthumana, Dawn Thomas K.; Jayakumar, Peruvumba N.; Gangadhar, B. N.
2010-01-01
Background: Emotion processing abnormalities are considered among the core deficits in schizophrenia. Subjects at high risk (HR) for schizophrenia also show these deficits. Structural neuroimaging studies examining unaffected relatives at high risk for schizophrenia have demonstrated neuroanatomical abnormalities involving neo-cortical and sub-cortical brain regions related to emotion processing. The brain functional correlates of emotion processing in these HR subjects in the context of ecologically valid, real-life dynamic images using functional Magnetic Resonance Imaging (fMRI) has not been examined previously. Aim: To examine the neurohemodynamic abnormalities during emotion processing in unaffected subjects at high risk for schizophrenia in comparison with age-, sex-, handedness- and education-matched healthy controls, using fMRI. Materials and Methods: HR subjects for schizophrenia (n=17) and matched healthy controls (n=16) were examined. The emotion processing of fearful facial expression was examined using a culturally appropriate and valid tool for Indian subjects. The fMRI was performed in a 1.5-T scanner during an implicit emotion processing paradigm. The fMRI analyses were performed using the Statistical Parametric Mapping 2 (SPM2) software. Results: HR subjects had significantly reduced brain activations in left insula, left medial frontal gyrus, left inferior frontal gyrus, right cingulate gyrus, right precentral gyrus and right inferior parietal lobule. Hypothesis-driven region-of-interest analysis revealed hypoactivation of right amygdala in HR subjects. Conclusions: Study findings suggest that neurohemodynamic abnormalities involving limbic and frontal cortices could be potential indicators for increased vulnerability toward schizophrenia. The clinical utility of these novel findings in predicting the development of psychosis needs to be evaluated. PMID:21267363
Simulators for training in ultrasound guided procedures.
Farjad Sultan, Syed; Shorten, George; Iohom, Gabrielle
2013-06-01
The four major categories of skill sets associated with proficiency in ultrasound guided regional anaesthesia are 1) understanding device operations, 2) image optimization, 3) image interpretation and 4) visualization of needle insertion and injection of the local anesthetic solution. Of these, visualization of needle insertion and injection of local anaesthetic solution can be practiced using simulators and phantoms. This survey of existing simulators summarizes advantages and disadvantages of each. Current deficits pertain to the validation process.
NASA Technical Reports Server (NTRS)
Blackwell, R. J.
1982-01-01
Remote sensing data analysis for water quality monitoring is evaluated. Data analysis and image processing techniques are applied to LANDSAT remote sensing data to produce an effective operational tool for lake water quality surveying and monitoring. Digital image processing and analysis techniques were designed, developed, tested, and applied to LANDSAT multispectral scanner (MSS) data and conventional surface-acquired data. Utilization of these techniques facilitates the surveying and monitoring of large numbers of lakes in an operational manner. Supervised multispectral classification, when used in conjunction with surface-acquired water quality indicators, is used to characterize water body trophic status. Unsupervised multispectral classification, when interpreted by lake scientists familiar with a specific water body, yields classifications of equal validity to supervised methods and in a more cost-effective manner. Image database technology is used to great advantage in characterizing other effects contributing to water quality. These effects include drainage basin configuration, terrain slope, soil, precipitation and land cover characteristics.
The Landsat Data Continuity Mission Operational Land Imager (OLI) Radiometric Calibration
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Dabney, Philip W.; Murphy-Morris, Jeanine E.; Knight, Edward J.; Kvaran, Geir; Barsi, Julia A.
2010-01-01
The Operational Land Imager (OLI) on the Landsat Data Continuity Mission (LDCM) has a comprehensive radiometric characterization and calibration program beginning with the instrument design and extending through integration and test, on-orbit operations and science data processing. Key instrument design features for radiometric calibration include dual solar diffusers and multi-lamped on-board calibrators. The radiometric calibration transfer procedure from NIST standards includes multiple checks on the radiometric scale throughout the process and uses a heliostat as part of the transfer to orbit of the radiometric calibration. On-orbit lunar imaging will be used to track the instrument's stability, and side-slither maneuvers will be used in addition to the solar diffuser to flat-field across the thousands of detectors per band. A Calibration Validation Team is continuously involved in the process from design to operations. This team uses an Image Assessment System (IAS), part of the ground system, to characterize and calibrate the on-orbit data.
Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde
2017-10-01
Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to quantify the alignment in images robustly, with high sensitivity and reliability. Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g., a Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data are not Gaussian. Herein we present an efficient constant-time deterministic algorithm which characterizes the symmetry of the FT magnitude image in terms of a single parameter, the fiber alignment anisotropy R, ranging from 0 (randomized fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing it for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing its robustness against different perturbations.
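The paper's exact definition of R is not reproduced in the abstract; the sketch below computes an analogous 0-to-1 alignment anisotropy from the FT magnitude using the energy-weighted circular resultant length of doubled orientation angles, a standard deterministic construction with the same limiting behaviour (0 for random, 1 for perfect alignment).

```python
import numpy as np

def fiber_alignment_anisotropy(image):
    """Estimate an alignment anisotropy R in [0, 1] from the FT magnitude.

    This is a sketch of one standard construction, not the paper's exact
    formula: each frequency sample's orientation is weighted by spectral
    energy, and the resultant length of the doubled angles is returned.
    """
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    mag = np.abs(f)

    ny, nx = mag.shape
    y, x = np.mgrid[0:ny, 0:nx]
    u, v = x - nx // 2, y - ny // 2
    theta = np.arctan2(v, u)          # orientation of each frequency sample

    # Exclude the DC term and weight each orientation by spectral energy.
    w = mag.copy()
    w[ny // 2, nx // 2] = 0.0

    # Doubling the angle makes theta and theta + pi equivalent (fibers are
    # undirected); the resultant length of the doubled angles measures how
    # concentrated the orientation distribution is.
    c = np.sum(w * np.cos(2 * theta))
    s = np.sum(w * np.sin(2 * theta))
    return float(np.hypot(c, s) / np.sum(w))
```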
NASA Astrophysics Data System (ADS)
Tian, Biao; Liu, Yang; Xu, Shiyou; Chen, Zengping
2014-01-01
Interferometric inverse synthetic aperture radar (InISAR) imaging provides complementary information to monostatic inverse synthetic aperture radar (ISAR) imaging. This paper proposes a new InISAR imaging system for space targets based on wideband direct sampling using two antennas. The system is easy to realize in engineering, since the motion trajectory of space targets can be known in advance, and it is simpler than a three-receiver configuration. In the preprocessing step, high-speed motion compensation is carried out by designing an adaptive matched filter containing the speed obtained from the narrowband information. Then, coherent processing and the keystone transform for ISAR imaging are adopted to preserve the phase history of each antenna. Through appropriate collocation of the system, image registration and phase unwrapping can be avoided. For cases in which this condition is not satisfied, the influence of baseline variation is analyzed and a compensation method is adopted. The corresponding target size can be obtained by interferometric processing of the two complex ISAR images. Experimental results prove the validity of the analysis and of the three-dimensional imaging algorithm.
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh M.; Meir, Rinat; Zalevsky, Zeev
2017-02-01
Optical sectioning microscopy can provide highly detailed three-dimensional (3D) images of biological samples. However, it requires acquisition of many images per volume, and is therefore time consuming and may not be suitable for live cell 3D imaging. We propose the use of the modified Gerchberg-Saxton phase retrieval algorithm to enable full 3D imaging of a gold-nanoparticle-tagged sample using only two images. The reconstructed field is free-space propagated to all other focus planes in post processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because the phase retrieval is applied to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. The proposed concept is then further extended to the tracking of single fluorescent particles within a 3D cellular environment, based on image processing algorithms that significantly increase the localization accuracy of the 3D point spread function with respect to regular Gaussian fitting. All proposed concepts are validated both on simulated data and experimentally.
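For reference, a minimal sketch of the classic (unmodified) Gerchberg-Saxton iteration between an object-plane and a Fourier-plane amplitude measurement is given below; the authors' modified variant and the free-space propagation to other focus planes are not reproduced.

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_fourier, n_iter=200):
    """Classic Gerchberg-Saxton phase retrieval between two planes.

    amp_obj: measured amplitude (sqrt of intensity) in the object plane.
    amp_fourier: measured amplitude in the Fourier (far-field) plane.
    Returns a complex object-plane field consistent with both amplitudes.
    """
    # Start from the object amplitude with a random phase guess.
    rng = np.random.default_rng(0)
    field = amp_obj * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_obj.shape))
    for _ in range(n_iter):
        f = np.fft.fft2(field)
        # Keep the computed phase, enforce the measured Fourier amplitude.
        f = amp_fourier * np.exp(1j * np.angle(f))
        field = np.fft.ifft2(f)
        # Enforce the measured object-plane amplitude.
        field = amp_obj * np.exp(1j * np.angle(field))
    return field
```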
Automated image analysis for quantification of reactive oxygen species in plant leaves.
Sekulska-Nalewajko, Joanna; Gocławski, Jarosław; Chojak-Koźniewska, Joanna; Kuźniak, Elżbieta
2016-10-15
The paper presents an image processing method for the quantitative assessment of ROS accumulation areas in leaves stained with DAB or NBT for H2O2 and O2- detection, respectively. Three types of images, determined by the combination of staining method and background color, are considered. The method is based on the principle of supervised machine learning, with manually labeled image patterns used for training. The method's algorithm is implemented as a JavaScript macro in the public domain Fiji (ImageJ) environment. It allows selection of the stained regions of ROS-mediated histochemical reactions, subsequently fractionated according to weak, medium and intense staining intensity and thus ROS accumulation. It also evaluates total leaf blade area. The precision of ROS accumulation area detection is validated with the Dice Similarity Coefficient against the manually labeled patterns. The proposed framework reduces computational complexity, requires less image-processing expertise than competing methods once prepared, and represents a routine quantitative imaging assay for general histochemical image classification. Copyright © 2016 Elsevier Inc. All rights reserved.
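The Dice Similarity Coefficient used for validation has a compact closed form; a minimal sketch follows, assuming the two inputs are binary masks of the detected and manually labeled stained regions.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks (1 = stained)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # 2|A n B| / (|A| + |B|); define the empty-vs-empty case as perfect.
    return 1.0 if total == 0 else 2.0 * intersection / total
```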
Dias, Roberto A; Gonçalves, Bruno P; da Rocha, Joana F; da Cruz E Silva, Odete A B; da Silva, Augusto M F; Vieira, Sandra I
2017-12-01
Neurons are specialized cells of the Central Nervous System whose function is intricately related to the neuritic network they develop to transmit information. Morphological evaluation of this network and other neuronal structures is required to establish relationships between neuronal morphology and function, and may allow monitoring of physiological and pathophysiological alterations. Fluorescence-based microphotographs are the most widely used in cellular bioimaging, but phase contrast (PhC) microphotographs are easier to obtain, more affordable, and do not require invasive, complicated and disruptive techniques. Despite the various freeware tools available for fluorescence-based image analysis, few exist that can tackle the more elusive and harder-to-analyze PhC images. To overcome this, an interactive semi-automated image processing workflow was developed to easily extract relevant information (e.g., total neuritic length, average cell body area) from both PhC and fluorescence neuronal images. This workflow, named 'NeuronRead', was developed in the form of an ImageJ macro. Its robustness and adaptability were tested and validated on rat cortical primary neurons under control and differentiation-inhibitory conditions. Validation included a comparison to manual determinations and to a gold-standard freeware tool for fluorescence image analysis. NeuronRead was subsequently applied to PhC images of neurons at distinct differentiation days, exposed or not to DAPT, a pharmacological inhibitor of the γ-secretase enzyme, which cleaves the well-known Alzheimer's amyloid precursor protein (APP) and the Notch receptor. The data obtained confirm a neuritogenic regulatory role for γ-secretase products and validate NeuronRead as a time- and cost-effective monitoring tool. Copyright © 2017. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Ungermann, J.; Blank, J.; Dick, M.; Ebersoldt, A.; Friedl-Vallon, F.; Giez, A.; Guggenmoser, T.; Höpfner, M.; Jurkat, T.; Kaufmann, M.; Kaufmann, S.; Kleinert, A.; Krämer, M.; Latzko, T.; Oelhaf, H.; Olchewski, F.; Preusse, P.; Rolf, C.; Schillings, J.; Suminska-Ebersoldt, O.; Tan, V.; Thomas, N.; Voigt, C.; Zahn, A.; Zöger, M.; Riese, M.
2015-06-01
The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA) is an airborne infrared limb imager combining a two-dimensional infrared detector with a Fourier transform spectrometer. It was operated aboard the new German Gulfstream G550 High Altitude LOng Range (HALO) research aircraft during the Transport And Composition in the upper Troposphere/lowermost Stratosphere (TACTS) and Earth System Model Validation (ESMVAL) campaigns in summer 2012. This paper describes the retrieval of temperature and trace gas (H2O, O3, HNO3) volume mixing ratios from GLORIA dynamics mode spectra that are spectrally sampled every 0.625 cm-1. A total of 26 integrated spectral windows are employed in a joint fit to retrieve seven targets using consecutively a fast and an accurate tabulated radiative transfer model. Typical diagnostic quantities are provided including effects of uncertainties in the calibration and horizontal resolution along the line of sight. Simultaneous in situ observations by the Basic Halo Measurement and Sensor System (BAHAMAS), the Fast In-situ Stratospheric Hygrometer (FISH), an ozone detector named Fairo, and the Atmospheric chemical Ionization Mass Spectrometer (AIMS) allow a validation of retrieved values for three flights in the upper troposphere/lowermost stratosphere region spanning polar and sub-tropical latitudes. A high correlation is achieved between the remote sensing and the in situ trace gas data, and discrepancies can to a large extent be attributed to differences in the probed air masses caused by different sampling characteristics of the instruments. This 1-D processing of GLORIA dynamics mode spectra provides the basis for future tomographic inversions from circular and linear flight paths to better understand selected dynamical processes of the upper troposphere and lowermost stratosphere.
NASA Astrophysics Data System (ADS)
Ungermann, J.; Blank, J.; Dick, M.; Ebersoldt, A.; Friedl-Vallon, F.; Giez, A.; Guggenmoser, T.; Höpfner, M.; Jurkat, T.; Kaufmann, M.; Kaufmann, S.; Kleinert, A.; Krämer, M.; Latzko, T.; Oelhaf, H.; Olchewski, F.; Preusse, P.; Rolf, C.; Schillings, J.; Suminska-Ebersoldt, O.; Tan, V.; Thomas, N.; Voigt, C.; Zahn, A.; Zöger, M.; Riese, M.
2014-12-01
The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA) is an airborne infrared limb-imager combining a two-dimensional infrared detector with a Fourier transform spectrometer. It was operated aboard the new German Gulfstream G550 research aircraft HALO during the Transport And Composition in the upper Troposphere/lowermost Stratosphere (TACTS) and Earth System Model Validation (ESMVAL) campaigns in summer 2012. This paper describes the retrieval of temperature and trace gas (H2O, O3, HNO3) volume mixing ratios from GLORIA dynamics mode spectra. 26 integrated spectral windows are employed in a joint fit to retrieve seven targets using consecutively a fast and an accurate tabulated radiative transfer model. Typical diagnostic quantities are provided including effects of uncertainties in the calibration and horizontal resolution along the line-of-sight. Simultaneous in-situ observations by the BAsic HALO Measurement And Sensor System (BAHAMAS), the Fast In-Situ Stratospheric Hygrometer (FISH), FAIRO, and the Atmospheric chemical Ionization Mass Spectrometer (AIMS) allow a validation of retrieved values for three flights in the upper troposphere/lowermost stratosphere region spanning polar and sub-tropical latitudes. A high correlation is achieved between the remote sensing and the in-situ trace gas data, and discrepancies can to a large fraction be attributed to differences in the probed air masses caused by different sampling characteristics of the instruments. This 1-D processing of GLORIA dynamics mode spectra provides the basis for future tomographic inversions from circular and linear flight paths to better understand selected dynamical processes of the upper troposphere and lowermost stratosphere.
Validating a new methodology for optical probe design and image registration in fNIRS studies
Wijeakumar, Sobanawartiny; Spencer, John P.; Bohache, Kevin; Boas, David A.; Magnotta, Vincent A.
2015-01-01
Functional near-infrared spectroscopy (fNIRS) is an imaging technique that relies on the principle of shining near-infrared light through tissue to detect changes in hemodynamic activation. An important methodological issue encountered is the creation of optimized probe geometry for fNIRS recordings. Here, across three experiments, we describe and validate a processing pipeline designed to create an optimized, yet scalable probe geometry based on selected regions of interest (ROIs) from the functional magnetic resonance imaging (fMRI) literature. In experiment 1, we created a probe geometry optimized to record changes in activation from target ROIs important for visual working memory. Positions of the sources and detectors of the probe geometry on an adult head were digitized using a motion sensor and projected onto a generic adult atlas and a segmented head obtained from the subject's MRI scan. In experiment 2, the same probe geometry was scaled down to fit a child's head and later digitized and projected onto the generic adult atlas and a segmented volume obtained from the child's MRI scan. Using visualization tools and by quantifying the amount of intersection between target ROIs and channels, we show that out of 21 ROIs, 17 and 19 ROIs intersected with fNIRS channels from the adult and child probe geometries, respectively. Further, both the adult atlas and adult subject-specific MRI approaches yielded similar results and can be used interchangeably. However, results suggest that segmented heads obtained from MRI scans be used for registering children's data. Finally, in experiment 3, we further validated our processing pipeline by creating a different probe geometry designed to record from target ROIs involved in language and motor processing. PMID:25705757
Real-Time On-Board Processing Validation of MSPI Ground Camera Images
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.
2010-01-01
The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA, including its PowerPC440 processors, we have implemented a least squares fitting algorithm that extracts intensity and polarimetric parameters in real time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
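A hedged sketch of the kind of per-channel least-squares reduction described above is given below; the three-term harmonic basis is illustrative only and is not MSPI's actual instrument model.

```python
import numpy as np

def fit_polarimetry(samples, phase):
    """Least-squares reduction of raw modulated samples in one channel.

    samples: (N,) raw detector samples; phase: (N,) modulator phase.
    The signal is modeled as a linear combination of basis functions of
    the modulator phase (an assumed toy model); the fitted coefficients
    play the role of intensity-like and polarization-like parameters.
    """
    A = np.column_stack([np.ones_like(phase),
                         np.cos(2 * phase),
                         np.sin(2 * phase)])
    coeffs, *_ = np.linalg.lstsq(A, samples, rcond=None)
    return coeffs   # [intensity-like, q-like, u-like] terms
```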
Smartphone snapshot mapping of skin chromophores under triple-wavelength laser illumination
NASA Astrophysics Data System (ADS)
Spigulis, Janis; Oshina, Ilze; Berzina, Anna; Bykov, Alexander
2017-09-01
Chromophore distribution maps are useful tools for skin malformation severity assessment and for monitoring skin recovery after burns, surgeries, and other interventions. The chromophore maps can be obtained by processing several spectral images of skin, e.g., captured by hyperspectral or multispectral cameras over seconds or even minutes. To avoid motion artifacts and simplify the procedure, a single-snapshot technique for mapping melanin, oxyhemoglobin, and deoxyhemoglobin of in-vivo skin by a smartphone under simultaneous three-wavelength (448-532-659 nm) laser illumination is proposed and examined. Three monochromatic spectral images related to the illumination wavelengths were extracted from the smartphone camera RGB image data set, accounting for crosstalk between the RGB detection bands. The spectral images were further processed according to Beer's law in a three-chromophore approximation. Photon absorption path lengths in skin at the exploited wavelengths were estimated by means of Monte Carlo simulations. The technique was validated clinically on three kinds of skin lesions: nevi, hemangiomas, and seborrheic keratosis. The design of the developed add-on laser illumination system, image-processing details, and the results of clinical measurements are presented and discussed.
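Under Beer's law in a three-chromophore approximation, each pixel yields three absorbance values and three unknown concentrations, i.e., a 3x3 linear system per pixel. The sketch below shows this unmixing step; the matrix entries are placeholders, since the real coefficients come from published extinction spectra and the Monte Carlo path-length estimates mentioned above.

```python
import numpy as np

# Hypothetical 3x3 matrix of (extinction coefficient x path length) for
# melanin, oxyhemoglobin and deoxyhemoglobin at 448, 532 and 659 nm.
# Real values must come from literature spectra and Monte Carlo paths.
M = np.array([[1.10, 0.90, 0.75],
              [0.60, 1.40, 1.10],
              [0.30, 0.25, 0.80]])

def unmix_chromophores(absorbance):
    """Per-pixel three-chromophore unmixing under Beer's law.

    absorbance: (H, W, 3) array, A = -log10(I / I_ref) at the 3 wavelengths.
    Returns an (H, W, 3) array of relative chromophore concentrations.
    """
    h, w, _ = absorbance.shape
    a = absorbance.reshape(-1, 3).T    # 3 x N stack of right-hand sides
    c = np.linalg.solve(M, a)          # concentrations, 3 x N
    return c.T.reshape(h, w, 3)
```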
Image encryption using a synchronous permutation-diffusion technique
NASA Astrophysics Data System (ADS)
Enayatifar, Rasul; Abdullah, Abdul Hanan; Isnin, Ismail Fauzi; Altameem, Ayman; Lee, Malrey
2017-03-01
Over the past decade, interest in digital image security has increased among scientists. A synchronous permutation and diffusion technique is designed to protect gray-level image content while it is sent over the Internet. To implement the proposed method, the two-dimensional plain image is converted to one dimension. Afterward, to reduce the transmission time, the permutation and diffusion steps for each pixel are performed at the same time. The permutation step uses a chaotic map and deoxyribonucleic acid (DNA) to permute a pixel, while diffusion employs a DNA sequence and a DNA operator to encrypt the pixel. Experimental results and extensive security analyses demonstrate the feasibility and validity of the proposed image encryption method.
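A toy version of such a synchronous permutation-diffusion cipher is sketched below, with a logistic map driving both steps and XOR standing in for the paper's DNA encoding and DNA operators; the parameter values are illustrative only and this sketch makes no security claims.

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Generate n values of the logistic map x -> r*x*(1-x)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt(image, x0=0.3456, r=3.99):
    """Toy synchronous permutation-diffusion cipher for a gray-level image.

    The image is flattened to 1-D; one chaotic sequence drives both the
    permutation order and the byte-wise diffusion (XOR key stream).
    """
    flat = image.astype(np.uint8).ravel()
    chaos = logistic_sequence(x0, r, flat.size)
    perm = np.argsort(chaos)                   # chaotic permutation order
    key = (chaos * 256).astype(np.uint8)       # chaotic key stream
    cipher = np.bitwise_xor(flat[perm], key)   # permute + diffuse together
    return cipher.reshape(image.shape)

def decrypt(cipher, x0=0.3456, r=3.99):
    flat = cipher.ravel()
    chaos = logistic_sequence(x0, r, flat.size)
    perm = np.argsort(chaos)
    key = (chaos * 256).astype(np.uint8)
    permuted = np.bitwise_xor(flat, key)       # undo diffusion
    plain = np.empty_like(permuted)
    plain[perm] = permuted                     # invert the permutation
    return plain.reshape(cipher.shape)
```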
A statistical model for radar images of agricultural scenes
NASA Technical Reports Server (NTRS)
Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.
1982-01-01
The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.
Geometric modeling of the temporal bone for cochlea implant simulation
NASA Astrophysics Data System (ADS)
Todd, Catherine A.; Naghdy, Fazel; O'Leary, Stephen
2004-05-01
The first stage in the development of a clinically valid surgical simulator for training otologic surgeons in performing cochlear implantation is presented. For this purpose, a geometric model of the temporal bone has been derived from a cadaver specimen using the biomedical image processing software package Analyze (AnalyzeDirect, Inc), and its three-dimensional reconstruction is examined. Simulator construction begins with registration and processing of a Computed Tomography (CT) medical image sequence. Important anatomical structures of the middle and inner ear are identified and segmented from each scan in a semi-automated, threshold-based approach. Linear interpolation between image slices produces a three-dimensional volume dataset: the geometric model. Artefacts are effectively eliminated using a semi-automatic seeded region-growing algorithm, and unnecessary bony structures are removed. Once validated by an Ear, Nose and Throat (ENT) specialist, the model may be imported into the Reachin Application Programming Interface (API) (Reachin Technologies AB) for visual and haptic rendering associated with a virtual mastoidectomy. Interaction with the model is realized through a haptics interface, providing the user with accurate torque and force feedback. Electrode array insertion into the cochlea will be introduced in the final stage of design.
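The seeded region-growing step can be sketched compactly; below is a minimal 6-connected implementation for a CT volume, with a hypothetical intensity tolerance that would be chosen per structure.

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, tol=100):
    """Segment a connected region of a CT volume by seeded region growing.

    Starting from a user-supplied seed voxel (z, y, x), 6-connected
    neighbours are added while their intensity stays within tol of the
    seed intensity. tol is a hypothetical tolerance in intensity units.
    """
    mask = np.zeros(volume.shape, dtype=bool)
    ref = float(volume[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and abs(float(volume[nz, ny, nx]) - ref) <= tol):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask
```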
Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng
2015-07-28
Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used for scanning different subjects or the same subject at different times, which can result in large intensity variations. This intensity variation greatly undermines the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we propose a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was identified and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image also lies between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that a brain template built with normalization preprocessing is of higher quality than a template built without it. In summary, we have proposed a histogram-based MRI intensity normalization method that can normalize scans acquired on different MRI units; we have validated that it greatly improves image analysis performance, and, with its help, a higher quality Chinese brain template could be created.
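A minimal sketch of the two steps follows; the histogram-matching implementation below uses standard quantile mapping, which may differ in detail from the paper's stretching procedure.

```python
import numpy as np

def intensity_scale(img, lir, hir):
    """Step 1 (IS): linearly rescale the reference image's intensities so
    they span the range [LIR, HIR]."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min()) * (hir - lir) + lir

def histogram_normalize(low_quality, reference):
    """Step 2 (HN): stretch the histogram of the low-quality input image
    to match the (already IS-rescaled) reference histogram, so the
    normalized intensities also lie between LIR and HIR."""
    src = low_quality.ravel()
    ranks = np.argsort(np.argsort(src))            # rank of each pixel
    ref_sorted = np.sort(reference.ravel())
    # Map each source rank to the corresponding reference quantile.
    idx = (ranks * (ref_sorted.size - 1) / (src.size - 1)).astype(int)
    return ref_sorted[idx].reshape(low_quality.shape)
```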
Minimal camera networks for 3D image based modeling of cultural heritage objects.
Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma
2014-03-25
3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing effort when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu", a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured using a total station for external validation and scaling purposes. Two network filtering methods are implemented, and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.
Van Neste, Dominique
2014-01-01
The words "hair growth" frequently encompass many aspects other than growth itself. We report on a validation method for precise non-invasive measurement of thickness together with linear hair growth rates of individual hair fibres. Our aims were to verify the possible correlation between thickness and linear growth rate of scalp hair in male pattern hair loss as compared with healthy male controls, and to document the process of validating hair growth measurement from in vivo image capture and manual processing, followed by computer-assisted image analysis. We analysed 179 paired images obtained with the contrast-enhanced-phototrichogram method with exogen collection (CE-PTG-EC) in 13 healthy male controls and in 87 men with male pattern hair loss (MPHL). There was a global positive correlation between thickness and growth rate (ANOVA; p<0.0001) and a statistically significantly (ANOVA; p<0.0005) slower growth rate in MPHL as compared with equally thick hairs from controls. Finally, the growth rate recorded in the more severe patterns was significantly (ANOVA; p ≤ 0.001) reduced compared with equally thick hair from less severely affected MPHL subjects or controls. Reduced growth rate, together with thinning and shortening of the anagen phase duration in MPHL, might together contribute to the global impression of decreased hair volume on the top of the head. Amongst other structural and functional parameters characterizing hair follicle regression, linear hair growth rate warrants further investigation, as it may be relevant in terms of self-perception of hair coverage, quantitative diagnosis and prognosis of the therapeutic response.
Smartphone based automatic organ validation in ultrasound video.
Vaish, Pallavi; Bharath, R; Rajalakshmi, P
2017-07-01
Telesonography involves transmission of ultrasound video from remote areas to doctors for diagnosis. Due to the lack of trained sonographers in remote areas, ultrasound videos scanned by untrained persons often do not contain the information required by a physician. Rather than relying on standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To address this problem, we propose an organ validation algorithm that evaluates an ultrasound video based on its content, guiding the semi-skilled operator to acquire representative data from the patient. Advances in smartphone technology allow computationally intensive medical image processing to be performed on the smartphone itself. In this paper we have developed a smartphone application (app) which automatically detects the valid frames (with clear organ visibility) in an ultrasound video, ignores the invalid frames (with no organ visibility), and produces a video of compressed size. This is done by extracting GIST features from the Region of Interest (ROI) of each frame and then classifying the frame using an SVM classifier with a quadratic kernel. The developed application achieved an accuracy of 94.93% in classifying valid and invalid images.
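The classification stage maps directly onto a standard toolchain; the sketch below assumes precomputed GIST feature vectors (the GIST extraction itself is not shown) and uses a polynomial kernel of degree 2 as the quadratic kernel. The data shown are placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: GIST feature vectors extracted from the ROI of each frame (one row
# per frame); y: 1 = valid frame (clear organ), 0 = invalid frame.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))        # placeholder features
y = rng.integers(0, 2, size=200)       # placeholder labels

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="poly", degree=2))   # quadratic kernel
clf.fit(X[:150], y[:150])
print("validation accuracy:", clf.score(X[150:], y[150:]))
```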
Paz, Concepción; Conde, Marcos; Porteiro, Jacobo; Concheiro, Miguel
2017-01-01
This work introduces the use of machine vision for large-scale bubble recognition, which supports the validation of boiling models involving bubble dynamics, as well as nucleation frequency, active site density and bubble size. The two algorithms presented are meant to be run on quite standard images of the bubbling process, recorded in general-purpose boiling facilities. The recognition routines are easily adaptable to other facilities if a minimum number of precautions are taken in the setup and in the treatment of the information. Both the side and front projections of the subcooled flow-boiling phenomenon over a plain plate are covered. Once all of the intended bubbles have been located in space and time, proper post-processing of the recorded data makes it possible to track each of the recognized bubbles, sketch their trajectories and size evolution, locate the nucleation sites, compute their diameters, and so on. After validating the algorithms' output against the human eye and data from other researchers, machine vision systems have been demonstrated to be a very valuable option for performing the recognition process successfully, even when the optical analysis of bubbles was not set as the main goal of the experimental facility. PMID:28632158
Sensor Fusion to Estimate the Depth and Width of the Weld Bead in Real Time in GMAW Processes.
Bestard, Guillermo Alvarez; Sampaio, Renato Coral; Vargas, José A R; Alfaro, Sadek C Absi
2018-03-23
The arc welding process is widely used in industry but its automatic control is limited by the difficulty in measuring the weld bead geometry and closing the control loop on the arc, which has adverse environmental conditions. To address this problem, this work proposes a system to capture the welding variables and send stimuli to the Gas Metal Arc Welding (GMAW) conventional process with a constant voltage power source, which allows weld bead geometry estimation with an open-loop control. Dynamic models of depth and width estimators of the weld bead are implemented based on the fusion of thermographic data, welding current and welding voltage in a multilayer perceptron neural network. The estimators were trained and validated off-line with data from a novel algorithm developed to extract the features of the infrared image, a laser profilometer was implemented to measure the bead dimensions and an image processing algorithm that measures depth by making a longitudinal cut in the weld bead. These estimators are optimized for embedded devices and real-time processing and were implemented on a Field-Programmable Gate Array (FPGA) device. Experiments to collect data, train and validate the estimators are presented and discussed. The results show that the proposed method is useful in industrial and research environments.
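A minimal sketch of such a sensor-fusion estimator is given below, using a multilayer perceptron regressor on placeholder data; the feature dimensions, network size and units are assumptions, and the FPGA-optimized real-time implementation is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Inputs per sample: features fused from the infrared image (e.g. thermal
# width, peak temperature), welding current and welding voltage.
# Targets: weld bead depth and width measured off-line (profilometer and
# longitudinal-cut image processing). All data below are placeholders.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))                  # fused sensor features
y = rng.normal(size=(500, 2))                  # [depth, width] in mm

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1))
model.fit(X[:400], y[:400])                    # off-line training
pred = model.predict(X[400:])                  # open-loop geometry estimate
```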
A study of image quality for radar image processing. [synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.
1982-01-01
Methods developed for image quality metrics are reviewed, with focus on basic interpretation or recognition elements including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as ways of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized mean-square error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selected levels of degradation are applied to simulated synthetic aperture radar images to test the validity of these metrics.
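Metric (5), the normalized mean-square error, has several common normalizations; one simple form is sketched below, dividing the error energy by the reference image energy.

```python
import numpy as np

def normalized_mse(reference, degraded):
    """Normalized mean-square error between a reference image and a
    degraded version. Normalization conventions vary; here the error
    energy is divided by the energy of the reference image."""
    ref = reference.astype(float)
    deg = degraded.astype(float)
    return float(np.sum((ref - deg) ** 2) / np.sum(ref ** 2))
```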
Xie, Hongtu; Shi, Shaoying; Xiao, Hui; Xie, Chao; Wang, Feng; Fang, Qunle
2016-01-01
With the rapid development of one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge volume of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) for OS-BFSAR imaging processing, which accounts for motion errors, is presented. This method can not only precisely handle the large spatial variances, serious range-azimuth coupling and motion errors, but can also greatly improve imaging efficiency compared with the direct time-domain algorithm (DTDA). Besides, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements for the polar grids, considering motion errors, to offer a near-optimum tradeoff between imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging is provided. Second, the polar grids of subimages are defined, and the subaperture imaging in the ETDA is derived. The sampling requirements for polar grids are derived from the point of view of the bandwidth. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of efficiency. PMID:27845757
Processing, Cataloguing and Distribution of Uas Images in Near Real Time
NASA Astrophysics Data System (ADS)
Runkel, I.
2013-08-01
Why are UAS generating such hype? UAS make data capture flexible, fast and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture remains valid up to the end of the processing chain, all intermediate steps, such as data processing and data dissemination to the customer, need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution; this is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device and hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata is converted into an ISO-conformant format, and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, respectively the images, as OGC-conformant services to the customer. Within seconds, the UAV images are ready to use for GIS applications, image processing or direct interpretation via web applications - wherever you want. The whole processing chain is built in a generic manner and can be adapted to a multitude of applications. The UAV images can be processed and catalogued as single orthoimages or as an image mosaic. Furthermore, image data from various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows, such as change detection layers, can be calculated and provided to the image analysts. The WPS processing runs directly on the raster data management server; the image analyst needs no data and no software on his local computer. This workflow has proven to be fast, stable and accurate. It is designed to support time-critical applications for security demands - the images can be checked and interpreted in near real time. For sensitive areas, it offers the possibility to inform remote decision makers or interpretation experts in order to provide them with situational awareness, wherever they are. For monitoring and inspection tasks, it speeds up the process of data capture and data interpretation. The fully automated workflow of data pre-processing, georeferencing, cataloguing and dissemination in near real time was developed based on the Intergraph products ERDAS IMAGINE, ERDAS APOLLO and GEOSYSTEMS METAmorph!IT. It is offered as an adaptable solution by GEOSYSTEMS GmbH.
Rahman, Zia Ur; Sethi, Pooja; Murtaza, Ghulam; Virk, Hafeez Ul Hassan; Rai, Aitzaz; Mahmod, Masliza; Schoondyke, Jeffrey; Albalbissi, Kais
2017-01-01
Cardiovascular disease is a leading cause of morbidity and mortality globally. Early diagnostic markers are gaining popularity for improving patient care and disease outcomes, and there is increasing interest in noninvasive cardiac imaging biomarkers for diagnosing subclinical cardiac disease. Feature tracking cardiac magnetic resonance imaging is a novel post-processing technique that is increasingly being employed to assess global and regional myocardial function. This technique has numerous applications in structural and functional diagnostics. It has been validated in multiple studies, although there is still a long way to go before it becomes routine standard of care. PMID:28515849
Real-time photo-magnetic imaging.
Nouizi, Farouk; Erkol, Hakan; Luk, Alex; Unlu, Mehmet B; Gulsen, Gultekin
2016-10-01
We previously introduced a new high-resolution diffuse optical imaging modality termed photo-magnetic imaging (PMI). PMI irradiates the object under investigation with near-infrared light and monitors the variations of temperature using magnetic resonance thermometry (MRT). In this paper, we present a real-time PMI image reconstruction algorithm that uses analytic methods to solve the forward problem and assemble the Jacobian matrix much faster. The new algorithm is validated using real MRT-measured temperature maps. It accelerates the reconstruction process by more than 250 times compared to a single iteration of the FEM-based algorithm, which opens the possibility of real-time PMI.
Homographic Patch Feature Transform: A Robustness Registration for Gastroscopic Surgery.
Hu, Weiling; Zhang, Xu; Wang, Bin; Liu, Jiquan; Duan, Huilong; Dai, Ning; Si, Jianmin
2016-01-01
Image registration is a key component of computer assistance in image-guided surgery, and it is a challenging topic in endoscopic environments. In this study, we present an image registration method named Homographic Patch Feature Transform (HPFT) for matching gastroscopic images. HPFT can be used for tracking lesions and for augmented reality applications during gastroscopy. Furthermore, an overall evaluation scheme is proposed to validate the precision, robustness and uniformity of the registration results, which provides a standard for rejecting false matching pairs from the corresponding results. Finally, HPFT is applied to in vivo gastroscopic data. The experimental results show that HPFT has stable performance in gastroscopic applications.
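HPFT itself is a custom descriptor, but the homography-plus-inlier-filtering pipeline it builds on can be illustrated with standard components; the sketch below uses ORB features and RANSAC as stand-ins, and the inlier mask is the kind of quantity the paper's false-match rejection standard evaluates.

```python
import cv2
import numpy as np

def register_pair(img_a, img_b, min_matches=10):
    """Generic homography-based registration of two gastroscopic frames.

    ORB features and RANSAC stand in for the paper's custom Homographic
    Patch Feature Transform; this illustrates only the generic step
    (feature match -> homography -> inlier filtering), not HPFT itself.
    """
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < min_matches:
        return None, None
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects false matching pairs; H maps frame A onto frame B.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inliers
```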
Smartphones as image processing systems for prosthetic vision.
Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J
2013-01-01
The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.
Quantitative image quality evaluation of MR images using perceptual difference models
Miao, Jun; Huo, Donglai; Wilson, David L.
2008-01-01
The authors are using a perceptual difference model (Case-PDM) to quantitatively evaluate the image quality of the thousands of test images which can be created when optimizing fast magnetic resonance (MR) imaging strategies and reconstruction techniques. In this validation study, they compared human evaluation of MR images from multiple organs and from multiple image reconstruction algorithms to Case-PDM and similar models. The authors found that Case-PDM compared very favorably to human observers in double-stimulus continuous-quality scale and functional measurement theory studies over a large range of image quality. The Case-PDM threshold for nonperceptible differences in a 2-alternative forced choice study varied with the type of image under study, but was ≈1.1 for diffuse image effects, providing a rule of thumb. Ordering the image quality evaluation models, the authors found overall: Case-PDM ≈ IDM (Sarnoff Corporation) ≈ SSIM [Wang et al. IEEE Trans. Image Process. 13, 600-612 (2004)] > mean squared error ≈ NR [Wang et al. (2004) (unpublished)] > DCTune (NASA) > IQM (MITRE Corporation). The authors conclude that Case-PDM is very useful in MR image evaluation, but that one should probably restrict studies to similar images and similar processing, normally not a limitation in image reconstruction studies. PMID:18649487
Twofold processing for denoising ultrasound medical images.
Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y
2015-01-01
Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise corrupts ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 pixel non-overlapping block sizes. This first fold reduces speckle effectively but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF), to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparing it with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India.
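The first fold (block-based wavelet thresholding) can be sketched with standard tools; below, each tile of every detail subband is thresholded with a tile-local noise estimate. The block size, wavelet choice and threshold rule are assumptions, and the adaptive fusion second fold is not reproduced.

```python
import numpy as np
import pywt

def block_threshold_denoise(img, block=16, mode="soft", k=1.0):
    """Block-based thresholding of wavelet detail coefficients.

    Each block x block tile of every detail subband is thresholded with
    its own threshold, k times the tile's robust noise estimate. mode is
    "soft" (BST-like) or "hard" (BHT-like)."""
    coeffs = pywt.wavedec2(img.astype(float), "db4", level=2)
    out = [coeffs[0]]                         # keep approximation intact
    for level in coeffs[1:]:
        new_level = []
        for band in level:
            band = band.copy()
            for i in range(0, band.shape[0], block):
                for j in range(0, band.shape[1], block):
                    tile = band[i:i + block, j:j + block]
                    sigma = np.median(np.abs(tile)) / 0.6745  # noise estimate
                    band[i:i + block, j:j + block] = pywt.threshold(
                        tile, k * sigma, mode=mode)
            new_level.append(band)
        out.append(tuple(new_level))
    return pywt.waverec2(out, "db4")
```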
Dixit, Sudeepa; Fox, Mark; Pal, Anupam
2014-01-01
Magnetic resonance imaging (MRI) has advantages for the assessment of gastrointestinal structures and functions; however, processing MRI data is time consuming and this has limited uptake to a few specialist centers. This study introduces a semiautomatic image processing system for rapid analysis of gastrointestinal MRI. For assessment of simpler regions of interest (ROI) such as the stomach, the system generates virtual images along arbitrary planes that intersect the ROI edges in the original images. This generates seed points that are joined automatically to form contours on each adjacent two-dimensional image and reconstructed in three dimensions (3D). An alternative thresholding approach is available for rapid assessment of complex structures like the small intestine. For assessment of dynamic gastrointestinal function, such as gastric accommodation and emptying, the initial 3D reconstruction is used as reference to process adjacent image stacks automatically. This generates four-dimensional (4D) reconstructions of dynamic volume change over time. Compared with manual processing, this semiautomatic system reduced the user input required to analyze a MRI gastric emptying study (estimated 100 vs. 10,000 mouse clicks). This analysis was not subject to variation in volume measurements seen between three human observers. In conclusion, the image processing platform presented processed large volumes of MRI data, such as that produced by gastric accommodation and emptying studies, with minimal user input. 3D and 4D reconstructions of the stomach and, potentially, other gastrointestinal organs are produced faster and more accurately than manual methods. This system will facilitate the application of MRI in gastrointestinal research and clinical practice. PMID:25540229
WFIRST: Update on the Coronagraph Science Requirements
NASA Astrophysics Data System (ADS)
Douglas, Ewan S.; Cahoy, Kerri; Carlton, Ashley; Macintosh, Bruce; Turnbull, Margaret; Kasdin, Jeremy; WFIRST Coronagraph Science Investigation Teams
2018-01-01
The WFIRST Coronagraph instrument (CGI) will enable direct imaging and low resolution spectroscopy of exoplanets in reflected light and imaging polarimetry of circumstellar disks. The CGI science investigation teams were tasked with developing a set of science requirements which advance our knowledge of exoplanet occurrence and atmospheric composition, as well as the composition and morphology of exozodiacal debris disks, cold Kuiper Belt analogs, and protoplanetary systems. We present the initial content, rationales, validation, and verification plans for the WFIRST CGI, informed by detailed and still-evolving instrument and observatory performance models. We also discuss our approach to the requirements development and management process, including the collection and organization of science inputs, open source approach to managing the requirements database, and the range of models used for requirements validation. These tools can be applied to requirements development processes for other astrophysical space missions, and may ease their management and maintenance. These WFIRST CGI science requirements allow the community to learn about and provide insights and feedback on the expected instrument performance and science return.
ESARR: enhanced situational awareness via road sign recognition
NASA Astrophysics Data System (ADS)
Perlin, V. E.; Johnson, D. B.; Rohde, M. M.; Lupa, R. M.; Fiorani, G.; Mohammad, S.
2010-04-01
The enhanced situational awareness via road sign recognition (ESARR) system provides vehicle position estimates in the absence of GPS signal via automated processing of roadway fiducials (primarily directional road signs). Sign images are detected and extracted from vehicle-mounted camera system, and preprocessed and read via a custom optical character recognition (OCR) system specifically designed to cope with low quality input imagery. Vehicle motion and 3D scene geometry estimation enables efficient and robust sign detection with low false alarm rates. Multi-level text processing coupled with GIS database validation enables effective interpretation even of extremely low resolution low contrast sign images. In this paper, ESARR development progress will be reported on, including the design and architecture, image processing framework, localization methodologies, and results to date. Highlights of the real-time vehicle-based directional road-sign detection and interpretation system will be described along with the challenges and progress in overcoming them.
Key management of the double random-phase-encoding method using public-key encryption
NASA Astrophysics Data System (ADS)
Saini, Nirmala; Sinha, Aloka
2010-03-01
Public-key encryption has been used to encode the key of the encryption process. In the proposed technique, an input image is encrypted using the double random-phase-encoding method with the extended fractional Fourier transform. The key of the encryption process is encoded using the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. The encoded key is then transmitted to the receiver side along with the encrypted image. In the decryption process, the encoded key is first decrypted using the secret key, and then the encrypted image is decrypted using the retrieved key parameters. The proposed technique has an advantage over the double random-phase-encoding method alone, because the problem associated with the transmission of the key is eliminated by using public-key encryption. Computer simulation has been carried out to validate the proposed technique.
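For orientation, a minimal sketch of classic double random-phase encoding is given below, using the ordinary Fourier transform in place of the paper's extended fractional Fourier transform; the two mask seeds are the key material that would be RSA-encrypted and sent with the ciphertext (the RSA step itself is not shown).

```python
import numpy as np

def drpe_encrypt(img, seed1, seed2):
    """Classic double random-phase encoding in the Fourier domain.

    seed1/seed2 generate the input-plane and Fourier-plane random phase
    masks; in the scheme above they would be RSA-encrypted for transport.
    Returns a complex-valued ciphertext image."""
    r1 = np.random.default_rng(seed1).uniform(0, 2 * np.pi, img.shape)
    r2 = np.random.default_rng(seed2).uniform(0, 2 * np.pi, img.shape)
    field = img.astype(float) * np.exp(1j * r1)      # input-plane mask
    field = np.fft.fft2(field) * np.exp(1j * r2)     # Fourier-plane mask
    return np.fft.ifft2(field)

def drpe_decrypt(cipher, seed1, seed2):
    r1 = np.random.default_rng(seed1).uniform(0, 2 * np.pi, cipher.shape)
    r2 = np.random.default_rng(seed2).uniform(0, 2 * np.pi, cipher.shape)
    field = np.fft.fft2(cipher) * np.exp(-1j * r2)   # undo second mask
    return np.abs(np.fft.ifft2(field) * np.exp(-1j * r1))
```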
NASA Astrophysics Data System (ADS)
Pesaresi, Martino; Ouzounis, Georgios K.; Gueguen, Lionel
2012-06-01
A new compact representation of differential morphological profile (DMP) vector fields is presented. It is referred to as the CSL model and is conceived to radically reduce the dimensionality of the DMP descriptors. The model maps three characteristic parameters, namely scale, saliency and level, into the RGB space through an HSV transform. The result is a medium-abstraction semantic layer used for visual exploration, image information mining and pattern classification. Fused with the PANTEX built-up presence index, the CSL model converges to an approximate building footprint representation layer in which color represents building class labels. This process is demonstrated on the first high resolution (HR) global human settlement layer (GHSL) computed from multi-modal HR and VHR satellite images. Results of the first massive processing exercise involving several thousands of scenes around the globe are reported along with validation figures.
Huang, Hui; Liu, Li; Ngadi, Michael O; Gariépy, Claude; Prasher, Shiv O
2014-01-01
Marbling is an important quality attribute of pork. Detection of pork marbling usually involves subjective scoring, which is inefficient and costly for the processor. In this study, the ability to predict pork marbling using near-infrared (NIR) hyperspectral imaging (900-1700 nm), together with appropriate image processing techniques, was studied. Near-infrared images were collected from pork after marbling evaluation according to the current standard chart of the National Pork Producers Council. Image analysis techniques (Gabor filter, wide line detector, and spectral averaging) were applied to extract texture, line, and spectral features, respectively, from the NIR images of pork. Samples were grouped into calibration and validation sets. Wavelength selection was performed on the calibration set by a stepwise regression procedure. Prediction models of pork marbling scores were built using multiple linear regression based on derivatives of the mean spectra and line features at key wavelengths. The results showed that the derivatives of both texture and spectral features produced good results, with validation correlation coefficients of 0.90 and 0.86, respectively, using wavelengths of 961, 1186, and 1220 nm. The results reveal the great potential of the Gabor filter for analyzing NIR images of pork for effective and efficient objective evaluation of pork marbling.
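The Gabor texture-feature step can be illustrated with standard filters; in the sketch below the frequencies and orientations are illustrative, and the paper's actual filter-bank settings are not reproduced.

```python
import numpy as np
from skimage.filters import gabor

def gabor_texture_features(band_image, frequencies=(0.05, 0.1, 0.2),
                           angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Extract simple Gabor texture features from one spectral band of a
    NIR hyperspectral image. For each frequency/orientation pair, the
    mean and standard deviation of the filter response magnitude are
    collected into a feature vector for regression."""
    feats = []
    for f in frequencies:
        for theta in angles:
            real, imag = gabor(band_image, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.std()])
    return np.array(feats)
```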
Davidson, Benjamin; Kalitzeos, Angelos; Carroll, Joseph; Dubra, Alfredo; Ourselin, Sebastien; Michaelides, Michel; Bergeles, Christos
2018-05-21
We present a robust deep learning framework for the automatic localisation of cone photoreceptor cells in Adaptive Optics Scanning Light Ophthalmoscope (AOSLO) split-detection images. Monitoring cone photoreceptors with AOSLO imaging grants an excellent view into retinal structure and health, provides new perspectives into well known pathologies, and allows clinicians to monitor the effectiveness of experimental treatments. The MultiDimensional Recurrent Neural Network (MDRNN) approach developed in this paper is the first method capable of reliably and automatically identifying cones in both healthy retinas and retinas afflicted with Stargardt disease. Therefore, it represents a leap forward in the computational image processing of AOSLO images, and can provide clinical support in on-going longitudinal studies of disease progression and therapy. We validate our method using images from healthy subjects and subjects with the inherited retinal pathology Stargardt disease, which significantly alters image quality and cone density. We conduct a thorough comparison of our method with current state-of-the-art methods, and demonstrate that the proposed approach is both more accurate and appreciably faster in localizing cones. As further validation to the method's robustness, we demonstrate it can be successfully applied to images of retinas with pathologies not present in the training data: achromatopsia, and retinitis pigmentosa.
Kawooya, Michael G.; Pariyo, George; Malwadde, Elsie Kiguli; Byanyima, Rosemary; Kisembo, Harrient
2012-01-01
Objectives: Uganda has limited health resources, and improving the performance of personnel involved in imaging is necessary for efficiency. The objectives of the study were to develop and pilot imaging user performance indices, document non-tangible aspects of performance, and propose ways of improving performance. Materials and Methods: This was a cross-sectional survey employing triangulation methodology, conducted in Mulago National Referral Hospital over a period of 3 years from 2005 to 2008. The qualitative study used in-depth interviews, focus group discussions, and self-administered questionnaires to explore clinicians' and radiologists' performance-related views. Results: The study produced the following indices: appropriate service utilization (ASU), appropriateness of clinician's non-imaging decisions (ANID), and clinical utilization of imaging results (CUI). The ASU, ANID, and CUI were 94%, 80%, and 97%, respectively. The clinicians' requisitioning validity was high (positive likelihood ratio of 10.6), contrasting with a poor validity for detecting those patients not needing imaging (negative likelihood ratio of 0.16). Some requisitions were inappropriate, and some requisitions and reports lacked detail, clarity, and precision. Conclusion: Clinicians perform well at imaging requisition decisions, but there are issues in imaging requisitioning and reporting that need to be addressed to improve performance. PMID:23230543
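The requisitioning validity figures are standard likelihood ratios; a small worked sketch of the arithmetic, with hypothetical sensitivity and specificity values chosen to roughly reproduce the ratios reported above (the abstract itself reports only the ratios):

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# Hypothetical values: sensitivity 0.85, specificity 0.92
lr_pos, lr_neg = likelihood_ratios(0.85, 0.92)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")  # LR+ = 10.6, LR- = 0.16
```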
Single-Scale Fusion: An Effective Approach to Merging Images.
Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C
2017-01-01
Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights regarding why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
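A minimal sketch of the single-level idea, assuming pre-computed weight maps: smooth the normalized weights once and blend at one scale, rather than blending per pyramid level. This is an illustrative approximation, not the authors' exact SSF derivation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_fuse(images, weights, sigma=5.0):
    """Fuse images with smoothed, normalized weight maps at a single scale.

    images, weights: lists of 2-D float arrays of equal shape; sigma is an
    illustrative smoothing parameter.
    """
    w = [gaussian_filter(wi, sigma) for wi in weights]
    norm = np.sum(w, axis=0) + 1e-12          # avoid division by zero
    return sum(wi * im for wi, im in zip(w, images)) / norm
```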
An automated dose tracking system for adaptive radiation therapy.
Liu, Chang; Kim, Jinkoo; Kumarasiri, Akila; Mayyas, Essa; Brown, Stephen L; Wen, Ning; Siddiqui, Farzan; Chetty, Indrin J
2018-02-01
The implementation of adaptive radiation therapy (ART) into routine clinical practice is technically challenging and requires significant resources to perform and validate each process step. The objective of this report is to identify the key components of ART, to illustrate how a specific automated procedure improves efficiency, and to facilitate the routine clinical application of ART. Patient image data were exported from a clinical database and converted to an intermediate format for point-wise dose tracking and accumulation. The process was automated using in-house developed software containing three modularized components: an ART engine, user interactive tools, and integration tools. The ART engine conducts computing tasks using the following modules: data importing, image pre-processing, dose mapping, dose accumulation, and reporting. In addition, custom graphical user interfaces (GUIs) were developed to allow user interaction with select processes such as deformable image registration (DIR). A commercial scripting application programming interface was used to incorporate automated dose calculation for application in routine treatment planning. Each module was considered an independent program, written in C++ or C#, running in a distributed Windows environment, scheduled and monitored by integration tools. The automated tracking system was retrospectively evaluated for 20 patients with prostate cancer and 96 patients with head and neck cancer, under institutional review board (IRB) approval. In addition, the system was evaluated prospectively using 4 patients with head and neck cancer. Altogether, 780 prostate dose fractions and 2586 head and neck cancer dose fractions were processed, including DIR and dose mapping. On average, daily cumulative dose was computed in 3 h, and the manual work was limited to 13 min per case, with approximately 10% of cases requiring an additional 10 min for image registration refinement. An efficient and convenient dose tracking system for ART in the clinical setting is presented. The software and automated processes were rigorously evaluated and validated using patient image datasets. Automation of the various procedures has improved efficiency significantly, allowing for the routine clinical application of ART for improving radiation therapy effectiveness. Copyright © 2017 Elsevier B.V. All rights reserved.
Lam, France; Cladière, Damien; Guillaume, Cyndélia; Wassmann, Katja; Bolte, Susanne
2017-02-15
In the presented work we aimed at improving confocal imaging to obtain the highest possible resolution in thick biological samples, such as the mouse oocyte. We therefore developed an image processing workflow that allows improving the lateral and axial resolution of a standard confocal microscope. Our workflow comprises refractive index matching, the optimization of microscope hardware parameters, and image restoration by deconvolution. We compare two different deconvolution algorithms, evaluate the necessity of denoising, and establish the optimal image restoration procedure. We validate our workflow by imaging sub-resolution fluorescent beads and measuring the maximum lateral and axial resolution of the confocal system. Subsequently, we apply the parameters to the imaging and data restoration of fluorescently labelled meiotic spindles of mouse oocytes. We measure a resolution increase of approximately 2-fold in the lateral and 3-fold in the axial direction throughout a depth of 60 μm. This demonstrates that with our optimized workflow we reach a resolution that is comparable to 3D-SIM imaging, but with better depth penetration, for confocal images of beads and the biological sample. Copyright © 2016 Elsevier Inc. All rights reserved.
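The deconvolution stage can be illustrated with scikit-image's Richardson-Lucy algorithm on a toy point source; the abstract does not name the two algorithms the authors compared, so this is only a stand-in for that restoration step:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import restoration

# Toy example: blur a point source with a Gaussian PSF, then deconvolve.
img = np.zeros((64, 64)); img[32, 32] = 1.0
psf = np.zeros((15, 15)); psf[7, 7] = 1.0
psf = gaussian_filter(psf, 2.0)
psf /= psf.sum()                      # PSF must be normalized
blurred = gaussian_filter(img, 2.0)

# Third positional argument is the iteration count
restored = restoration.richardson_lucy(blurred, psf, 30)
print(restored.shape)  # (64, 64)
```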
Streak detection and analysis pipeline for optical images
NASA Astrophysics Data System (ADS)
Virtanen, J.; Granvik, M.; Torppa, J.; Muinonen, K.; Poikonen, J.; Lehti, J.; Säntti, T.; Komulainen, T.; Flohrer, T.
2014-07-01
We describe a novel data processing and analysis pipeline for optical observations of moving objects, either of natural (asteroids, meteors) or artificial origin (satellites, space debris). The monitoring of the space object populations requires reliable acquisition of observational data to support the development and validation of population models, and to build and maintain catalogues of orbital elements. The orbital catalogues are, in turn, needed for the assessment of close approaches (for asteroids, with the Earth; for satellites, with each other) and for the support of contingency situations or launches. For both types of populations, there is also increasing interest in detecting fainter objects corresponding to the small end of the size distribution. We focus on the low signal-to-noise ratio (SNR) detection of objects with high angular velocities, resulting in long and faint object trails, or streaks, in the optical images. The currently available, mature image processing algorithms for detection and astrometric reduction of optical data cover objects that cross the sensor field-of-view comparably slowly, and, particularly for satellites, within a rather narrow, predefined range of angular velocities. By applying specific tracking techniques, the objects appear point-like or as short trails in the exposures. However, the general survey scenario is always a 'track-before-detect' problem, resulting in streaks of arbitrary lengths. Although some considerations for low-SNR processing of streak-like features are available in the current image processing and computer vision literature, algorithms are not readily available yet. In the ESA-funded StreakDet (Streak detection and astrometric reduction) project, we develop and evaluate an automated processing pipeline applicable to single images (as compared to consecutive frames of the same field) obtained with any observing scenario, including space-based surveys and both low- and high-altitude populations. The algorithmic flow starts from the segmentation of the acquired image (i.e., the extraction of all sources), followed by the astrometric and photometric characterization of the candidate streaks, and ends with orbital validation of the detected streaks. For the low-SNR extraction of objects, we put forward an approach which does not rely on a priori information, such as the object velocities, a typical assumption in earlier implementations. Our algorithm is based on local grayscale mean difference evaluation, followed by a threshold operation and spatial filtering of black-and-white (1-bit) data to remove stars and other non-streak features. For long streaks, the challenge is to extract position information and related registered epochs with sufficient precision. Moreover, satellite streaks can show up in complex morphologies because of their fast and often irregular lightcurve variations. A central concept of the pipeline is streak classification, which guides the actual characterization process by aiming to identify the interesting sources and to filter out the uninteresting ones, as well as by allowing the tailoring of algorithms for specific streak classes (e.g., PSF fitting for point-like vs. long, disintegrated streaks). Finally, to validate the single-image detections, the processing is finalized by orbital analysis using our statistical inverse methods (see Muinonen et al., this conference), resulting in a preliminary orbital classification (e.g., Earth-bound vs. non-Earth-bound orbits) for the detected streaks.
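A minimal sketch of the low-SNR front end as described (local grayscale mean difference, thresholding, and spatial filtering of the resulting 1-bit map); window size, threshold factor, and minimum streak size are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def extract_streak_candidates(img, win=15, k=3.0, min_pixels=50):
    """Local mean-difference detection, thresholding, and size filtering."""
    img = np.asarray(img, dtype=float)
    local_mean = uniform_filter(img, size=win)
    diff = img - local_mean
    mask = diff > k * diff.std()            # 1-bit detection map
    labels, _ = label(mask)                 # connected components
    sizes = np.bincount(labels.ravel())
    valid = np.flatnonzero(sizes >= min_pixels)
    valid = valid[valid != 0]               # drop the background label
    return np.isin(labels, valid)           # stars/small blobs removed
```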
NASA Technical Reports Server (NTRS)
Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.
2005-01-01
A methodology to eliminate model reflection and system vibration effects from post processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure as well as the subtraction of model reflection are performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work was generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and validation of these algorithms are presented.
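A minimal sketch of the two core operations, assuming a reflections-only frame and binary masks of the fiduciary marks are available (both hypothetical inputs standing in for the facility's data):

```python
import numpy as np
from scipy.ndimage import center_of_mass, shift

def remove_reflections(images, reflection_only):
    """Subtract a reflections-only frame from each PIV image (clipped at 0)."""
    return [np.clip(im - reflection_only, 0, None) for im in images]

def align_to_background(image, mark_mask_img, mark_mask_bg):
    """Shift an image so its fiduciary-mark centroid matches the background's.

    mark_mask_img, mark_mask_bg: binary masks of the painted fiduciary marks
    in the acquisition image and the no-flow background image.
    """
    dy, dx = np.subtract(center_of_mass(mark_mask_bg),
                         center_of_mass(mark_mask_img))
    return shift(image, (dy, dx))
```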
Constraint processing in our extensible language for cooperative imaging system
NASA Astrophysics Data System (ADS)
Aoki, Minoru; Murao, Yo; Enomoto, Hajime
1996-02-01
The extensible WELL (Window-based Elaboration Language) has been developed around the concept of a common platform, where client and server can communicate with each other with support from a communication manager. The extensible language is based on an object-oriented design that introduces constraint processing. Every kind of service in the extensible language, including imaging, is controlled by constraints. Interactive functions between client and server are extended by introducing agent functions, including a request-respond relation. Necessary service integrations are satisfied by cooperative processes using constraints. Constraints are treated similarly to data, because the system should retain flexibility in the execution of many kinds of services; the corresponding control process is defined using intensional logic. There are two kinds of constraints: temporal and modal. When the constraints are rendered, the predicate format, as a relation between attribute values, warrants the validity of entities as data. As an imaging example, a processing procedure for interaction between multiple objects is shown as an image application for the extensible system. This paper describes how the procedure proceeds in the system and how the constraints work for generating moving pictures.
Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J
2015-10-01
To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of computation required for full-FOV iterative reconstructions has posed a huge challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphics processing unit (GPU). The rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the Digital Imaging and Communications in Medicine (DICOM) format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structural similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. The image sharpness index was improved using the implemented regularized reconstruction. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions to shorten reconstruction times.
Mraity, Hussien A A B; England, Andrew; Cassidy, Simon; Eachus, Peter; Dominguez, Alejandro; Hogg, Peter
2016-01-01
The aim of this article was to apply psychometric theory to develop and validate a visual grading scale for assessing the visual perception of digital image quality for anteroposterior (AP) pelvis examinations. Psychometric theory was used to guide scale development. Seven phantom and seven cadaver images of visually and objectively predetermined quality were used to help assess scale reliability and validity. 151 volunteers scored phantom images, and 184 volunteers scored cadaver images. Factor analysis and Cronbach's alpha were used to assess scale validity and reliability. A 24-item scale was produced. Aggregated mean volunteer scores for each image correlated with the rank order of the visually and objectively predetermined image qualities. Scale items had good interitem correlation (≥0.2) and high factor loadings (≥0.3). Cronbach's alpha (reliability) revealed that the scale has acceptable levels of internal reliability for both phantom and cadaver images (α = 0.8 and 0.9, respectively). Factor analysis suggested that the scale is multidimensional (assessing multiple quality themes). This study represents the first full development and validation of a visual image quality scale using psychometric theory. It is likely that this scale will have clinical, training and research applications. This article presents data to create and validate visual grading scales for radiographic examinations. The visual grading scale, for AP pelvis examinations, can act as a validated tool for future research, teaching and clinical evaluations of image quality.
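Scale reliability here rests on Cronbach's alpha; a small self-contained implementation of that statistic for an observers-by-items score matrix (the sample data are hypothetical):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (observers x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of scale items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical: 151 observers scoring a 24-item scale on a 1-5 rating
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(151, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(151, 24)), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```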
Collection of LAI and FPAR Data Over The Terra Core Sites
NASA Technical Reports Server (NTRS)
Myneni, Ranga B.; Knjazihhin, J.; Tian, Y.; Wang, Y.
2001-01-01
The objective of our effort was to collect and archive data on LAI (leaf area index) and FPAR (Fraction of Photosynthetically Active Radiation absorbed by vegetation) at the EOS Core validation sites, as well as to validate and evaluate global fields of LAI and FPAR derived from atmospherically corrected MODIS (Moderate Resolution Imaging Spectroradiometer) surface reflectance data by comparing these fields with the EOS Core validation data set. The above has been accomplished by: (a) participation in selected field campaigns within the EOS Validation Program; (b) processing of the collected data so that suitable comparisons between field measurements and the MODIS LAI/FPAR fields can be made; (c) comparison of the MODIS LAI/FPAR fields with the EOS Terra Core validation data set.
MO-FG-209-05: Towards a Feature-Based Anthropomorphic Model Observer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avanaki, A.
2016-06-15
This symposium will review recent advances in the simulation methods for evaluation of novel breast imaging systems – the subject of AAPM Task Group TG234. Our focus will be on the various approaches to development and validation of software anthropomorphic phantoms and their use in the statistical assessment of novel imaging systems using such phantoms, along with computational models for the x-ray image formation process. Due to the dynamic development and complex design of modern medical imaging systems, the simulation of anatomical structures, image acquisition modalities, and the image perception and analysis offers substantial benefits of reduced cost, duration, and radiation exposure, as well as the known ground-truth and wide variability in simulated anatomies. For these reasons, Virtual Clinical Trials (VCTs) have been increasingly accepted as a viable tool for preclinical assessment of x-ray and other breast imaging methods. Activities of TG234 have encompassed the optimization of protocols for simulation studies, including phantom specifications, the simulated data representation, models of the imaging process, and statistical assessment of simulated images. The symposium will discuss the state-of-the-science of VCTs for novel breast imaging systems, emphasizing recent developments and future directions. Presentations will discuss virtual phantoms for intermodality breast imaging performance comparisons, extension of the breast anatomy simulation to the cellular level, optimized integration of the simulated imaging chain, and novel directions in observer model design. Learning Objectives: Review novel results in developing and applying virtual phantoms for inter-modality breast imaging performance comparisons; Discuss the efforts to extend the computer simulation of breast anatomy and pathology to the cellular level; Summarize the state of the science in optimized integration of modules in the simulated imaging chain; Compare novel directions in the design of observer models for task-based validation of imaging systems. PB: Research funding support from the NIH, NSF, and Komen for the Cure; NIH funded collaboration with Barco, Inc. and Hologic, Inc.; Consultant to Delaware State Univ. and NCCPM, UK. AA: Employed at Barco Healthcare.; P. Bakic, NIH: (NIGMS P20 #GM103446, NCI R01 #CA154444); M. Das, NIH Research grants.
NASA Technical Reports Server (NTRS)
Lee, T-H.; Burnside, W. D.
1992-01-01
Inverse Synthetic Aperture Radar (ISAR) images for a 32 in long and 19 in wide model aircraft are documented. Both backscattered and bistatic scattered fields of this model aircraft were measured in the OSU-ESL compact range to obtain these images. The scattered fields of the target were measured for frequencies from 2 to 18 GHz with a 10 MHz increment and for full 360 deg azimuth rotation angles with a 0.2 deg step. For the bistatic scattering measurement, the compact range was used as the transmitting antenna; while, a broad band AEL double ridge horn was used as the receiving antenna. Bistatic angles of 90 deg and 135 deg were measured. Due to the size of the chamber and target, the receiving antenna was in the near field of the target; nevertheless, the image processing algorithm was valid for this case.
Raina, Abhay; Hennessy, Ricky; Rains, Michael; Allred, James; Hirshburg, Jason M; Diven, Dayna; Markey, Mia K.
2016-01-01
Background Traditional metrics for evaluating the severity of psoriasis are subjective, which complicates efforts to measure effective treatments in clinical trials. Methods We collected images of psoriasis plaques and calibrated the coloration of the images according to an included color card. Features were extracted from the images and used to train a linear discriminant analysis classifier with cross-validation to automatically classify the degree of erythema. The results were tested against numerical scores obtained by a panel of dermatologists using a standard rating system. Results Quantitative measures of erythema based on the digital color images showed good agreement with subjective assessment of erythema severity (κ = 0.4203). The color calibration process improved the agreement from κ = 0.2364 to κ = 0.4203. Conclusions We propose a method for the objective measurement of the psoriasis severity parameter of erythema and show that the calibration process improved the results. PMID:26517973
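A minimal sketch of the classification setup described above, with hypothetical color features and grades standing in for the study's calibrated-image data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix X (color features per plaque image) and
# dermatologist erythema grades y; illustrates the classifier setup only.
rng = np.random.default_rng(0)
X = rng.random((120, 6))
y = rng.integers(0, 4, size=120)            # erythema grades 0-3

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
print(f"cross-validated accuracy: {scores.mean():.2f}")
```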
Blurred Star Image Processing for Star Sensors under Dynamic Conditions
Zhang, Weina; Quan, Wei; Guo, Lei
2012-01-01
The precision of star point location is significant for identifying the star map and acquiring the aircraft attitude with star sensors. Under dynamic conditions, star images are not only corrupted by various noises but also blurred due to the angular rate of the star sensor. According to different angular rates under dynamic conditions, a novel method is proposed in this article, which includes a denoising method based on an adaptive wavelet threshold and a restoration method for large angular rates. The adaptive threshold is adopted for denoising the star image when the angular rate is in the dynamic range. Then, the mathematical model of motion blur is deduced so as to restore the star map blurred by a large angular rate. Simulation results validate the effectiveness of the proposed method, which is suitable for blurred star image processing and practical for attitude determination of satellites under dynamic conditions. PMID:22778666
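A minimal sketch of the wavelet-thresholding stage, using a fixed universal threshold as a stand-in for the paper's adaptive, angular-rate-dependent threshold:

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=3):
    """Soft-threshold wavelet denoising of a 2-D star image.

    The universal threshold below is a common stand-in; the paper adapts
    the threshold to the sensor's angular rate instead.
    """
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Noise sigma from the finest diagonal subband (robust MAD estimate)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(img.size))
    out = [coeffs[0]] + [
        tuple(pywt.threshold(c, t, mode="soft") for c in band)
        for band in coeffs[1:]
    ]
    return pywt.waverec2(out, wavelet)
```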
Preliminary study of rib articulated model based on dynamic fluoroscopy images
NASA Astrophysics Data System (ADS)
Villard, Pierre-Frederic; Escamilla, Pierre; Kerrien, Erwan; Gorges, Sebastien; Trousset, Yves; Berger, Marie-Odile
2014-03-01
We present in this paper a preliminary study of rib motion tracking during Interventional Radiology (IR) fluoroscopy-guided procedures. It consists of providing a physician with moving rib three-dimensional (3D) models projected in the fluoroscopy plane during a treatment. The strategy is to help quickly recognize the target and the no-go areas, i.e. the tumor and the organs to avoid. The method consists in i) elaborating a kinematic model of each rib from a preoperative computerized tomography (CT) scan, ii) processing the on-line fluoroscopy image, and iii) optimizing the parameters of the kinematic law such that the transformed 3D rib, projected onto the medical image plane, fits well with the previously processed image. The results show a visually good rib tracking that has been quantitatively validated by showing a periodic motion as well as a good synchronism between ribs.
Perez-Ponce, Hector; Daul, Christian; Wolf, Didier; Noel, Alain
2013-08-01
In mammography, image quality assessment has to be directly related to breast cancer indicator (e.g. microcalcifications) detectability. Recently, we proposed an X-ray source/digital detector (XRS/DD) model leading to such an assessment. This model simulates very realistic contrast-detail phantom (CDMAM) images leading to gold disc (representing microcalcifications) detectability thresholds that are very close to those of real images taken under the simulated acquisition conditions. The detection step was performed with a mathematical observer. The aim of this contribution is to include human observers into the disc detection process in real and virtual images to validate the simulation framework based on the XRS/DD model. Mathematical criteria (contrast-detail curves, image quality factor, etc.) are used to assess and to compare, from the statistical point of view, the cancer indicator detectability in real and virtual images. The quantitative results given in this paper show that the images simulated by the XRS/DD model are useful for image quality assessment in the case of all studied exposure conditions using either human or automated scoring. Also, this paper confirms that with the XRS/DD model the image quality assessment can be automated and the whole time of the procedure can be drastically reduced. Compared to standard quality assessment methods, the number of images to be acquired is divided by a factor of eight. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
Validation of GOES-9 Satellite-Derived Cloud Properties over the Tropical Western Pacific Region
NASA Technical Reports Server (NTRS)
Khaiyer, Mandana M.; Nordeen, Michele L.; Doeling, David R.; Chakrapani, Venkatasan; Minnis, Patrick; Smith, William L., Jr.
2004-01-01
Real-time processing of hourly GOES-9 images in the ARM TWP region began operationally in October 2003 and is continuing. The ARM sites provide an excellent source for validating this new satellite-derived cloud and radiation property dataset. Derived cloud amounts, heights, and broadband shortwave fluxes are compared with similar quantities derived from ground-based instrumentation. The results will provide guidance for estimating uncertainties in the GOES-9 products and for developing improvements in the retrieval methodologies and input.
Panetta, Daniele; Pelosi, Gualtiero; Viglione, Federica; Kusmic, Claudia; Terreni, Marianna; Belcari, Nicola; Guerra, Alberto Del; Athanasiou, Lambros; Exarchos, Themistoklis; Fotiadis, Dimitrios I; Filipovic, Nenad; Trivella, Maria Giovanna; Salvadori, Piero A; Parodi, Oberdan
2015-01-01
Micro-CT is an established imaging technique for high-resolution non-destructive assessment of vascular samples, which is gaining growing interest for investigations of atherosclerotic arteries both in humans and in animal models. However, there is still a lack in the definition of micro-CT image metrics suitable for comprehensive evaluation and quantification of features of interest in the field of experimental atherosclerosis (ATS). A novel approach to micro-CT image processing for profiling of coronary ATS is described, providing comprehensive visualization and quantification of contrast-agent-free 3D high-resolution reconstructions of full-length artery walls. Accelerated coronary ATS was induced by a high-fat, cholesterol-enriched diet in swine, and the left coronary artery (LCA) was harvested en bloc for micro-CT scanning and histologic processing. A cylindrical coordinate system was defined on the image space after curved multiplanar reformation of the coronary vessel for the comprehensive visualization of the main vessel features, such as wall thickening and calcium content. A novel semi-automatic segmentation procedure based on 2D histograms was implemented and the quantitative results validated by histology. The potential of attenuation-based micro-CT at low kV to reliably separate arterial wall layers from adjacent tissue, as well as to identify wall and plaque contours and major tissue components, was validated by histology. Morphometric indexes were derived from histological data corresponding to several micro-CT slices (double-observer evaluation at different coronary ATS stages), and highly significant correlations (R2 > 0.90) were evidenced. Semi-automatic morphometry was validated by double-observer manual morphometry of micro-CT slices, and highly significant correlations were found (R2 > 0.92). The micro-CT methodology described represents a handy and reliable tool for quantitative, high-resolution, contrast-agent-free, full-length coronary wall profiling, able to assist atherosclerotic vessel morphometry in a preclinical experimental model of coronary ATS and providing a link between in vivo imaging and histology.
Sorting Olive Batches for the Milling Process Using Image Processing
Puerto, Daniel Aguilera; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan
2015-01-01
The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, to classify automatically different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples were obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples was obtained on the basis of the olive image histograms. Moreover, different image preprocessing methods were employed, and two classification techniques were used: discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729
Remote Sensing and Imaging Physics
2012-03-07
[Report-form and slide residue; recoverable fragments: a model analysis process using a wire-frame shape model with assumed a priori knowledge; no material BRDF library employed in retrieval; properties of local maxima in imaging estimation problems derived from the Kolmogorov model of atmospheric turbulence (speckle statistics).]
Fast Vessel Detection in Gaofen-3 SAR Images with Ultrafine Strip-Map Mode
Liu, Lei; Qiu, Xiaolan; Lei, Bin
2017-01-01
This study aims to detect vessels with lengths ranging from about 70 to 300 m in Gaofen-3 (GF-3) SAR images with ultrafine strip-map (UFS) mode as fast as possible. Based on the analysis of the characteristics of vessels in GF-3 SAR imagery, an effective vessel detection method is proposed in this paper. Firstly, the iterative constant false alarm rate (CFAR) method is employed to detect the potential ship pixels. Secondly, the mean-shift operation is applied to each potential ship pixel to identify the candidate target region. During the mean-shift process, we maintain a selection matrix recording which pixels can be taken, and these pixels are called the valid points of the candidate target. The l1 norm regression is used to extract the principal axis and detect the valid points. Finally, two kinds of false alarms, the bright line and the azimuth ambiguity, are removed by comparing the valid area of the candidate target with a pre-defined value and by computing the displacement between the true target and the corresponding replicas, respectively. Experimental results on three GF-3 SAR images with UFS mode demonstrate the effectiveness and efficiency of the proposed method. PMID:28678197
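A minimal sketch of the CFAR front end, in its basic cell-averaging form rather than the paper's iterative variant; guard/training ring sizes and the false-alarm rate are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(intensity, guard=4, train=12, pfa=1e-6):
    """Cell-averaging CFAR on a SAR intensity image.

    Background is estimated from a square training ring around each pixel,
    with the inner guard window excluded; exponential clutter is assumed.
    """
    intensity = np.asarray(intensity, dtype=float)
    big = uniform_filter(intensity, size=2 * (guard + train) + 1)
    small = uniform_filter(intensity, size=2 * guard + 1)
    n_big = (2 * (guard + train) + 1) ** 2
    n_small = (2 * guard + 1) ** 2
    background = (big * n_big - small * n_small) / (n_big - n_small)
    n_train = n_big - n_small
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1)  # CA-CFAR scaling factor
    return intensity > alpha * background            # boolean detection map
```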
Overview of the Joint NASA ISRO Imaging Spectroscopy Science Campaign in India
NASA Astrophysics Data System (ADS)
Green, R. O.; Bhattacharya, B. K.; Eastwood, M. L.; Saxena, M.; Thompson, D. R.; Sadasivarao, B.
2016-12-01
In the period from December 2015 to March 2016, the Airborne Visible-Infrared Imaging Spectrometer Next Generation (AVIRIS-NG) was deployed to India for a joint NASA-ISRO science campaign. This campaign was conceived to provide first-of-their-kind, high-fidelity imaging spectroscopy measurements of a diverse set of Asian environments for science and applications research. During this campaign, measurements were acquired for 57 high-priority sites with objectives spanning: snow/ice of the Himalaya; coastal habitats and water quality; mangrove forests; soils; dry and humid forests; hydrocarbon alteration; mineralogy; agriculture; urban materials; atmospheric properties; and calibration/validation. Measurements from the campaign have been processed to at-instrument spectral radiance and atmospherically corrected surface reflectance. New AVIRIS-NG algorithms for retrieval of vegetation canopy water and for estimation of the fractions of photosynthetic and non-photosynthetic vegetation have been tested and evaluated on these measurements. An in-flight calibration validation experiment was performed on the 11th of December 2015 in Hyderabad to assess the spectral and radiometric calibration of AVIRIS-NG in the flight environment. We present an overview of the campaign, calibration and validation results, and initial science analysis of a subset of these unique and diverse data sets.
Applications of LANDSAT data to the integrated economic development of Mindoro, Phillipines
NASA Technical Reports Server (NTRS)
Wagner, T. W.; Fernandez, J. C.
1977-01-01
LANDSAT data is seen as providing essential up-to-date resource information for the planning process. LANDSAT data of Mindoro Island in the Philippines was processed to provide thematic maps showing patterns of agriculture, forest cover, terrain, wetlands and water turbidity. A hybrid approach using both supervised and unsupervised classification techniques resulted in 30 different scene classes which were subsequently color-coded and mapped at a scale of 1:250,000. In addition, intensive image analysis is being carried out to evaluate the images. The images, maps, and aerial statistics are being used to provide data to seven technical departments in planning the economic development of Mindoro. Multispectral aircraft imagery was collected to complement the application of LANDSAT data and validate the classification results.
NASA Astrophysics Data System (ADS)
Rahman, Md M.; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.
2015-03-01
This paper presents a novel approach to biomedical image retrieval by mapping image regions to local concepts and representing images in a weighted entropy-based concept feature space. The term concept refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a Region-Of-Interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a post-processing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on a data set of 450 lung CT images extracted from journal articles from four different collections.
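A minimal sketch of the visualness measure, computing the Shannon entropy of pixel values in a patch; the bin count and the normalized intensity range are assumptions:

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy (in bits) of a gray-level image patch in [0, 1]."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins; 0*log(0) := 0
    return float(-(p * np.log2(p)).sum())

# Example: a flat patch has low entropy, a noisy patch high entropy
rng = np.random.default_rng(0)
print(patch_entropy(np.full((16, 16), 0.5)))   # ~0.0
print(patch_entropy(rng.random((16, 16))))     # close to log2(32) = 5
```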
Validation of the Electromagnetic Code FACETS for Numerical Simulation of Radar Target Images
2009-12-01
Wong, S. (DRDC Ottawa)
Validation of the electromagnetic code FACETS for simulating radar images of a target is obtained through direct simulation-to-measurement comparisons. A 3-dimensional computer-aided design
NASA Astrophysics Data System (ADS)
Strippoli, L. S.; Gonzalez-Arjona, D. G.
2018-04-01
GMV has worked extensively on activities aimed at developing, validating, and verifying, up to TRL-6, advanced GNC and image processing (IP) algorithms for Mars Sample Return rendezvous, working under different ESA contracts on the development of advanced algorithms for the VBN sensor.
Guise, Catarina; Fernandes, Margarida M; Nóbrega, João M; Pathak, Sudhir; Schneider, Walter; Fangueiro, Raul
2016-11-09
Current brain imaging methods largely fail to provide detailed information about the location and severity of axonal injuries and do not anticipate recovery of patients with traumatic brain injury. High-definition fiber tractography appears as a novel imaging modality based on water motion in the brain that allows for direct visualization and quantification of the degree of axon damage, thus predicting the functional deficits due to traumatic axonal injury and loss of cortical projections. This neuroimaging modality still faces major challenges because it lacks a "gold standard" for technique validation and respective quality control. The present work aims to study the potential of hollow polypropylene yarns to mimic human white-matter axons and to construct a brain phantom for the calibration and validation of brain diffusion techniques based on magnetic resonance imaging, including high-definition fiber tractography imaging. Hollow multifilament polypropylene yarns were produced by a melt-spinning process and characterized in terms of their physicochemical properties. Scanning electron microscopy images of the filaments' cross-sections showed an inner diameter of approximately 12 μm, confirming their appropriateness to mimic the brain axons. The chemical purity of the polypropylene yarns, as well as the interaction between the water and the filament surface, important properties for predicting water behavior and diffusion inside the yarns, were also evaluated. Restricted and hindered water diffusion was confirmed by fluorescence microscopy. Finally, the yarns were magnetic resonance imaging scanned and analyzed using high-definition fiber tractography, revealing these hollow polypropylene structures to be an excellent choice for simulation of the white-matter brain axons and confirming their suitability for constructing an accurate brain phantom.
Hansen, Hendrik H G; de Borst, Gert Jan; Bots, Michiel L; Moll, Frans L; Pasterkamp, Gerard; de Korte, Chris L
2016-11-01
Carotid plaque rupture is a major cause of stroke. A key issue for risk stratification is the early identification of rupture-prone plaques. A noninvasive technique, compound ultrasound strain imaging, was developed, providing high-resolution radial deformation/strain images of atherosclerotic plaques. This study aims at in vivo validation of compound ultrasound strain imaging in patients by relating the measured strains to typical features of vulnerable plaques derived from histology after carotid endarterectomy. Strains were measured in 34 severely stenotic (>70%) carotid arteries at the culprit lesion site within 48 hours before carotid endarterectomy. In all cases, the lumen-wall boundary was identifiable on B-mode ultrasound, and the imaged cross-section did not move out of the imaging plane from systole to diastole. After endarterectomy, the plaques were processed using a validated histology analysis technique. Locally elevated strain values were observed in regions containing predominantly components related to plaque vulnerability, whereas lower values were observed in fibrous, collagen-rich plaques. The median strain of the inner plaque layer (1 mm thickness) was significantly higher (P<0.01) for (fibro)atheromatous (n=20, strain=0.27%) than for fibrous plaques (n=14, strain=-0.75%). Also, a significantly larger area percentage of the inner layer revealed strains above 0.5% for (fibro)atheromatous (45.30%) compared with fibrous plaques (31.59%). (Fibro)atheromatous plaques were detected with a sensitivity, specificity, positive predictive value, and negative predictive value of 75%, 86%, 88%, and 71%, respectively. Strain did not significantly correlate with fibrous cap thickness, smooth muscle cell, or macrophage concentration. Compound ultrasound strain imaging allows differentiating (fibro)atheromatous from fibrous carotid artery plaques. © 2016 American Heart Association, Inc.
Watermarking and copyright labeling of printed images
NASA Astrophysics Data System (ADS)
Hel-Or, Hagit Z.
2001-07-01
Digital watermarking is a labeling technique for digital images which embeds a code into the digital data so the data are marked. Watermarking techniques previously developed deal with on-line digital data. These techniques have been developed to withstand digital attacks such as image processing, image compression and geometric transformations. However, one must also consider the readily available attack of printing and scanning, under which the available watermarking techniques are not reliable. In fact, one must consider the availability of watermarks for printed images as well as for digital images. An important issue is to intercept and prevent forgery in printed material such as currency notes and bank checks, and to track and validate sensitive and secret printed material. Watermarking in such printed material can be used not only for verification of ownership but as an indicator of date and type of transaction or date and source of the printed data. In this work we propose a method of embedding watermarks in printed images by inherently taking advantage of the printing process. The method is visually unobtrusive in the printed image, and the watermark is easily extracted and robust under reconstruction errors. The decoding algorithm is automatic given the watermarked image.
Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas
2014-03-01
The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. © 2013 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc.
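A minimal sketch of the local contrast thresholding step, using the local standard deviation as the contrast measure; PHANTAST's exact contrast definition and its halo-correction step are not reproduced here, and the window size and threshold are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_mask(img, win=31, threshold=0.02):
    """Flag pixels whose local standard deviation exceeds a threshold.

    Cellular regions in PCM images are locally textured, while the
    background is comparatively flat, so high local contrast marks cells.
    """
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    local_std = np.sqrt(np.clip(mean_sq - mean ** 2, 0, None))
    return local_std > threshold          # boolean cell/background mask
```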
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estevez, Ivan; Concept Scientific Instruments, ZA de Courtaboeuf, 2 rue de la Terre de Feu, 91940 Les Ulis; Chrétien, Pascal
2014-02-24
On the basis of a home-made nanoscale impedance measurement device associated with a commercial atomic force microscope, a specific operating process is proposed in order to improve absolute (in the sense of "nonrelative") capacitance imaging by drastically reducing the parasitic effects due to stray capacitance, surface topography, and sample tilt. The method, combining a two-pass image acquisition with the exploitation of approach curves, has been validated on sets of calibration samples consisting of square parallel plate capacitors for which theoretical capacitance values were numerically calculated.
Moore, Christopher; Marchant, Thomas
2017-07-12
Reconstructive volumetric imaging permeates medical practice because of its apparently clear depiction of anatomy. However, the telltale signs of abnormality and its delineation for treatment demand that experts work at the threshold of visibility for hints of structure. Hitherto, a suitable assistive metric that chimes with clinical experience has been absent. This paper develops the complexity measure approximate entropy (ApEn) from its 1D physiological origin into a three-dimensional (3D) algorithm to fill this gap. The first 3D algorithm for this is presented in detail. Validation results for known test arrays are followed by a comparison of fan-beam and cone-beam x-ray computed tomography image volumes used in image-guided radiotherapy for cancer. Results show the structural detail down to individual voxel level, the strength of which is calibrated by the ApEn process itself. The potential for application in machine-assisted manual interaction and automated image processing and interrogation, including radiomics associated with predictive outcome modeling, is discussed.
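For reference, a compact implementation of the classic 1-D approximate entropy that the paper extends to three dimensions; the embedding dimension and tolerance defaults are the conventional choices, not the paper's 3D parameters:

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    """Approximate entropy (ApEn) of a 1-D series.

    m: embedding dimension; r: tolerance (default 0.2 * std, a common choice).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()

    def phi(m):
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between all template pairs (self-matches kept)
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        c = (d <= r).mean(axis=1)
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
print(approx_entropy(np.sin(np.linspace(0, 8 * np.pi, 200))))  # regular: low
print(approx_entropy(rng.standard_normal(200)))                # noise: high
```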
Faure, Emmanuel; Savy, Thierry; Rizzi, Barbara; Melani, Camilo; Stašová, Olga; Fabrèges, Dimitri; Špir, Róbert; Hammons, Mark; Čúnderlík, Róbert; Recher, Gaëlle; Lombardot, Benoît; Duloquin, Louise; Colin, Ingrid; Kollár, Jozef; Desnoulez, Sophie; Affaticati, Pierre; Maury, Benoît; Boyreau, Adeline; Nief, Jean-Yves; Calvat, Pascal; Vernier, Philippe; Frain, Monique; Lutfalla, Georges; Kergosien, Yannick; Suret, Pierre; Remešíková, Mariana; Doursat, René; Sarti, Alessandro; Mikula, Karol; Peyriéras, Nadine; Bourgine, Paul
2016-01-01
The quantitative and systematic analysis of embryonic cell dynamics from in vivo 3D+time image data sets is a major challenge at the forefront of developmental biology. Despite recent breakthroughs in the microscopy imaging of living systems, producing an accurate cell lineage tree for any developing organism remains a difficult task. We present here the BioEmergences workflow integrating all reconstruction steps from image acquisition and processing to the interactive visualization of reconstructed data. Original mathematical methods and algorithms underlie image filtering, nucleus centre detection, nucleus and membrane segmentation, and cell tracking. They are demonstrated on zebrafish, ascidian and sea urchin embryos with stained nuclei and membranes. Subsequent validation and annotations are carried out using Mov-IT, a custom-made graphical interface. Compared with eight other software tools, our workflow achieved the best lineage score. Delivered in standalone or web service mode, BioEmergences and Mov-IT offer a unique set of tools for in silico experimental embryology. PMID:26912388
A network-based training environment: a medical image processing paradigm.
Costaridou, L; Panayiotakis, G; Sakellaropoulos, P; Cavouras, D; Dimopoulos, J
1998-01-01
The capability of interactive multimedia and Internet technologies is investigated with respect to the implementation of a distance learning environment. The system is built according to a client-server architecture, based on the Internet infrastructure, composed of server nodes conceptually modelled as WWW sites. Sites are implemented by customization of available components. The environment integrates network-delivered interactive multimedia courses, network-based tutoring, SIG support, information databases of professional interest, as well as course and tutoring management. This capability has been demonstrated by means of an implemented system, validated with digital image processing content, specifically image enhancement. Image enhancement methods are theoretically described and applied to mammograms. Emphasis is given to the interactive presentation of the effects of algorithm parameters on images. The system end-user access depends on available bandwidth, so high-speed access can be achieved via LAN or local ISDN connections. Network based training offers new means of improved access and sharing of learning resources and expertise, as promising supplements in training.
Shoulder Arthroplasty Imaging: What’s New
Gregory, T.M
2017-01-01
Background: Shoulder arthroplasty, in its different forms (hemiarthroplasty, total shoulder arthroplasty and reverse total shoulder arthroplasty), has transformed the clinical outcomes of shoulder disorders. Improvement of general clinical outcome is the result of better matching of treatment to diagnosis, enhanced surgical techniques, specific implanted materials, and more accurate follow-up. Imaging is an important tool in each of these processes. Method: This article reviews recent imaging processes for shoulder arthroplasty. Results: Shoulder imaging is important for pre-operative planning and for post-operative monitoring of the prosthesis; this article focuses on the validity of plain radiographs for detecting radiolucent lines and on a new computed tomography method established to eliminate the prosthetic metal artefacts that obscure visualisation of component fixation. Conclusion: The number of shoulder arthroplasties implanted has grown rapidly over the past decade, leading to an increase in the number of complications. In parallel, new imaging systems have been established to monitor these complications, especially component loosening. PMID:29152007
Navigable points estimation for mobile robots using binary image skeletonization
NASA Astrophysics Data System (ADS)
Martinez S., Fernando; Jacinto G., Edwar; Montiel A., Holman
2017-02-01
This paper describes the use of image skeletonization for the estimation of all the navigable points inside a mobile robot navigation scene. Those points are used for computing a valid navigation path using standard methods. The main idea is to find the middle and the extreme points of the obstacles in the scene, taking into account the robot size, and to create a map of navigable points, in order to reduce the amount of information handled by the planning algorithm. Those points are located by means of the skeletonization of a binary image of the obstacles and the scene background, along with some other digital image processing algorithms. The proposed algorithm automatically gives a variable number of navigable points per obstacle, depending on the complexity of its shape. We also show how the algorithm's parameters can be adjusted to change the final number of key points. The results shown here were obtained by applying different kinds of digital image processing algorithms to static scenes.
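A sketch of the underlying idea, under the assumption of a binary obstacle map: skeletonize the free space, then keep skeleton pixels whose neighbour count marks them as endpoints or branch points, which serve as candidate navigable key points for a planner. The scene below is a toy stand-in.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

free_space = np.ones((200, 200), dtype=bool)
free_space[80:120, 50:150] = False            # a rectangular obstacle

skeleton = skeletonize(free_space)

# Count 8-connected skeleton neighbours of each skeleton pixel.
kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
neighbours = convolve(skeleton.astype(int), kernel, mode="constant")
endpoints = skeleton & (neighbours == 1)      # dead ends
branches = skeleton & (neighbours >= 3)       # junctions between corridors
key_points = np.argwhere(endpoints | branches)
print(len(key_points), "candidate navigable points")
```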
Final Report 2007: DOE-FG02-87ER60561
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kilbourn, Michael R
2007-04-26
This project involved a multi-faceted approach to the improvement of techniques used in Positron Emission Tomography (PET), from radiochemistry to image processing and data analysis. New methods for radiochemical syntheses were examined, new radiochemicals prepared for evaluation and eventual use in human PET studies, and new pre-clinical methods examined for validation of biochemical parameters in animal studies. The value of small-animal PET imaging in measuring small changes of in vivo biochemistry was examined and directly compared to traditional tissue sampling techniques. In human imaging studies, the ability to perform single experimental sessions utilizing two overlapping injections of radiopharmaceuticals was tested, and it was shown that valid biochemical measures for both radiotracers can be obtained through careful pharmacokinetic modeling of the PET emission data. Finally, improvements in reconstruction algorithms for PET data from small-animal PET scanners were realized, and these have been implemented in commercial releases. Together, the project represented an integrated effort to improve and extend all basic science aspects of PET imaging at both the animal and human level.
3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading
Cho, Nam-Hoon; Choi, Heung-Kook
2014-01-01
One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level co-occurrence matrix (GLCM) and 3D wavelet transforms based on two types of basis functions. To evaluate their validity, we predefined six different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system. PMID:25371701
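A sketch of one of the two feature families: energies of the eight 3D Haar wavelet sub-bands computed on a reconstructed volume (the 3D GLCM features would be built analogously from voxel co-occurrence counts). The volume here is a random stand-in, not tissue data.

```python
import numpy as np
import pywt

volume = np.random.rand(64, 64, 64)  # stand-in for a reconstructed 3D volume

coeffs = pywt.dwtn(volume, "haar")   # one-level 3D transform: 8 sub-bands
features = {band: float(np.mean(np.square(c))) for band, c in coeffs.items()}
print(features)  # e.g. {'aaa': ..., 'aad': ..., ..., 'ddd': ...}
# Per-grade feature vectors like this would then feed the classifiers
# (the paper pairs them with PCA and six predefined statistical classifiers).
```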
Upadhyay, Jaymin; Geber, Christian; Hargreaves, Richard; Birklein, Frank; Borsook, David
2018-01-01
Assessing clinical pain and metrics related to function or quality of life predominantly relies on patient-reported subjective measures. These outcome measures are generally not applicable to the preclinical setting, where early signs pointing to the analgesic value of a therapy are sought, thus introducing difficulties in animal-to-human translation in pain research. Evaluating brain function in patients and in the respective animal model(s) has the potential to characterize mechanisms associated with pain or pain-related phenotypes and thereby provide a means of laboratory-to-clinic translation. This review summarizes the progress made towards understanding brain function in clinical and preclinical pain states elucidated using an imaging approach, as well as the current level of validity of translational pain imaging. We hypothesize that neuroimaging can describe the central representation of pain or pain phenotypes and yields a basis for the development and selection of clinically relevant animal assays. This approach may increase the probability of finding meaningful new analgesics that can help satisfy the significant unmet medical needs of patients.
Yuan, Yinyin; Failmezger, Henrik; Rueda, Oscar M; Ali, H Raza; Gräf, Stefan; Chin, Suet-Feung; Schwarz, Roland F; Curtis, Christina; Dunning, Mark J; Bardwell, Helen; Johnson, Nicola; Doyle, Sarah; Turashvili, Gulisa; Provenzano, Elena; Aparicio, Sam; Caldas, Carlos; Markowetz, Florian
2012-10-24
Solid tumors are heterogeneous tissues composed of a mixture of cancer and normal cells, which complicates the interpretation of their molecular profiles. Furthermore, tissue architecture is generally not reflected in molecular assays, rendering this rich information underused. To address these challenges, we developed a computational approach based on standard hematoxylin and eosin-stained tissue sections and demonstrated its power in a discovery and validation cohort of 323 and 241 breast tumors, respectively. To deconvolute cellular heterogeneity and detect subtle genomic aberrations, we introduced an algorithm based on tumor cellularity to increase the comparability of copy number profiles between samples. We next devised a predictor for survival in estrogen receptor-negative breast cancer that integrated both image-based and gene expression analyses and significantly outperformed classifiers that use single data types, such as microarray expression signatures. Image processing also allowed us to describe and validate an independent prognostic factor based on quantitative analysis of spatial patterns between stromal cells, which are not detectable by molecular assays. Our quantitative, image-based method could benefit any large-scale cancer study by refining and complementing molecular assays of tumor samples.
NASA Technical Reports Server (NTRS)
Blonski, Slawomir; Glasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki
2003-01-01
Spectral band synthesis is a key step in the process of creating a simulated multispectral image from hyperspectral data. In this step, narrow hyperspectral bands are combined into broader multispectral bands. Such an approach has been used quite often but, to the best of our knowledge, the accuracy of band synthesis simulations has not been evaluated thus far. Therefore, the main goal of this paper is to provide validation of the spectral band synthesis algorithm used in the ART software. The next section contains a description of the algorithm and an example of its application. Using the spectral responses of AVIRIS, Hyperion, ALI, and ETM+, the following section shows how the synthesized spectral bands compare with actual bands, and it presents an evaluation of the simulation accuracy based on results of MODTRAN modeling. In the final sections of the paper, simulated images are compared with data acquired by actual satellite sensors. First, a Landsat 7 ETM+ image is simulated using an AVIRIS hyperspectral data cube. Then, two datasets collected with the Hyperion instrument on the EO-1 satellite are used to simulate multispectral images from the ALI and ETM+ sensors.
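A minimal band-synthesis sketch: a broad multispectral band is simulated as the hyperspectral radiances averaged with the target sensor's relative spectral response (RSR) as weights. The cube and the Gaussian RSR below are placeholders, not actual sensor responses.

```python
import numpy as np

wl = np.arange(400, 2500, 10.0)            # hyperspectral band centres [nm]
cube = np.random.rand(100, 100, wl.size)   # stand-in hyperspectral cube

def synthesize_band(cube, rsr):
    """Weighted average over the spectral axis using the target band's RSR."""
    w = rsr / rsr.sum()
    return np.tensordot(cube, w, axes=([2], [0]))

# Hypothetical Gaussian RSR approximating a broad band centred at 660 nm.
rsr = np.exp(-0.5 * ((wl - 660.0) / 30.0) ** 2)
red_band = synthesize_band(cube, rsr)
print(red_band.shape)  # (100, 100)
```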
Buckler, Andrew J; Liu, Tiffany Ting; Savig, Erica; Suzek, Baris E; Ouellette, M; Danagoulian, J; Wernsing, G; Rubin, Daniel L; Paik, David
2013-08-01
A widening array of novel imaging biomarkers is being developed using ever more powerful clinical and preclinical imaging modalities. These biomarkers have demonstrated effectiveness in quantifying biological processes as they occur in vivo and in the early prediction of therapeutic outcomes. However, quantitative imaging biomarker data and knowledge are not standardized, representing a critical barrier to accumulating medical knowledge based on quantitative imaging data. We use an ontology to represent, integrate, and harmonize heterogeneous knowledge across the domain of imaging biomarkers. This advances the goal of developing applications to (1) improve precision and recall of storage and retrieval of quantitative imaging-related data using standardized terminology; (2) streamline the discovery and development of novel imaging biomarkers by normalizing knowledge across heterogeneous resources; (3) effectively annotate imaging experiments thus aiding comprehension, re-use, and reproducibility; and (4) provide validation frameworks through rigorous specification as a basis for testable hypotheses and compliance tests. We have developed the Quantitative Imaging Biomarker Ontology (QIBO), which currently consists of 488 terms spanning the following upper classes: experimental subject, biological intervention, imaging agent, imaging instrument, image post-processing algorithm, biological target, indicated biology, and biomarker application. We have demonstrated that QIBO can be used to annotate imaging experiments with standardized terms in the ontology and to generate hypotheses for novel imaging biomarker-disease associations. Our results established the utility of QIBO in enabling integrated analysis of quantitative imaging data.
Image processing for cryogenic transmission electron microscopy of symmetry-mismatched complexes.
Huiskonen, Juha T
2018-02-08
Cryogenic transmission electron microscopy (cryo-TEM) is a high-resolution biological imaging method, whereby biological samples, such as purified proteins, macromolecular complexes, viral particles, organelles and cells, are embedded in vitreous ice, preserving their native structures. Due to the sensitivity of biological materials to the electron beam of the microscope, only relatively low electron doses can be applied during imaging. As a result, the signal arising from the structure of interest is overpowered by noise in the images. To increase the signal-to-noise ratio, different image processing-based strategies that aim at coherent averaging of signal have been devised. In such strategies, images are generally assumed to arise from multiple identical copies of the structure. Prior to averaging, the images must be grouped according to the view of the structure they represent, and images representing the same view must be simultaneously aligned relative to each other. For computational reconstruction of the three-dimensional structure, images must contain different views of the original structure. Structures with multiple symmetry-related substructures are advantageous in averaging approaches because each image provides multiple views of the substructures. However, the symmetry assumption may be valid for only parts of the structure, leading to incoherent averaging of the other parts. Several image processing approaches have been adapted to tackle symmetry-mismatched substructures with increasing success. Such structures are ubiquitous in nature, and further computational method development is needed to understand their biological functions.
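A toy demonstration of the averaging principle the article builds on: averaging N aligned, identical views suppresses uncorrelated noise by roughly sqrt(N), which is why symmetry-related copies of a substructure help, so long as the symmetry assumption actually holds for that part.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 256))      # stand-in "structure"

for n in (1, 16, 256):
    noisy = signal + rng.normal(0, 5.0, size=(n, 256))
    avg = noisy.mean(axis=0)
    noise_rms = np.std(avg - signal)
    print(f"N={n:4d}  residual noise RMS = {noise_rms:.3f}")  # falls ~ 1/sqrt(N)
```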
Optical multiple-image hiding based on interference and grating modulation
NASA Astrophysics Data System (ADS)
He, Wenqi; Peng, Xiang; Meng, Xiangfeng
2012-07-01
We present a method for multiple-image hiding on the basis of interference-based encryption architecture and grating modulation. By using a modified phase retrieval algorithm, we can separately hide a number of secret images into one arbitrarily preselected host image associated with a set of phase-only masks (POMs), which are regarded as secret keys. Thereafter, a grating modulation operation is introduced to multiplex and store the different POMs into a single key mask, which is then assigned to the authorized users in privacy. For recovery, after an appropriate demultiplexing process, one can reconstruct the distributions of all the secret keys and then recover the corresponding hidden images with suppressed crosstalk. Computer simulation results are presented to validate the feasibility of our approach.
Choi, Heejin; Wadduwage, Dushan; Matsudaira, Paul T.; So, Peter T.C.
2014-01-01
A depth-resolved hyperspectral imaging spectrometer can provide depth-resolved imaging in both the spatial and the spectral domain. Images acquired through a standard imaging Fourier transform spectrometer do not have depth resolution. By post-processing the spectral cubes (x, y, λ) obtained through a Sagnac interferometer under uniform illumination and structured illumination, spectrally resolved images with depth resolution can be recovered using structured-illumination algorithms such as the HiLo method. The proposed scheme is validated with in vitro specimens, including fluorescent solution and fluorescent beads with known spectra. The system is further demonstrated in quantifying spectra from 3D-resolved features in biological specimens. The system demonstrated a depth resolution of 1.8 μm and a spectral resolution of 7 nm. PMID:25360367
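A schematic HiLo fusion sketch, simplified from the actual published algorithm: the local modulation contrast of the structured-illumination image selects in-focus low frequencies, and the high-pass content of the uniform image, which is inherently depth-resolved, is added back. The inputs below are synthetic stand-ins.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hilo(uniform, structured, sigma=4.0, eta=1.0):
    """Schematic HiLo fusion (simplified from the published algorithm)."""
    # In-focus regions retain the illumination pattern, so the local RMS
    # difference between structured and uniform images weights the low-pass
    # content; high frequencies of the uniform image are depth-resolved.
    diff = structured.astype(float) - uniform.astype(float)
    contrast = np.sqrt(gaussian_filter(diff ** 2, sigma))
    lo = gaussian_filter(contrast * uniform, sigma)
    hi = uniform - gaussian_filter(uniform, sigma)
    return eta * lo + hi

uniform = np.random.rand(128, 128)                               # stand-in image
structured = uniform * (1 + 0.5 * np.sin(np.arange(128) / 2.0))  # patterned
section = hilo(uniform, structured)                              # sectioned estimate
```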
Geometric registration of remotely sensed data with SAMIR
NASA Astrophysics Data System (ADS)
Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto
2015-06-01
The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software package, called Satellite Automatic Multi-Image Registration (SAMIR), that extends the traditional registration approach towards multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow without initial approximations, user interaction or limitations on spatial/spectral data size. The validation highlighted sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.
Lahmiri, Salim; Gargour, Christian S; Gabrea, Marcel
2014-10-01
An automated diagnosis system that uses the complex continuous wavelet transform (CWT) to process digital retina images and support vector machines (SVMs) for classification is presented. In particular, each retina image is transformed into two one-dimensional signals by concatenating image rows and columns separately. The mathematical norm of the phase angles found in each one-dimensional signal at each level of CWT decomposition is relied on to characterise the texture of normal images against abnormal images affected by exudates, drusen and microaneurysms. The leave-one-out cross-validation method was adopted to conduct experiments, and the results from the SVM show that the proposed approach gives better results than those obtained by other methods based on the correct classification rate, sensitivity and specificity.
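A sketch of this feature pipeline under stated assumptions (scale choices and the complex Morlet parameters are illustrative; the images and labels are dummies): rows and columns of each image are concatenated into two 1D signals, a complex CWT is applied, and the norm of the phase angles at each scale forms the feature vector for an SVM evaluated with leave-one-out cross-validation.

```python
import numpy as np
import pywt
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

def phase_norm_features(image, scales=np.arange(1, 6)):
    feats = []
    for signal in (image.flatten(order="C"), image.flatten(order="F")):
        coef, _ = pywt.cwt(signal, scales, "cmor1.5-1.0")      # complex Morlet
        feats.extend(np.linalg.norm(np.angle(coef), axis=1))   # one norm per scale
    return np.array(feats)

rng = np.random.default_rng(0)
images = rng.random((20, 32, 32))        # stand-ins for retina images
labels = rng.integers(0, 2, size=20)     # normal vs abnormal (dummy)
X = np.array([phase_norm_features(im) for im in images])
print(cross_val_score(SVC(), X, labels, cv=LeaveOneOut()).mean())
```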
A Multi-Functional Imaging Approach to High-Content Protein Interaction Screening
Matthews, Daniel R.; Fruhwirth, Gilbert O.; Weitsman, Gregory; Carlin, Leo M.; Ofo, Enyinnaya; Keppler, Melanie; Barber, Paul R.; Tullis, Iain D. C.; Vojnovic, Borivoj; Ng, Tony; Ameer-Beg, Simon M.
2012-01-01
Functional imaging can provide a level of quantification that is not possible in what might be termed traditional high-content screening. This is due to the fact that the current state-of-the-art high-content screening systems take the approach of scaling-up single cell assays, and are therefore based on essentially pictorial measures as assay indicators. Such phenotypic analyses have become extremely sophisticated, advancing screening enormously, but this approach can still be somewhat subjective. We describe the development, and validation, of a prototype high-content screening platform that combines steady-state fluorescence anisotropy imaging with fluorescence lifetime imaging (FLIM). This functional approach allows objective, quantitative screening of small molecule libraries in protein-protein interaction assays. We discuss the development of the instrumentation, the process by which information on fluorescence resonance energy transfer (FRET) can be extracted from wide-field, acceptor fluorescence anisotropy imaging and cross-checking of this modality using lifetime imaging by time-correlated single-photon counting. Imaging of cells expressing protein constructs where eGFP and mRFP1 are linked with amino-acid chains of various lengths (7, 19 and 32 amino acids) shows the two methodologies to be highly correlated. We validate our approach using a small-scale inhibitor screen of a Cdc42 FRET biosensor probe expressed in epidermoid cancer cells (A431) in a 96 microwell-plate format. We also show that acceptor fluorescence anisotropy can be used to measure variations in hetero-FRET in protein-protein interactions. We demonstrate this using a screen of inhibitors of internalization of the transmembrane receptor, CXCR4. These assays enable us to demonstrate all the capabilities of the instrument, image processing and analytical techniques that have been developed. Direct correlation between acceptor anisotropy and donor FLIM is observed for FRET assays, providing an opportunity to rapidly screen proteins, interacting on the nano-meter scale, using wide-field imaging. PMID:22506000
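The steady-state anisotropy computation such an instrument relies on is the standard pixel-wise formula with a G-factor correction; the sketch below uses placeholder intensities, not the paper's calibration.

```python
import numpy as np

def anisotropy(i_par, i_perp, g=1.0):
    """r = (I_par - G*I_perp) / (I_par + 2*G*I_perp), computed pixel-wise."""
    i_par = np.asarray(i_par, dtype=float)
    i_perp = np.asarray(i_perp, dtype=float)
    total = i_par + 2.0 * g * i_perp
    return np.where(total > 0, (i_par - g * i_perp) / total, 0.0)

# FRET to an acceptor depolarizes the emission and lowers r; screening reads
# out these anisotropy changes across the well plate.
r = anisotropy(np.array([[100.0, 80.0]]), np.array([[30.0, 35.0]]))
print(r)
```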
Ahlander, Britt-Marie; Årestedt, Kristofer; Engvall, Jan; Maret, Eva; Ericsson, Elisabeth
2016-06-01
To develop and validate a new instrument measuring patient anxiety during Magnetic Resonance Imaging (MRI) examinations, the Magnetic Resonance Imaging-Anxiety Questionnaire. Questionnaires used to measure patients' anxiety during MRI examinations have so far been generic instruments applied across a wide range of conditions. To learn about patients' experience during examination and to evaluate interventions, a specific questionnaire measuring patient anxiety during MRI is needed. Psychometric cross-sectional study with test-retest design. A new questionnaire, the Magnetic Resonance Imaging-Anxiety Questionnaire, was designed from patients' expressions of anxiety in MRI scanners. The sample was recruited between October 2012 and October 2014. Factor structure was evaluated with exploratory factor analysis and internal consistency with Cronbach's alpha. Criterion-related validity, known-group validity and test-retest reliability were calculated. Patients referred for MRI of either the spine or the heart were invited to participate. The development and validation of the Magnetic Resonance Imaging-Anxiety Questionnaire resulted in 15 items consisting of two factors. Cronbach's alpha was found to be high. The Magnetic Resonance Imaging-Anxiety Questionnaire correlated more strongly with instruments measuring anxiety than with depression scales. Known-group validity demonstrated a higher level of anxiety for patients undergoing MRI scans of the heart than for those having the spine examined. Test-retest reliability demonstrated an acceptable level for the scale. The Magnetic Resonance Imaging-Anxiety Questionnaire bridges a gap among existing questionnaires, making it a simple and useful tool for measuring patient anxiety during MRI examinations.
NASA Astrophysics Data System (ADS)
Kuo, Chung-Feng Jeffrey; Lai, Chun-Yu; Kao, Chih-Hsiang; Chiu, Chin-Hsun
2018-05-01
In order to improve the current manual inspection and classification process for polarizing film on production lines, this study proposes a high-precision automated inspection and classification system for polarizing film, used for recognition and classification of four common defects: dent, foreign material, bright spot, and scratch. First, the median filter is used to remove the impulse noise in the defect image of the polarizing film. The random noise in the background is smoothed by improved anisotropic diffusion, while the edge detail of the defect region is sharpened. Next, the defect image is transformed by Fourier transform to the frequency domain, combined with a Butterworth high-pass filter to sharpen the edge detail of the defect region, and brought back by inverse Fourier transform to the spatial domain to complete the image enhancement process. For image segmentation, the edge of the defect region is found by a Canny edge detector, and then the complete defect region is obtained by two-stage morphology processing. For defect classification, the feature values extracted from the images, including maximum gray level, eccentricity, and the contrast and homogeneity of the gray level co-occurrence matrix (GLCM), are used as the input of radial basis function neural network (RBFNN) and back-propagation neural network (BPNN) classifiers; 96 defect images are then used as training samples, and 84 defect images are used as testing samples to validate the classification performance. The results show that the classification accuracy using the RBFNN is 98.9%. Thus, our proposed system can be used by manufacturing companies for a higher yield rate and lower cost. The processing time for a single image is 2.57 seconds, meeting the practical requirements of an industrial production line.
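A condensed sketch of this pipeline with off-the-shelf pieces (the anisotropic diffusion step is omitted and all parameter values are illustrative, not the paper's): median denoising, frequency-domain sharpening with a Butterworth high-pass filter, Canny segmentation with morphology, and GLCM features that would feed the neural-network classifiers.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import canny, graycomatrix, graycoprops

defect = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in image

smoothed = ndimage.median_filter(defect, size=3)            # impulse noise

# Butterworth high-pass in the Fourier domain to sharpen defect edges.
f = np.fft.fftshift(np.fft.fft2(smoothed.astype(float)))
y, x = np.indices(f.shape)
d = np.hypot(y - f.shape[0] / 2, x - f.shape[1] / 2)
hp = 1.0 / (1.0 + (30.0 / np.maximum(d, 1e-6)) ** 4)        # order 2, cutoff 30
sharp = smoothed + np.real(np.fft.ifft2(np.fft.ifftshift(f * hp)))

edges = canny(sharp, sigma=2.0)                             # defect contour
region = ndimage.binary_fill_holes(ndimage.binary_closing(edges))

# GLCM-based features of the kind listed in the abstract.
glcm = graycomatrix(defect, distances=[1], angles=[0], levels=256)
features = [int(defect.max()),
            float(graycoprops(glcm, "contrast")[0, 0]),
            float(graycoprops(glcm, "homogeneity")[0, 0])]
print(features)
```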
NASA Astrophysics Data System (ADS)
Krstulović-Opara, Lovre; Surjak, Martin; Vesenjak, Matej; Tonković, Zdenko; Kodvanj, Janoš; Domazet, Željko
2015-11-01
To investigate the applicability of infrared thermography as a tool for acquiring dynamic yielding in metals, a comparison of infrared (IR) thermography with three-dimensional digital image correlation (3D DIC) has been made. Dynamic tensile tests and three-point bending tests of aluminum alloys were performed to evaluate the results obtained by IR thermography and to establish the capabilities and limits of the two methods. Both approaches detect plastification zone migrations during the yielding process. The results of the tensile and three-point bending tests proved the validity of the IR approach as a method for evaluating the dynamic yielding process when used on complex structures such as cellular porous materials. The stability of the yielding process in the three-point bending test, in contrast to the fluctuation of the plastification front in the tensile test, is of great importance for the validation of numerical constitutive models. The research demonstrated the strong performance, robustness and reliability of the IR approach when used to evaluate yielding during dynamic loading processes, while the 3D DIC method proved superior in the low-velocity loading regimes. This research, based on two basic tests, confirmed the conclusions and suggestions presented in our previous research on porous materials, where mid-wave infrared thermography was applied.
High-Definition Infrared Spectroscopic Imaging
Reddy, Rohith K.; Walsh, Michael J.; Schulmerich, Matthew V.; Carney, P. Scott; Bhargava, Rohit
2013-01-01
The quality of images from an infrared (IR) microscope has traditionally been limited by considerations of throughput and signal-to-noise ratio (SNR). An understanding of the achievable quality as a function of instrument parameters, from first principles, is needed for improved instrument design. Here, we first present a model for light propagation through an IR spectroscopic imaging system based on scalar wave theory. The model analytically describes the propagation of light along the entire beam path from the source to the detector. The effect of the various optical elements and the sample in the microscope is understood in terms of the accessible spatial frequencies by using a Fourier optics approach, and simulations are conducted to gain insight into spectroscopic image formation. The optimal pixel size at the sample plane is calculated and shown to be much smaller than that in current mid-IR microscopy systems. A commercial imaging system is modified, and experimental data are presented to demonstrate the validity of the developed model. Building on this validated theoretical foundation, an optimal sampling configuration is set up. Acquired data were of high spatial quality but, as expected, of poorer SNR. Signal processing approaches were implemented to improve the spectral SNR. The resulting data demonstrated the ability to perform high-definition IR imaging in the laboratory using minimally modified commercial instruments. PMID:23317676
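A back-of-envelope check of the sampling argument, with illustrative numbers (the wavelength and NA below are typical-order values, not the paper's): the diffraction-limited resolution is roughly lambda / (2 NA), so Nyquist sampling requires a pixel at the sample plane no larger than lambda / (4 NA).

```python
wavelength_um = 6.0   # a mid-IR fingerprint-region wavelength (assumed)
na = 0.62             # an illustrative reflective-objective NA (assumed)

resolution_um = wavelength_um / (2 * na)
nyquist_pixel_um = wavelength_um / (4 * na)
print(f"resolution ~ {resolution_um:.2f} um, pixel <= {nyquist_pixel_um:.2f} um")
# Conventional mid-IR systems use coarser pixels than this bound, which is
# the gap the high-definition configuration aims to close.
```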
Development of imaging biomarkers and generation of big data.
Alberich-Bayarri, Ángel; Hernández-Navarro, Rafael; Ruiz-Martínez, Enrique; García-Castro, Fabio; García-Juan, David; Martí-Bonmatí, Luis
2017-06-01
Several image processing algorithms have emerged to cover unmet clinical needs, but their application to radiological routine with a clear clinical impact is still not straightforward. Moving from local infrastructures to big ones, such as medical imaging biobanks (millions of studies), or even federations of medical imaging biobanks (in some cases totaling hundreds of millions of studies), requires the integration of automated pipelines for fast analysis of pooled data to extract clinically relevant conclusions, linked not only to medical imaging but combined with other information such as genetic profiling. A general strategy for the development of imaging biomarkers and their integration in the cloud for quantitative management and exploitation in large databases is herein presented. The proposed platform has been successfully launched and is currently being validated by an early-adopter community of radiologists, clinicians, and medical imaging researchers.
Design and validation of Segment--freely available software for cardiovascular image analysis.
Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan
2010-01-11
Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
Two-dimensional thermography image retrieval from zig-zag scanned data with TZ-SCAN
NASA Astrophysics Data System (ADS)
Okumura, Hiroshi; Yamasaki, Ryohei; Arai, Kohei
2008-10-01
TZ-SCAN is a simple and low-cost thermal imaging device which consists of a single-point radiation thermometer on a tripod with a pan-tilt rotator, a DC motor controller board with a USB interface, and a laptop computer for rotator control, data acquisition, and data processing. TZ-SCAN acquires a series of zig-zag scanned data and stores the data as a CSV file. A 2-D thermal distribution image can be retrieved by using the second quefrency peak calculated from the TZ-SCAN data. An experiment was conducted to confirm the validity of the thermal retrieval algorithm, and the experimental result shows sufficient accuracy for 2-D thermal distribution image retrieval.
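A heavily hedged sketch of the general idea, not the paper's exact algorithm: periodic scan-line structure in the 1D zig-zag stream shows up as a peak in the cepstrum (the quefrency domain), and the chosen peak gives the fold length for reshaping the stream into a 2D image. The CSV file name and the peak-selection heuristic are hypothetical.

```python
import numpy as np

def cepstrum(x):
    """Real cepstrum: IFFT of the log magnitude spectrum."""
    spectrum = np.abs(np.fft.fft(x - np.mean(x)))
    return np.real(np.fft.ifft(np.log(spectrum + 1e-12)))

def fold_zigzag(stream, line_len):
    """Reshape the 1D stream into rows, flipping every other (return) line."""
    rows = stream[: len(stream) // line_len * line_len].reshape(-1, line_len).copy()
    rows[1::2] = rows[1::2, ::-1]
    return rows

stream = np.loadtxt("tz_scan.csv", delimiter=",")   # hypothetical CSV input
c = cepstrum(stream)
line_len = int(np.argsort(c[10:])[-2]) + 10         # crude "second peak" pick,
                                                    # ignoring low quefrencies
image = fold_zigzag(stream, line_len)
```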
Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation
NASA Astrophysics Data System (ADS)
Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.
2017-05-01
In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. The image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters. Therefore, both the relative and the absolute orientation of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points is influenced by several factors, such as their multiplicity, measurement precision and distribution in the 2D images as well as in the 3D scene. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited when only image information can be exploited. Hence, we propose here a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of the first iteration is used as a priori information to guide the extraction of new tie points of better quality. Evaluated on multiple case studies, the proposed method shows its validity and its high potential for precision improvement.
A physiology-based parametric imaging method for FDG-PET data
NASA Astrophysics Data System (ADS)
Scussolini, Mara; Garbarino, Sara; Sambuceti, Gianmario; Caviglia, Giacomo; Piana, Michele
2017-12-01
Parametric imaging is a compartmental approach that processes nuclear imaging data to estimate the spatial distribution of the kinetic parameters governing tracer flow. The present paper proposes a novel and efficient computational method for parametric imaging which is potentially applicable to several compartmental models of diverse complexity and which is effective in the determination of the parametric maps of all kinetic coefficients. We consider applications to [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) data and analyze the two-compartment catenary model describing the standard FDG metabolization by a homogeneous tissue and the three-compartment non-catenary model representing renal physiology. We show uniqueness theorems for both models. The proposed imaging method starts from the reconstructed FDG-PET images of tracer concentration and preliminarily applies image processing algorithms for noise reduction and image segmentation. The optimization procedure solves pixel-wise the non-linear inverse problem of determining the kinetic parameters from dynamic concentration data through a regularized Gauss-Newton iterative algorithm. The reliability of the method is validated against synthetic data, for the two-compartment system, and experimental real data of murine models, for the renal three-compartment system.
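A sketch of the standard two-compartment FDG kinetics and a pixel-wise fit. The paper uses a regularized Gauss-Newton scheme; here scipy's generic least-squares solver stands in, and the input function and rate constants are synthetic placeholders.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

t = np.linspace(0, 60, 61)                       # minutes
cp = 10 * t * np.exp(-t / 4)                     # stand-in plasma input function

def model(k, t, cp_of_t):
    """dC1/dt = k1*Cp - (k2+k3)*C1 + k4*C2;  dC2/dt = k3*C1 - k4*C2."""
    k1, k2, k3, k4 = k
    def rhs(c, ti):
        c1, c2 = c
        cpi = np.interp(ti, t, cp_of_t)
        return [k1 * cpi - (k2 + k3) * c1 + k4 * c2,
                k3 * c1 - k4 * c2]
    c = odeint(rhs, [0.0, 0.0], t)
    return c.sum(axis=1)                         # total tissue concentration

true_k = [0.1, 0.15, 0.05, 0.01]
data = model(true_k, t, cp) + np.random.default_rng(0).normal(0, 0.05, t.size)
fit = least_squares(lambda k: model(k, t, cp) - data,
                    x0=[0.05, 0.1, 0.1, 0.01], bounds=(0, 1))
print(fit.x)  # recovered kinetic parameters for this "pixel"
```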
Musculoskeletal ultrasound and other imaging modalities in rheumatoid arthritis.
Ohrndorf, Sarah; Werner, Stephanie G; Finzel, Stephanie; Backhaus, Marina
2013-05-01
This review covers the use of musculoskeletal ultrasound in patients with rheumatoid arthritis (RA), both in clinical practice and in research. Furthermore, other novel sensitive imaging modalities (high-resolution peripheral quantitative computed tomography and fluorescence optical imaging) are introduced in this article. Recently published studies found power Doppler activity on ultrasound to be highly predictive of later radiographic erosions in patients with RA. Another study found synovitis detected by ultrasound to be predictive of subsequent structural radiographic destruction irrespective of the ultrasound modality (grayscale ultrasound/power Doppler ultrasound). Further studies are currently under way to establish ultrasound findings as imaging biomarkers of the destructive process in RA. The other novel imaging modalities introduced here are in the validation process to establish their impact and significance in inflammatory joint diseases. The introduced imaging modalities show different sensitivities and specificities, as well as strengths and weaknesses, in the assessment of inflammation, differentiation of the involved structures and radiological progression. The review attempts to answer how best to integrate them into daily clinical practice with the aim of improving diagnostic algorithms, daily patient care and, ultimately, disease outcomes.
Deblurring adaptive optics retinal images using deep convolutional neural networks.
Fei, Xiao; Zhao, Junlei; Zhao, Haoxin; Yun, Dai; Zhang, Yudong
2017-12-01
Adaptive optics (AO) can be used to compensate for ocular aberrations to achieve near diffraction-limited high-resolution retinal images. However, many factors, such as the limited accuracy of aberration measurement and correction with AO, intraocular scatter, and imaging noise, degrade the quality of retinal images. Image post-processing is an indispensable and economical way to compensate for the limitations of the AO retinal imaging procedure. In this paper, we propose a deep learning method to restore degraded retinal images for the first time. The method directly learns an end-to-end mapping between the blurred and restored retinal images. The mapping is represented as a deep convolutional neural network that is trained to output high-quality images directly from blurry inputs without any preprocessing. The network was validated on synthetically generated retinal images as well as real AO retinal images. The assessment of the restored retinal images demonstrates that image quality is significantly improved.
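A minimal end-to-end mapping sketch in the spirit described: an SRCNN-style three-layer network, blurred patches in and restored patches out, with no pre-processing. The architecture, patch sizes and training data here are placeholders, not the paper's actual network.

```python
import torch
import torch.nn as nn

class DeblurCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # restored image
        )

    def forward(self, x):
        return self.net(x)

model = DeblurCNN()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

blurred = torch.rand(8, 1, 64, 64)   # stand-in synthetic blurred patches
sharp = torch.rand(8, 1, 64, 64)     # corresponding ground-truth patches
for _ in range(3):                   # training loop skeleton
    optim.zero_grad()
    loss = loss_fn(model(blurred), sharp)
    loss.backward()
    optim.step()
```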
The ground prototype processor: Level-1 production during Sentinel-2 in-orbit acceptance
NASA Astrophysics Data System (ADS)
Petrucci, B.; Dechoz, C.; Lachérade, S.; L'Helguen, C.; Raynaud, J.-L.; Trémas, T.; Picard, C.; Rolland, A.
2015-10-01
Jointly with the European Commission, the European Space Agency (ESA) is developing the Sentinel-2 optical Earth observation mission. Relying on a constellation of satellites put in orbit starting mid-2015, Sentinel-2 will be devoted to the monitoring of land and coastal areas worldwide, with high revisit (5 days with two satellites), high resolution (10 m, 20 m and 60 m), a large swath (290 km), and multi-spectral imagery (13 bands in the visible and shortwave infra-red). In this framework, the French Space Agency (CNES: Centre National d'Etudes Spatiales) supports ESA on activities related to image quality, defining the image products and prototyping the processing techniques. The scope of this paper is to present the Ground Prototype Processor (GPP) that will be in charge of Level-1 production during the Sentinel-2 In-Orbit Acceptance phase. GPP has been developed by a European industrial consortium composed of Advanced Computer Systems (ACS), Magellium and DLR on the basis of the CNES technical specification of Sentinel-2 data processing and under the joint management of ESA-ESTEC and CNES. It will assure the generation of the products used for calibration and validation activities and will provide the reference data for Sentinel-2 Payload Data Ground Segment validation. First, the Sentinel-2 end-user product definitions are recalled, with the associated radiometric and geometric performances; secondly, the methods implemented are presented, with an overview of the ground image processing parameters that need to be tuned during the In-Orbit Acceptance phase to assure the required product performance. Finally, the complexity of the processing having been shown, the challenges of the production in terms of data volume and processing time are highlighted, and the first Sentinel-2 Level-1 products are shown.
Crone, Damien L; Bode, Stefan; Murawski, Carsten; Laham, Simon M
2018-01-01
A major obstacle for the design of rigorous, reproducible studies in moral psychology is the lack of suitable stimulus sets. Here, we present the Socio-Moral Image Database (SMID), the largest standardized moral stimulus set assembled to date, containing 2,941 freely available photographic images, representing a wide range of morally (and affectively) positive, negative and neutral content. The SMID was validated with over 820,525 individual judgments from 2,716 participants, with normative ratings currently available for all images on affective valence and arousal, moral wrongness, and relevance to each of the five moral values posited by Moral Foundations Theory. We present a thorough analysis of the SMID regarding (1) inter-rater consensus, (2) rating precision, and (3) breadth and variability of moral content. Additionally, we provide recommendations for use aimed at efficient study design and reproducibility, and outline planned extensions to the database. We anticipate that the SMID will serve as a useful resource for psychological, neuroscientific and computational (e.g., natural language processing or computer vision) investigations of social, moral and affective processes. The SMID images, along with associated normative data and additional resources are available at https://osf.io/2rqad/.
Smartphone snapshot mapping of skin chromophores under triple-wavelength laser illumination.
Spigulis, Janis; Oshina, Ilze; Berzina, Anna; Bykov, Alexander
2017-09-01
Chromophore distribution maps are useful tools for assessing the severity of skin malformations and for monitoring skin recovery after burns, surgeries, and other interventions. The chromophore maps can be obtained by processing several spectral images of skin, e.g., captured by hyperspectral or multispectral cameras over seconds or even minutes. To avoid motion artifacts and simplify the procedure, a single-snapshot technique for mapping melanin, oxyhemoglobin, and deoxyhemoglobin of in-vivo skin by a smartphone under simultaneous three-wavelength (448–532–659 nm) laser illumination is proposed and examined. Three monochromatic spectral images related to the illumination wavelengths were extracted from the smartphone camera RGB image data set, accounting for crosstalk between the RGB detection bands. The spectral images were further processed according to Beer's law in a three-chromophore approximation. Photon absorption path lengths in skin at the exploited wavelengths were estimated by means of Monte Carlo simulations. The technique was validated clinically on three kinds of skin lesions: nevi, hemangiomas, and seborrheic keratosis. The design of the developed add-on laser illumination system, image-processing details, and the results of clinical measurements are presented and discussed.
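A per-pixel sketch of the three-chromophore Beer's-law inversion: with three wavelengths, the absorbances form a 3x3 linear system per pixel in the unknown concentrations. The extinction-times-path-length matrix below is a placeholder, not the paper's Monte-Carlo-derived values.

```python
import numpy as np

wavelengths = [448, 532, 659]  # nm, as in the triple-laser illumination

# Rows: wavelengths; columns: melanin, HbO2, Hb. Placeholder eps*l products.
M = np.array([[1.9, 1.1, 1.0],
              [1.2, 1.5, 1.3],
              [0.6, 0.2, 0.4]])

def chromophore_maps(absorbance):                  # absorbance: (3, H, W)
    a = absorbance.reshape(3, -1)
    conc = np.linalg.solve(M, a)                   # one 3x3 solve, all pixels
    return conc.reshape(3, *absorbance.shape[1:])  # melanin, HbO2, Hb maps

A = -np.log(np.random.rand(3, 4, 4) * 0.5 + 0.4)   # stand-in absorbance images
melanin, oxy, deoxy = chromophore_maps(A)
```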
Baradez, Marc-Olivier; Marshall, Damian
2011-01-01
The transition from traditional culture methods towards bioreactor based bioprocessing to produce cells in commercially viable quantities for cell therapy applications requires the development of robust methods to ensure the quality of the cells produced. Standard methods for measuring cell quality parameters such as viability provide only limited information making process monitoring and optimisation difficult. Here we describe a 3D image-based approach to develop cell distribution maps which can be used to simultaneously measure the number, confluency and morphology of cells attached to microcarriers in a stirred tank bioreactor. The accuracy of the cell distribution measurements is validated using in silico modelling of synthetic image datasets and is shown to have an accuracy >90%. Using the cell distribution mapping process and principal component analysis we show how cell growth can be quantitatively monitored over a 13 day bioreactor culture period and how changes to manufacture processes such as initial cell seeding density can significantly influence cell morphology and the rate at which cells are produced. Taken together, these results demonstrate how image-based analysis can be incorporated in cell quality control processes facilitating the transition towards bioreactor based manufacture for clinical grade cells. PMID:22028809
Andriole, Katherine P; Morin, Richard L; Arenson, Ronald L; Carrino, John A; Erickson, Bradley J; Horii, Steven C; Piraino, David W; Reiner, Bruce I; Seibert, J Anthony; Siegel, Eliot
2004-12-01
The Society for Computer Applications in Radiology (SCAR) Transforming the Radiological Interpretation Process (TRIP) Initiative aims to spearhead research, education, and discovery of innovative solutions to address the problem of information and image data overload. The initiative will foster interdisciplinary research on technological, environmental and human factors to better manage and exploit the massive amounts of data. TRIP will focus on the following basic objectives: improving the efficiency of interpretation of large data sets, improving the timeliness and effectiveness of communication, and decreasing medical errors. The ultimate goal of the initiative is to improve the quality and safety of patient care. Interdisciplinary research into several broad areas will be necessary to make progress in managing the ever-increasing volume of data. The six concepts involved are human perception, image processing and computer-aided detection (CAD), visualization, navigation and usability, databases and integration, and evaluation and validation of methods and performance. The result of this transformation will affect several key processes in radiology, including image interpretation; communication of imaging results; workflow and efficiency within the health care enterprise; diagnostic accuracy and a reduction in medical errors; and, ultimately, the overall quality of care.
Validation and detection of vessel landmarks by using anatomical knowledge
NASA Astrophysics Data System (ADS)
Beck, Thomas; Bernhardt, Dominik; Biermann, Christina; Dillmann, Rüdiger
2010-03-01
The detection of anatomical landmarks is an important prerequisite for fully automatic analysis of medical images. Several machine learning approaches have been proposed to parse 3D CT datasets and to determine the location of landmarks with associated uncertainty. However, it is a challenging task to incorporate high-level anatomical knowledge to improve these classification results. We propose a new approach to validate candidates for vessel bifurcation landmarks, which is also applied to systematically search for missed landmarks and to validate ambiguous ones. A knowledge base is trained, providing human-readable geometric information on the vascular system, mainly vessel lengths, radii and curvature information, for the validation of landmarks and to guide the search process. To analyze the bifurcation area surrounding a vessel landmark of interest, a new approach based on Fast Marching, incorporating anatomical information from the knowledge base, is proposed. Using the proposed algorithms, an anatomical knowledge base has been generated based on 90 manually annotated CT images containing different parts of the body. To evaluate the landmark validation, a set of 50 carotid datasets was tested in combination with a state-of-the-art landmark detector, with excellent results. Beyond the carotid bifurcation, the algorithm is designed to handle a wide range of vascular landmarks, e.g. the celiac, superior mesenteric, renal, aortic, iliac and femoral bifurcations.
NASA Astrophysics Data System (ADS)
Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa
2018-03-01
Over the past decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, with supervised image classification techniques playing a central role. Hence, using a high-resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, bagged CART, stochastic gradient boosting and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and a support vector machine with linear kernel. To do so, each method was run ten times, and three validation techniques were used to estimate the accuracy statistics, consisting of cross-validation, independent validation and validation against the full training data. Moreover, using ANOVA and Tukey tests, the statistical significance of differences between the classification methods was assessed. In general, the results showed that random forest, by a marginal difference over bagged CART and stochastic gradient boosting, is the best performing method, although based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the others.
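A sketch of the comparison protocol with scikit-learn equivalents (the neural-network variant is omitted, and the data below are dummies standing in for the Worldview-3 pixel features): each classifier is scored by cross-validation and the mean and spread of accuracies are compared.

```python
import numpy as np
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 8))                 # stand-in pixel spectra
y = rng.integers(0, 4, size=300)         # stand-in land-cover labels

models = {
    "bagged CART": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
    "stochastic GBM": GradientBoostingClassifier(subsample=0.5),
    "random forest": RandomForestClassifier(n_estimators=200),
    "linear SVM": SVC(kernel="linear"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name:15s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```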
Dragomir-Daescu, Dan; Buijs, Jorn Op Den; McEligot, Sean; Dai, Yifei; Entwistle, Rachel C.; Salas, Christina; Melton, L. Joseph; Bennet, Kevin E.; Khosla, Sundeep; Amin, Shreyasee
2013-01-01
Clinical implementation of quantitative computed tomography-based finite element analysis (QCT/FEA) of proximal femur stiffness and strength to assess the likelihood of proximal femur (hip) fractures requires a unified modeling procedure, consistency in predicting bone mechanical properties, and validation with realistic test data that represent typical hip fractures, specifically, a sideways fall on the hip. We, therefore, used two sets (n = 9, each) of cadaveric femora with bone densities varying from normal to osteoporotic to build, refine, and validate a new class of QCT/FEA models for hip fracture under loading conditions that simulate a sideways fall on the hip. Convergence requirements of finite element models of the first set of femora led to the creation of a new meshing strategy and a robust process to model proximal femur geometry and material properties from QCT images. We used a second set of femora to cross-validate the model parameters derived from the first set. Refined models were validated experimentally by fracturing femora using specially designed fixtures, load cells, and high speed video capture. CT image reconstructions of fractured femora were created to classify the fractures. The predicted stiffness (cross-validation R2 = 0.87), fracture load (cross-validation R2 = 0.85), and fracture patterns (83% agreement) correlated well with experimental data. PMID:21052839
Lewiss, Resa E; Chan, Wilma; Sheng, Alexander Y; Soto, Jorge; Castro, Alexandra; Meltzer, Andrew C; Cherney, Alan; Kumaravel, Manickam; Cody, Dianna; Chen, Esther H
2015-12-01
The appropriate selection and accurate interpretation of diagnostic imaging is a crucial skill for emergency practitioners. To date, the majority of the published literature and research on competency assessment comes from the subspecialty of point-of-care ultrasound. A group of radiologists, physicists, and emergency physicians convened at the 2015 Academic Emergency Medicine consensus conference to discuss and prioritize a research agenda related to education, assessment, and competency in ordering and interpreting diagnostic imaging. A set of questions was delineated for the continued development of an educational curriculum on diagnostic imaging for trainees and for competency assessment using specific methods based on current best practices. The research priorities were developed through an iterative consensus-driven process using a modified nominal group technique that culminated in an in-person breakout session. The four recommendations are: 1) develop a diagnostic imaging curriculum for emergency medicine (EM) residency training; 2) develop, study, and validate tools to assess competency in diagnostic imaging interpretation; 3) evaluate the role of simulation in education, assessment, and competency measures for diagnostic imaging; and 4) study the American College of Radiology Appropriateness Criteria, an evidence-based, peer-reviewed resource for determining the use of diagnostic imaging, to maximize its value in EM. In this article, the authors review the supporting reliability and validity evidence and make specific recommendations for future research on the education, competency, and assessment of learning diagnostic imaging.
Thermographic Imaging of the Space Shuttle During Re-Entry Using a Near Infrared Sensor
NASA Technical Reports Server (NTRS)
Zalameda, Joseph N.; Horvath, Thomas J.; Kerns, Robbie V.; Burke, Eric R.; Taylor, Jeff C.; Spisz, Tom; Gibson, David M.; Shea, Edward J.; Mercer, C. David; Schwartz, Richard J.;
2012-01-01
High resolution calibrated near infrared (NIR) imagery of the Space Shuttle Orbiter was obtained during hypervelocity atmospheric re-entry of the STS-119, STS-125, STS-128, STS-131, STS-132, STS-133, and STS-134 missions. These data have provided information on the distribution of surface temperature and the state of the airflow over the windward surface of the Orbiter during descent. The thermal imagery complemented data collected with onboard surface thermocouple instrumentation. The spatially resolved global thermal measurements made during the Orbiter's hypersonic re-entry will provide critical flight data for reducing the uncertainty associated with present-day ground-to-flight extrapolation techniques and current state-of-the-art empirical boundary-layer transition or turbulent heating prediction methods. Laminar and turbulent flight data are critical for the validation of physics-based, semi-empirical boundary-layer transition prediction methods, as well as for stimulating the validation of laminar numerical chemistry models and the development of turbulence models supporting NASA's next-generation spacecraft. In this paper we provide details of the NIR imaging system used on both air- and land-based imaging assets. The paper discusses calibrations performed on the NIR imaging systems that permitted conversion of captured radiant intensity (counts) to temperature values. Image processing techniques are presented to analyze the NIR data for vignetting distortion, best resolution, and image sharpness. Keywords: HYTHIRM, Space Shuttle thermography, hypersonic imaging, near infrared imaging, histogram analysis, singular value decomposition, eigenvalue image sharpness
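A schematic sketch of the radiance-to-temperature step such calibrations end in: invert the Planck function at the sensor's effective NIR wavelength. The physical constants are standard; the gain, offset and wavelength are placeholders for the system-specific calibration described in the paper.

```python
import numpy as np

H = 6.626e-34; C = 2.998e8; KB = 1.381e-23   # Planck, light speed, Boltzmann
LAM = 0.85e-6                                 # assumed effective NIR wavelength [m]

def counts_to_temperature(counts, gain=1e-7, offset=0.0):
    radiance = gain * counts + offset         # counts -> spectral radiance (assumed linear)
    # Invert Planck's law: T = hc / (lam * k * ln(1 + 2hc^2 / (lam^5 * L)))
    return (H * C / (LAM * KB)) / np.log1p(2 * H * C**2 / (LAM**5 * radiance))

print(counts_to_temperature(np.array([5e4, 2e5, 1e6])))  # kelvin
```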
NASA Astrophysics Data System (ADS)
Sidiropoulos, Panagiotis; Muller, Jan-Peter; Watson, Gillian; Michael, Gregory; Walter, Sebastian
2018-02-01
This work presents the coregistered, orthorectified and mosaicked high-resolution products of the MC11 quadrangle of Mars, which have been processed using novel, fully automatic techniques. We discuss the development of a pipeline that achieves fully automatic and parameter-independent geometric alignment of high-resolution planetary images, starting from raw input images in NASA PDS format and following all required steps to produce a coregistered geotiff image, a corresponding footprint and useful metadata. Additionally, we describe the development of a radiometric calibration technique that post-processes coregistered images to make them radiometrically consistent. Finally, we present a batch-mode application of the developed techniques over the MC11 quadrangle to validate their potential, as well as to generate end products, which are released to the planetary science community, thus assisting in the analysis of Mars' static and dynamic features. This case study is a step towards the full automation of signal processing tasks that are essential to increase the usability of planetary data but currently require extensive human effort.
Using deep learning for detecting gender in adult chest radiographs
NASA Astrophysics Data System (ADS)
Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.
2018-03-01
In this paper, we present a method for automatically identifying the gender of an imaged person from their frontal chest x-ray images. Our work is motivated by the need to determine missing gender information in some datasets. The proposed method employs convolutional neural network (CNN) based deep learning and transfer learning to overcome the challenge of developing handcrafted features with limited data. Specifically, the method consists of four main steps: pre-processing, a CNN feature extractor, feature selection, and a classifier. The method is tested on a combined dataset obtained from several sources with varying acquisition quality, resulting in different pre-processing steps applied to each. For feature extraction, we tested and compared four CNN architectures, viz., AlexNet, VggNet, GoogLeNet, and ResNet. We applied a feature selection technique, since the feature length is larger than the number of images. Two popular classifiers, SVM and Random Forest, are used and compared. We evaluated the classification performance by cross-validation and used seven performance measures. The best performer is the VggNet-16 feature extractor with the SVM classifier, with an accuracy of 86.6% and an ROC area of 0.932 for 5-fold cross-validation. We also discuss several misclassified cases and describe future work for performance improvement.
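A rough sketch of this pipeline is given below (Python; this is not the authors' code: the torchvision weights API, the fc7 read-out point, the selected feature count of 256, and the stand-in random data are all assumptions for illustration).

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Pretrained VGG-16 as a fixed feature extractor (transfer learning):
# keep the convolutional stack plus the first fully connected layers and
# read out the 4096-D "fc7" activations as image features.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
fc7 = torch.nn.Sequential(*list(vgg.classifier.children())[:5])

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(pil_images):
    """Replicate grayscale x-rays to 3 channels, run VGG-16, and return
    fc7 activations as an (N, 4096) feature matrix."""
    with torch.no_grad():
        batch = torch.stack([preprocess(im.convert("RGB")) for im in pil_images])
        x = torch.flatten(vgg.avgpool(vgg.features(batch)), 1)
        return fc7(x).numpy()

# Feature selection (feature length exceeds image count) + linear SVM,
# scored by 5-fold cross-validation. Random stand-in data shown here;
# real use would call extract_features on the x-ray images.
X, y = np.random.randn(60, 4096), np.random.randint(0, 2, 60)
X_sel = SelectKBest(f_classif, k=256).fit_transform(X, y)
print("5-fold accuracy:", cross_val_score(SVC(kernel="linear"), X_sel, y, cv=5).mean())
```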
Validation of "AW3D" Global DSM Generated from ALOS PRISM
NASA Astrophysics Data System (ADS)
Takaku, Junichi; Tadono, Takeo; Tsutsui, Ken; Ichikawa, Mayumi
2016-06-01
Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM), one of the onboard sensors carried by the Advanced Land Observing Satellite (ALOS), was designed to generate worldwide topographic data with its optical stereoscopic observation. It has the exclusive ability to perform triplet stereo observation, viewing forward, nadir, and backward along the satellite track at 2.5 m ground resolution, and it collected images all over the world during the mission life of the satellite from 2006 through 2011. A new project to generate global elevation datasets from these image archives was started in 2014. The data is processed at an unprecedented 5 m grid spacing utilizing the original triplet stereo images at 2.5 m resolution. As the volume of processed data has grown steadily, to the point where global land areas are almost fully covered, trends in global data quality have become apparent. This paper reports on up-to-date validation results for the accuracy of the data products as well as the status of data coverage in global areas. The accuracies and error characteristics of the datasets are analyzed by comparison with existing global datasets such as Ice, Cloud, and land Elevation Satellite (ICESat) data, as well as ground control points (GCPs) and reference Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR).
Radiometric and Geometric Accuracy Analysis of RASAT PAN Imagery
NASA Astrophysics Data System (ADS)
Kocaman, S.; Yalcin, I.; Guler, M.
2016-06-01
RASAT is the second Turkish Earth Observation satellite, launched in 2011. It operates on the pushbroom principle and acquires panchromatic and MS images with 7.5 m and 15 m resolution, respectively. The swath width of the sensor is 30 km. The main aim of this study is to analyse the radiometric and geometric quality of RASAT images. A systematic validation approach for RASAT imagery and its products is being applied. A RASAT image pair acquired over Kesan city in the Edirne province of Turkey is used for the investigations. The raw RASAT data (L0) are processed by the Turkish Space Agency (TUBITAK-UZAY) to produce higher-level image products. The image products include radiometrically processed (L1), georeferenced (L2) and orthorectified (L3) data, as well as pansharpened images. The image quality assessments include visual inspections and noise, MTF and histogram analyses. The geometric accuracy assessment results are only preliminary, and the assessment is performed using the raw images. The geometric accuracy potential is investigated using 3D ground control points extracted from road intersections, which were measured manually in stereo from aerial images with 20 cm resolution and accuracy. The initial results of the study, obtained using one RASAT panchromatic image pair, are presented in this paper.
Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A
2017-02-11
The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.
Developing an ultrasound correlation velocimetry system
NASA Astrophysics Data System (ADS)
Surup, Gerrit; White, Christopher; UNH Team
2011-11-01
The process of building an ultrasound correlation velocimetry (UCV) system by integrating a commercial medical ultrasound system with a PC running commercial PIV software is described, and preliminary validation measurements in pipe flow using UCV and optical particle image velocimetry (PIV) are reported. In its principles of operation, UCV is similar to the technique of PIV, differing only in the image acquisition process. The benefits of UCV are that it does not require optical access to the flow field and can be used for measuring flows of opaque fluids. The limitations of UCV are its inherently low frame rates (limited by the imaging capabilities of the commercial ultrasound system) and low spatial resolution, which restrict the range of velocities and transient flow behaviors that can be measured. The support of the NSF (CBET0846359, grant monitor Horst Henning Winter) is gratefully acknowledged.
Standardization efforts of digital pathology in Europe.
Rojo, Marcial García; Daniel, Christel; Schrader, Thomas
2012-01-01
EURO-TELEPATH is a European COST Action IC0604. It started in 2007 and will end in November 2011. Its main objectives are evaluating and validating the common technological framework and communication standards required to access, transmit, and manage digital medical records by pathologists and other medical specialties in a networked environment. Working Group 1, "Business Modelling in Pathology," has designed main pathology processes - Frozen Study, Formalin Fixed Specimen Study, Telepathology, Cytology, and Autopsy - using Business Process Modelling Notation (BPMN). Working Group 2 has been dedicated to promoting the application of informatics standards in pathology, collaborating with Integrating Healthcare Enterprise (IHE), Digital Imaging and Communications in Medicine (DICOM), Health Level Seven (HL7), and other standardization bodies. Health terminology standardization research has become a topic of great interest. Future research work should focus on standardizing automatic image analysis and tissue microarrays imaging.
A Perceptually Weighted Rank Correlation Indicator for Objective Image Quality Assessment
NASA Astrophysics Data System (ADS)
Wu, Qingbo; Li, Hongliang; Meng, Fanman; Ngan, King N.
2018-05-01
In the field of objective image quality assessment (IQA), Spearman's $\rho$ and Kendall's $\tau$ are the two most popular rank correlation indicators; they assign uniform weight to all quality levels and assume each pair of images is sortable. They are successful at measuring the average accuracy of an IQA metric in ranking multiple processed images. However, two important perceptual properties are ignored by them as well. First, the sorting accuracy (SA) of high quality images is usually more important than that of poor quality ones in many real-world applications, where only the top-ranked images are pushed to the users. Second, due to the subjective uncertainty in making judgements, two perceptually similar images are usually hardly sortable, and their ranks do not contribute to the evaluation of an IQA metric. To compare different IQA algorithms more accurately, we explore a perceptually weighted rank correlation indicator in this paper, which rewards the capability of correctly ranking high quality images and suppresses the attention paid to insensitive rank mistakes. More specifically, we focus on activating 'valid' pairwise comparisons of image quality, whose difference exceeds a given sensory threshold (ST). Meanwhile, each image pair is assigned a unique weight, determined by both the quality level and the rank deviation. By modifying the perception threshold, we can illustrate the sorting accuracy with a more sophisticated SA-ST curve, rather than a single rank correlation coefficient. The proposed indicator offers a new insight for interpreting visual perception behaviors. Furthermore, the applicability of our indicator is validated in recommending robust IQA metrics for both degraded and enhanced image data.
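A minimal sketch of such an indicator, under an assumed weighting (the paper's exact weight definition is not reproduced here), might look as follows; sweeping the threshold traces an SA-ST-style curve.

```python
import numpy as np

def weighted_rank_correlation(mos, pred, st=0.5):
    """Hedged sketch of a perceptually weighted, Kendall-style indicator.
    mos: subjective quality scores (higher is better); pred: metric scores.
    Only pairs whose subjective difference exceeds the sensory threshold
    `st` are 'valid'; each valid pair is weighted by its quality level so
    that mistakes among high quality images cost more. The weighting is
    illustrative, not the paper's exact formula."""
    n, num, den = len(mos), 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = mos[i] - mos[j]
            if abs(d) <= st:            # perceptually unsortable pair: skip
                continue
            w = max(mos[i], mos[j])     # emphasize high quality pairs
            agree = np.sign(d) == np.sign(pred[i] - pred[j])
            num += w * (1.0 if agree else -1.0)
            den += w
    return num / den if den else 0.0

# Sweep the sensory threshold to trace an SA-ST curve
mos = np.array([4.8, 4.5, 3.0, 2.9, 1.2])
pred = np.array([0.95, 0.90, 0.60, 0.62, 0.20])
for st in (0.1, 0.3, 0.5):
    print(st, round(weighted_rank_correlation(mos, pred, st), 3))
```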
NASA Astrophysics Data System (ADS)
Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad
2018-06-01
Unwanted contrast in high resolution satellite images, such as shadow areas, directly affects the result of further processing in urban remote sensing images. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most of the existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method, a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which resolves the ambiguity between shadows and water bodies. The pixel-based results are then further processed in an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate this new approach. The results show the superiority of the proposed method over state-of-the-art shadow detection methods, with an average F-measure of 96%.
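The two-stage idea can be sketched as follows (Python; the paper's C4 index, which disambiguates shadow from water, is not reproduced, so plain intensity stands in for it, and SLIC superpixels stand in for the paper's objects).

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.segmentation import slic

def object_based_shadow_mask(intensity, rgb, n_segments=400):
    """Stage 1: pixel-level shadow mask from automatic thresholding of a
    darkness index. Stage 2: object-based majority voting over segments,
    so contextual information overrides isolated pixel errors."""
    pixel_mask = intensity < threshold_otsu(intensity)
    segments = slic(rgb, n_segments=n_segments, start_label=0)
    out = np.zeros_like(pixel_mask)
    for s in range(segments.max() + 1):
        sel = segments == s
        out[sel] = pixel_mask[sel].mean() > 0.5   # majority vote per object
    return out

# Toy usage with random data (real input: an intensity band + RGB image)
rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))
mask = object_based_shadow_mask(rgb.mean(axis=2), rgb)
```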
NASA Technical Reports Server (NTRS)
Hooker, Stanford B.; McClain, Charles R.; Mannino, Antonio
2007-01-01
The primary objective of this planning document is to establish a long-term capability for calibrating and validating oceanic biogeochemical satellite data. It is a pragmatic solution to a practical problem based primarily on the lessons learned from prior satellite missions. All of the plan's elements are seen to be interdependent, so a horizontal organizational scheme is anticipated wherein the overall leadership comes from the NASA Ocean Biology and Biogeochemistry (OBB) Program Manager and the entire enterprise is split into two components of equal stature: calibration and validation, plus satellite data processing. The detailed elements of the activity are based on the basic tasks of the two main components plus the current objectives of the Carbon Cycle and Ecosystems Roadmap. The former is distinguished by an internal core set of responsibilities and the latter is facilitated through an external connecting-core ring of competed or contracted activities. The core elements for the calibration and validation component include a) publish protocols and performance metrics; b) verify uncertainty budgets; c) manage the development and evaluation of instrumentation; and d) coordinate international partnerships. The core elements for the satellite data processing component are e) process and reprocess multisensor data; f) acquire, distribute, and archive data products; and g) implement new data products. Both components have shared responsibilities for initializing and temporally monitoring satellite calibration. Connecting-core elements include (but are not restricted to) atmospheric correction and characterization, standards and traceability, instrument and analysis round robins, field campaigns and vicarious calibration sites, in situ databases, bio-optical algorithm (and product) validation, satellite characterization and vicarious calibration, and image processing software. The plan also includes an accountability process, the creation of a Calibration and Validation Team (to help manage the activity), and a discussion of issues associated with the plan's scientific focus.
In flight image processing on multi-rotor aircraft for autonomous landing
NASA Astrophysics Data System (ADS)
Henry, Richard, Jr.
An estimated $6.4 billion was spent during 2013 on developing drone technology around the world, and this spending is expected to double in the next decade. However, drone applications typically demand strong pilot skills, safety, responsibility, and adherence to regulations during flight. If the flight control process could be made safer and more reliable in terms of landing, it would be possible to develop a wider range of applications. The objective of this research effort is to describe the design and evaluation of a fully autonomous unmanned aerial system (UAS), specifically a four-rotor aircraft commonly known as a quadcopter, for precise landing applications. Full landing autonomy is achieved through in-flight image processing for target recognition, employing the open source library OpenCV. All imaging data is processed by a single embedded computer that estimates a relative position with respect to the target landing pad. Results show a 67.88% reduction in the average offset error compared to the current return-to-launch (RTL) method, which relies only on GPS positioning. The present work validates the need to rely on image processing for precise landing applications instead of the inexact method of depending on a commercial low-cost GPS.
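A hedged sketch of the in-flight processing step is shown below (Python/OpenCV, not the project's code; the pad appearance, the pinhole-model conversion, and all parameters are illustrative assumptions).

```python
import cv2
import numpy as np

def target_offset_m(frame_bgr, altitude_m, fx_px, fy_px):
    """Detect a dark landing pad by Otsu thresholding and contour centroid,
    then convert the pixel offset from the image center into meters at the
    current altitude with a pinhole-camera model."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)       # assume pad = largest dark blob
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = gray.shape
    dx_px, dy_px = cx - w / 2.0, cy - h / 2.0
    # Pinhole model: ground offset = pixel offset * altitude / focal length
    return (dx_px * altitude_m / fx_px, dy_px * altitude_m / fy_px)

# Toy usage: a dark square pad on a bright background
frame = np.full((480, 640, 3), 200, np.uint8)
cv2.rectangle(frame, (300, 220), (340, 260), (20, 20, 20), -1)
print(target_offset_m(frame, altitude_m=5.0, fx_px=600.0, fy_px=600.0))
```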
Optimization of spectral bands for hyperspectral remote sensing of forest vegetation
NASA Astrophysics Data System (ADS)
Dmitriev, Egor V.; Kozoderov, Vladimir V.
2013-10-01
Selecting the most informative spectral channels in hyperspectral remote sensing data processing improves the efficiency of the high-performance computers employed. The problem of pattern recognition of remotely sensed land surface objects, with an emphasis on forests, is outlined from the point of view of optimizing the spectral channels used in processing hyperspectral images. The relevant computational procedures are tested using images obtained by a Russian-made hyperspectral camera installed on a gyro-stabilized platform for airborne flight campaigns. A Bayesian classifier is used for pattern recognition of forests of different tree species and ages. A probabilistically optimal algorithm constructed on the basis of the maximum likelihood principle is described, which minimizes the probability of misclassification for this classifier. The classification error, estimated with the standard holdout cross-validation method, is the principal measure of the algorithm's accuracy. Details of the related techniques are presented. Results are shown for selecting the camera's spectral channels while processing the images, taking into account radiometric distortions that diminish the classification accuracy. Spectral channels are selected for the subclasses extracted by the proposed validation techniques, and confusion matrices are constructed that characterize the age composition of the classified pine species as well as broad age-class recognition for pine and birch species with fully illuminated crowns.
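As an illustration of the channel-optimization idea, the sketch below (Python; a greedy forward search and scikit-learn's QDA as the Gaussian maximum-likelihood classifier are stand-ins for the paper's procedure) selects bands by cross-validated accuracy.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def greedy_band_selection(X, y, n_bands=10, cv=5):
    """Greedy forward selection of spectral bands: at each step, add the
    band that maximizes cross-validated accuracy of a Gaussian maximum
    likelihood classifier (QDA), i.e. minimizes the holdout
    misclassification probability."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_bands and remaining:
        scores = []
        for b in remaining:
            acc = cross_val_score(QuadraticDiscriminantAnalysis(),
                                  X[:, selected + [b]], y, cv=cv).mean()
            scores.append((acc, b))
        _, best_b = max(scores)
        selected.append(best_b)
        remaining.remove(best_b)
    return selected

# Toy usage: 200 pixels, 30 bands, 3 classes of random spectra
rng = np.random.default_rng(1)
X, y = rng.normal(size=(200, 30)), rng.integers(0, 3, 200)
print(greedy_band_selection(X, y, n_bands=3))
```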
COFFMAN, MARIKA C.; TRUBANOVA, ANDREA; RICHEY, J. ANTHONY; WHITE, SUSAN W.; KIM-SPOON, JUNGMEEN; OLLENDICK, THOMAS H.; PINE, DANIEL S.
2016-01-01
Attention to faces is a fundamental psychological process in humans, with atypical attention to faces noted across several clinical disorders. Although many clinical disorders onset in adolescence, there is a lack of well-validated stimulus sets containing adolescent faces available for experimental use. Further, the images comprising most available sets are not controlled for high- and low-level visual properties. Here, we present a cross-site validation of the National Institute of Mental Health Child Emotional Faces Picture Set (NIMH-ChEFS), comprised of 257 photographs of adolescent faces displaying angry, fearful, happy, sad, and neutral expressions. All of the direct facial images from the NIMH-ChEFS set were adjusted in terms of location of facial features and standardized for luminance, size, and smoothness. Although overall agreement between raters in this study and the original development-site raters was high (89.52%), this differed by group such that agreement was lower for adolescents relative to mental health professionals in the current study. These results suggest that future research using this face set or others of adolescent/child faces should base comparisons on similarly-aged validation data. PMID:26359940
Antibodies and antimatter: the resurgence of immuno-PET.
Wu, Anna M
2009-01-01
The completion of the human genome, coupled with parallel major research efforts in proteomics and systems biology, has led to a flood of information on the roles of individual genes and proteins in normal physiologic processes and their disruptions in disease. In practical terms, this information has opened the door to increasingly targeted therapies as specific molecular markers are identified and validated. The ongoing transition from empiric to molecular medicine has engendered a need for corresponding molecular diagnostics, including noninvasive molecular imaging. Convergence of knowledge regarding key biomarkers that define normal biologic processes and disease with protein and imaging technology makes this an opportune time to revisit the combination of antibodies and PET, or immuno-PET.
A design of camera simulator for photoelectric image acquisition system
NASA Astrophysics Data System (ADS)
Cai, Guanghui; Liu, Wen; Zhang, Xin
2015-02-01
In developing photoelectric image acquisition equipment, its function and performance must be verified. To let the photoelectric device replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, image data is saved to NAND flash through a USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the system's requirements, pipelining and high-bandwidth-bus techniques are applied in the design to improve the storage rate. The FPGA control logic reads image data out of flash and outputs it separately over three different interfaces, Camera Link, LVDS and PAL, which provides image data for debugging photoelectric image acquisition equipment and for algorithm validation. However, because the standard PAL image resolution is 720×576, which differs from the input image resolution, the image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image sequence formats correctly, and they can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shorten debugging time and improve test efficiency.
Neural networks for computer-aided diagnosis: detection of lung nodules in chest radiograms.
Coppini, Giuseppe; Diciotti, Stefano; Falchini, Massimo; Villari, Natale; Valli, Guido
2003-12-01
The paper describes a neural-network-based system for the computer-aided detection of lung nodules in chest radiograms. Our approach is based on multiscale processing and artificial neural networks (ANNs). The problem of nodule detection is addressed using a two-stage architecture comprising: 1) an attention-focusing subsystem that processes whole radiographs to locate possible nodular regions while ensuring high sensitivity; and 2) a validation subsystem that processes regions of interest to evaluate the likelihood of the presence of a nodule, so as to reduce false alarms and increase detection specificity. Biologically inspired filters (both LoG and Gabor kernels) are used to enhance salient image features. Feedforward ANNs are employed, which allow efficient use of a priori knowledge about the shape of nodules and the background structure. The images from the public JSRT database, including 247 radiograms, were used to build and test the system. We performed a further test using a second, private database of 65 radiograms collected and annotated at the Radiology Department of the University of Florence. Both data sets include nodule and non-nodule radiographs. The use of a public data set, along with independent testing on a different image set, makes comparison with other systems easier and allows a deeper understanding of system behavior. Experimental results are described by ROC/FROC analysis. For the JSRT database, we observed that as sensitivity varies from 60 to 75%, the number of false alarms per image lies in the range 4-10, while accuracy is in the range 95.7-98.0%. Comparable results were obtained with the second data set. The observed system performance supports undertaking system validation in clinical settings.
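The attention-focusing stage can be illustrated with a small sketch (Python; multiscale Laplacian-of-Gaussian filtering stands in for the paper's LoG/Gabor bank, and the scales and candidate count are invented).

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def nodule_candidates(image, sigmas=(4, 8, 12), n_top=20):
    """Multiscale LoG filtering enhances blob-like (nodular) regions; local
    maxima of the response become candidate ROIs for a second-stage
    classifier. The negated LoG responds to bright blobs, matching the
    appearance of nodules in radiographs."""
    response = np.max([-gaussian_laplace(image.astype(float), s) for s in sigmas],
                      axis=0)
    peaks = (response == maximum_filter(response, size=25)) & (response > 0)
    ys, xs = np.nonzero(peaks)
    order = np.argsort(response[ys, xs])[::-1][:n_top]
    return list(zip(ys[order], xs[order]))   # (row, col) candidate centers

# Toy usage: a synthetic radiograph with one bright blob
img = np.zeros((128, 128))
img[60:70, 60:70] = 1.0
print(nodule_candidates(img)[:3])
```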
Microstructural analysis of aluminum high pressure die castings
NASA Astrophysics Data System (ADS)
David, Maria Diana
Microstructural analysis of aluminum high pressure die castings (HPDC) is challenging and time consuming. Automating the stereology method is an efficient way of obtaining quantitative data; however, validating the accuracy of this technique can also pose some challenges. In this research, a semi-automated algorithm to quantify microstructural features in aluminum HPDC was developed. Analysis was done near the casting surface, where the microstructure is fine. Optical micrographs and secondary electron (SE) and backscattered electron (BSE) SEM images were acquired to characterize the features in the casting. Image processing steps applied to the SEM and optical micrographs included median and range filters, dilation, erosion, and a hole-closing function. Measurements were made at different image pixel resolutions ranging from 3 to 35 pixels/μm. Pixel resolutions below 6 px/μm were too low for the algorithm to distinguish the phases from each other. At resolutions higher than 6 px/μm, the volume fraction of primary α-Al and the line intercept count curves plateaued. Within this range, comparable results were obtained, validating the assumption that there is a range of image pixel resolution, relative to the size of the casting features, at which stereology measurements become independent of the image resolution. The volume fraction within this plateau was consistent with the manual measurements, while the line intercept count was significantly higher for the computerized technique at all resolutions. This was attributed to the ragged edges of some primary α-Al; hence, the algorithm still needs some improvement. Further validation of the code using other castings or alloys with known phase amounts and sizes may also be beneficial.
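A sketch of such a processing chain is given below (Python/SciPy, not the thesis code; the threshold rule and kernel sizes are assumptions), ending with the area-fraction estimate of volume fraction.

```python
import numpy as np
from scipy.ndimage import (median_filter, maximum_filter, minimum_filter,
                           binary_dilation, binary_erosion, binary_fill_holes)

def segment_primary_alpha(gray, median_size=5, range_size=5):
    """Median and range filtering, thresholding, dilation/erosion, and hole
    closing to isolate primary alpha-Al regions (bright and locally
    homogeneous in this illustrative rule)."""
    smooth = median_filter(gray.astype(float), size=median_size)
    rng = maximum_filter(smooth, range_size) - minimum_filter(smooth, range_size)
    mask = (smooth > smooth.mean()) & (rng < rng.mean())
    mask = binary_dilation(binary_erosion(mask))     # remove speckle
    return binary_fill_holes(mask)                   # hole-closing step

def volume_fraction(mask):
    """Stereology: the area fraction of a phase estimates its volume fraction."""
    return mask.mean()

# Toy usage
gray = np.random.rand(64, 64)
print(volume_fraction(segment_primary_alpha(gray)))
```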
Rapid Corner Detection Using FPGAs
NASA Technical Reports Server (NTRS)
Morfopoulos, Arin C.; Metz, Brandon C.
2010-01-01
In order to perform precision landings for space missions, a control system must be accurate to within ten meters. Feature detection applied to images taken during descent and correlated against the provided base image is computationally expensive, requiring tens of seconds of processing time for a single image, while the goal is to process multiple images per second. To solve this problem, this algorithm takes that processing load from the central processing unit (CPU) and gives it to a reconfigurable field-programmable gate array (FPGA), which is able to compute data in parallel at very high clock speeds. The workload of the processor then becomes simple: read an image from a camera, transfer it into the FPGA, and read the results back from the FPGA. The Harris Corner Detector uses the determinant and trace to find a corner score, with each step of the computation occurring on independent clock cycles. Essentially, the image is converted into x and y derivative maps. Once three lines of pixel information have been queued up, valid pixel derivatives are clocked into the product and averaging phase of the pipeline. The x and y derivatives are each squared, the product of the x and y derivatives is formed, and each value is stored in a W×N buffer, where W represents the size of the integration window and N is the width of the image. In this particular case, a window size of 5 was chosen, and the image is 640 × 480. Over a W×N window, an equidistant Gaussian is applied (to bring out the stronger corners), and then each value in the entire window is summed and stored. The required components of the equation are then in place, and it is just a matter of taking the determinant and trace. The trace is weighted by a constant k, a value found empirically to lie within 0.04 to 0.15 (0.05 in this implementation). The constant k determines the number of corners available to be compared against a threshold sigma to mark a valid corner. After a fixed delay from when the first pixel is clocked in (to fill the pipeline), a score is produced on each successive clock. This score corresponds to an (x,y) location within the image. If the score is higher than the predetermined threshold sigma, a flag is set high and the location is recorded.
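The arithmetic the pipeline implements is the standard Harris response; a brief software model (Python, written for this summary, with Sobel derivatives and a Gaussian window standing in for the 5×5 hardware window) may help make the data flow concrete.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_scores(image, k=0.05, sigma=1.5):
    """Software model of the FPGA pipeline: x/y derivative maps, their
    squares and product, Gaussian-weighted window sums, then the corner
    score det(M) - k * trace(M)^2 at every pixel. k = 0.05 matches the
    implementation above; `sigma` approximates the 5x5 window."""
    img = image.astype(float)
    ix = sobel(img, axis=1)                  # x derivative map
    iy = sobel(img, axis=0)                  # y derivative map
    sxx = gaussian_filter(ix * ix, sigma)    # windowed sums of products
    syy = gaussian_filter(iy * iy, sigma)
    sxy = gaussian_filter(ix * iy, sigma)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# Corners are pixels whose score exceeds a threshold (the text's sigma)
scores = harris_scores(np.random.rand(64, 64))
corners = np.argwhere(scores > 0.01)
```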
On use of image quality metrics for perceptual blur modeling: image/video compression case
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn
2018-02-01
Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metric (IQM) methods, which heuristically estimate the perceived MTF, has supported the idea that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves, with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.
Translational MR Neuroimaging of Stroke and Recovery
Mandeville, Emiri T.; Ayata, Cenk; Zheng, Yi; Mandeville, Joseph B.
2016-01-01
Multiparametric magnetic resonance imaging (MRI) has become a critical clinical tool for diagnosing focal ischemic stroke severity, staging treatment, and predicting outcome. Imaging during the acute phase focuses on tissue viability in the stroke vicinity, while imaging during recovery requires the evaluation of distributed structural and functional connectivity. Preclinical MRI of experimental stroke models provides validation of non-invasive biomarkers in terms of cellular and molecular mechanisms, while also providing a translational platform for evaluation of prospective therapies. This brief review of translational stroke imaging discusses the acute to chronic imaging transition, the principles underlying common MRI methods employed in stroke research, and experimental results obtained by clinical and preclinical imaging to determine tissue viability, vascular remodeling, structural connectivity of major white matter tracts, and functional connectivity using task-based and resting-state fMRI during the stroke recovery process. PMID:27578048
Artificial intelligence for geologic mapping with imaging spectrometers
NASA Technical Reports Server (NTRS)
Kruse, F. A.
1993-01-01
This project was a three year study at the Center for the Study of Earth from Space (CSES) within the Cooperative Institute for Research in Environmental Science (CIRES) at the University of Colorado, Boulder. The goal of this research was to develop an expert system to allow automated identification of geologic materials based on their spectral characteristics in imaging spectrometer data such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). This requirement was dictated by the volume of data produced by imaging spectrometers, which prohibits manual analysis. The research described is based on the development of automated techniques for analysis of imaging spectrometer data that emulate the analytical processes used by a human observer. The research tested the feasibility of such an approach, implemented an operational system, and tested the validity of the results for selected imaging spectrometer data sets.
TH-A-207B-00: Shear-Wave Imaging and a QIBA US Biomarker Update
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Imaging of tissue elastic properties is a relatively new and powerful approach to one of the oldest and most important diagnostic tools. Imaging of shear wave speed with ultrasound has been added to most high-end ultrasound systems. Understanding this exciting imaging mode and aiding its most effective use in medicine can be a rewarding effort for medical physicists and other medical imaging and treatment professionals. Assuring consistent, quantitative measurements across the many ultrasound systems in a typical imaging department will constitute a major step toward realizing the great potential of this technique and other quantitative imaging. This session will target these two goals with two presentations.
A. Basics and Current Implementations of Ultrasound Imaging of Shear Wave Speed and Elasticity (Shigao Chen, Ph.D.). Learning objectives, to understand: the importance of tissue elasticity measurement; strain versus shear wave elastography (SWE) and the beneficial features of SWE; the link between shear wave speed and material properties, including the influence of viscosity; the generation of shear waves by external vibration (Fibroscan) or by ultrasound radiation force (point push, supersonic push as in the Aixplorer, and comb push as in the GE Logiq E9); the detection of shear waves (motion detection from pulse-echo ultrasound, the importance of frame rate for shear wave imaging, plane wave imaging detection, and how to achieve a high effective frame rate using line-by-line scanners); shear wave speed calculation (time to peak, random sample consensus (RANSAC), and cross correlation); and sources of bias and variation in SWE (tissue viscosity, transducer compression or internal organ pressure, and reflection of shear waves at boundaries).
B. Elasticity Imaging System Biomarker Qualification and User Testing of Systems (Brian Garra, M.D.). Learning objectives, to understand: the need for quantitative medical imaging and examples of quantitative imaging biomarkers; the purpose of the RSNA Quantitative Imaging Biomarker Alliance (QIBA) and the need for such an organization; the QIBA process for creating a quantitative biomarker; and the steps needed to verify adherence of sites, operators, and imaging systems to a QIBA profile. The underlying premise is that objective, quantifiable results are needed to enhance the value of diagnostic imaging in clinical practice: evidence-based medicine requires objective rather than subjective observer data; computerized decision support tools (e.g., CAD) generally require quantitative input; and quantitative, reproducible measures are more easily used to develop personalized molecular diagnostic and treatment systems. Topics include the definition of quantitative imaging from the Imaging Metrology Workshop; the formation (2008), mission, and structure of QIBA; example imaging biomarkers being explored; biomarker selection and groundwork; the drafting of protocols for imaging and data evaluation and of QIBA profiles; technical and clinical validation of equipment and sites; and site and equipment QA and compliance checking. For the ultrasound shear wave speed (SWS) biomarker for liver fibrosis, the session covers background, current status and problems, the biomarker selection process and outcome, groundwork (literature search and analysis results, phase I testing with elastic phantoms, phase II testing with viscoelastic phantoms, and digital simulated data), protocol and profile drafting (protocols based on UPICT and on existing literature and standards-body protocols; current profile claims and manufacturer-specific appendices), profile validation (technical and clinical), QA and compliance approaches (site operator testing and site protocol re-evaluation; manufacturer testing and attestation of imaging systems; user acceptance testing and periodic QA; phantom tests, digital-phantom-based testing, standard QA testing, and remediation schemes), and profile evolution toward additional applications and toward higher accuracy and precision.
Supported in part by NIH contract HHSN268201300071C from NIBIB. Collaboration with GE Global Research, no personal support. S. Chen: some technologies described in this presentation have been licensed; Mayo Clinic and Dr. Chen have financial interests in these technologies.
The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.
Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin
2007-11-01
This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team has adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process philosophically similar to agile methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining a users' and developers' mailing list, providing documentation (an application programming interface reference document and a book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.
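The state-machine discipline can be illustrated in a few lines (Python; the component, states, and events are invented for illustration and are not IGSTK's actual API).

```python
class SafeTracker:
    """Minimal sketch of a state-machine-governed component: every request
    is checked against a transition table, so the component can never
    enter an invalid state."""
    TRANSITIONS = {
        ("Idle", "open"): "Ready",
        ("Ready", "start"): "Tracking",
        ("Tracking", "stop"): "Ready",
        ("Ready", "close"): "Idle",
    }

    def __init__(self):
        self.state = "Idle"

    def request(self, event):
        nxt = self.TRANSITIONS.get((self.state, event))
        if nxt is None:  # invalid transition: reject instead of crashing
            raise ValueError(f"'{event}' not allowed in state {self.state}")
        self.state = nxt
        return nxt

t = SafeTracker()
t.request("open")    # Idle -> Ready
t.request("start")   # Ready -> Tracking
```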
NASA Astrophysics Data System (ADS)
van Eycke, Yves-Rémi; Allard, Justine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine
2017-02-01
Immunohistochemistry (IHC) is a widely used technique in pathology to evidence protein expression in tissue samples. However, this staining technique is known for presenting inter-batch variations. Whole slide imaging in digital pathology offers a possibility to overcome this problem by means of image normalisation techniques. In the present paper we propose a methodology to objectively evaluate the need for image normalisation and to identify the best way to perform it. This methodology uses tissue microarray (TMA) materials and statistical analyses to evidence the possible variations occurring at colour and intensity levels as well as to evaluate the efficiency of image normalisation methods in correcting them. We applied our methodology to test different methods of image normalisation based on blind colour deconvolution that we adapted for IHC staining. These tests were carried out for different IHC experiments on different tissue types, targeting different proteins with different subcellular localisations. Our methodology enabled us to establish and validate inter-batch normalisation transforms which correct the non-relevant IHC staining variations. The normalised image series were then processed to extract coherent quantitative features characterising the IHC staining patterns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aykac, Deniz; Chaum, Edward; Fox, Karen
A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening for diabetic retinopathy (DR) and other eye diseases. In the process of a routine eye-screening examination, other non-image data is often available which may be useful in automated diagnosis of disease. In this work, we report on the results of combining this non-image data with image data, using the protocol and processing steps of a prototype system for automated disease diagnosis of retina examinations from a telemedicine network. The system includes quality assessments, automated physiology detection, and automated lesion detection to create an archive of known cases. Non-image data such as diabetes onset date and hemoglobin A1c (HgA1c) for each patient examination are included as well, and the system is used to create a content-based image retrieval engine capable of automated diagnosis of disease into 'normal' and 'abnormal' categories. The system achieves a sensitivity and specificity of 91.2% and 71.6% using hold-one-out validation testing.
Bühnemann, Claudia; Li, Simon; Yu, Haiyue; Branford White, Harriet; Schäfer, Karl L; Llombart-Bosch, Antonio; Machado, Isidro; Picci, Piero; Hogendoorn, Pancras C W; Athanasou, Nicholas A; Noble, J Alison; Hassan, A Bassim
2014-01-01
Driven by genomic somatic variation, tumour tissues are typically heterogeneous, yet unbiased quantitative methods are rarely used to analyse heterogeneity at the protein level. Motivated by this problem, we developed automated image segmentation of images of multiple biomarkers in Ewing sarcoma to generate distributions of biomarkers between and within tumour cells. We further integrate high dimensional data with patient clinical outcomes utilising random survival forest (RSF) machine learning. Using material from cohorts of genetically diagnosed Ewing sarcoma with EWSR1 chromosomal translocations, confocal images of tissue microarrays were segmented with level sets and watershed algorithms. Each cell nucleus and cytoplasm were identified in relation to DAPI and CD99, respectively, and protein biomarkers (e.g. Ki67, pS6, Foxo3a, EGR1, MAPK) localised relative to nuclear and cytoplasmic regions of each cell in order to generate image feature distributions. The image distribution features were analysed with RSF in relation to known overall patient survival from three separate cohorts (185 informative cases). Variation in pre-analytical processing resulted in elimination of a high number of non-informative images that had poor DAPI localisation or biomarker preservation (67 cases, 36%). The distribution of image features for biomarkers in the remaining high quality material (118 cases, 104 features per case) were analysed by RSF with feature selection, and performance assessed using internal cross-validation, rather than a separate validation cohort. A prognostic classifier for Ewing sarcoma with low cross-validation error rates (0.36) was comprised of multiple features, including the Ki67 proliferative marker and a sub-population of cells with low cytoplasmic/nuclear ratio of CD99. Through elimination of bias, the evaluation of high-dimensionality biomarker distribution within cell populations of a tumour using random forest analysis in quality controlled tumour material could be achieved. Such an automated and integrated methodology has potential application in the identification of prognostic classifiers based on tumour cell heterogeneity.
Analysis of Orientations of Collagen Fibers by Novel Fiber-Tracking Software
NASA Astrophysics Data System (ADS)
Wu, Jun; Rajwa, Bartlomiej; Filmer, David L.; Hoffmann, Christoph M.; Yuan, Bo; Chiang, Ching-Shoei; Sturgis, Jennie; Robinson, J. Paul
2003-12-01
Recent evidence supports the notion that biological functions of extracellular matrix (ECM) are highly correlated to not only its composition but also its structure. This article integrates confocal microscopy imaging and image-processing techniques to analyze the microstructural properties of ECM. This report describes a two- and three-dimensional fiber middle-line tracing algorithm that may be used to quantify collagen fibril organization. We utilized computer simulation and statistical analysis to validate the developed algorithm. These algorithms were applied to confocal images of collagen gels made with reconstituted bovine collagen type I, to demonstrate the computation of orientations of individual fibers.
Unmanned Vehicle Guidance Using Video Camera/Vehicle Model
NASA Technical Reports Server (NTRS)
Sutherland, T.
1999-01-01
A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal image size of 256 x 256 pixels, this subtraction can take a large portion of the time between successive frames in standard-rate video, leaving very little time for other computations. The purpose of this project was to move the software subtraction into hardware to speed up the subtraction process and allow more complex algorithms to be performed, both in hardware and software.
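For context, a minimal sketch of the software operation that was migrated to hardware: pixel-wise subtraction of two consecutive 256 x 256 frames. The synthetic frames are placeholders; the flight code itself is not reproduced here.

```python
# Sketch of consecutive-frame subtraction on nominal 256 x 256 video frames.
import numpy as np

def frame_difference(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Absolute difference of two 8-bit frames, e.g. to isolate a moving target."""
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    return np.abs(diff).astype(np.uint8)

prev_frame = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
curr_frame = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
diff = frame_difference(prev_frame, curr_frame)
print(diff.shape, diff.dtype)
```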
Validation of Clay Modeling as a Learning Tool for the Periventricular Structures of the Human Brain
ERIC Educational Resources Information Center
Akle, Veronica; Peña-Silva, Ricardo A.; Valencia, Diego M.; Rincón-Perez, Carlos W.
2018-01-01
Visualizing anatomical structures and functional processes in three dimensions (3D) are important skills for medical students. However, contemplating 3D structures mentally and interpreting biomedical images can be challenging. This study examines the impact of a new pedagogical approach to teaching neuroanatomy, specifically how building a…
An Introduction to Normalization and Calibration Methods in Functional MRI
ERIC Educational Resources Information Center
Liu, Thomas T.; Glover, Gary H.; Mueller, Bryon A.; Greve, Douglas N.; Brown, Gregory G.
2013-01-01
In functional magnetic resonance imaging (fMRI), the blood oxygenation level dependent (BOLD) signal is often interpreted as a measure of neural activity. However, because the BOLD signal reflects the complex interplay of neural, vascular, and metabolic processes, such an interpretation is not always valid. There is growing evidence that changes…
Satellite on-board real-time SAR processor prototype
NASA Astrophysics Data System (ADS)
Bergeron, Alain; Doucet, Michel; Harnisch, Bernd; Suess, Martin; Marchese, Linda; Bourqui, Pascal; Desnoyers, Nicholas; Legros, Mathieu; Guillot, Ludovic; Mercier, Luc; Châteauneuf, François
2017-11-01
A Compact Real-Time Optronic SAR Processor has been successfully developed and tested up to a Technology Readiness Level of 4 (TRL4), the breadboard validation in a laboratory environment. SAR, or Synthetic Aperture Radar, is an active system allowing day and night imaging independent of the cloud coverage of the planet. The SAR raw data is a set of complex data for range and azimuth, which cannot be compressed. Specifically, for planetary missions and unmanned aerial vehicle (UAV) systems with limited communication data rates this is a clear disadvantage. SAR images are typically processed electronically by applying dedicated Fourier transformations. This, however, can also be performed optically in real time; indeed, the first SAR images were originally processed optically. The optical Fourier processor architecture provides inherent parallel computing capabilities, allowing real-time SAR data processing and thus the ability for compression and strongly reduced communication bandwidth requirements for the satellite. SAR signal return data are in general complex data. Both amplitude and phase must be combined optically in the SAR processor for each range and azimuth pixel. Amplitude and phase are generated by dedicated spatial light modulators and superimposed by an optical relay set-up. The spatial light modulators display the full complex raw data information over a two-dimensional format, one for the azimuth and one for the range. Since the entire signal history is displayed at once, the processor operates in parallel, yielding real-time performance without a processing bottleneck. Processing of both azimuth and range information is performed in a single pass. This paper focuses on the onboard capabilities of the compact optical SAR processor prototype that allows in-orbit processing of SAR images. Examples of processed ENVISAT ASAR images are presented. Various SAR processor parameters such as processing capabilities, image quality (point target analysis), weight and size are reviewed.
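As a digital analogue of the Fourier-based compression the processor performs optically, here is a hedged sketch of SAR range compression by a frequency-domain matched filter; the chirp parameters and the single point target are illustrative assumptions, not ENVISAT ASAR settings.

```python
# Sketch: range compression of a linear-FM echo via a frequency-domain
# matched filter, the digital counterpart of the optical Fourier processing.
import numpy as np

fs, T, B = 60e6, 10e-6, 40e6                  # sample rate, pulse length, bandwidth
t = np.arange(int(T * fs)) / fs
replica = np.exp(1j * np.pi * (B / T) * (t - T / 2) ** 2)   # chirp replica

echo = np.zeros(4096, dtype=complex)
echo[1000:1000 + replica.size] = replica      # point target at range bin 1000

n = echo.size
H = np.conj(np.fft.fft(replica, n))           # matched filter spectrum
compressed = np.fft.ifft(np.fft.fft(echo) * H)
print(np.argmax(np.abs(compressed)))          # compressed peak at the target delay
```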
A spectral water index based on visual bands
NASA Astrophysics Data System (ADS)
Basaeed, Essa; Bhaskar, Harish; Al-Mualla, Mohammed
2013-10-01
Land-water segmentation is an important preprocessing step in a number of remote sensing applications such as target detection, environmental monitoring, and map updating. A Normalized Optical Water Index (NOWI) is proposed to accurately discriminate between land and water regions in multi-spectral satellite imagery data from DubaiSat-1. NOWI exploits the spectral characteristics of water content (using visible bands) and uses a non-linear normalization procedure that places strong emphasis on small changes at lower brightness values whilst guaranteeing that the segmentation process remains image-independent. The NOWI representation is validated through systematic experiments, evaluated using robust metrics, and compared against various supervised classification algorithms. Analysis has indicated that NOWI has the advantages that it: a) is a pixel-based method that requires no global knowledge of the scene under investigation, b) can be easily implemented in parallel processing, c) is image-independent and requires no training, d) works in different environmental conditions, e) provides high accuracy and efficiency, and f) works directly on the input image without any form of pre-processing.
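The abstract does not give the NOWI formula, so the sketch below only illustrates the general pattern it describes: a pixel-wise index over visible bands followed by a non-linear normalization emphasizing low brightness values. The band combination and the gamma stretch are assumptions, not the published index.

```python
# Illustrative only: a normalized visible-band index with a non-linear
# (gamma < 1) normalization that stretches small low-brightness differences.
import numpy as np

def visual_water_index(band_a, band_b, gamma=0.4):
    eps = 1e-6
    nd = (band_a - band_b) / (band_a + band_b + eps)   # assumed band ratio
    nd01 = (nd - nd.min()) / (np.ptp(nd) + eps)        # rescale to [0, 1]
    return nd01 ** gamma                               # emphasize low values

blue = np.random.rand(64, 64)                          # stand-in visible bands
red = np.random.rand(64, 64)
water_mask = visual_water_index(blue, red) > 0.5       # illustrative threshold
```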
Loudos, George K; Papadimitroulas, Panagiotis G; Kagadis, George C
2014-01-01
Monte Carlo (MC) simulations play a crucial role in nuclear medical imaging since they can provide the ground truth for clinical acquisitions by integrating and quantifying all physical parameters that affect image quality. Over the last decade, a number of realistic computational anthropomorphic models have been developed to serve imaging, as well as other biomedical engineering applications. The combination of MC techniques with realistic computational phantoms can provide a powerful tool for pre- and post-processing in imaging, data analysis and dosimetry. This work aims to create a global database for simulated Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) exams, and the methodology, as well as its first elements, are presented. Simulations are performed using the well-validated GATE open-source toolkit, standard anthropomorphic phantoms and activity distributions of various radiopharmaceuticals derived from the literature. The resulting images, projections and sinograms of each study are provided in the database and can be further exploited to evaluate processing and reconstruction algorithms. Patient studies using different characteristics are included in the database and different computational phantoms were tested for the same acquisitions. These include the XCAT, Zubal and Virtual Family phantoms, some of which are used for the first time in nuclear imaging. The created database will be freely available and our current work is towards its extension by simulating additional clinical pathologies.
Chen, Liang; Carlton Jones, Anoma Lalani; Mair, Grant; Patel, Rajiv; Gontsarova, Anastasia; Ganesalingam, Jeban; Math, Nikhil; Dawson, Angela; Aweid, Basaam; Cohen, David; Mehta, Amrish; Wardlaw, Joanna; Rueckert, Daniel; Bentley, Paul
2018-05-15
Purpose To validate a random forest method for segmenting cerebral white matter lesions (WMLs) on computed tomographic (CT) images in a multicenter cohort of patients with acute ischemic stroke, by comparison with fluid-attenuated inversion recovery (FLAIR) magnetic resonance (MR) images and expert consensus. Materials and Methods A retrospective sample of 1082 acute ischemic stroke cases was obtained that was composed of unselected patients who were treated with thrombolysis or who were undergoing contemporaneous MR imaging and CT, and a subset of International Stroke Thrombolysis-3 trial participants. Automated delineations of WML on images were validated relative to experts' manual tracings on CT images and co-registered FLAIR MR imaging, and ratings were performed by using two conventional ordinal scales. Analyses included correlations between CT and MR imaging volumes, and agreements between automated and expert ratings. Results Automated WML volumes correlated strongly with expert-delineated WML volumes at MR imaging and CT (r² = 0.85 and 0.71, respectively; P < .001). Spatial similarity of automated maps, relative to WML at MR imaging, was not significantly different from that of expert WML tracings on CT images. Individual expert WML volumes at CT correlated well with each other (r² = 0.85), but varied widely (range, 91% of mean estimate; median estimate, 11 mL; range of estimated ranges, 0.2-68 mL). Agreements (κ) between automated ratings and consensus ratings were 0.60 (Wahlund system) and 0.64 (van Swieten system) compared with agreements between individual pairs of experts of 0.51 and 0.67, respectively, for the two rating systems (P < .01 for Wahlund system comparison of agreements). Accuracy was unaffected by established infarction, acute ischemic changes, or atrophy (P > .05). Automated preprocessing failure rate was 4%; rating errors occurred in a further 4%. Total automated processing time averaged 109 seconds (range, 79-140 seconds). Conclusion An automated method for quantifying CT cerebral white matter lesions achieves a similar accuracy to experts in unselected and multicenter cohorts. © RSNA, 2018. Online supplemental material is available for this article.
Validation of the Gatortail method for accurate sizing of pulmonary vessels from 3D medical images.
O'Dell, Walter G; Gormaley, Anne K; Prida, David A
2017-12-01
Detailed characterization of changes in vessel size is crucial for the diagnosis and management of a variety of vascular diseases. Because clinical measurement of vessel size is typically dependent on the radiologist's subjective interpretation of the vessel borders, it is often prone to high inter- and intra-user variability. Automatic methods of vessel sizing have been developed for two-dimensional images, but a fully three-dimensional (3D) method suitable for vessel sizing from volumetric X-ray computed tomography (CT) or magnetic resonance imaging has heretofore not been demonstrated and validated robustly. In this paper, we refined and objectively validated Gatortail, a method that creates a mathematical geometric 3D model of each branch in a vascular tree, simulates the appearance of the virtual vascular tree in a 3D CT image, and uses the similarity of the simulated image to a patient's CT scan to drive the optimization of the model parameters, including vessel size, to match that of the patient. The method was validated with a 2-dimensional virtual tree structure under deformation, and with a realistic 3D-printed vascular phantom in which the diameters of 64 branches were manually measured three times each. The phantom was then scanned on a conventional clinical CT imaging system and the images processed with the in-house software to automatically segment and mathematically model the vascular tree, label each branch, and perform the Gatortail optimization of branch size and trajectory. Previously proposed methods of vessel sizing using matched Gaussian filters and tubularity metrics were also tested. The Gatortail method was then demonstrated on the pulmonary arterial tree segmented from a human volunteer's CT scan. The standard deviation of the difference between the manually measured and Gatortail-based radii in the 3D physical phantom was 0.074 mm (0.087 in-plane pixel units for image voxels of dimension 0.85 × 0.85 × 1.0 mm) over the 64 branches, representing vessel diameters ranging from 1.2 to 7 mm. The linear regression fit gave a slope of 1.056 and an R² value of 0.989. These three metrics reflect superior agreement of the radii estimates relative to previously published results over all sizes tested. Sizing via matched Gaussian filters resulted in size underestimates of >33% over all three test vessels, while the tubularity-metric matching exhibited a sizing uncertainty of >50%. In the human chest CT data set, the vessel voxel intensity profiles with and without branch model optimization showed excellent agreement and improvement in the objective measure of image similarity. Gatortail has been demonstrated to be an automated, objective, accurate and robust method for sizing of vessels in 3D non-invasively from chest CT scans. We anticipate that Gatortail, an image-based approach to automatically compute estimates of blood vessel radii and trajectories from 3D medical images, will facilitate future quantitative evaluation of vascular response to disease and environmental insult and improve understanding of the biological mechanisms underlying vascular disease processes. © 2017 American Association of Physicists in Medicine.
Three-dimensional imaging using phase retrieval with two focus planes
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh; Meir, Rinat; Zalevsky, Zeev
2016-03-01
This work presents a technique for full 3D imaging of biological samples tagged with gold nanoparticles (GNPs) using only two images, rather than the many images per volume currently needed for 3D optical sectioning microscopy. The proposed approach is based on the Gerchberg-Saxton (GS) phase retrieval algorithm. The reconstructed field is free-space propagated to all other focus planes in post-processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because we propose to apply phase retrieval to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. In addition, since the method requires the capture of only two images, it can be suitable for 3D live cell imaging. The proposed concept is presented and validated both on simulated data and experimentally.
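A compact sketch of the underlying two-plane Gerchberg-Saxton iteration, with free-space propagation implemented via the angular spectrum method; the wavelength, pixel pitch, and defocus distance in the usage lines are illustrative assumptions, not the paper's experimental values.

```python
# Sketch: two-plane Gerchberg-Saxton phase retrieval with angular-spectrum
# propagation between the two measured focus planes.
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Free-space propagation of a square complex field over distance z."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - FX**2 - FY**2, 0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def gerchberg_saxton(I1, I2, wavelength, dx, z, iters=100):
    """Recover the complex field at plane 1 from intensities at two planes."""
    field = np.sqrt(I1).astype(complex)
    for _ in range(iters):
        f2 = angular_spectrum(field, wavelength, dx, z)
        f2 = np.sqrt(I2) * np.exp(1j * np.angle(f2))        # enforce plane-2 modulus
        field = angular_spectrum(f2, wavelength, dx, -z)
        field = np.sqrt(I1) * np.exp(1j * np.angle(field))  # enforce plane-1 modulus
    return field  # propagate this field anywhere to build the full z-stack

I1 = I2 = np.ones((128, 128))   # stand-ins for the two measured intensities
field = gerchberg_saxton(I1, I2, wavelength=532e-9, dx=1e-7, z=2e-6)
```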
NASA Astrophysics Data System (ADS)
Yuksel, Onur; Baran, Ismet; Ersoy, Nuri; Akkerman, Remko
2018-05-01
Process-induced stresses inherently exist in fiber reinforced polymer composites, particularly in thick parts, due to the presence of non-uniform cure, shrinkage and thermal expansion/contraction during manufacturing. In order to increase the reliability and the performance of composite materials, process models are developed to predict the residual stress formation. The accuracy of the process models depends on the geometrical (micro to macro), material and process parameters as well as the numerical implementation. Therefore, in order to have a reliable process modelling framework, there is a need for validation and, if necessary, calibration of the developed models. This study focuses on the measurement of transverse residual stresses in a relatively thick pultruded profile (20×20 mm) made of glass/polyester. Process-induced residual stresses in the middle of the profile are examined with different techniques which have never before been applied to transverse residual stresses in thick unidirectional composites. The hole-drilling method is employed together with strain gages and digital image correlation. The measured strain values are used in a finite element model (FEM) to simulate the hole-drilling process and predict the residual stress level. The released strain measured by the strain gage is approximately 180 μm/m. The tensile residual stress at the core of the profile is estimated at approximately 7-10 MPa. The proposed methods and measured values will enable validation and calibration of process models based on residual stresses.
Application of implicit attitude measures to the blood donation context.
Warfel, Regina M; France, Christopher R; France, Janis L
2012-02-01
Past blood donation research has relied on explicit (self-report) measures to understand blood donation motivations, but has not yet considered the inherent implicit or automatic processing involved in decision-making. This study addresses this limitation by introducing and validating two novel implicit measures of blood donation attitudes. Healthy young adults (n = 253) performed both image and word versions of a Single Target Implicit Association Test (ST-IAT) and then completed self-report measures of blood donation attitudes, blood and needle fears, social desirability, and donation intention. These results affirmed the validity of the blood donation ST-IATs in at least three ways. First, as expected, nondonors demonstrated more negative implicit donation attitudes than donors. Second, the implicit measures were significantly related in expected directions with explicit measures of donation attitudes as well as blood and needle fears. Finally, implicit donation attitudes were significantly related to donation intention, and the Image ST-IAT (but not the Word ST-IAT) significantly enhanced prediction of donation intention over and above needle fears and marginally enhanced prediction over and above blood fears. Image and word versions of the blood donation ST-IAT offer a valid method of assessing underlying automatic attitudes toward blood donation. © 2012 American Association of Blood Banks.
An Effective Measured Data Preprocessing Method in Electrical Impedance Tomography
Yu, Chenglong; Yue, Shihong; Wang, Jianpei; Wang, Huaxiang
2014-01-01
As an advanced process detection technology, electrical impedance tomography (EIT) has been widely studied in industrial fields. But EIT techniques are greatly limited by low spatial resolution. This problem may result from incorrect preprocessing of the measured data and the lack of a general criterion to evaluate different preprocessing procedures. In this paper, an EIT data preprocessing method is proposed that takes roots of all measured data, and it is evaluated by two indexes constructed from the rooted EIT measurements. By finding the optimums of the two indexes, the proposed method can be applied to improve EIT imaging spatial resolution. In terms of a theoretical model, the optimal rooting exponents for the two indexes lie in [0.23, 0.33] and [0.22, 0.35], respectively. Moreover, the factors that affect the correctness of the proposed method are analyzed. Measured-data preprocessing is necessary and helpful for any imaging process; thus, the proposed method can be generally and widely used in imaging. Experimental results validate the two proposed indexes. PMID:25165735
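A one-line sketch of the rooting idea, under the assumption that "rooting" means raising each measured value to a small fractional exponent q (the abstract's optima fall roughly in [0.22, 0.35]); the measurement vector below is synthetic.

```python
# Sketch: fractional "rooting" of EIT boundary measurements to compress
# their dynamic range before image reconstruction. q is the rooting exponent.
import numpy as np

def root_measurements(v: np.ndarray, q: float = 0.28) -> np.ndarray:
    """Raise each measurement to the q-th power, preserving sign."""
    return np.sign(v) * np.abs(v) ** q

v = np.abs(np.random.randn(208))     # e.g., a 16-electrode adjacent protocol
v_pre = root_measurements(v, q=0.28)
```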
McEvoy, Fintan J; Shen, Nicholas W; Nielsen, Dorte H; Buelund, Lene E; Holm, Peter
2017-02-01
Communicating radiological reports to peers has pedagogical value. Students may be uneasy with the process due to a lack of communication and peer-review skills or to their failure to see value in the process. We describe a communication exercise with peer review in an undergraduate veterinary radiology course. The computer code used to manage the course and deliver images online is reported, and we provide links to the executable files. We tested whether undergraduate peer review of radiological reports has validity and describe student impressions of the learning process. Peer review scores for student-generated radiological reports were compared to scores obtained in the summative multiple choice (MCQ) examination for the course. Student satisfaction was measured using a bespoke questionnaire. There was a weak positive correlation (Pearson correlation coefficient = 0.32, p < 0.01) between the peer review scores students received and the scores they obtained in the MCQ examination. The difference in peer review scores received by students grouped according to their level of course performance (high vs. low) was statistically significant (p < 0.05). No correlation was found between the peer review scores awarded by the students and the scores they obtained in the MCQ examination (Pearson correlation coefficient = 0.17, p = 0.14). In conclusion, we have created a realistic radiology imaging exercise with readily available software. The peer review scores are valid in that, to a limited degree, they reflect students' future performance in an examination. Students valued the process of learning to communicate radiological findings but did not fully appreciate the value of peer review.
NASA Astrophysics Data System (ADS)
Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas
2017-04-01
The development and optimization of image processing algorithms requires the availability of datasets depicting every step from the Earth's surface to the sensor's detector. The lack of ground-truth data makes it necessary to develop algorithms on simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up consisting of a forward simulator, a backward simulator and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES) or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces have been simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, resulting in increased accuracy of temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
NASA Technical Reports Server (NTRS)
Traub, W. A.
1984-01-01
The first physical demonstration of the principle of image reconstruction using a set of images from a diffraction-blurred elongated aperture is reported. This is an optical validation of previous theoretical and numerical simulations of the COSMIC telescope array (coherent optical system of modular imaging collectors). The present experiment utilizes 17 diffraction-blurred exposures of a laboratory light source, as imaged by a lens covered by a narrow-slit aperture; the aperture is rotated 10 degrees between each exposure. The images are recorded in digitized form by a CCD camera, Fourier transformed, numerically filtered, and added; the sum is then filtered and inverse Fourier transformed to form the final image. The image reconstruction process is found to be stable with respect to uncertainties in the values of all physical parameters such as effective wavelength, rotation angle, pointing jitter, and aperture shape. Future experiments will explore the effects of low counting rates, autoguiding on the image, various aperture configurations, and separated optics.
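A numerical analogue of this reconstruction, as a hedged sketch: each exposure keeps the Fourier content passed by a slit-shaped mask, the masked spectra from 17 rotation angles are accumulated, re-weighted where orientations overlap, and inverse-transformed. The mask width and the simple inverse weighting are assumptions, not the experiment's actual filters.

```python
# Sketch: synthesizing an image from 17 slit-aperture exposures rotated in
# 10-degree steps, by accumulating and re-weighting masked Fourier spectra.
import numpy as np

def slit_mask(n, width, angle_deg):
    y, x = np.indices((n, n)) - n // 2
    t = np.deg2rad(angle_deg)
    d = np.abs(-x * np.sin(t) + y * np.cos(t))  # distance from slit long axis
    return (d <= width / 2).astype(float)

def synthesize(obj, n_exposures=17, step_deg=10, width=4):
    n = obj.shape[0]
    F = np.fft.fftshift(np.fft.fft2(obj))
    acc = np.zeros_like(F)
    cov = np.zeros((n, n))
    for k in range(n_exposures):
        m = slit_mask(n, width, k * step_deg)
        acc += F * m            # spectrum retained by one rotated exposure
        cov += m                # how often each frequency was covered
    acc /= np.maximum(cov, 1)   # simple inverse filter over covered region
    return np.abs(np.fft.ifft2(np.fft.ifftshift(acc)))

obj = np.zeros((128, 128)); obj[40:90, 60:70] = 1.0   # toy scene
recon = synthesize(obj)
```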
Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging
Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao
2016-01-01
Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method’s applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method’s advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114
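A small sketch of the segmental Hilbert transformation step, assuming it amounts to recovering the analytic signal independently on each signature segment via scipy.signal.hilbert; the chirp-like test signal and the segment boundary are illustrative.

```python
# Sketch: segment-wise Hilbert transformation of a target signature to
# estimate a complex radio holographic signal (RHS).
import numpy as np
from scipy.signal import hilbert

def segmental_hilbert(signal, boundaries):
    """Apply the Hilbert transform independently on each signal segment."""
    analytic = np.empty(signal.shape, dtype=complex)
    edges = [0, *boundaries, len(signal)]
    for lo, hi in zip(edges[:-1], edges[1:]):
        analytic[lo:hi] = hilbert(signal[lo:hi])
    return analytic

t = np.linspace(0, 1, 1024)
rhs = segmental_hilbert(np.cos(2 * np.pi * 40 * t**2), boundaries=[512])
```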
Pigment network-based skin cancer detection.
Alfed, Naser; Khelifi, Fouad; Bouridane, Ahmed; Seker, Huseyin
2015-08-01
Diagnosing skin cancer in its early stages is a challenging task for dermatologists; given that the chance of a patient's survival is higher when the disease is caught early, the process of analyzing skin images and making decisions should be time-efficient. Therefore, diagnosing the disease using automated and computerized systems has nowadays become essential. This paper proposes an efficient system for skin cancer detection on dermoscopic images. It has been shown that the statistical characteristics of the pigment network, extracted from the dermoscopic image, could be used as efficient discriminating features for cancer detection. The proposed system has been assessed on a dataset of 200 dermoscopic images of the 'Hospital Pedro Hispano' [1] and the results of cross-validation have shown high detection accuracy.
American Alcohol Photo Stimuli (AAPS): A standardized set of alcohol and matched non-alcohol images.
Stauffer, Christopher S; Dobberteen, Lily; Woolley, Joshua D
2017-11-01
Photographic stimuli are commonly used to assess cue reactivity in the research and treatment of alcohol use disorder. The stimuli used are often non-standardized, not properly validated, and poorly controlled. There are no previously published, validated, American-relevant sets of alcohol images created in a standardized fashion. We aimed to: 1) make available a standardized, matched set of photographic alcohol and non-alcohol beverage stimuli, 2) establish face validity, the extent to which the stimuli are subjectively viewed as what they are purported to be, and 3) establish construct validity, the degree to which a test measures what it claims to be measuring. We produced a standardized set of 36 images consisting of American alcohol and non-alcohol beverages matched for basic color, form, and complexity. A total of 178 participants (95 male, 82 female, 1 genderqueer) rated each image for appetitiveness. An arrow-probe task, in which matched pairs were categorized after being presented for 200 ms, assessed face validity. Criteria for construct validity were met if variation in AUDIT scores was associated with variation in performance on tasks during alcohol image presentation. Overall, images were categorized with >90% accuracy. Participants' AUDIT scores correlated significantly with alcohol "want" and "like" ratings [r(176) = 0.27, p < 0.001; r(176) = 0.36, p < 0.001] and arrow-probe latency [r(176) = -0.22, p = 0.004], but not with non-alcohol outcomes. Furthermore, appetitive ratings and arrow-probe latency for alcohol, but not non-alcohol, differed significantly for heavy versus light drinkers. Our image set provides valid and reliable alcohol stimuli for both explicit and implicit tests of cue reactivity. The use of standardized, validated, reliable image sets may improve consistency across research and treatment paradigms.
Moreno-Martínez, Francisco Javier; Montoro, Pedro R
2012-01-01
This work presents a new set of 360 high-quality colour images belonging to 23 semantic subcategories. Two hundred and thirty-six Spanish speakers named the items and also provided data on seven relevant psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality and visual complexity. Furthermore, we also present lexical frequency data derived from Internet search hits. Apart from the high number of variables evaluated, which are known to affect stimulus processing, this new set presents important advantages over other similar image corpora: (a) this corpus presents a broad number of subcategories and images; for example, this will permit researchers to select stimuli of appropriate difficulty as required (e.g., to deal with problems derived from ceiling effects); (b) the use of coloured stimuli provides a more realistic, ecologically valid representation of real-life objects. In sum, this set of stimuli provides a useful tool for research on visual object- and word-processing, both in neurological patients and in healthy controls.
Quantitative subsurface analysis using frequency modulated thermal wave imaging
NASA Astrophysics Data System (ADS)
Subhani, S. K.; Suresh, B.; Ghali, V. S.
2018-01-01
Estimating the depth of subsurface anomalies with enhanced depth resolution is a challenging task in thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the thermal response with a suitable post-processing approach to resolve subsurface details. But the conventional Fourier-transform-based post-processing methods unscramble the frequencies with limited frequency resolution and hence yield a finite depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which can further improve the depth resolution to axially resolve the finest subsurface features. Quantitative depth analysis with this augmented depth resolution is proposed to provide a closer estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first solution for quantitative depth estimation in frequency modulated thermal wave imaging.
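To make the spectral-zooming idea concrete, here is a hedged sketch using SciPy's chirp z-transform (scipy.signal.czt, available from SciPy 1.8) to evaluate a spectrum on a dense grid confined to a narrow band; the sampling rate, zoom band, and the two closely spaced test tones are illustrative assumptions.

```python
# Sketch: spectral zoom over a narrow band via the chirp z-transform,
# yielding far denser frequency sampling than the plain FFT bin spacing.
import numpy as np
from scipy.signal import czt

fs = 100.0                          # sampling rate of the thermal response (Hz)
t = np.arange(0, 200, 1 / fs)       # long record for fine spectral detail
x = np.cos(2 * np.pi * 0.213 * t) + np.cos(2 * np.pi * 0.226 * t)

f1, f2, m = 0.15, 0.30, 2048        # zoom band and number of output points
w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))   # ratio between z-plane points
a = np.exp(2j * np.pi * f1 / fs)                 # starting point at f1
spectrum = czt(x, m=m, w=w, a=a)
freqs = f1 + (f2 - f1) * np.arange(m) / m
print(freqs[np.argmax(np.abs(spectrum))])        # peak on the dense zoom grid
```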
Raina, A; Hennessy, R; Rains, M; Allred, J; Hirshburg, J M; Diven, D G; Markey, M K
2016-08-01
Traditional metrics for evaluating the severity of psoriasis are subjective, which complicates efforts to measure effective treatments in clinical trials. We collected images of psoriasis plaques and calibrated the coloration of the images according to an included color card. Features were extracted from the images and used to train a linear discriminant analysis classifier with cross-validation to automatically classify the degree of erythema. The results were tested against numerical scores obtained by a panel of dermatologists using a standard rating system. Quantitative measures of erythema based on the digital color images showed good agreement with subjective assessment of erythema severity (κ = 0.4203). The color calibration process improved the agreement from κ = 0.2364 to κ = 0.4203. We propose a method for the objective measurement of the psoriasis severity parameter of erythema and show that the calibration process improved the results. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
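A minimal sketch of the described classification step: color features from calibrated plaque images feeding a linear discriminant analysis classifier scored by cross-validation. The synthetic features, four-grade labels, and five-fold scheme are assumptions; the study's actual features came from the color-card-calibrated images.

```python
# Hedged sketch: LDA classification of erythema grade from color features,
# assessed with k-fold cross-validation. All data are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 6))      # e.g., mean color values of plaque vs. skin
y = rng.integers(0, 4, size=150)   # erythema grade assigned by dermatologists

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("mean cross-validation accuracy:", scores.mean())
```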
Normalized Polarization Ratios for the Analysis of Cell Polarity
Shimoni, Raz; Pham, Kim; Yassin, Mohammed; Ludford-Menting, Mandy J.; Gu, Min; Russell, Sarah M.
2014-01-01
The quantification and analysis of molecular localization in living cells is increasingly important for elucidating biological pathways, and new methods are rapidly emerging. The quantification of cell polarity has generated much interest recently, and ratiometric analysis of fluorescence microscopy images provides one means to quantify cell polarity. However, detection of fluorescence, and the ratiometric measurement, is likely to be sensitive to acquisition settings and image processing parameters. Using imaging of EGFP-expressing cells and computer simulations of variations in fluorescence ratios, we characterized the dependence of ratiometric measurements on processing parameters. This analysis showed that image settings alter polarization measurements; and that clustered localization is more susceptible to artifacts than homogeneous localization. To correct for such inconsistencies, we developed and validated a method for choosing the most appropriate analysis settings, and for incorporating internal controls to ensure fidelity of polarity measurements. This approach is applicable to testing polarity in all cells where the axis of polarity is known. PMID:24963926
NASA Astrophysics Data System (ADS)
Boisson, F.; Wimberley, C. J.; Lehnert, W.; Zahra, D.; Pham, T.; Perkins, G.; Hamze, H.; Gregoire, M.-C.; Reilhac, A.
2013-10-01
Monte Carlo-based simulation of positron emission tomography (PET) data plays a key role in the design and optimization of data correction and processing methods. Our first aim was to adapt and configure the PET-SORTEO Monte Carlo simulation program for the geometry of the widely distributed Inveon PET preclinical scanner manufactured by Siemens Preclinical Solutions. The validation was carried out against actual measurements performed on the Inveon PET scanner at the Australian Nuclear Science and Technology Organisation and at the Brain & Mind Research Institute, strictly following the NEMA NU 4-2008 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction and count rates, image quality and Derenzo phantom studies. Results showed that PET-SORTEO reliably reproduces the performance of this Inveon preclinical system. In addition, imaging studies showed that the PET-SORTEO simulation program provides raw data for the Inveon scanner that can be fully corrected and reconstructed using the same programs as for the actual data. All correction techniques (attenuation, scatter, randoms, dead-time, and normalization) can be applied to the simulated data, leading to fully quantitative reconstructed images. In the second part of the study, we demonstrated its ability to generate fast and realistic biological studies. PET-SORTEO is a workable and reliable tool that can be used, in a classical way, to validate and/or optimize a single PET data processing step such as a reconstruction method. However, we demonstrated that, by combining condition groups within a realistic simulated biological study ([11C]Raclopride here), simulation also allows one to assess and optimize the data correction, reconstruction and data processing flow as a whole, specifically for each biological study, which is our ultimate intent.
A pipeline for comprehensive and automated processing of electron diffraction data in IPLT.
Schenk, Andreas D; Philippsen, Ansgar; Engel, Andreas; Walz, Thomas
2013-05-01
Electron crystallography of two-dimensional crystals allows the structural study of membrane proteins in their native environment, the lipid bilayer. Determining the structure of a membrane protein at near-atomic resolution by electron crystallography remains, however, a very labor-intense and time-consuming task. To simplify and accelerate the data processing aspect of electron crystallography, we implemented a pipeline for the processing of electron diffraction data using the Image Processing Library and Toolbox (IPLT), which provides a modular, flexible, integrated, and extendable cross-platform, open-source framework for image processing. The diffraction data processing pipeline is organized as several independent modules implemented in Python. The modules can be accessed either from a graphical user interface or through a command line interface, thus meeting the needs of both novice and expert users. The low-level image processing algorithms are implemented in C++ to achieve optimal processing performance, and their interface is exported to Python using a wrapper. For enhanced performance, the Python processing modules are complemented with a central data managing facility that provides a caching infrastructure. The validity of our data processing algorithms was verified by processing a set of aquaporin-0 diffraction patterns with the IPLT pipeline and comparing the resulting merged data set with that obtained by processing the same diffraction patterns with the classical set of MRC programs. Copyright © 2013 Elsevier Inc. All rights reserved.
Storelli, L; Pagani, E; Rocca, M A; Horsfield, M A; Gallo, A; Bisecco, A; Battaglini, M; De Stefano, N; Vrenken, H; Thomas, D L; Mancini, L; Ropele, S; Enzinger, C; Preziosa, P; Filippi, M
2016-07-21
The automatic segmentation of MS lesions could reduce time required for image processing together with inter- and intraoperator variability for research and clinical trials. A multicenter validation of a proposed semiautomatic method for hyperintense MS lesion segmentation on dual-echo MR imaging is presented. The classification technique used is based on a region-growing approach starting from manual lesion identification by an expert observer with a final segmentation-refinement step. The method was validated in a cohort of 52 patients with relapsing-remitting MS, with dual-echo images acquired in 6 different European centers. We found a mathematic expression that made the optimization of the method independent of the need for a training dataset. The automatic segmentation was in good agreement with the manual segmentation (dice similarity coefficient = 0.62 and root mean square error = 2 mL). Assessment of the segmentation errors showed no significant differences in algorithm performance between the different MR scanner manufacturers (P > .05). The method proved to be robust, and no center-specific training of the algorithm was required, offering the possibility for application in a clinical setting. Adoption of the method should lead to improved reliability and less operator time required for image analysis in research and clinical trials in MS. © 2016 American Society of Neuroradiology.
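A toy sketch of the core region-growing idea, starting from a manually identified seed and absorbing 4-connected neighbours within an intensity tolerance; the tolerance value and the synthetic image are assumptions, and the paper's final segmentation-refinement step is omitted.

```python
# Sketch: seeded region growing on a 2D slice; a pixel joins the region if
# its intensity stays within a tolerance of the seed intensity.
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.15):
    mask = np.zeros(img.shape, dtype=bool)
    ref = img[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not mask[nr, nc] and abs(img[nr, nc] - ref) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

lesion_mask = region_grow(np.random.rand(64, 64), seed=(32, 32))
```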
Farris, Dominic James; Lichtwark, Glen A
2016-05-01
Dynamic measurements of human muscle fascicle length from sequences of B-mode ultrasound images have become increasingly prevalent in biomedical research. Manual digitisation of these images is time consuming and algorithms for automating the process have been developed. Here we present a freely available software implementation of a previously validated algorithm for semi-automated tracking of muscle fascicle length in dynamic ultrasound image recordings, "UltraTrack". UltraTrack implements an affine extension to an optic flow algorithm to track movement of the muscle fascicle end-points throughout dynamically recorded sequences of images. The underlying algorithm has been previously described and its reliability tested, but here we present the software implementation with features for: tracking multiple fascicles in multiple muscles simultaneously; correcting temporal drift in measurements; manually adjusting tracking results; saving and re-loading of tracking results and loading a range of file formats. Two example runs of the software are presented detailing the tracking of fascicles from several lower limb muscles during a squatting and walking activity. We have presented a software implementation of a validated fascicle-tracking algorithm and made the source code and standalone versions freely available for download. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Angular relational signature-based chest radiograph image view classification.
Santosh, K C; Wendling, Laurent
2018-01-22
In a computer-aided diagnosis (CAD) system, especially for chest radiograph or chest X-ray (CXR) screening, CXR image view information is required. Automatically separating frontal and lateral CXR image views can ease the subsequent CXR screening process, since the techniques may not work equally well for both views. We present a novel technique to classify frontal and lateral CXR images, where we introduce an angular relational signature through the force histogram to extract features and apply three different state-of-the-art classifiers: multi-layer perceptron, random forest, and support vector machine to make a decision. We validated our fully automatic technique on a set of 8100 images hosted by the U.S. National Library of Medicine (NLM), National Institutes of Health (NIH), and achieved an accuracy close to 100%. Our method outperforms the state-of-the-art methods in terms of processing time (less than or close to 2 s for the whole test data) while achieving comparable accuracy, which justifies its practicality. Graphical Abstract: Interpreting chest X-ray (CXR) through the angular relational signature.
Soft computing approach to 3D lung nodule segmentation in CT.
Badura, P; Pietka, E
2014-10-01
This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm: mask generation. Its main goal is to process some specific types of nodules connected to the pleura or vessels. It consists of basic image processing operations as well as dedicated routines for the specific cases of nodules. The evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release, the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.
Breast cancer mitosis detection in histopathological images with spatial feature extraction
NASA Astrophysics Data System (ADS)
Albayrak, Abdülkadir; Bilgin, Gökhan
2013-12-01
In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is an expensive and time-consuming process. The development of digital imaging in pathology has enabled reasonable and effective solutions to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than the other normal cells. Hence, it is important to incorporate spatial information in feature extraction or in post-processing steps. As a main part of this study, the Haralick texture descriptor has been proposed with different spatial window sizes in RGB and La*b* color spaces, so that spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared with various sample sizes by Support Vector Machines using the k-fold cross-validation method. The presented results show that separation accuracy on mitotic and non-mitotic cellular pixels improves with increasing size of the spatial window.
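A condensed sketch of this feature pipeline: gray-level co-occurrence matrices per pixel window, summarized by Haralick-style properties, then classified with an SVM under k-fold cross-validation. The window size, property subset, and synthetic patches are assumptions, and the color-space variants discussed in the paper are omitted.

```python
# Sketch: Haralick-style texture features from per-window co-occurrence
# matrices, classified by an SVM with k-fold cross-validation.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def haralick_features(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(2)
patches = rng.integers(0, 256, size=(80, 21, 21), dtype=np.uint8)  # 21x21 windows
labels = rng.integers(0, 2, size=80)       # 1 = mitotic, 0 = non-mitotic
X = np.array([haralick_features(p) for p in patches])
print(cross_val_score(SVC(), X, labels, cv=5).mean())
```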
A Robust Actin Filaments Image Analysis Framework
Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem
2016-01-01
The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in two different conditions: static (control) and fluid shear stress. The proposed methodology exhibited higher sensitivity values and similar accuracy compared to state-of-the-art methods. PMID:27551746
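A hedged sketch of steps (i) and (ii) using off-the-shelf scikit-image building blocks: total-variation denoising as a stand-in for the cartoon/texture decomposition and a multi-scale ridge filter as the line detector. These are named substitutes, not the paper's exact operators, and the merging step (iii) is omitted.

```python
# Sketch: cartoon/texture split via TV denoising, then a multi-scale
# line (ridge) response on the cartoon part to highlight filaments.
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from skimage.filters import sato

img = np.random.rand(128, 128)                    # stand-in fluorescence image
cartoon = denoise_tv_chambolle(img, weight=0.1)   # structure; texture = img - cartoon
ridges = sato(cartoon, sigmas=[1, 2, 3], black_ridges=False)  # line response
filament_mask = ridges > ridges.mean() + 2 * ridges.std()     # crude threshold
```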
Switching non-local vector median filter
NASA Astrophysics Data System (ADS)
Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji
2016-04-01
This paper describes a novel image filtering method that removes random-valued impulse noise superimposed on a natural color image. In impulse noise removal, it is essential to employ a switching-type filtering method, as used in the well-known switching median filter, to preserve the detail of an original image with good quality. In color image filtering, it is generally preferable to deal with the red (R), green (G), and blue (B) components of each pixel of a color image as elements of a vectorized signal, as in the well-known vector median filter, rather than as component-wise signals, to prevent a color shift after filtering. By taking these fundamentals into consideration, we propose a switching-type vector median filter with non-local processing that mainly consists of a noise detector and a noise removal filter. Concretely, we propose a noise detector that proactively detects noise-corrupted pixels by focusing attention on the isolation tendencies of pixels of interest, not in the input image but in difference images between RGB components. Furthermore, as the noise removal filter, we propose an extended version of the non-local median filter that we previously proposed for grayscale image processing; the extension, named the non-local vector median filter, is designed for color image processing. The proposed method realizes a superior balance between the preservation of detail and impulse noise removal by proactive noise detection and non-local switching vector median filtering, respectively. The effectiveness and validity of the proposed method are verified in a series of experiments using natural color images.
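For orientation, a naive sketch of the classical local (non-switching) vector median filter that the proposed method builds on: each output pixel becomes the RGB vector in its window with the smallest summed distance to the other window vectors. The window radius and brute-force loop are illustrative, and image borders are left unfiltered.

```python
# Sketch: plain vector median filtering of an RGB image; the output pixel is
# the window vector minimizing the sum of Euclidean distances to the others.
import numpy as np

def vector_median(img, radius=1):
    h, w, _ = img.shape
    out = img.copy()                 # borders stay unfiltered in this sketch
    for r in range(radius, h - radius):
        for c in range(radius, w - radius):
            win = img[r - radius:r + radius + 1,
                      c - radius:c + radius + 1].reshape(-1, 3)
            dists = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2)
            out[r, c] = win[np.argmin(dists.sum(axis=1))]
    return out

noisy = np.random.rand(32, 32, 3)
filtered = vector_median(noisy)
```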
NASA Technical Reports Server (NTRS)
Blonksi, Slawomir; Gasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki
2001-01-01
Multispectral data requirements for Earth science applications are not always rigorously studied before a new remote sensing system is designed. A study of the spatial resolution, spectral bandpasses, and radiometric sensitivity requirements of real-world applications would focus the design onto providing maximum benefits to the end-user community. To support systematic studies of multispectral data requirements, the Applications Research Toolbox (ART) has been developed at NASA's Stennis Space Center. The ART software allows users to create and assess simulated datasets while varying a wide range of system parameters. The simulations are based on data acquired by existing multispectral and hyperspectral instruments. The produced datasets can be further evaluated for specific end-user applications. Spectral synthesis of multispectral images from hyperspectral data is a key part of the ART software. In this process, hyperspectral image cubes are transformed into multispectral imagery without changes in spatial sampling and resolution. The transformation algorithm takes into account the spectral responses of both the synthesized, broad, multispectral bands and the utilized, narrow, hyperspectral bands. To validate the spectral synthesis algorithm, simulated multispectral images are compared with images collected near-coincidentally by the Landsat 7 ETM+ and the EO-1 ALI instruments. Hyperspectral images acquired with the airborne AVIRIS instrument and with the Hyperion instrument onboard the EO-1 satellite were used as input data to the presented simulations.
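The spectral synthesis step can be sketched as a response-weighted average of narrow hyperspectral bands into each broad multispectral band, leaving the spatial sampling untouched. The Gaussian response model and band parameters below are assumptions; ART uses the actual spectral response curves of the synthesized and utilized bands.

```python
# Sketch: synthesizing one broad multispectral band from a hyperspectral cube
# by weighting narrow bands with an assumed Gaussian spectral response.
import numpy as np

def synthesize_band(cube, wavelengths, center, fwhm):
    """cube: (rows, cols, bands); wavelengths in nm for each narrow band."""
    sigma = fwhm / 2.3548                     # FWHM-to-sigma conversion
    resp = np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2)
    resp /= resp.sum()                        # normalize the response weights
    return np.tensordot(cube, resp, axes=([2], [0]))

wl = np.arange(400, 1000, 10.0)               # 60 narrow bands (nm)
cube = np.random.rand(100, 100, wl.size)      # stand-in hyperspectral cube
green = synthesize_band(cube, wl, center=560, fwhm=60)   # broad "green" band
```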
Thermographic imaging of the space shuttle during re-entry using a near-infrared sensor
NASA Astrophysics Data System (ADS)
Zalameda, Joseph N.; Horvath, Thomas J.; Kerns, Robbie V.; Burke, Eric R.; Taylor, Jeff C.; Spisz, Tom; Gibson, David M.; Shea, Edward J.; Mercer, C. David; Schwartz, Richard J.; Tack, Steve; Bush, Brett C.; Dantowitz, Ronald F.; Kozubal, Marek J.
2012-06-01
High resolution calibrated near infrared (NIR) imagery of the Space Shuttle Orbiter was obtained during hypervelocity atmospheric re-entry of the STS-119, STS-125, STS-128, STS-131, STS-132, STS-133, and STS-134 missions. This data has provided information on the distribution of surface temperature and the state of the airflow over the windward surface of the Orbiter during descent. The thermal imagery complemented data collected with onboard surface thermocouple instrumentation. The spatially resolved global thermal measurements made during the Orbiter's hypersonic re-entry will provide critical flight data for reducing the uncertainty associated with present day ground-to-flight extrapolation techniques and current state-of-the-art empirical boundary-layer transition or turbulent heating prediction methods. Laminar and turbulent flight data is critical for the validation of physics-based, semi-empirical boundary-layer transition prediction methods as well as stimulating the validation of laminar numerical chemistry models and the development of turbulence models supporting NASA's next-generation spacecraft. In this paper we provide details of the NIR imaging system used on both air and land-based imaging assets. The paper will discuss calibrations performed on the NIR imaging systems that permitted conversion of captured radiant intensity (counts) to temperature values. Image processing techniques are presented to analyze the NIR data for vignetting distortion, best resolution, and image sharpness.
NASA Astrophysics Data System (ADS)
Avdelidis, N. P.; Kappatos, V.; Georgoulas, G.; Karvelis, P.; Deli, C. K.; Theodorakeas, P.; Giakas, G.; Tsiokanos, A.; Koui, M.; Jamurtas, A. Z.
2017-04-01
Exercise induced muscle damage (EIMD) is usually experienced by i) humans who have been physically inactive for prolonged periods of time and then begin sudden training trials and ii) athletes who train beyond their normal limits. EIMD is not easy to detect and quantify by means of common measurement tools and methods. Thermography has been used successfully as a research detection tool in medicine for the last six decades, but very limited work has been reported in the EIMD area. The main purpose of this research is to assess and characterize EIMD using thermography and image processing techniques. The first step towards that goal is to develop a reliable segmentation technique to isolate the region of interest (ROI). A semi-automatic image processing software was designed, and regions of the left and right leg were segmented on the basis of superpixels. The image is segmented into a number of regions and the user is able to intervene, providing the regions which belong to each of the two legs. In order to validate the image processing software, an extensive experimental investigation was carried out, acquiring thermographic images of the rectus femoris muscle before, immediately post, and 24, 48 and 72 hours after an acute bout of eccentric exercise (5 sets of 15 maximum repetitions), in males and females (20-30 years old). Results indicate that the semi-automated approach provides an excellent benchmark that can be used as a reliable clinical tool.
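A small sketch of the superpixel-based ROI step using scikit-image's SLIC implementation (channel_axis=None requires scikit-image 0.19 or later); the synthetic thermogram, segment count, and the interactively chosen superpixel IDs are all placeholders for the semi-automatic workflow.

```python
# Sketch: SLIC over-segmentation of a single-channel thermogram, with a
# user-supplied set of superpixel IDs standing in for the interactive step.
import numpy as np
from skimage.segmentation import slic

thermogram = np.random.rand(240, 320)            # stand-in thermal image
segments = slic(thermogram, n_segments=200, compactness=0.1, channel_axis=None)

left_leg_ids = {12, 13, 27}                      # chosen interactively in practice
roi_mask = np.isin(segments, list(left_leg_ids))
print(segments.max() + 1, "superpixels;", roi_mask.sum(), "ROI pixels")
```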
de Castro, Alberto; Rosales, Patricia; Marcos, Susana
2007-03-01
To measure tilt and decentration of intraocular lenses (IOLs) with Scheimpflug and Purkinje imaging systems in physical model eyes with known amounts of tilt and decentration and patients. Instituto de Optica Daza de Valdés, Consejo Superior de Investigaciones Científicas, Madrid, Spain. Measurements of IOL tilt and decentration were obtained using a commercial Scheimpflug system (Pentacam, Oculus), custom algorithms, and a custom-built Purkinje imaging apparatus. Twenty-five Scheimpflug images of the anterior segment of the eye were obtained at different meridians. Custom algorithms were used to process the images (correction of geometrical distortion, edge detection, and curve fittings). Intraocular lens tilt and decentration were estimated by fitting sinusoidal functions to the projections of the pupillary axis and IOL axis in each image. The Purkinje imaging system captures pupil images showing reflections of light from the anterior corneal surface and anterior and posterior lens surfaces. Custom algorithms were used to detect the Purkinje image locations and estimate IOL tilt and decentration based on a linear system equation and computer eye models with individual biometry. Both methods were validated with a physical model eye in which IOL tilt and decentration can be set nominally. Twenty-one eyes of 12 patients with IOLs were measured with both systems. Measurements of the physical model eye showed an absolute discrepancy between nominal and measured values of 0.279 degree (Purkinje) and 0.243 degree (Scheimpflug) for tilt and 0.094 mm (Purkinje) and 0.228 mm (Scheimpflug) for decentration. In patients, the mean tilt was less than 2.6 degrees and the mean decentration less than 0.4 mm. Both techniques showed mirror symmetry between right eyes and left eyes for tilt around the vertical axis and for decentration in the horizontal axis. Both systems showed high reproducibility. Validation experiments on physical model eyes showed slightly higher accuracy with the Purkinje method than the Scheimpflug imaging method. Horizontal measurements of patients with both techniques were highly correlated. The IOLs tended to be tilted and decentered nasally in most patients.
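The meridian-fitting step lends itself to a short sketch: if each Scheimpflug meridian yields a projected tilt value, fitting a sinusoid across the 25 meridians recovers a tilt magnitude and axis. The model form and the synthetic measurements below are assumptions consistent with the description, not the custom algorithms themselves.

```python
# Sketch: recovering IOL tilt magnitude and axis by fitting a sinusoid to
# projected tilt values measured across Scheimpflug meridians.
import numpy as np
from scipy.optimize import curve_fit

def projection(theta, amplitude, phase, offset):
    return amplitude * np.sin(theta + phase) + offset

theta = np.linspace(0, 2 * np.pi, 25, endpoint=False)    # 25 meridians
measured = projection(theta, 2.1, 0.6, 0.2) + 0.05 * np.random.randn(25)

(amplitude, phase, offset), _ = curve_fit(projection, theta, measured,
                                          p0=(1.0, 0.0, 0.0))
print(f"tilt magnitude ~ {abs(amplitude):.2f} deg, "
      f"axis ~ {np.degrees(phase):.1f} deg")
```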
Siemann, Julia; Herrmann, Manfred; Galashan, Daniela
2016-08-01
Usually, incongruent flanker stimuli provoke conflict processing, whereas congruent flankers should facilitate task performance. Various behavioral studies have reported improved or even absent conflict processing with correctly oriented selective attention. In the present study we attempted to reinvestigate these behavioral effects and to disentangle the neuronal activity patterns underlying the attentional cueing effect, taking advantage of a combination of the high temporal resolution of electroencephalography (EEG) and the spatial resolution of functional magnetic resonance imaging (fMRI). Data from 20 participants were acquired in separate sessions per method. We expected the conflict-related N200 event-related potential (ERP) component and areas associated with flanker processing to show validity-specific modulations. Additionally, the spatio-temporal dynamics during cued flanker processing were examined using an fMRI-constrained source analysis approach. In the ERP data we found early differences in flanker processing between validity levels. An early centro-parietal relative positivity for incongruent stimuli occurred only with valid cueing during the N200 time window, while a subsequent fronto-central negativity was specific to invalidly cued interference processing. The source analysis additionally pointed to separate neural generators of these effects. Regional sources in visual areas were involved in conflict processing with valid cueing, while a regional source in the anterior cingulate cortex (ACC) seemed to contribute to the ERP differences with invalid cueing. Moreover, the ACC and precentral gyrus demonstrated an early and a late phase of congruency-related activity differences with invalid cueing. We interpret the first effect as reflecting conflict detection and response activation, while the latter more likely originated from conflict monitoring and control processes during response competition. Copyright © 2016 Elsevier Inc. All rights reserved.
Johnston-Peck, Aaron C; Winterstein, Jonathan P; Roberts, Alan D; DuChene, Joseph S; Qian, Kun; Sweeny, Brendan C; Wei, Wei David; Sharma, Renu; Stach, Eric A; Herzing, Andrew A
2016-03-01
Low-angle annular dark field (LAADF) scanning transmission electron microscopy (STEM) imaging is presented as a method that is sensitive to the oxidation state of cerium ions in CeO2 nanoparticles. This relationship was validated through electron energy loss spectroscopy (EELS), in situ measurements, and multislice image simulations. Static displacements caused by the increased ionic radius of Ce(3+) influence the electron channeling process and increase electron scattering to low angles while reducing scattering to high angles. This process manifests itself by reducing the high-angle annular dark field (HAADF) signal intensity while increasing the LAADF signal intensity in close proximity to Ce(3+) ions. This technique can supplement STEM-EELS and, in so doing, relax the experimental challenges associated with acquiring oxidation state information at high spatial resolutions. Published by Elsevier B.V.
Adaptive hyperspectral imager: design, modeling, and control
NASA Astrophysics Data System (ADS)
McGregor, Scot; Lacroix, Simon; Monmayrant, Antoine
2015-08-01
An adaptive hyperspectral imager is presented. We propose a system with easily adaptable spectral resolution, adjustable acquisition time, and high spatial resolution that is independent of spectral resolution. The system makes it possible to define a variety of acquisition schemes, in particular near-snapshot acquisitions that may be used to measure the spectral content of given or automatically detected regions of interest. The proposed system is modelled and simulated, and tests on a first prototype validate the approach, achieving near-snapshot spectral acquisitions without resorting to any computationally heavy post-processing or cumbersome calibration.
Pulmonary imaging using respiratory motion compensated simultaneous PET/MR
Dutta, Joyita; Huang, Chuan; Li, Quanzheng; El Fakhri, Georges
2015-01-01
Purpose: Pulmonary positron emission tomography (PET) imaging is confounded by blurring artifacts caused by respiratory motion. These artifacts degrade both image quality and quantitative accuracy. In this paper, the authors present a complete data acquisition and processing framework for respiratory motion compensated image reconstruction (MCIR) using simultaneous whole body PET/magnetic resonance (MR) and validate it through simulation and clinical patient studies. Methods: The authors have developed an MCIR framework based on maximum a posteriori or MAP estimation. For fast acquisition of high quality 4D MR images, the authors developed a novel Golden-angle RAdial Navigated Gradient Echo (GRANGE) pulse sequence and used it in conjunction with sparsity-enforcing k-t FOCUSS reconstruction. The authors use a 1D slice-projection navigator signal encapsulated within this pulse sequence along with a histogram-based gate assignment technique to retrospectively sort the MR and PET data into individual gates. The authors compute deformation fields for each gate via nonrigid registration. The deformation fields are incorporated into the PET data model as well as utilized for generating dynamic attenuation maps. The framework was validated using simulation studies on the 4D XCAT phantom and three clinical patient studies that were performed on the Biograph mMR, a simultaneous whole body PET/MR scanner. Results: The authors compared MCIR (MC) results with ungated (UG) and one-gate (OG) reconstruction results. The XCAT study revealed contrast-to-noise ratio (CNR) improvements for MC relative to UG in the range of 21%–107% for 14 mm diameter lung lesions and 39%–120% for 10 mm diameter lung lesions. A strategy for regularization parameter selection was proposed, validated using XCAT simulations, and applied to the clinical studies. The authors’ results show that the MC image yields 19%–190% increase in the CNR of high-intensity features of interest affected by respiratory motion relative to UG and a 6%–51% increase relative to OG. Conclusions: Standalone MR is not the traditional choice for lung scans due to the low proton density, high magnetic susceptibility, and low T2∗ relaxation time in the lungs. By developing and validating this PET/MR pulmonary imaging framework, the authors show that simultaneous PET/MR, unique in its capability of combining structural information from MR with functional information from PET, shows promise in pulmonary imaging. PMID:26133621
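The abstract reports CNR gains without spelling out the formula; one common definition, assuming binary masks for the lesion and a background region, is sketched below.

```python
import numpy as np

def contrast_to_noise_ratio(image, lesion_mask, background_mask):
    # A common CNR definition: lesion-to-background contrast divided by
    # the standard deviation of the background.
    lesion = image[lesion_mask]
    background = image[background_mask]
    return abs(lesion.mean() - background.mean()) / background.std()
```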
Deep learning for medical image segmentation - using the IBM TrueNorth neurosynaptic system
NASA Astrophysics Data System (ADS)
Moran, Steven; Gaonkar, Bilwaj; Whitehead, William; Wolk, Aidan; Macyszyn, Luke; Iyer, Subramanian S.
2018-03-01
Deep convolutional neural networks have found success in semantic image segmentation tasks in computer vision and medical imaging. These algorithms are executed on conventional von Neumann processor architectures or GPUs. This is suboptimal. Neuromorphic processors that replicate the structure of the brain are better suited to train and execute deep learning models for image segmentation by relying on massively parallel processing. However, given that they closely emulate the human brain, on-chip hardware and digital memory limitations also constrain them. Adapting deep learning models to execute image segmentation tasks on such chips requires specialized training and validation. In this work, we demonstrate, for the first time, spinal image segmentation performed using a deep learning network implemented on the neuromorphic hardware of the IBM TrueNorth Neurosynaptic System, and we validate the performance of our network by comparing it to human-generated segmentations of spinal vertebrae and disks. To achieve this on neuromorphic hardware, the training model constrains the coefficients of individual neurons to {-1,0,1} using the Energy Efficient Deep Neuromorphic (EEDN) network training algorithm. Given its 1 million neurons and 256 million synapses, the scale and size of the neural network implemented by the IBM TrueNorth allow us to execute the requisite mapping between segmented images and non-uniform intensity MR images >20 times faster than on a GPU-accelerated network while using <0.1 W. This speed and efficiency imply that a trained neuromorphic chip can be deployed in intra-operative environments where real-time medical image segmentation is necessary.
Moderate Resolution Imaging Spectroradiometer (MODIS) Overview
2008-01-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) is an instrument that collects remotely sensed data used by scientists for monitoring, modeling, and assessing the effects of natural processes and human actions on the Earth's surface. The continual calibration of the MODIS instruments, the refinement of algorithms used to create higher-level products, and the ongoing product validation make MODIS images a valuable time series (2000-present) of geophysical and biophysical land-surface measurements. Carried on two National Aeronautics and Space Administration (NASA) Earth Observing System (EOS) satellites, MODIS acquires morning (EOS-Terra) and afternoon (EOS-Aqua) views almost daily. Terra data acquisitions began in February 2000 and Aqua data acquisitions began in July 2002. Land data are generated only as higher-level products, removing the burden of common types of data processing from the user community. MODIS-based products describing ecological dynamics, radiation budget, and land cover are projected onto a sinusoidal mapping grid and distributed as 10- by 10-degree tiles at 250-, 500-, or 1,000-meter spatial resolution. Some products are also created on a 0.05-degree geographic grid to support climate modeling studies. All MODIS products are distributed in the Hierarchical Data Format-Earth Observing System (HDF-EOS) file format and are available through file transfer protocol (FTP) or on digital video disc (DVD) media. Versions 4 and 5 of MODIS land data products are currently available and represent 'validated' collections defined in stages of accuracy that are based on the number of field sites and time periods for which the products have been validated. Version 5 collections incorporate the longest time series of both Terra and Aqua MODIS data products.
Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.
2016-01-01
The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging. PMID:28736473
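The validated models themselves are not reproduced in the abstract; purely as a toy illustration of why a shared NFS saturates, one might model wall-clock time as serialized transfer over a single link plus compute parallelized over cores (the functional form and names below are assumptions, not the paper's model):

```python
import math

def nfs_wall_time_s(n_jobs, gb_per_job, link_gbps, compute_s, n_cores):
    # Toy model: every job's data crosses one shared NFS link
    # (serialized transfer), while compute runs in parallel rounds
    # of n_cores jobs each.
    transfer_s = n_jobs * gb_per_job * 8.0 / link_gbps
    rounds = math.ceil(n_jobs / n_cores)
    return transfer_s + rounds * compute_s

# Short jobs on large data are transfer-bound: halving compute_s barely
# changes the total, which is where data-local frameworks start to win.
```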
NASA Astrophysics Data System (ADS)
Yao, Guang-tao; Zhang, Xiao-hui; Ge, Wei-long
2012-01-01
Underwater laser imaging detection is an effective method of detecting short-distance targets underwater and an important complement to sonar detection. With the development of underwater laser imaging technology and underwater vehicle technology, underwater automatic target identification has received more and more attention, and it is a research difficulty in the area of underwater optical imaging information processing. Today, underwater automatic target identification based on optical imaging is usually realized with digital software processing, whose algorithms and control are very flexible. However, the optical imaging information consists of 2D or even 3D images, so the amount of information to process is large; purely digital electronic hardware therefore needs a long identification time and can hardly meet the demands of real-time identification. If parallel computer processing is adopted, identification speed can be improved, but complexity, size, and power consumption increase. This paper attempts to apply optical correlation identification technology to realize underwater automatic target identification. Optical correlation identification utilizes the Fourier-transform property of a Fourier lens, which can accomplish the Fourier transform of image information at the nanosecond level, and optical free-space interconnection computation offers parallelism, high speed, large capacity, and high resolution; combined with the flexibility of computation and control of the digital circuit method, this realizes a hybrid optoelectronic identification mode. We derive the theoretical formulation of correlation identification, analyze the principle of optical correlation identification, and write a MATLAB simulation program. We use single frames obtained by underwater range-gated laser imaging for identification, and by identifying and locating targets at different positions we can improve the speed and orientation efficiency of target identification effectively and preliminarily validate the feasibility of the method.
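A digital stand-in for the correlator helps picture the method: in the optical system the Fourier transforms are performed by lenses at light speed, but the matched-filter arithmetic can be sketched with FFTs (a generic simulation, not the authors' MATLAB program):

```python
import numpy as np

def optical_correlation(scene, template):
    # Matched-filter correlation of a scene with a target template,
    # as a 4f optical correlator would compute it; a sharp peak marks
    # the presence and position of the target.
    f_scene = np.fft.fft2(scene)
    f_tmpl = np.fft.fft2(template, s=scene.shape)
    corr = np.abs(np.fft.fftshift(np.fft.ifft2(f_scene * np.conj(f_tmpl))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, peak  # peak offset from the image center gives the target shift
```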
CLINICAL SIGNS AND SYMPTOMS OF SEXUALLY TRANSMITTED INFECTIONS COMMUNICATED IN LIBRAS.
França, Inacia Sátiro Xavier de; Magalhães, Isabella Medeiros de Oliveira; Sousa, Francisco Stélio de; Coura, Alexsandro Silva; Silva, Arthur Felipe Rodrigues; Baptista, Rosilene Santos
2016-01-01
To validate a video containing image representations of clinical signs and symptoms of sexually transmitted infections expressed in Libras. Methodology development study conducted in an audio communication school. Thirty-six deaf people were selected. A video containing image representations of clinical signs and symptoms of sexually transmitted infections expressed in Libras was produced. Semantic validation was performed by deaf students and content validation by three judges who are Libras experts. The validation results were subjected to the Content Validity Index, where an index score > 0.80/80% was considered as agreement among judges. Seven signs and symptoms related to sexually transmitted infections were validated and obtained satisfactory Content Validity Indexes, most of them with 100% representativeness and agreement. The validation process made the expressions of signs and symptoms related to sexually transmitted infections represented in Libras valid for establishing effective communication in the area of the study, turning it into a care tool that facilitates and standardizes communication with deaf people through Libras.
Evaluation of nucleus segmentation in digital pathology images through large scale image synthesis
NASA Astrophysics Data System (ADS)
Zhou, Naiyun; Yu, Xiaxia; Zhao, Tianhao; Wen, Si; Wang, Fusheng; Zhu, Wei; Kurc, Tahsin; Tannenbaum, Allen; Saltz, Joel; Gao, Yi
2017-03-01
Digital histopathology images of more than one gigapixel are drawing more and more attention in the clinical, biomedical research, and computer vision fields. Among the multiple observable features spanning multiple scales in pathology images, nuclear morphology is one of the central criteria for diagnosis and grading; as a result it is also the most studied target in image computing. A large number of research papers have been devoted to the problem of extracting nuclei from digital pathology images, which is the foundation of any further correlation study. However, the validation and evaluation of nucleus extraction have not yet been formulated rigorously and systematically. Some studies report human-verified segmentations with thousands of nuclei, whereas a single whole-slide image may contain up to a million. The main obstacle lies in the difficulty of obtaining such a large number of validated nuclei, which is essentially an impossible task for a pathologist. We propose a systematic validation and evaluation approach based on large-scale image synthesis. This could facilitate a more quantitatively validated study for the current and future histopathology image analysis field.
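Whatever synthesis pipeline generates the ground truth, the evaluation itself typically reduces to overlap metrics between candidate and reference masks; the Dice coefficient below is one standard choice (an illustration, not necessarily the paper's metric):

```python
import numpy as np

def dice_coefficient(segmentation, ground_truth):
    # Overlap between a candidate nucleus mask and the (synthetic)
    # ground-truth mask: 1.0 for perfect agreement, 0.0 for disjoint.
    seg = np.asarray(segmentation, dtype=bool)
    ref = np.asarray(ground_truth, dtype=bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0
```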
Validating spatial structure in canopy water content using geostatistics
NASA Technical Reports Server (NTRS)
Sanderson, E. W.; Zhang, M. H.; Ustin, S. L.; Rejmankova, E.; Haxo, R. S.
1995-01-01
Heterogeneity in ecological phenomena is scale dependent and affects the hierarchical structure of image data. AVIRIS pixels average the reflectance produced by complex absorption and scattering interactions between biogeochemical composition, canopy architecture, view and illumination angles, species distributions, and plant cover, as well as other factors. These scales affect validation of pixel reflectance, typically performed by relating pixel spectra to ground measurements acquired at scales of 1 m(exp 2) or less (e.g., field spectra, foliage and soil samples, etc.). As image analyses become more sophisticated, such as those for detection of canopy chemistry, better validation becomes a critical problem. This paper presents a methodology for bridging between point measurements and pixels using geostatistics. Geostatistics has been used extensively in geological and hydrogeological studies but has received little application in ecological studies. The key criterion for kriging estimation is that the phenomenon varies in space and that an underlying controlling process produces spatial correlation between the measured data points. Ecological variation meets this requirement because communities vary along environmental gradients like soil moisture, nutrient availability, or topography.
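The first step of such an analysis is the empirical semivariogram, which quantifies the spatial correlation that kriging requires; a minimal sketch (binning choices and names are illustrative):

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    # gamma(h) = (1 / 2N(h)) * sum over point pairs at lag ~h of (z_i - z_j)^2.
    # A gamma that rises with lag and levels off at a sill indicates the
    # spatial correlation needed to krige between field plots and pixels.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    i, j = np.triu_indices(len(values), k=1)
    lags, gammas = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (d[i, j] >= lo) & (d[i, j] < hi)
        if sel.any():
            lags.append(d[i, j][sel].mean())
            gammas.append(sq[i, j][sel].mean() / 2.0)
    return np.array(lags), np.array(gammas)
```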
Validity and reliability of a scale to measure genital body image.
Zielinski, Ruth E; Kane-Low, Lisa; Miller, Janis M; Sampselle, Carolyn
2012-01-01
Women's body image dissatisfaction extends to body parts usually hidden from view--their genitals. The ability to measure genital body image is limited by the lack of valid and reliable questionnaires. We subjected a previously developed questionnaire, the Genital Self Image Scale (GSIS), to psychometric testing using a variety of methods. Five experts determined the content validity of the scale. Then, using four participant groups, factor analysis was performed to determine construct validity and to identify factors. Further construct validity was established using the contrasting groups approach. Internal consistency and test-retest reliability were determined. Twenty-one of 29 items were considered content valid. Two items were added based on expert suggestions. Factor analysis was undertaken, resulting in four factors, identified as Genital Confidence, Appeal, Function, and Comfort. The revised scale (GSIS-20) included 20 items explaining 59.4% of the variance. Women indicating an interest in genital cosmetic surgery exhibited significantly lower scores on the GSIS-20 than those who did not. The final 20-item scale exhibited internal reliability across all sample groups as well as test-retest reliability. The GSIS-20 provides a measure of genital body image demonstrating reliability and validity across several populations of women.
Semantic orchestration of image processing services for environmental analysis
NASA Astrophysics Data System (ADS)
Ranisavljević, Élisabeth; Devin, Florent; Laffly, Dominique; Le Nir, Yannick
2013-09-01
In order to analyze environmental dynamics, a major process is the classification of the different phenomena at the site (e.g., ice and snow for a glacier). When using in situ pictures, this classification requires data pre-processing. Not all pictures need the same sequence of processes, depending on the disturbances. Until now, these sequences have been composed manually, which restricts the processing of large amounts of data. In this paper, we present how to realize a semantic orchestration that automates the sequencing for the analysis. It combines two advantages: solving the problem of the amount of processing, and diversifying the possibilities in the data processing. We define a BPEL description to express the sequences. This BPEL uses web services to run the data processing. Each web service is semantically annotated using an ontology of image processing. The dynamic modification of the BPEL is done using SPARQL queries on these annotated web services. The results obtained by a prototype implementing this method validate the construction of the different workflows, which can be applied to a large number of pictures.
Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin
2017-12-01
Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.
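At its core SLICE is a plain length-change computation; a sketch under the assumption that per-frame segment lengths between anatomical landmarks have already been measured on the cines:

```python
import numpy as np

def slice_strain(segment_lengths_mm, reference_frame=0):
    # Strain per frame as the fractional change of segment length
    # relative to the reference (typically end-diastolic) frame,
    # in percent; shortening gives negative values.
    lengths = np.asarray(segment_lengths_mm, dtype=float)
    return 100.0 * (lengths - lengths[reference_frame]) / lengths[reference_frame]
```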
Minervini, Massimo; Giuffrida, Mario V; Perata, Pierdomenico; Tsaftaris, Sotirios A
2017-04-01
Phenotyping is important to understand plant biology, but current solutions are costly, not versatile, or difficult to deploy. To solve this problem, we present Phenotiki, an affordable system for plant phenotyping that, relying on off-the-shelf parts, provides an easy-to-install and easy-to-maintain platform, offering an out-of-the-box experience for a well-established phenotyping need: imaging rosette-shaped plants. The accompanying software (with available source code) processes data originating from our device seamlessly and automatically. Our software relies on machine learning to devise robust algorithms, and includes an automated leaf count obtained from 2D images without the need for depth (3D). Our affordable device (~€200) can be deployed in growth chambers or greenhouses to acquire optical 2D images of up to approximately 60 adult Arabidopsis rosettes concurrently. Data from the device are processed remotely on a workstation or via a cloud application (based on CyVerse). In this paper, we present a proof-of-concept validation experiment on top-view images of 24 Arabidopsis plants in a combination of genotypes that has not been compared previously. Phenotypic analysis with respect to morphology, growth, color and leaf count has not been performed comprehensively before now. We confirm the findings of others on some of the extracted traits, showing that we can phenotype at reduced cost. We also perform extensive validations with external measurements and with higher fidelity equipment, and find no loss in statistical accuracy when we use the affordable setting that we propose. Device set-up instructions and analysis software are publicly available ( http://phenotiki.com). © 2017 The Authors The Plant Journal © 2017 John Wiley & Sons Ltd.
Demonstration of automated proximity and docking technologies
NASA Astrophysics Data System (ADS)
Anderson, Robert L.; Tsugawa, Roy K.; Bryan, Thomas C.
An autodock was demonstrated using straightforward techniques and real sensor hardware. A simulation testbed was established and validated. The sensor design was refined with improved optical performance and image processing noise mitigation techniques, and the sensor is ready for production from off-the-shelf components. The autonomous spacecraft architecture is defined, covering sensors, docking hardware, propulsion, and avionics. The Guidance, Navigation and Control architecture and requirements are developed. Modular structures suitable for automated control are used. The spacecraft system manager functions, including configuration, resource, and redundancy management, are defined, as are the requirements for an autonomous spacecraft executive, which encompasses high-level decision-making, mission planning, and mission contingency recovery. The next step is flight demonstrations. After the presentation, the following question was asked: How do you define validation? There are two components to the validation definition: software simulation with formal and rigorous validation, and hardware and facility performance validated with respect to software already validated against an analytical profile.
Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images
NASA Astrophysics Data System (ADS)
Kamble, V. M.; Bhurchandi, K.
2018-03-01
Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients and then obtain a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of +/-4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield the noise estimate and image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
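The initial estimate described, the median of wavelet detail coefficients, is the classic median absolute deviation rule; a sketch with PyWavelets (the subsequent curve-fitting refinement is not reproduced here):

```python
import numpy as np
import pywt

def initial_noise_sigma(image):
    # Robust initial estimate of the Gaussian noise standard deviation
    # from the diagonal detail band of a one-level wavelet transform:
    # sigma ~ median(|HH|) / 0.6745.
    _, (_, _, diagonal) = pywt.dwt2(np.asarray(image, dtype=float), 'db1')
    return np.median(np.abs(diagonal)) / 0.6745
```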
Multistage morphological segmentation of bright-field and fluorescent microscopy images
NASA Astrophysics Data System (ADS)
Korzyńska, A.; Iwanowski, M.
2012-06-01
This paper describes the multistage morphological segmentation method (MSMA) for microscopic cell images. The proposed method enables us to study cell behaviour using a sequence of two types of microscopic images: bright-field images and/or fluorescent images. It is based on two types of information: the cell texture coming from the bright-field images and the intensity of light emission from fluorescent markers. The method is dedicated to the segmentation of image sequences and is based on mathematical morphology supported by other image processing techniques. It allows cells to be detected in an image independently of their degree of flattening and of the structures that produce the texture, making use of synergistic information from the fluorescent light emission image as supporting information. The MSMA method has been applied to images acquired during experiments on neural stem cells as well as to artificial images. In order to validate the method, two types of errors have been considered, using artificial images as the "gold standard": the error of cell area detection and the error of cell position.
A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.
Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F
2012-09-01
Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.
Comparison of mechanisms involved in image enhancement of Tissue Harmonic Imaging
NASA Astrophysics Data System (ADS)
Cleveland, Robin O.; Jing, Yuan
2006-05-01
Processes that have been suggested as responsible for the improved imaging in Tissue Harmonic Imaging (THI) include: 1) reduced sensitivity to reverberation, 2) reduced sensitivity to aberration, and 3) reduction in the amplitude of diffraction side lobes. A three-dimensional model of the forward propagation of nonlinear sound beams in media with arbitrary spatial properties (a generalized KZK equation) was developed and solved using a time-domain code. The numerical simulations were validated through experiments with tissue mimicking phantoms. The impact of aberration from tissue-like media was determined through simulations using three-dimensional maps of tissue properties derived from datasets available through the Visible Female Project. The experiments and simulations demonstrated that second harmonic imaging suffers less clutter from reverberation and side-lobes but is not immune to aberration effects. The results indicate that side lobe suppression is the most significant reason for the improvement of second harmonic imaging.
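For reference, the standard homogeneous-medium KZK equation that such generalized models extend (the paper's extension to arbitrary spatial properties is not reproduced here) reads

\[
\frac{\partial^2 p}{\partial z\,\partial \tau}
  = \frac{c_0}{2}\,\nabla_\perp^2 p
  + \frac{\delta}{2 c_0^3}\,\frac{\partial^3 p}{\partial \tau^3}
  + \frac{\beta}{2 \rho_0 c_0^3}\,\frac{\partial^2 p^2}{\partial \tau^2},
\]

where \(p\) is the acoustic pressure, \(z\) the propagation distance, \(\tau = t - z/c_0\) the retarded time, \(c_0\) the small-signal sound speed, \(\delta\) the sound diffusivity, \(\beta\) the coefficient of nonlinearity, and \(\rho_0\) the ambient density; the three right-hand terms model diffraction, absorption, and the nonlinearity that generates the second harmonic used for imaging.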
Multimodal imaging of cutaneous wound tissue
NASA Astrophysics Data System (ADS)
Zhang, Shiwu; Gnyawali, Surya; Huang, Jiwei; Ren, Wenqi; Gordillo, Gayle; Sen, Chandan K.; Xu, Ronald
2015-01-01
Quantitative assessment of wound tissue ischemia, perfusion, and inflammation provides critical information for appropriate detection, staging, and treatment of chronic wounds. However, few methods are available for simultaneous assessment of these tissue parameters in a noninvasive and quantitative fashion. We integrated hyperspectral, laser speckle, and thermographic imaging modalities in a single-experimental setup for multimodal assessment of tissue oxygenation, perfusion, and inflammation characteristics. Algorithms were developed for appropriate coregistration between wound images acquired by different imaging modalities at different times. The multimodal wound imaging system was validated in an occlusion experiment, where oxygenation and perfusion maps of a healthy subject's upper extremity were continuously monitored during a postocclusive reactive hyperemia procedure and compared with standard measurements. The system was also tested in a clinical trial where a wound of three millimeters in diameter was introduced on a healthy subject's lower extremity and the healing process was continuously monitored. Our in vivo experiments demonstrated the clinical feasibility of multimodal cutaneous wound imaging.
Applied high-speed imaging for the icing research program at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Slater, Howard; Owens, Jay; Shin, Jaiwon
1992-01-01
The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment to simulate natural icing events. The closed-loop, low speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and post-test image processing. Information about the high-speed and conventional imaging systems used in the recent ice protection technology program is provided to scientific, technical, and industrial imaging specialists as well as to research personnel. Various imaging examples from some of the tests are presented. Additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.
A Methodology for Anatomic Ultrasound Image Diagnostic Quality Assessment.
Hemmsen, Martin Christian; Lange, Theis; Brandt, Andreas Hjelm; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt
2017-01-01
This paper discusses methods for the assessment of ultrasound image quality based on our experiences with evaluating new methods for anatomic imaging. It presents a methodology to ensure a fair assessment between competing imaging methods using clinically relevant evaluations. The methodology is valuable in the continuing process of method optimization and guided development of new imaging methods. It includes a three-phase study plan covering initial prototype development through clinical assessment. Recommendations for the clinical assessment protocol, software, and statistical analysis are presented. Earlier uses of the methodology have shown that it ensures the validity of the assessment, as it separates the influences of developer, investigator, and assessor once a research protocol has been established. This separation reduces confounding influences from the developer on the result, properly revealing the clinical value. This paper exemplifies the methodology using recent studies of synthetic aperture sequential beamforming tissue harmonic imaging.
Quantum Watermarking Scheme Based on INEQR
NASA Astrophysics Data System (ADS)
Zhou, Ri-Gui; Zhou, Yang; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou
2018-04-01
Quantum watermarking technology protects copyright by embedding an invisible quantum signal in quantum multimedia data. In this paper, a watermarking scheme based on INEQR is presented. Firstly, the watermark image is extended to meet the embedding requirements of the carrier image. Secondly, swap and XOR operations are applied to the processed pixels; since there is only one bit per pixel, the XOR operation achieves the effect of simple encryption. Thirdly, both the watermark embedding and extraction operations are described, using the key image, the swap operation, and the LSB algorithm. When embedding is performed, the binary key image is changed, which means the watermark has been embedded. Conversely, to extract the watermark image, the key's state must be detected: when the key's state is |1>, the extraction operation is carried out. Finally, to validate the proposed scheme, both the peak signal-to-noise ratio (PSNR) and the security of the scheme are analyzed.
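The XOR-plus-LSB idea has a direct classical analogue, sketched below for intuition only: the actual scheme operates on quantum INEQR states, which this NumPy version does not model.

```python
import numpy as np

def embed_lsb(carrier, watermark_bits, key_bits):
    # XOR the one-bit-per-pixel watermark with the key (simple
    # encryption), then write it into the carrier's least significant bits.
    encrypted = (watermark_bits ^ key_bits).astype(np.uint8)
    return (carrier & np.uint8(0xFE)) | encrypted

def extract_lsb(watermarked, key_bits):
    # Read the LSBs back and undo the XOR with the key.
    return (watermarked & np.uint8(1)) ^ key_bits
```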
Quality assessment for color reproduction using a blind metric
NASA Astrophysics Data System (ADS)
Bringier, B.; Quintard, L.; Larabi, M.-C.
2007-01-01
This paper deals with image quality assessment, a field that nowadays plays an important role in various image processing applications. A number of objective image quality metrics, correlating or not with subjective quality, have been developed during the last decade. Two categories of metrics can be distinguished: full-reference and no-reference. A full-reference metric tries to evaluate the distortion introduced to an image with regard to a reference. A no-reference approach attempts to model the judgment of image quality in a blind way. Unfortunately, a universal image quality model is not on the horizon, and empirical models established by psychophysical experimentation are generally used. In this paper, we focus only on the second category, to evaluate the quality of color reproduction, and a blind metric based on human visual system modeling is introduced. The objective results are validated by single-media and cross-media subjective tests.
Ebert, Lars C; Heimer, Jakob; Schweitzer, Wolf; Sieberth, Till; Leipner, Anja; Thali, Michael; Ampanozi, Garyfalia
2017-12-01
Post mortem computed tomography (PMCT) can be used as a triage tool to better identify cases with a possibly non-natural cause of death, especially when high caseloads make it impossible to perform autopsies on all cases. Substantial data can be generated by modern medical scanners, especially in a forensic setting where the entire body is documented at high resolution. A solution to the resulting issues could be the use of deep learning techniques for automatic analysis of radiological images. In this article, we wanted to test the feasibility of such methods for forensic imaging by hypothesizing that deep learning methods can detect and segment a hemopericardium in PMCT. For deep learning image analysis software, we used the ViDi Suite 2.0. We retrospectively selected 28 cases with, and 24 cases without, hemopericardium. Based on these data, we trained two separate deep learning networks: the first classified images into hemopericardium/not hemopericardium, and the second segmented the blood content. We randomly selected 50% of the data for training and 50% for validation, and repeated this process 20 times. The best performing classification network classified all cases of hemopericardium in the validation images correctly, with only a few false positives. The best performing segmentation network tended to underestimate the amount of blood in the pericardium, as did most networks. This is the first study to show that deep learning has potential for automated analysis of radiological images in forensic medicine.
Pérez-Beteta, Julián; Molina-García, David; Ortiz-Alhambra, José A; Fernández-Romero, Antonio; Luque, Belén; Arregui, Elena; Calvo, Manuel; Borrás, José M; Meléndez, Bárbara; Rodríguez de Lope, Ángel; Moreno de la Presa, Raquel; Iglesias Bayo, Lidia; Barcia, Juan A; Martino, Juan; Velásquez, Carlos; Asenjo, Beatriz; Benavides, Manuel; Herruzo, Ismael; Revert, Antonio; Arana, Estanislao; Pérez-García, Víctor M
2018-07-01
Purpose To evaluate the prognostic and predictive value of surface-derived imaging biomarkers obtained from contrast material-enhanced volumetric T1-weighted pretreatment magnetic resonance (MR) imaging sequences in patients with glioblastoma multiforme. Materials and Methods A discovery cohort from five local institutions (165 patients; mean age, 62 years ± 12 [standard deviation]; 43% women and 57% men) and an independent validation cohort (51 patients; mean age, 60 years ± 12; 39% women and 61% men) from The Cancer Imaging Archive with volumetric T1-weighted pretreatment contrast-enhanced MR imaging sequences were included in the study. Clinical variables such as age, treatment, and survival were collected. After tumor segmentation and image processing, tumor surface regularity, measuring how much the tumor surface deviates from a sphere of the same volume, was obtained. Kaplan-Meier, Cox proportional hazards, correlations, and concordance indexes were used to compare variables and patient subgroups. Results Surface regularity was a powerful predictor of survival in the discovery (P = .005, hazard ratio [HR] = 1.61) and validation groups (P = .05, HR = 1.84). Multivariate analysis selected age and surface regularity as significant variables in a combined prognostic model (P < .001, HR = 3.05). The model achieved concordance indexes of 0.76 and 0.74 for the discovery and validation cohorts, respectively. Tumor surface regularity was a predictor of survival for patients who underwent complete resection (P = .01, HR = 1.90). Tumors with irregular surfaces did not benefit from total over subtotal resections (P = .57, HR = 1.17), but those with regular surfaces did (P = .004, HR = 2.07). Conclusion The surface regularity obtained from high-resolution contrast-enhanced pretreatment volumetric T1-weighted MR images is a predictor of survival in patients with glioblastoma. It may help in classifying patients for surgery. © RSNA, 2018 Online supplemental material is available for this article.
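One common way to formalize "how much the tumor surface deviates from a sphere of the same volume" is a sphericity ratio; the sketch below computes it from a binary tumor mask via scikit-image's marching cubes (an illustration of the concept, not necessarily the paper's exact definition):

```python
import numpy as np
from skimage import measure

def sphericity(mask, spacing=(1.0, 1.0, 1.0)):
    # Surface area of a sphere with the tumor's volume divided by the
    # tumor's actual surface area: 1.0 for a perfect sphere, smaller
    # for increasingly irregular surfaces.
    verts, faces, _, _ = measure.marching_cubes(mask.astype(np.uint8), level=0.5,
                                                spacing=spacing)
    area = measure.mesh_surface_area(verts, faces)
    volume = mask.sum() * np.prod(spacing)  # voxel-count approximation
    return (np.pi ** (1.0 / 3.0)) * (6.0 * volume) ** (2.0 / 3.0) / area
```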
White blood cell segmentation by circle detection using electromagnetism-like optimization.
Cuevas, Erik; Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo
2013-01-01
Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability.
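The heart of the approach is the objective function; one plausible form, scoring the fraction of a candidate circle's perimeter that lands on edge pixels, is sketched below (the EMO population update itself is omitted, and the exact weighting in the paper may differ):

```python
import numpy as np

def circle_fitness(edge_map, cx, cy, r, n_points=64):
    # Sample n_points test points on the candidate circle (cx, cy, r)
    # and score the fraction that coincide with edge pixels; EMO evolves
    # (cx, cy, r) to maximize this resemblance to an actual WBC contour.
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    xs = np.round(cx + r * np.cos(angles)).astype(int)
    ys = np.round(cy + r * np.sin(angles)).astype(int)
    h, w = edge_map.shape
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    return edge_map[ys[inside], xs[inside]].sum() / float(n_points)
```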
NASA Astrophysics Data System (ADS)
Villano, Michelangelo; Papathanassiou, Konstantinos P.
2011-03-01
The estimation of the local differential shift between synthetic aperture radar (SAR) images has proven to be an effective technique for monitoring glacier surface motion. As images acquired over glaciers by short-wavelength SAR systems, such as TerraSAR-X, often suffer from a lack of coherence, image features have to be exploited for the shift estimation (feature-tracking). The present paper addresses feature-tracking with special attention to the feasibility requirements and the achievable accuracy of the shift estimation. In particular, the dependence of the performance on image characteristics, such as texture parameters, signal-to-noise ratio (SNR), and resolution, as well as on processing techniques (despeckling, normalised cross-correlation versus maximum likelihood estimation), is analysed by means of Monte Carlo simulations. TerraSAR-X data acquired over the Helheim glacier, Greenland, and the Aletsch glacier, Switzerland, have been processed to validate the simulation results. Feature-tracking can benefit from the availability of fully polarimetric data. As some image characteristics are, in fact, polarisation-dependent, the selection of an optimum polarisation leads to improved performance. Furthermore, fully polarimetric SAR images can be despeckled without degrading the resolution, so that additional (smaller-scale) features can be exploited.
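One of the compared estimators, normalized cross-correlation patch tracking, is easy to sketch with scikit-image (window sizes and the subpixel refinement are left as assumptions):

```python
import numpy as np
from skimage.feature import match_template

def track_offset(master_patch, slave_search_window):
    # Estimate the feature shift between two acquisitions: correlate a
    # template from the first image against a larger search window from
    # the second and take the correlation peak.
    ncc = match_template(slave_search_window, master_patch)
    row, col = np.unravel_index(np.argmax(ncc), ncc.shape)
    # (row, col) is the best template placement (top-left corner) in the
    # search window; subtract the expected placement to get the shift,
    # and refine around the peak for subpixel accuracy in practice.
    return row, col
```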
Manavella, Valeria; Romano, Federica; Garrone, Federica; Terzini, Mara; Bignardi, Cristina; Aimetti, Mario
2017-06-01
The aim of this study was to present and validate a novel procedure for the quantitative volumetric assessment of extraction sockets that combines cone-beam computed tomography (CBCT) and image processing techniques. The CBCT datasets of 9 severely resorbed extraction sockets were analyzed by means of two image processing software packages, ImageJ and Mimics, using manual and automated segmentation techniques. The techniques were also applied to 5-mm spherical aluminum markers of known volume and to a polyvinyl chloride model of one alveolar socket scanned with micro-CT to test accuracy. Statistical differences in alveolar socket volume were found between the different methods of volumetric analysis (P<0.0001). Automated segmentation using Mimics was the most reliable and accurate method, with a relative error of 1.5%, considerably smaller than the errors of 7% and 10% introduced by the manual method using Mimics and by the automated method using ImageJ, respectively. The proposed automated segmentation protocol for the three-dimensional rendering of alveolar sockets showed more accurate results, excellent inter-observer similarity, and increased user friendliness. The clinical application of this method enables a three-dimensional evaluation of extraction socket healing after reconstructive procedures and during follow-up visits.
Alternative techniques for high-resolution spectral estimation of spectrally encoded endoscopy
NASA Astrophysics Data System (ADS)
Mousavi, Mahta; Duan, Lian; Javidi, Tara; Ellerbee, Audrey K.
2015-09-01
Spectrally encoded endoscopy (SEE) is a minimally invasive optical imaging modality capable of fast confocal imaging of internal tissue structures. Modern SEE systems use coherent sources to image deep within the tissue, and data are processed similarly to optical coherence tomography (OCT); however, standard processing of SEE data via the Fast Fourier Transform (FFT) leads to degradation of the axial resolution as the bandwidth of the source shrinks, resulting in a well-known trade-off between speed and axial resolution. Recognizing that the FFT, as a general spectral estimation algorithm, takes into account only the samples collected by the detector, in this work we investigate alternative high-resolution spectral estimation algorithms that exploit information such as sparsity and the general region position of the bulk sample to improve the axial resolution of processed SEE data. We validate the performance of these algorithms using both MATLAB simulations and analysis of experimental results generated from a home-built OCT system used to simulate an SEE system with variable scan rates. Our results open a new door towards using non-FFT algorithms to generate higher quality (i.e., higher resolution) SEE images at correspondingly fast scan rates, resulting in systems that are more accurate and more comfortable for patients due to the reduced imaging time.
Bode, Stefan; Murawski, Carsten; Laham, Simon M.
2018-01-01
A major obstacle for the design of rigorous, reproducible studies in moral psychology is the lack of suitable stimulus sets. Here, we present the Socio-Moral Image Database (SMID), the largest standardized moral stimulus set assembled to date, containing 2,941 freely available photographic images, representing a wide range of morally (and affectively) positive, negative and neutral content. The SMID was validated with over 820,525 individual judgments from 2,716 participants, with normative ratings currently available for all images on affective valence and arousal, moral wrongness, and relevance to each of the five moral values posited by Moral Foundations Theory. We present a thorough analysis of the SMID regarding (1) inter-rater consensus, (2) rating precision, and (3) breadth and variability of moral content. Additionally, we provide recommendations for use aimed at efficient study design and reproducibility, and outline planned extensions to the database. We anticipate that the SMID will serve as a useful resource for psychological, neuroscientific and computational (e.g., natural language processing or computer vision) investigations of social, moral and affective processes. The SMID images, along with associated normative data and additional resources are available at https://osf.io/2rqad/. PMID:29364985
Liang, Yicheng; Peng, Hao
2015-02-07
Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model the system matrix for resolution recovery, which was then incorporated into PET image reconstruction on a graphical processing unit platform owing to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, performance was compared among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with different numbers of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient, and noise. The results indicate that the proposed method has the potential to be used as an alternative to physical DOI designs and to achieve comparable imaging performance while reducing detector/system design cost and complexity.
Vellmer, Sebastian; Tonoyan, Aram S; Suter, Dieter; Pronin, Igor N; Maximov, Ivan I
2018-02-01
Diffusion magnetic resonance imaging (dMRI) is a powerful tool in clinical applications, in particular in oncology screening. dMRI has demonstrated its benefit and efficiency in the localisation and detection of different types of human brain tumours. Clinical dMRI data suffer from multiple artefacts such as motion and eddy-current distortions, contamination by noise, outliers, etc. In order to increase the image quality of the derived diffusion scalar metrics and the accuracy of the subsequent data analysis, various pre-processing approaches are actively being developed and used. In the present work we assess the effect of different pre-processing procedures, such as noise correction, different smoothing algorithms, and spatial interpolation of raw diffusion data, on the accuracy of brain glioma differentiation. As a set of sensitive biomarkers of glioma malignancy grade we chose the scalar metrics derived from diffusion and kurtosis tensor imaging as well as from the neurite orientation dispersion and density imaging (NODDI) biophysical model. Our results show that the application of noise correction, anisotropic diffusion filtering, and cubic-order spline interpolation resulted in the highest sensitivity and specificity for glioma malignancy grading. Thus, these pre-processing steps are recommended for the statistical analysis in brain tumour studies. Copyright © 2017. Published by Elsevier GmbH.
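The two pre-processing steps singled out above can be sketched in a few lines of Python; the Perona-Malik filter below is one common anisotropic diffusion variant, and the parameters and synthetic input are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy import ndimage

def perona_malik(img, n_iter=10, kappa=20.0, dt=0.2):
    """Minimal 2D Perona-Malik anisotropic diffusion: smooths within regions
    while the exponential conduction term suppresses diffusion across edges.
    (Uses periodic boundaries via np.roll for brevity.)"""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        diffs = [np.roll(u, s, axis) - u for axis in (0, 1) for s in (-1, 1)]
        u = u + dt * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u

dwi_slice = np.random.rand(64, 64)                # stand-in diffusion-weighted slice
smoothed = perona_malik(dwi_slice)
upsampled = ndimage.zoom(smoothed, 2.0, order=3)  # order=3 -> cubic-order spline
```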
Resolution enhancement in integral microscopy by physical interpolation.
Llavador, Anabel; Sánchez-Ortiga, Emilio; Barreiro, Juan Carlos; Saavedra, Genaro; Martínez-Corral, Manuel
2015-08-01
Integral-imaging technology has demonstrated its capability for computing depth images from the microimages recorded after a single shot. This capability has been shown in macroscopic imaging and also in microscopy. Although the possibility of refocusing different planes from one snapshot is crucial for the study of some biological processes, the main drawback of integral imaging is the substantial reduction of the spatial resolution. In this contribution we report a technique that increases the two-dimensional spatial resolution of the computed depth images in integral microscopy by a factor of √2. This is done by a double-shot approach, carried out by means of a rotating glass plate that shifts the microimages in the sensor plane. We experimentally validate the resolution enhancement and show the benefit of applying the technique to biological specimens.
Patch-Based Super-Resolution of MR Spectroscopic Images: Application to Multiple Sclerosis
Jain, Saurabh; Sima, Diana M.; Sanaei Nezhad, Faezeh; Hangel, Gilbert; Bogner, Wolfgang; Williams, Stephen; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk
2017-01-01
Purpose: Magnetic resonance spectroscopic imaging (MRSI) provides complementary information to conventional magnetic resonance imaging. Acquiring high resolution MRSI is time consuming and requires complex reconstruction techniques. Methods: In this paper, a patch-based super-resolution method is presented to increase the spatial resolution of metabolite maps computed from MRSI. The proposed method uses high resolution anatomical MR images (T1-weighted and Fluid-attenuated inversion recovery) to regularize the super-resolution process. The accuracy of the method is validated against conventional interpolation techniques using a phantom, as well as simulated and in vivo acquired human brain images of multiple sclerosis subjects. Results: The method preserves tissue contrast and structural information, and matches well with the trend of acquired high resolution MRSI. Conclusions: These results suggest that the method has potential for clinically relevant neuroimaging applications. PMID:28197066
High temperature x-ray micro-tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacDowell, Alastair A., E-mail: aamacdowell@lbl.gov; Barnard, Harold; Parkinson, Dilworth Y.
2016-07-27
There is increasing demand for 3D micro-scale time-resolved imaging of samples in realistic, and in many cases extreme, environments. The data are used to understand material response and to validate and refine computational models which, in turn, can be used to reduce development time for new materials and processes. Here we present the results of high temperature experiments carried out at the x-ray micro-tomography beamline 8.3.2 at the Advanced Light Source. The themes involve material failure and processing at temperatures up to 1750°C. The experimental configurations required to achieve the requisite conditions for imaging are described, with examples of ceramic matrix composites, spacecraft ablative heat shields and nuclear reactor core Gilsocarbon graphite.
NASA Astrophysics Data System (ADS)
Dake, Fumihiro; Fukutake, Naoki; Hayashi, Seri; Taki, Yusuke
2018-02-01
We propose superresolution nonlinear fluorescence microscopy with a pump-probe setup that utilizes repetitive stimulated absorption and stimulated emission driven by two-color laser beams. The resulting nonlinear fluorescence that undergoes such repetitive stimulated transitions is detectable as a signal via the lock-in technique. Because the nonlinear fluorescence signal is produced by the multiplicative combination of the incident beams, the optical resolution can be improved. A theoretical model of the nonlinear optical process is provided using rate equations, which offers a phenomenological interpretation of the nonlinear fluorescence and an estimation of the signal properties. The proposed method is demonstrated to have scalable optical resolution. The theoretical resolution and a bead image are also estimated to validate the experimental result.
Real-time traffic sign detection and recognition
NASA Astrophysics Data System (ADS)
Herbschleb, Ernst; de With, Peter H. N.
2009-01-01
The continuous growth of imaging databases increasingly requires analysis tools for the extraction of features. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images at resolutions up to 4,800×2,400 pixels. Because of the size of the database, both high reliability and high throughput are required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining color and specific spatial information. The first stage contains an area-limitation step that is performance-critical for both the detection rate and the overall processing time. The second stage locates candidate traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm; during this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput is 35 Hz for line-of-sight images of 800×600 pixels and 4 Hz for panorama images. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.
Aerosol Plume Detection Algorithm Based on Image Segmentation of Scanning Atmospheric Lidar Data
Weekley, R. Andrew; Goodrich, R. Kent; Cornman, Larry B.
2016-04-06
An image-processing algorithm has been developed to identify aerosol plumes in scanning lidar backscatter data. The images in this case consist of lidar data in a polar coordinate system. Each full lidar scan is taken as a fixed image in time, and sequences of such scans are considered functions of time. The data are analyzed in both the original backscatter polar coordinate system and a lagged coordinate system. The lagged coordinate system is a scatterplot of two datasets, such as subregions taken from the same lidar scan (spatial delay), or two sequential scans in time (time delay). The lagged coordinate system processing allows for finding and classifying clusters of data. The classification step is important in determining which clusters are valid aerosol plumes and which are from artifacts such as noise, hard targets, or background fields. These cluster classification techniques have skill since both local and global properties are used. Furthermore, more information is available since both the original data and the lag data are used. Performance statistics are presented for a limited set of data processed by the algorithm, where results from the algorithm were compared to subjective truth data identified by a human.
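A minimal sketch of the lagged coordinate construction, assuming the scans are available as 2D arrays of backscatter values; variable names and the random stand-in data are illustrative.

```python
import numpy as np

def lag_coordinates(scan_a, scan_b=None, lag=1):
    """Build lagged-coordinate scatter data: pair each sample with the same
    sample of a second scan (time delay) or with the sample `lag` bins away
    within one scan (spatial delay)."""
    if scan_b is not None:
        return scan_a.ravel(), scan_b.ravel()
    flat = scan_a.ravel()
    return flat[:-lag], flat[lag:]

scan_t0 = np.random.rand(50, 400)    # stand-in polar backscatter scans
scan_t1 = np.random.rand(50, 400)
x, y = lag_coordinates(scan_t0, scan_t1)
# Tight clusters along the diagonal indicate structure that persists between
# scans (candidate plumes); diffuse point clouds indicate noise.
```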
CNTRICS Imaging Biomarkers Final Task Selection: Long-Term Memory and Reinforcement Learning
Ragland, John D.; Cohen, Neal J.; Cools, Roshan; Frank, Michael J.; Hannula, Deborah E.; Ranganath, Charan
2012-01-01
Functional imaging paradigms hold great promise as biomarkers for schizophrenia research as they can detect altered neural activity associated with the cognitive and emotional processing deficits that are so disabling to this patient population. In an attempt to identify the most promising functional imaging biomarkers for research on long-term memory (LTM), the Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia (CNTRICS) initiative selected “item encoding and retrieval,” “relational encoding and retrieval,” and “reinforcement learning” as key LTM constructs to guide the nomination process. This manuscript reports on the outcome of the third CNTRICS biomarkers meeting in which nominated paradigms in each of these domains were discussed by a review panel to arrive at a consensus on which of the nominated paradigms could be recommended for immediate translational development. After briefly describing this decision process, information is presented from the nominating authors describing the 4 functional imaging paradigms that were selected for immediate development. In addition to describing the tasks, information is provided on cognitive and neural construct validity, sensitivity to behavioral or pharmacological manipulations, availability of animal models, psychometric characteristics, effects of schizophrenia, and avenues for future development. PMID:22102094
Arrigoni, Simone; Turra, Giovanni; Signoroni, Alberto
2017-09-01
With the rapid diffusion of Full Laboratory Automation systems, Clinical Microbiology is currently experiencing a new digital revolution. The ability to capture and process large amounts of visual data from microbiological specimen processing enables the definition of completely new objectives. These include the direct identification of pathogens growing on culture plates, with expected improvements in the rapid definition of the right treatment for patients affected by bacterial infections. In this framework, the synergies between light spectroscopy and image analysis offered by hyperspectral imaging are of prominent interest. This leads us to assess the feasibility of a reliable and rapid discrimination of pathogens through the classification of their spectral signatures extracted from hyperspectral image acquisitions of bacteria colonies growing on blood agar plates. We designed and implemented the whole data acquisition and processing pipeline and performed a comprehensive comparison among 40 combinations of different data preprocessing and classification techniques. High discrimination performance has been achieved, thanks also to improved colony segmentation and spectral signature extraction. Experimental results reveal the high accuracy and suitability of the proposed approach, driving the selection of the most suitable and scalable classification pipelines and stimulating clinical validations. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
De Lorenzo, Danilo; De Momi, Elena; Beretta, Elisa; Cerveri, Pietro; Perona, Franco; Ferrigno, Giancarlo
2009-02-01
Computer Assisted Orthopaedic Surgery (CAOS) systems improve the results and the standardization of surgical interventions. Anatomical landmark and bone surface detection is essential both to register the surgical space with the pre-operative imaging space and to compute biomechanical parameters for prosthesis alignment. Surface point acquisition increases the invasiveness of the intervention and can be influenced by the interposition of the soft tissue layer (7-15 mm localization errors). This study is aimed at evaluating the accuracy of a custom-made A-mode ultrasound (US) system for non-invasive detection of anatomical landmarks and surfaces. A-mode solutions eliminate the need for US image segmentation, offer real-time signal processing, and require less invasive equipment. The system consists of a single-transducer US probe that is optically tracked, a pulser/receiver, an FPGA-based board responsible for logic control command generation and real-time signal processing, and three custom-made boards (signal acquisition, blanking, and synchronization). We propose a new calibration method for the US system. The experimental validation was then performed by measuring the length of known-shape polymethylmethacrylate boxes filled with pure water and by acquiring bone surface points on a bovine bone phantom covered with soft-tissue-mimicking materials. Measurement errors were computed through MR and CT image acquisitions of the phantom. Point acquisition on the bone surface with the US system demonstrated lower errors (1.2 mm) than standard pointer acquisition (4.2 mm).
Image pre-processing method for near-wall PIV measurements over moving curved interfaces
NASA Astrophysics Data System (ADS)
Jia, L. C.; Zhu, Y. D.; Jia, Y. X.; Yuan, H. J.; Lee, C. B.
2017-03-01
PIV measurements near a moving interface are always difficult. This paper presents a PIV image pre-processing method that returns high spatial resolution velocity profiles near the interface. Instead of re-shaping or re-orientating the interrogation windows, interface tracking and an image transformation are used to stretch the particle image strips near a curved interface into rectangles. Then the adaptive structured interrogation windows can be arranged at specified distances from the interface. Synthetic particles are also added into the solid region to minimize interfacial effects and to restrict particles on both sides of the interface. Since high spatial resolution is only required in the high-velocity-gradient region, adaptive meshing and stretching of the image strips in the normal direction are used to improve the cross-correlation signal-to-noise ratio (SNR) by reducing the velocity difference and the particle image distortion within the interrogation window. A two-dimensional Gaussian fit is used to compensate for the effects of stretching the particle images. The working hypothesis is that fluid motion near the interface is ‘quasi-tangential flow’, which is reasonable in most fluid-structure interaction scenarios. The method was validated against the window deformation iterative multi-grid scheme (WIDIM) using synthetic image pairs with different velocity profiles. The method was tested for boundary layer measurements of a supersonic turbulent boundary layer on a flat plate, near a rotating blade and near a flexible flapping flag. This image pre-processing method provides higher spatial resolution than conventional WIDIM and good robustness for measuring velocity profiles near moving interfaces.
Motion-Blurred Particle Image Restoration for On-Line Wear Monitoring
Peng, Yeping; Wu, Tonghai; Wang, Shuo; Kwok, Ngaiming; Peng, Zhongxiao
2015-01-01
On-line images of wear debris contain important information for real-time condition monitoring, and a dynamic imaging technique can eliminate the particle overlaps commonly found in static images, for instance those acquired using ferrography. However, dynamic wear debris images captured in a running machine are unavoidably blurred because the particles in the lubricant are in motion. Hence, it is difficult to acquire reliable images of wear debris with an adequate resolution for particle feature extraction. In order to obtain sharp wear particle images, an image processing approach is proposed. Blurred particles were first separated from the static background by utilizing a background subtraction method. Second, the point spread function was estimated using the power cepstrum to determine the blur direction and length. Then, the Wiener filter algorithm was adopted to perform image restoration to improve the image quality. Finally, experiments were conducted with a large number of dynamic particle images to validate the effectiveness of the proposed method, and the performance of the approach was also evaluated. This study provides a new practical approach to acquiring clear images for on-line wear monitoring. PMID:25856328
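The cepstrum and Wiener steps can be sketched as follows, assuming for simplicity a horizontal uniform motion blur; the blur model, constants, and demo image are illustrative assumptions.

```python
import numpy as np

def cepstrum(img):
    """Power cepstrum: linear motion blur leaves periodic peaks whose spacing
    along the blur direction reveals the blur length."""
    spec = np.abs(np.fft.fft2(img)) ** 2
    return np.abs(np.fft.ifft2(np.log(spec + 1e-12)))

def horizontal_blur_psf(length, shape):
    """Uniform linear-motion point spread function (assumed horizontal)."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length
    return psf

def wiener_deblur(blurred, psf, K=0.01):
    """Frequency-domain Wiener filter: F = G * conj(H) / (|H|^2 + K)."""
    G = np.fft.fft2(blurred)
    H = np.fft.fft2(psf, s=blurred.shape)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + K) * G))

img = np.random.rand(128, 128)                       # stand-in particle image
psf = horizontal_blur_psf(9, img.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deblur(blurred, psf, K=0.005)
```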
Perea Palazón, R J; Ortiz Pérez, J T; Prat González, S; de Caralt Robira, T M; Cibeira López, M T; Solé Arqués, M
2016-01-01
The development of myocardial fibrosis is a common process in the appearance of ventricular dysfunction in many heart diseases. Magnetic resonance imaging makes it possible to accurately evaluate the structure and function of the heart, and its role in the macroscopic characterization of myocardial fibrosis by late enhancement techniques has been widely validated clinically. Recent studies have demonstrated that T1-mapping techniques can quantify diffuse myocardial fibrosis and the expansion of the myocardial extracellular space in absolute terms. However, further studies are necessary to validate the usefulness of this technique in the early detection of tissue remodeling at a time when implementing early treatment would improve a patient's prognosis. This article reviews the state of the art for T1 mapping of the myocardium, its clinical applications, and its limitations. Copyright © 2016 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
NASA Technical Reports Server (NTRS)
Steffen, K.; Schweiger, A. J.
1990-01-01
The validation of sea ice products derived from the Special Sensor Microwave Imager (SSM/I) on board a DMSP platform is examined using data from the Landsat MSS and NOAA-AVHRR sensors. Image processing techniques for retrieving ice concentrations from each type of imagery are developed, and the results are intercompared to determine the ice parameter retrieval accuracy of the SSM/I NASA-Team algorithm. For case studies in the Beaufort Sea and East Greenland Sea, average retrieval errors of the SSM/I algorithm are between 1.7 percent for spring conditions and 4.3 percent during freeze-up in comparison with Landsat-derived ice concentrations. For a case study in the East Greenland Sea, SSM/I-derived ice concentrations in comparison with AVHRR imagery displayed a mean error of 9.6 percent.
PIV Uncertainty Methodologies for CFD Code Validation at the MIR Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabharwall, Piyush; Skifton, Richard; Stoots, Carl
2013-12-01
Currently, computational fluid dynamics (CFD) is widely used in the nuclear thermal hydraulics field for design and safety analyses. To validate CFD codes, high quality multi-dimensional flow field data are essential. The Matched Index of Refraction (MIR) Flow Facility at Idaho National Laboratory has a unique capability to contribute to the development of validated CFD codes through the use of Particle Image Velocimetry (PIV). The significance of the MIR facility is that it permits non-intrusive velocity measurement techniques, such as PIV, through complex models without requiring probes and other instrumentation that disturb the flow. At the heart of any PIV calculation is the cross-correlation, which is used to estimate the displacement of particles in some small part of the image over the time span between two images. This image displacement is indicated by the location of the largest correlation peak. In the MIR facility, uncertainty quantification is a challenging task due to the use of optical measurement techniques. Currently, this study is developing a reliable method to analyze the uncertainty and sensitivity of the measured data and a computer code to automatically perform this analysis. The main objective of this study is to develop a well established uncertainty quantification method for the MIR Flow Facility, which involves many complicated uncertainty factors. In this study, the uncertainty sources are resolved in depth by categorizing them into uncertainties from the MIR flow loop and from the PIV system (including particle motion, image distortion, and data processing). Then, each uncertainty source is mathematically modeled or adequately defined. Finally, this study will provide a method and procedure to quantify the experimental uncertainty in the MIR Flow Facility with sample test results.
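The core cross-correlation step reads, in a minimal Python form (window size and demo shift are illustrative):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """FFT-based circular cross-correlation of two interrogation windows;
    the location of the largest peak gives the mean particle displacement
    of win_b relative to win_a, in pixels."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.fftshift(np.fft.ifft2(np.fft.fft2(a).conj() *
                                        np.fft.fft2(b)).real)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = np.array(peak) - np.array(corr.shape) // 2
    return dx, dy

a = np.random.rand(32, 32)
b = np.roll(a, (3, 5), axis=(0, 1))   # shift the pattern by dy=3, dx=5
print(piv_displacement(a, b))          # -> (5, 3)
```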
Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.
Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz
2017-06-01
Minimally invasive surgery is in constant further development and has replaced many conventional operative procedures. If vascular structure movement could be detected during these procedures, the risk of vascular injury and of conversion to open surgery could be reduced. The recently proposed motion-amplifying algorithm, Eulerian Video Magnification (EVM), has been shown to substantially enhance minimal object changes in digitally recorded video that are barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted for endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of the image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than with EVM. Motion magnification image processing technology has the potential for clinical importance as a video optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive, marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical tests.
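At its core, EVM amplifies a temporally band-passed version of the video; the sketch below shows that temporal step only (full EVM also decomposes each frame into a spatial pyramid), with an assumed frame rate and pass band.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_motion(frames, fs, f_lo, f_hi, alpha=20.0):
    """Eulerian-style magnification sketch: band-pass every pixel's time
    series around the band of interest (e.g. an arterial pulse rate) and add
    the amplified band back to the video. frames: float array (T, H, W)."""
    b, a = butter(2, [f_lo, f_hi], btype="bandpass", fs=fs)
    bandpassed = filtfilt(b, a, frames, axis=0)
    return frames + alpha * bandpassed

frames = np.random.rand(120, 16, 16)            # stand-in laparoscopic clip
magnified = magnify_motion(frames, fs=30.0, f_lo=0.8, f_hi=2.0)
```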
NASA Astrophysics Data System (ADS)
Belfort, Benjamin; Weill, Sylvain; Lehmann, François
2017-07-01
A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. This method directly relates digitally measured intensities to the water content of the porous medium. It requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling, and calibration. The main advantages of this approach are that no separate calibration experiment is needed, because the calibration curve relating water content and reflected light intensity is established during the main monitoring phase of each experiment, and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) was carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework to perform an error analysis and numerical simulations with a state-of-the-art computational code that solves the Richards' equation. Comparison of the cumulative mass leaving and entering the flow tank and of the water content maps produced by the photographic measurement technique and the numerical simulations demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Finally, the photometric procedure has been developed expressly with its extension to heterogeneous media in mind. Other processes may be investigated through different laboratory experiments, which will serve as benchmarks for the validation of numerical codes.
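A simplified stand-in for the intensity-to-water-content mapping is shown below, using per-pixel rescaling between a fully drained and a fully saturated reference photograph; in the paper the calibration curve is instead built from intensities monitored during the experiment itself, so all names and values here are illustrative.

```python
import numpy as np

def water_content_map(image, dry_img, wet_img, theta_r, theta_s):
    """Per-pixel linear calibration between reference states: theta_r and
    theta_s are the residual and saturated volumetric water contents."""
    num = image.astype(float) - dry_img
    den = wet_img.astype(float) - dry_img
    sat = np.clip(num / np.where(den == 0, np.nan, den), 0.0, 1.0)
    return theta_r + sat * (theta_s - theta_r)

dry = np.full((40, 60), 50.0)     # stand-in reference photographs
wet = np.full((40, 60), 200.0)
frame = np.full((40, 60), 125.0)  # one monitoring photograph
theta = water_content_map(frame, dry, wet, theta_r=0.05, theta_s=0.40)
```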
Synchronous high speed multi-point velocity profile measurement by heterodyne interferometry
NASA Astrophysics Data System (ADS)
Hou, Xueqin; Xiao, Wen; Chen, Zonghui; Qin, Xiaodong; Pan, Feng
2017-02-01
This paper presents a synchronous multipoint velocity profile measurement system, which acquires the vibration velocities as well as images of vibrating objects by combining optical heterodyne interferometry and a high-speed CMOS-DVR camera. The high-speed CMOS-DVR camera records a sequence of images of the vibrating object. Then, by extracting and processing multiple pixels at the same time, a digital demodulation technique is implemented to simultaneously acquire the vibrating velocity of the target from the recorded sequences of images. This method is validated with an experiment. A piezoelectric ceramic plate with standard vibration characteristics is used as the vibrating target, which is driven by a standard sinusoidal signal.
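For one pixel's intensity trace, the digital demodulation can be sketched with an analytic-signal (Hilbert transform) approach; the carrier frequency, sampling rate, and Doppler scaling below are illustrative assumptions rather than the paper's exact demodulator.

```python
import numpy as np
from scipy.signal import hilbert

def demodulate_velocity(trace, fs, f_carrier, wavelength=532e-9):
    """Recover vibration velocity from a heterodyne intensity trace: form the
    analytic signal, remove the carrier, unwrap the phase, differentiate,
    and scale by lambda/(4*pi) (reflective Doppler convention)."""
    analytic = hilbert(trace - trace.mean())
    t = np.arange(len(trace)) / fs
    phase = np.unwrap(np.angle(analytic * np.exp(-2j * np.pi * f_carrier * t)))
    return wavelength / (4.0 * np.pi) * np.gradient(phase, 1.0 / fs)

fs, fc = 1.0e6, 40.0e3                   # assumed sampling and carrier rates
t = np.arange(4096) / fs
trace = np.cos(2 * np.pi * fc * t + 0.5 * np.sin(2 * np.pi * 200.0 * t))
velocity = demodulate_velocity(trace, fs, fc)
```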
NASA Astrophysics Data System (ADS)
Masciotti, J.; Provenzano, F.; Papa, J.; Klose, A.; Hur, J.; Gu, X.; Yamashiro, D.; Kandel, J.; Hielscher, A. H.
2006-02-01
Small animal models are employed to simulate disease in humans, to study its progression and the factors important to the disease process, and to study disease treatment. Biomedical imaging modalities such as magnetic resonance imaging (MRI) and optical tomography make it possible to non-invasively monitor the progression of diseases in living small animals and to study the efficacy of drugs and treatment protocols. MRI is an established imaging modality capable of obtaining high resolution anatomical images and, along with contrast agents, allows the study of blood volume. Optical tomography, on the other hand, is an emerging imaging modality which, while much lower in spatial resolution, can separate the effects of oxyhemoglobin, deoxyhemoglobin, and blood volume with high temporal resolution. In this study we apply these modalities to imaging the growth of kidney tumors and then their treatment by an anti-VEGF agent. We illustrate how these imaging modalities have their individual uses but can still supplement each other, and how cross-validation can be performed.
MLESAC Based Localization of Needle Insertion Using 2D Ultrasound Images
NASA Astrophysics Data System (ADS)
Xu, Fei; Gao, Dedong; Wang, Shan; Zhanwen, A.
2018-04-01
In 2D ultrasound images of ultrasound-guided percutaneous needle insertions, it is difficult to determine the positions of the needle axis and tip because of artifacts and other noise. In this work the speckle is regarded as the noise of an ultrasound image, and a novel algorithm is presented to detect the needle in a 2D ultrasound image. First, the wavelet soft thresholding technique based on the BayesShrink rule is used to denoise the speckle of the ultrasound image. Second, Otsu's thresholding method and morphologic operations are added to pre-process the ultrasound image. Finally, the needle is localized in the 2D ultrasound image based on the maximum likelihood estimation sample consensus (MLESAC) algorithm. The experimental results show that the proposed algorithm is valid for estimating the position of the needle axis and tip in ultrasound images. This work shows promise for use in path planning and robot-assisted needle insertion procedures.
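The consensus step can be illustrated with a plain RANSAC line fit over candidate needle pixels; MLESAC replaces the inlier count with a likelihood score but follows the same hypothesize-and-verify loop, and the tolerances and synthetic data here are illustrative.

```python
import numpy as np

def ransac_line(points, n_iter=500, tol=2.0, seed=0):
    """Fit a line (needle axis) to (x, y) candidate pixels by repeatedly
    sampling two points and keeping the hypothesis with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        norm = np.hypot(d[0], d[1])
        if norm < 1e-9:
            continue
        n = np.array([-d[1], d[0]]) / norm       # unit normal of the line
        inliers = np.abs((points - p1) @ n) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p1, d / norm)
    return best_model, best_inliers

rng = np.random.default_rng(1)
s = rng.random(200) * 100.0                      # synthetic needle pixels
pts = np.c_[s, 20.0 + 0.5 * s] + rng.normal(0.0, 1.0, (200, 2))
model, inliers = ransac_line(pts)
```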
A High Performance Pulsatile Pump for Aortic Flow Experiments in 3-Dimensional Models.
Chaudhury, Rafeed A; Atlasman, Victor; Pathangey, Girish; Pracht, Nicholas; Adrian, Ronald J; Frakes, David H
2016-06-01
Aortic pathologies such as coarctation, dissection, and aneurysm represent a particularly emergent class of cardiovascular diseases. Computational simulations of aortic flows are growing increasingly important as tools for gaining understanding of these pathologies, as well as for planning their surgical repair. In vitro experiments are required to validate the simulations against real world data, and the experiments require a pulsatile flow pump system that can provide physiologic flow conditions characteristic of the aorta. We designed a newly capable piston-based pulsatile flow pump system that can generate high volume flow rates (850 mL/s), replicate physiologic waveforms, and pump high viscosity fluids against large impedances. The system is also compatible with a broad range of fluid types, and is operable in magnetic resonance imaging environments. Performance of the system was validated using image processing-based analysis of piston motion as well as particle image velocimetry. The new system represents a more capable pumping solution for aortic flow experiments than other available designs, and can be manufactured at a relatively low cost.
Operational algorithm for ice-water classification on dual-polarized RADARSAT-2 images
NASA Astrophysics Data System (ADS)
Zakhvatkina, Natalia; Korosov, Anton; Muckenhuber, Stefan; Sandven, Stein; Babiker, Mohamed
2017-01-01
Synthetic Aperture Radar (SAR) data from RADARSAT-2 (RS2) in dual-polarization mode provide additional information for discriminating sea ice and open water compared to single-polarization data. We have developed an automatic algorithm based on dual-polarized RS2 SAR images to distinguish open water (rough and calm) and sea ice. Several technical issues inherent in RS2 data were solved in the pre-processing stage, including thermal noise reduction in HV polarization and correction of angular backscatter dependency in HH polarization. Texture features were explored and used in addition to supervised image classification based on the support vector machines (SVM) approach. The study was conducted in the ice-covered area between Greenland and Franz Josef Land. The algorithm has been trained using 24 RS2 scenes acquired in winter months in 2011 and 2012, and the results were validated against manually derived ice charts of the Norwegian Meteorological Institute. The algorithm was applied on a total of 2705 RS2 scenes obtained from 2013 to 2015, and the validation results showed that the average classification accuracy was 91 ± 4 %.
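A compact stand-in for the texture-plus-SVM stage is shown below, using scikit-image GLCM descriptors and a scikit-learn SVM; the patch size, gray-level quantization, and synthetic training data are illustrative, not the paper's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greyco*' in older skimage
from sklearn.svm import SVC

def texture_features(patch, levels=32):
    """GLCM texture descriptors for one backscatter patch scaled to [0, 1]."""
    q = (patch * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)                    # synthetic stand-in patches:
X = np.vstack([texture_features(rng.random((32, 32)) ** (1 + (i < 20)))
               for i in range(40)])               # first 20 darker ("ice")
y = np.array([1] * 20 + [0] * 20)                 # 1 = ice, 0 = open water
clf = SVC(kernel="rbf").fit(X, y)
```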
NASA Astrophysics Data System (ADS)
Kromp, Florian; Taschner-Mandl, Sabine; Schwarz, Magdalena; Blaha, Johanna; Weiss, Tamara; Ambros, Peter F.; Reiter, Michael
2015-02-01
We propose a user-driven method for the segmentation of neuroblastoma nuclei in microscopic fluorescence images involving the gradient energy tensor. Multispectral fluorescence images contain intensity and spatial information about antigen expression, fluorescence in situ hybridization (FISH) signals and nucleus morphology. The latter serves as the basis for the detection of single cells and the calculation of shape features, which are used to validate the segmentation and to reject false detections. Accurate segmentation is difficult due to varying staining intensities and aggregated cells. It requires several (meta-)parameters, which have a strong influence on the segmentation results and have to be selected carefully for each sample (or group of similar samples) through user interaction. Because our method is designed for clinicians and biologists, who may have only a limited image processing background, an interactive parameter selection step allows the implicit tuning of parameter values. With this simple but intuitive method, segmentation results with high precision for a large number of cells can be achieved with minimal user interaction. The strategy was validated on hand-segmented datasets of three neuroblastoma cell lines.
Ibrahim, Reham S; Fathy, Hoda
2018-03-30
Tracking the impact of commonly applied post-harvesting and industrial processing practices on the compositional integrity of ginger rhizome was implemented in this work. Untargeted metabolite profiling was performed using a digitally-enhanced HPTLC method in which the chromatographic fingerprints were extracted using ImageJ software and then analysed with multivariate Principal Component Analysis (PCA) for pattern recognition. A targeted approach was applied using a new, validated, simple and fast HPTLC image analysis method for the simultaneous quantification of the officially recognized markers 6-, 8-, 10-gingerol and 6-shogaol, in conjunction with chemometric Hierarchical Clustering Analysis (HCA). The results of both targeted and untargeted metabolite profiling revealed that the peeling, drying and storage employed during processing have a great influence on the ginger chemo-profile; the different forms of processed ginger should not be used interchangeably. Moreover, it is deemed necessary to consider the holistic metabolic profile for a comprehensive evaluation of ginger during processing. Copyright © 2018. Published by Elsevier B.V.
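The untargeted pattern-recognition step amounts to a PCA of the densitometric fingerprints; a minimal sketch follows, with a random matrix standing in for profiles exported from ImageJ.

```python
import numpy as np
from sklearn.decomposition import PCA

# One densitometric profile per ginger sample (rows), e.g. exported from
# ImageJ track scans; random data stand in for real fingerprints here.
fingerprints = np.random.rand(12, 300)
X = fingerprints - fingerprints.mean(axis=0)      # mean-center before PCA
scores = PCA(n_components=2).fit_transform(X)
# A scatterplot of the first two score columns groups samples by processing
# history (e.g. peeled vs. unpeeled, drying regime, storage time).
```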
NASA Technical Reports Server (NTRS)
Chien, Steve; Doubleday, Joshua; Ortega, Kevin; Tran, Daniel; Bellardo, John; Williams, Austin; Piug-Suari, Jordi; Crum, Gary; Flatley, Thomas
2012-01-01
The Intelligent Payload Experiment (IPEX) is a cubesat manifested for launch in October 2013 that will flight validate autonomous operations for onboard instrument processing and product generation for the Intelligent Payload Module (IPM) of the Hyperspectral Infra-red Imager (HyspIRI) mission concept. We first describe the ground and flight operations concept for HyspIRI IPM operations. We then describe the ground and flight operations concept for the IPEX mission and how that will validate HyspIRI IPM operations. We then detail the current status of the mission and outline the schedule for future development.
Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M
2015-10-01
New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2) and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17,479 images. This method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
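The flavor of gradient-threshold segmentation can be sketched as below; note that EGT derives its threshold from an empirical model fitted to the reference data set, whereas this sketch substitutes a simple fixed percentile, so all parameters are illustrative.

```python
import numpy as np
from scipy import ndimage

def gradient_threshold_segment(image, percentile=75.0, min_size=100):
    """Threshold the gradient-magnitude image, fill holes, and drop small
    connected components. The percentile rule is a stand-in for EGT's
    empirically learned threshold."""
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    mask = grad > np.percentile(grad, percentile)
    mask = ndimage.binary_fill_holes(mask)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep_ids = np.nonzero(sizes >= min_size)[0] + 1
    return np.isin(labels, keep_ids)

yy, xx = np.mgrid[:128, :128]                    # synthetic bright colony
image = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 400.0)
foreground = gradient_threshold_segment(image)
```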
Coffman, Marika C; Trubanova, Andrea; Richey, J Anthony; White, Susan W; Kim-Spoon, Jungmeen; Ollendick, Thomas H; Pine, Daniel S
2015-12-01
Attention to faces is a fundamental psychological process in humans, with atypical attention to faces noted across several clinical disorders. Although many clinical disorders onset in adolescence, there is a lack of well-validated stimulus sets containing adolescent faces available for experimental use. Further, the images comprising most available sets are not controlled for high- and low-level visual properties. Here, we present a cross-site validation of the National Institute of Mental Health Child Emotional Faces Picture Set (NIMH-ChEFS), comprised of 257 photographs of adolescent faces displaying angry, fearful, happy, sad, and neutral expressions. All of the direct facial images from the NIMH-ChEFS set were adjusted in terms of location of facial features and standardized for luminance, size, and smoothness. Although overall agreement between raters in this study and the original development-site raters was high (89.52%), this differed by group such that agreement was lower for adolescents relative to mental health professionals in the current study. These results suggest that future research using this face set or others of adolescent/child faces should base comparisons on similarly-aged validation data. Copyright © 2015 John Wiley & Sons, Ltd.
Parallel algorithm of real-time infrared image restoration based on total variation theory
NASA Astrophysics Data System (ADS)
Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei
2015-10-01
Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove the noise but penalize too heavily the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional. This converts the restoration process into an optimization problem for a functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model fully exploits the remote sensing data and preserves information at edges caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm can be easily parallelized. Therefore a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. The massive computation on image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis measuring the restored image quality against the input image is presented. Experimental results show that the TV-L1 filter can restore the varying background image reasonably and that its performance can meet the requirements of real-time image processing.
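A minimal picture of the shared-memory parallelization is a strip decomposition with overlapping halos; the sketch below uses scikit-image's Chambolle TV denoiser (an L2-fidelity ROF solver) as a stand-in for the paper's TV-L1 model, and the strip count and pad width are illustrative.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from skimage.restoration import denoise_tv_chambolle  # TV with L2 fidelity

def tv_restore_parallel(image, n_strips=4, weight=0.1, pad=8):
    """Denoise horizontal strips in parallel; strips overlap by `pad` rows
    so that seams between threads are hidden."""
    strips = np.array_split(np.arange(image.shape[0]), n_strips)
    def work(rows):
        lo, hi = rows[0], rows[-1] + 1
        start = max(lo - pad, 0)
        block = denoise_tv_chambolle(image[start:hi + pad], weight=weight)
        return block[lo - start:lo - start + (hi - lo)]
    with ThreadPoolExecutor() as pool:
        return np.vstack(list(pool.map(work, strips)))

restored = tv_restore_parallel(np.random.rand(256, 256))  # stand-in IR frame
```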
Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Goddu, S Murty; Mutic, Sasa; Deasy, Joseph O; Low, Daniel A
2011-01-01
Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse-consistency algorithms that provide a consistent displacement vector field in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and to popular image-processing toolkits such as ITK. DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a good range of options for DIR result visualization, evaluation, and validation. By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.
Automated liver elasticity calculation for 3D MRE
NASA Astrophysics Data System (ADS)
Dzyubak, Bogdan; Glaser, Kevin J.; Manduca, Armando; Ehman, Richard L.
2017-03-01
Magnetic Resonance Elastography (MRE) is a phase-contrast MRI technique that calculates quantitative stiffness images, called elastograms, by imaging the propagation of acoustic waves in tissue. It is used clinically to diagnose liver fibrosis. Automated analysis of MRE is difficult because the corresponding MRI magnitude images (which contain the anatomical information) are affected by intensity inhomogeneity, motion artifact, and poor tissue and edge contrast. Additionally, areas with low wave amplitude must be excluded. An automated algorithm has already been successfully developed and validated for clinical 2D MRE. 3D MRE acquires substantially more data and, due to accelerated acquisition, has exacerbated image artifacts. Also, the current 3D MRE processing does not yield a confidence map to indicate MRE wave quality and guide ROI selection, as is the case in 2D. In this study, an extension of the 2D automated method, with a simple wave-amplitude metric, was developed and validated against an expert reader on a set of 57 patient exams with both 2D and 3D MRE. The stiffness discrepancy with the expert for 3D MRE was -0.8% ± 9.45%, better than the discrepancy with the same reader for 2D MRE (-3.2% ± 10.43%) and better than the inter-reader discrepancy observed in previous studies. There were no automated processing failures in this dataset. Thus, the automated liver elasticity calculation (ALEC) algorithm is able to calculate stiffness from 3D MRE data with minimal bias and good precision, while enabling stiffness measurements that are fully reproducible and easily performed on large 3D MRE datasets.
Rozenbaum, O
2011-04-15
Understanding the weathering processes of building stones and more generally of their transfer properties requires detailed knowledge of the porosity characteristics. This study aims at analyzing three-dimensional images obtained by X-ray microtomography of building stones. In order to validate these new results a weathered limestone previously characterised (Rozenbaum et al., 2007) by two-dimensional image analysis was selected. The 3-D images were analysed by a set of mathematical tools that enable the description of the pore and solid phase distribution. Results show that 3-D image analysis is a powerful technique to characterise the morphological, structural and topological differences due to weathering. The paper also discusses criteria for mathematically determining whether a stone is weathered or not. Copyright © 2011 Elsevier B.V. All rights reserved.
Qian, Chenggen; Chen, Yulei; Zhu, Sha; Yu, Jicheng; Zhang, Lei; Feng, Peijian; Tang, Xin; Hu, Quanyin; Sun, Wujin; Lu, Yue; Xiao, Xuanzhong; Shen, Qun-Dong; Gu, Zhen
2016-01-01
Stimuli-responsive and imaging-guided drug delivery systems hold vast promise for enhancement of therapeutic efficacy. Here we report an adenosine-5'-triphosphate (ATP)-responsive and near-infrared (NIR)-emissive conjugated polymer-based nanocarrier for the controlled release of anticancer drugs and real-time imaging. We demonstrate that the conjugated polymeric nanocarriers functionalized with phenylboronic acid tags on surface as binding sites for ATP could be converted to the water-soluble conjugated polyelectrolytes in an ATP-rich environment, which promotes the disassembly of the drug carrier and subsequent release of the cargo. In vivo studies validate that this formulation exhibits promising capability for inhibition of tumor growth. We also evaluate the metabolism process by monitoring the fluorescence signal of the conjugated polymer through the in vivo NIR imaging.
Vorticity field measurement using digital inline holography
NASA Astrophysics Data System (ADS)
Mallery, Kevin; Hong, Jiarong
2017-11-01
We demonstrate the direct measurement of a 3D vorticity field using digital inline holographic microscopy. Microfiber tracer particles are illuminated with a 532 nm continuous diode laser and imaged using a single CCD camera. The recorded holographic images are processed using a GPU-accelerated inverse-problem approach to reconstruct the 3D structure of each microfiber in the imaged volume. The translation and rotation of each microfiber are measured using a time-resolved image sequence, yielding velocity and vorticity point measurements. The accuracy and limitations of this method are investigated using synthetic holograms. Measurements of solid-body rotational flow are used to validate the accuracy of the technique under known flow conditions. The technique is further applied to a practical turbulent flow case to investigate its 3D velocity field and vorticity distribution.
Guo, Lu; Wang, Gang; Feng, Yuanming; Yu, Tonggang; Guo, Yu; Bai, Xu; Ye, Zhaoxiang
2016-09-21
Accurate target volume delineation is crucial for the radiotherapy of tumors. Diffusion and perfusion magnetic resonance imaging (MRI) can provide functional information about brain tumors, and they are able to detect tumor volume and physiological changes beyond the lesions shown on conventional MRI. This review examines recent studies that utilized diffusion and perfusion MRI for tumor volume definition in radiotherapy of brain tumors, and it presents the opportunities and challenges in the integration of multimodal functional MRI into clinical practice. The results indicate that specialized and robust post-processing algorithms and tools are needed for the precise alignment of targets on the images, and comprehensive validations with more clinical data are important for the improvement of the correlation between histopathologic results and MRI parameter images.
Jiang, Lide; Wang, Menghua
2013-09-20
A new flag/masking scheme has been developed for identifying stray light and cloud shadow pixels that significantly impact the quality of satellite-derived ocean color products. Various case studies have been carried out to evaluate the performance of the new cloud contamination flag/masking scheme on ocean color products derived from the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (SNPP). These include direct visual assessments, detailed quantitative case studies, objective statistic analyses, and global image examinations and comparisons. The National Oceanic and Atmospheric Administration (NOAA) Multisensor Level-1 to Level-2 (NOAA-MSL12) ocean color data processing system has been used in the study. The new stray light and cloud shadow identification method has been shown to outperform the current stray light flag in both valid data coverage and data quality of satellite-derived ocean color products. In addition, some cloud-related flags from the official VIIRS-SNPP data processing software, i.e., the Interface Data Processing System (IDPS), have been assessed. Although the data quality with the IDPS flags is comparable to that of the new flag implemented in the NOAA-MSL12 ocean color data processing system, the valid data coverage from the IDPS is significantly less than that from the NOAA-MSL12 using the new stray light and cloud shadow flag method. Thus, the IDPS flag/masking algorithms need to be refined and modified to reduce the pixel loss, e.g., the proposed new cloud contamination flag/masking can be implemented in IDPS VIIRS ocean color data processing.
Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C
2008-01-01
As functional magnetic resonance imaging (fMRI) becomes widely used, the demand for evaluation of fMRI processing pipelines and validation of fMRI analysis results is increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data, based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules, such as FSL.FEAT and NPAIRS.CVA, were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the ranking of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.
Mato Abad, Virginia; García-Polo, Pablo; O'Daly, Owen; Hernández-Tamames, Juan Antonio; Zelaya, Fernando
2016-04-01
The method of Arterial Spin Labeling (ASL) has experienced a significant rise in its application to functional imaging, since it is the only technique capable of measuring blood perfusion in a truly non-invasive manner. Currently, there are no commercial packages for processing ASL data and there is no recognized standard for normalizing ASL data to a common frame of reference. This work describes a new Automated Software for ASL Processing (ASAP) that can automatically process several ASL datasets. ASAP includes functions for all stages of image pre-processing: quantification, skull-stripping, co-registration, partial volume correction and normalization. To assess the applicability and validity of the toolbox, this work shows its application in the study of hypoperfusion in a sample of healthy subjects at risk of progressing to Alzheimer's disease. ASAP requires limited user intervention, minimizing the possibility of random and systematic errors, and produces cerebral blood flow maps that are ready for statistical group analysis. The software is easy to operate and results in excellent quality of spatial normalization. The results found in this evaluation study are consistent with previous studies that find decreased perfusion in Alzheimer's patients in similar regions and demonstrate the applicability of ASAP. Copyright © 2015 Elsevier Inc. All rights reserved.
Real-Time Measurement of Width and Height of Weld Beads in GMAW Processes.
Pinto-Lopera, Jesús Emilio; S T Motta, José Mauricio; Absi Alfaro, Sadek Crisostomo
2016-09-15
Closely associated with weld quality, weld bead geometry is one of the most important parameters in welding processes. It is a significant requirement in a welding project, especially in automatic welding systems where a specific width, height, or penetration of the weld bead is needed. This paper presents a novel technique for real-time measurement of the width and height of weld beads in gas metal arc welding (GMAW) using a single high-speed camera and a long-pass optical filter in a passive vision system. The measuring method is based on digital image processing techniques, and the image calibration process is based on projective transformations. The measurement process takes less than 3 milliseconds per image, which allows a processing rate of more than 300 frames per second. The proposed methodology can be used in any metal transfer mode of a gas metal arc welding process and does not suffer from occlusion problems. The responses of the measurement system presented here are in good agreement with off-line data collected by a common laser-based 3D scanner. Each measurement was compared using Welch's t-test, which in no case rejected the null hypothesis at the significance level α = 0.01, validating the results and the performance of the proposed vision system.
Acne image analysis: lesion localization and classification
NASA Astrophysics Data System (ADS)
Abas, Fazly Salleh; Kaffenberger, Benjamin; Bikowski, Joseph; Gurcan, Metin N.
2016-03-01
Acne is a common skin condition present predominantly in the adolescent population, but it may continue into adulthood. Scarring occurs commonly as a sequel to severe inflammatory acne. The presence of acne and resultant scars is more than cosmetic, with a significant potential to alter quality of life and even job prospects. The psychosocial effects of acne and scars can be disturbing and may be a risk factor for serious psychological concerns. Treatment efficacy is generally determined based on an unvalidated gestalt by the physician and patient. However, the validated assessment of acne can be challenging and time consuming. Acne can be classified into several morphologies including closed comedones (whiteheads), open comedones (blackheads), papules, pustules, cysts (nodules) and scars. For a validated assessment, the different morphologies need to be counted independently, a method that is far too time consuming given the limited time available for a consultation. However, it is practical to record and analyze images, since dermatologists can validate the severity of acne within seconds after uploading an image. This paper covers the processes of region-of-interest determination using entropy-based filtering and thresholding, as well as acne lesion feature extraction. Feature extraction methods using discrete wavelet frames and the gray-level co-occurrence matrix are presented, and their effectiveness in separating the six major acne lesion classes is discussed. Several classifiers were used to test the extracted features. Correct classification accuracy as high as 85.5% was achieved using a binary classification tree with fourteen principal components used as descriptors. Further studies are underway to improve the algorithm performance and validate it on a larger database.
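To make the texture-feature step concrete, here is a minimal sketch of gray-level co-occurrence matrix (GLCM) descriptors of the kind the paper combines with discrete wavelet frames, assuming scikit-image. The distances, angles, and property set are illustrative choices, not the paper's configuration.

```python
# GLCM texture features for one candidate lesion patch (uint8 grayscale).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch: np.ndarray) -> np.ndarray:
    glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
features = glcm_features(patch)   # would feed a classifier such as a binary tree
```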
Implementing a prototyping network for injection moulded imaging lenses in Finland
NASA Astrophysics Data System (ADS)
Keränen, K.; Mäkinen, J.-T.; Pääkkönen, E. J.; Koponen, M.; Karttunen, M.; Hiltunen, J.; Karioja, P.
2005-10-01
A network for prototyping imaging lenses using injection moulding was established in Finland. The network consists of several academic and industrial partners capable of designing, processing and characterising imaging lenses produced by injection moulding technology. In order to validate the operation of the network, a demonstrator lens was produced. The process steps included in the manufacturing were lens specification, design and modelling, material selection, mould tooling, moulding process simulation, injection moulding and characterisation. A magnifying imaging singlet lens to be used as an add-on in a camera phone was selected as the demonstrator. The design of the add-on lens proved to be somewhat challenging, but a double aspheric singlet lens design nearly fulfilling the requirement specification was produced. In the material selection task, the overall characteristics profile of polymethyl methacrylate (PMMA) was found to fit the pilot case best. It is a low-cost material with good moulding properties, and it was therefore selected as the material for the pilot lens. The lens mould design was performed using I-DEAS and tested using MoldFlow 3D injection moulding simulation software. The simulations predicted the achievable lens quality in processing with a two-cavity mould design. The first cavity was tooled directly into the mould plate, and the second cavity was made by tooling separate insert pieces for the mould. The mould material was steel and the inserts were made from Moldmax copper alloy. Parts were tooled with high-speed milling machines. Insert pieces were hand polished after tooling. Prototype lenses were injection moulded using two PMMA grades, 6N and 7N. Different process parameters were also experimented with in the injection moulding test runs. Prototypes were characterised by measuring the mechanical dimensions, surface profile, roughness and MTF of the lenses. Characterisation showed that the lens surface RMS roughness was 30-50 nm and the profile deviation was 5 μm from the design at a distance of 0.3 mm from the lens vertex. These manufacturing defects caused the measured MTF values to be lower than designed. The overall lens quality, however, was adequate to demonstrate the concept successfully. Through the implementation of the demonstrator lens we could effectively test the different stages of the manufacturing process, obtain information about the weight and risk factors of each process component, and validate the overall performance of the network.
Jin, Shuo; Li, Dengwang; Wang, Hongjun; Yin, Yong
2013-01-07
Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT to PET image registration method in esophageal cancer to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to preprocess position errors between PET and CT images, aligning the two images on the whole. The demons algorithm, based on the optical flow field, offers fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, making it suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper improves registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications.
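For orientation, the sketch below shows plain demons deformable registration with SimpleITK, which is the baseline the GMI variant extends; the extra GMI force term described in the paper is not available off the shelf and is not implemented here. Histogram matching stands in for the paper's global preprocessing, and the multiresolution pyramid is omitted for brevity.

```python
# A minimal sketch of demons registration, assuming SimpleITK.
import SimpleITK as sitk

def demons_register(fixed_path: str, moving_path: str):
    fixed = sitk.Cast(sitk.ReadImage(fixed_path), sitk.sitkFloat32)
    moving = sitk.Cast(sitk.ReadImage(moving_path), sitk.sitkFloat32)
    moving = sitk.HistogramMatching(moving, fixed)  # crude intensity alignment
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)
    demons.SetStandardDeviations(1.5)   # Gaussian regularization of the field
    field = demons.Execute(fixed, moving)
    transform = sitk.DisplacementFieldTransform(
        sitk.Cast(field, sitk.sitkVectorFloat64))
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```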
Jin, Shuo; Li, Dengwang; Yin, Yong
2013-01-01
Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT to PET image registration method in esophageal cancer to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to preprocess position errors between PET and CT images, aligning the two images on the whole. The demons algorithm, based on the optical flow field, offers fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, making it suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper improves registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications. PACS numbers: 87.57.nj, 87.57.Q-, 87.57.uk PMID:23318381
A Stochastic Kinematic Model of Class Averaging in Single-Particle Electron Microscopy
Park, Wooram; Midgett, Charles R.; Madden, Dean R.; Chirikjian, Gregory S.
2011-01-01
Single-particle electron microscopy is an experimental technique that is used to determine the 3D structure of biological macromolecules and the complexes that they form. In general, image processing techniques and reconstruction algorithms are applied to micrographs, which are two-dimensional (2D) images taken by electron microscopes. Each of these planar images can be thought of as a projection of the macromolecular structure of interest from an a priori unknown direction. A class is defined as a collection of projection images with a high degree of similarity, presumably resulting from taking projections along similar directions. In practice, micrographs are very noisy, and those in each class are aligned and averaged in order to reduce the background noise. Errors in the alignment process are inevitable due to noise in the electron micrographs, and these errors result in blurry averaged images. In this paper, we investigate how the blurring parameters are related to the properties of the background noise in the case when the alignment is achieved by matching the mass centers and the principal axes of the experimental images. We observe that the background noise in micrographs can be treated as Gaussian. Using the mean and variance of the background Gaussian noise, we derive equations for the mean and variance of translational and rotational misalignments in the class averaging process. This defines a Gaussian probability density on the Euclidean motion group of the plane. Our formulation is validated by convolving the derived blurring function, representing the stochasticity of the image alignments, with the underlying noiseless projection and comparing the result with the original blurry image. PMID:21660125
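The alignment scheme analyzed in the paper, matching intensity mass centers and principal axes, can be sketched in a few lines of numpy/scipy; this is an illustrative implementation of that generic moment-based alignment, not the authors' code. Noise perturbs both the centroid and the second moments, which is exactly the misalignment the paper models.

```python
# Align an image by its center of mass and principal axis (second moments).
import numpy as np
from scipy import ndimage

def align_by_moments(img: np.ndarray) -> np.ndarray:
    cy, cx = ndimage.center_of_mass(img)
    center = (np.array(img.shape) - 1) / 2.0
    img = ndimage.shift(img, center - (cy, cx))      # centroid to image center
    y, x = np.indices(img.shape)
    w = img / img.sum()                              # intensity as a 2D density
    yc, xc = (w * y).sum(), (w * x).sum()
    mu20 = (w * (x - xc) ** 2).sum()
    mu02 = (w * (y - yc) ** 2).sum()
    mu11 = (w * (x - xc) * (y - yc)).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # principal axis angle
    return ndimage.rotate(img, np.degrees(theta), reshape=False)
```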
Chang, Herng-Hua; Chang, Yu-Ning
2017-04-01
Bilateral filters have been substantially exploited in numerous magnetic resonance (MR) image restoration applications for decades. Due to the deficiency of a theoretical basis for the filter parameter setting, empirical manipulation with fixed values and noise variance-related adjustments has generally been employed. The outcome of these strategies is usually sensitive to the variation of the brain structures, and not all three parameter values are optimal. This article investigates the optimal setting of the bilateral filter, from which an accelerated and automated restoration framework is developed. To reduce the computational burden of the bilateral filter, parallel computing with the graphics processing unit (GPU) architecture is first introduced. The NVIDIA Tesla K40c GPU with the compute unified device architecture (CUDA) functionality is specifically utilized to emphasize thread usages and memory resources. To correlate the filter parameters with image characteristics for automation, optimal image texture features are subsequently acquired based on the sequential forward floating selection (SFFS) scheme. The selected features are then introduced into the back propagation network (BPN) model for filter parameter estimation. Finally, the k-fold cross validation method is adopted to evaluate the accuracy of the proposed filter parameter prediction framework. A wide variety of T1-weighted brain MR images with various scenarios of noise levels and anatomic structures were utilized to train and validate this new parameter decision system with CUDA-based bilateral filtering. For a common brain MR image volume of 256 × 256 × 256 pixels, the speed-up gain reached 284. Six optimal texture features were acquired and associated with the BPN to establish a "high accuracy" parameter prediction system, which achieved a mean absolute percentage error (MAPE) of 5.6%. Automatic restoration results on 2460 brain MR images received an average relative error in terms of peak signal-to-noise ratio (PSNR) of less than 0.1%. In comparison with many state-of-the-art filters, the proposed automation framework with CUDA-based bilateral filtering provided more favorable results both quantitatively and qualitatively. Possessing unique characteristics and demonstrating exceptional performance, the proposed CUDA-based bilateral filter adequately removed random noise in multifarious brain MR images for further study in neurosciences and radiological sciences. It requires no prior knowledge of the noise variance and automatically restores MR images while preserving fine details. The strategy of exploiting CUDA to accelerate the computation and incorporating texture features into the BPN to completely automate the bilateral filtering process is achievable and validated, from which the best performance is reached. © 2017 American Association of Physicists in Medicine.
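The three bilateral-filter parameters the paper's BPN predicts map directly onto the arguments of common library implementations; the sketch below uses OpenCV's CPU bilateral filter to make them explicit. The parameter values here are hypothetical placeholders, not the network's output, and the CUDA acceleration described in the paper is not shown.

```python
# The three bilateral parameters: window diameter, intensity sigma, spatial sigma.
import cv2
import numpy as np

noisy = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in MR slice
d, sigma_color, sigma_space = 9, 25.0, 3.0                 # placeholder values
restored = cv2.bilateralFilter(noisy, d, sigma_color, sigma_space)
```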
NASA Astrophysics Data System (ADS)
Luk, Alex T.; Lin, Yuting; Grimmond, Brian; Sood, Anup; Uzgiris, Egidijus E.; Nalcioglu, Orhan; Gulsen, Gultekin
2013-03-01
Since diffuse optical tomography (DOT) is a low-spatial-resolution modality, it is desirable to validate its quantitative accuracy against another well-established imaging modality, such as magnetic resonance imaging (MRI). In this work, we used a polymer-based bi-functional MRI-optical contrast agent (Gd-DTPA-polylysine-IR800) developed in collaboration with GE Global Research. This multi-modality contrast agent provides not only co-localization but also identical kinetics, allowing the two imaging modalities to be cross-validated. The bi-functional agent was injected into rats, and the pharmacokinetics at the bladder were recovered using both optical and MR imaging. The DOT results were validated using the MRI results as the "gold standard".
Yousef Kalafi, Elham; Tan, Wooi Boon; Town, Christopher; Dhillon, Sarinder Kaur
2016-12-22
Monogeneans are flatworms (Platyhelminthes) that are primarily found on the gills and skin of fishes. Monogenean parasites have attachment appendages at their haptoral regions that help them move about the body surface and feed on skin and gill debris. Haptoral attachment organs consist of sclerotized hard parts such as hooks, anchors and marginal hooks. Monogenean species are differentiated based on the morphological characters of their haptoral bars, anchors, marginal hooks and reproductive parts (male and female copulatory organs), as well as soft anatomical parts. The complex structure of these diagnostic organs, and their overlap in microscopic digital images, are impediments to developing a fully automated identification system for monogeneans (LNCS 7666:256-263, 2012; ISDA 2011:457-462; J Zoolog Syst Evol Res 52(2):95-99, 2013). In this study, images of hard parts of the haptoral organs, such as bars and anchors, are used to develop a fully automated identification technique for monogenean species by applying image processing techniques and machine learning methods. Images of four monogenean species, namely Sinodiplectanotrema malayanus, Trianchoratus pahangensis, Metahaliotrema mizellei and Metahaliotrema sp. (undescribed), were used to develop the automated technique. K-nearest neighbour (KNN) classification was applied to identify the monogenean specimens based on the extracted features. Half of the dataset was used for training and the other half for testing in the system evaluation; this approach demonstrated an overall classification accuracy of 90%. Leave-one-out (LOO) cross-validation was also used to validate the system, yielding an accuracy of 91.25%. The methods presented in this study facilitate fast and accurate fully automated classification of monogeneans at the species level. In future studies, more classes will be included in the model, the time to capture the monogenean images will be reduced, and improvements in the extraction and selection of features will be implemented.
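The evaluation protocol (KNN classification checked with leave-one-out cross-validation) is standard and easy to sketch with scikit-learn. The feature matrix and labels below are random placeholders standing in for the extracted haptoral shape descriptors of the four species.

```python
# KNN + leave-one-out cross-validation, assuming scikit-learn.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 12))      # 80 specimens x 12 shape features (placeholder)
y = np.repeat(np.arange(4), 20)    # four monogenean species

knn = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(knn, X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.3f}")
```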
Effect of image quality on calcification detection in digital mammography
Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.
2012-01-01
Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including different detectors, dose levels, and image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty-two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at a quarter of this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection, a power law was fitted to the data. Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC (AFROC) area decreased from 0.84 to 0.63 and the ROC area decreased from 0.91 to 0.79 (p < 0.0001). This corresponded to a 30% drop in lesion sensitivity at a NLF equal to 0.1. Detection was also sensitive to the dose used. There was no significant difference in detection between the two image processing algorithms used (p > 0.05). It was additionally found that lower threshold gold thickness from CDMAM analysis implied better cluster detection. The measured threshold gold thickness passed the acceptable limit set in the EU standards for all image qualities except half dose CR. However, calcification detection varied significantly between image qualities. This suggests that the current EU guidelines may need revising. Conclusions: Microcalcification detection was found to be sensitive to detector and dose used. Standard measurements of image quality were a good predictor of microcalcification cluster detection. PMID:22755704
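The final analysis step, fitting a power law between threshold gold thickness and a detection measure, reduces to a linear fit in log-log space. The sketch below illustrates that fit with placeholder numbers, not the study's data.

```python
# Power-law fit y = a * x**b via linear regression on logs.
import numpy as np

thickness = np.array([0.06, 0.09, 0.13, 0.19])   # threshold gold thickness (placeholder)
detection = np.array([0.84, 0.74, 0.66, 0.55])   # e.g., AFROC area (placeholder)

b, log_a = np.polyfit(np.log(thickness), np.log(detection), 1)
a = np.exp(log_a)
print(f"detection ~ {a:.2f} * thickness^{b:.2f}")
```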
Effect of image quality on calcification detection in digital mammography.
Warren, Lucy M; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M; Wallis, Matthew G; Chakraborty, Dev P; Dance, David R; Bosmans, Hilde; Young, Kenneth C
2012-06-01
This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including different detectors, dose levels, and image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. One hundred and sixty-two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at a quarter of this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection, a power law was fitted to the data. There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC (AFROC) area decreased from 0.84 to 0.63 and the ROC area decreased from 0.91 to 0.79 (p < 0.0001). This corresponded to a 30% drop in lesion sensitivity at a NLF equal to 0.1. Detection was also sensitive to the dose used. There was no significant difference in detection between the two image processing algorithms used (p > 0.05). It was additionally found that lower threshold gold thickness from CDMAM analysis implied better cluster detection. The measured threshold gold thickness passed the acceptable limit set in the EU standards for all image qualities except half dose CR. However, calcification detection varied significantly between image qualities. This suggests that the current EU guidelines may need revising. Microcalcification detection was found to be sensitive to detector and dose used. Standard measurements of image quality were a good predictor of microcalcification cluster detection. © 2012 American Association of Physicists in Medicine.
Ravi, Daniele; Fabelo, Himar; Callic, Gustavo Marrero; Yang, Guang-Zhong
2017-09-01
Recent advances in hyperspectral imaging have made it a promising solution for intra-operative tissue characterization, with the advantages of being non-contact, non-ionizing, and non-invasive. Working with hyperspectral images in vivo, however, is not straightforward, as the high dimensionality of the data makes real-time processing challenging. In this paper, a novel dimensionality reduction scheme and a new processing pipeline are introduced to obtain a detailed tumor classification map for intra-operative margin definition during brain surgery. Existing approaches to dimensionality reduction based on manifold embedding can be time consuming and may not guarantee a consistent result, thus hindering final tissue classification. The proposed framework aims to overcome these problems through a process divided into two steps: dimensionality reduction based on an extension of the t-distributed stochastic neighbor embedding (t-SNE) approach is first performed, and then a semantic segmentation technique is applied to the embedded results by using a Semantic Texton Forest for tissue classification. Detailed in vivo validation of the proposed method has been performed to demonstrate the potential clinical value of the system.
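The two-step structure (embed spectra, then classify the embedding) can be illustrated with standard components; the paper extends t-SNE and uses a Semantic Texton Forest, for which plain t-SNE and a random forest serve here as widely available stand-ins. All shapes and labels are placeholders.

```python
# Dimensionality reduction then per-pixel classification, assuming scikit-learn.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
pixels = rng.normal(size=(500, 128))     # 500 pixels x 128 spectral bands
labels = rng.integers(0, 3, size=500)    # tumor / normal / background (placeholder)

embedded = TSNE(n_components=2, perplexity=30).fit_transform(pixels)
clf = RandomForestClassifier(n_estimators=100).fit(embedded, labels)
tissue_map = clf.predict(embedded)       # per-pixel classification map
```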
Mishchenko, Yuriy
2009-01-30
We describe an approach for automating the reconstruction of neural tissue from serial-section transmission electron micrographs. Such reconstructions require 3D segmentation of individual neuronal processes (axons and dendrites) performed in densely packed neuropil. We first detect neuronal cell profiles in each image in a stack of serial micrographs with a multi-scale ridge detector. Short breaks in detected boundaries are interpolated using anisotropic contour completion formulated in a fuzzy-logic framework. Detected profiles from adjacent sections are linked together based on cues such as shape similarity and image texture. The 3D segmentation thus obtained is validated by human operators in a computer-guided proofreading process. Our approach makes possible reconstructions of neural tissue at a final rate of about 5 μm³ per man-hour, as determined primarily by the speed of proofreading. To date we have applied this approach to reconstruct a few blocks of neural tissue from different regions of rat brain totaling over 1000 μm³, and used these to evaluate reconstruction speed, quality, error rates, and the presence of ambiguous locations in neuropil ssTEM imaging data.
Detection of brain tumor margins using optical coherence tomography
NASA Astrophysics Data System (ADS)
Juarez-Chambi, Ronald M.; Kut, Carmen; Rico-Jimenez, Jesus; Campos-Delgado, Daniel U.; Quinones-Hinojosa, Alfredo; Li, Xingde; Jo, Javier
2018-02-01
In brain cancer surgery, it is critical to achieve extensive resection without compromising adjacent healthy, non-cancerous regions. Various technological advances have made major contributions in imaging, including intraoperative magnetic resonance imaging (MRI) and computed tomography (CT). However, these technologies have pros and cons in providing quantitative, real-time and three-dimensional (3D) continuous guidance in brain cancer detection. Optical coherence tomography (OCT) is a non-invasive, label-free, cost-effective technique capable of imaging tissue in three dimensions and in real time. The purpose of this study is to reliably and efficiently discriminate between non-cancer and cancer-infiltrated brain regions using OCT images. To this end, a mathematical model for quantitative evaluation known as the Blind End-Member and Abundances Extraction (BEAE) method is employed. BEAE is a constrained optimization technique that extracts spatial information from volumetric OCT images. Using this method, we are able to discriminate between cancerous and non-cancerous tissues, with logistic regression serving as a classifier for automatic brain tumor margin detection. With this technique we achieve excellent performance in an extensive cross-validation of the training dataset (sensitivity 92.91% and specificity 98.15%) and again on an independent, blinded validation dataset (sensitivity 92.91% and specificity 86.36%). In summary, BEAE is well suited to differentiating brain tissue, which could support surgical guidance during tissue resection.
Comparison and validation of point spread models for imaging in natural waters.
Hou, Weilin; Gray, Deric J; Weidemann, Alan D; Arnone, Robert A
2008-06-23
It is known that scattering by particulates within natural waters is the main cause of blur in underwater images. Underwater images can be better restored or enhanced with knowledge of the point spread function (PSF) of the water. This extends the performance range as well as the information retrieval from underwater electro-optical systems, which is critical in many civilian and military applications, including target and especially mine detection, search and rescue, and diver visibility. A better understanding of the physical process involved also helps to predict system performance and simulate it accurately on demand. The presented effort first reviews several PSF models, including a newly introduced semi-analytical PSF defined by the optical properties of the medium: scattering albedo, mean scattering angle, and optical range. The models under comparison include the empirical model of Duntley, a modified PSF model by Dolin et al., and the numerical integration of analytical forms from Wells as a benchmark of theoretical results. For experimental results, in addition to those of Duntley, we validate the above models against measured point spread functions by applying field-measured scattering properties in Monte Carlo simulations. Results from these comparisons suggest that the three parameters listed above are both necessary and sufficient to model PSFs. The simplified approach introduced also provides adequate accuracy and flexibility for imaging applications, as shown by examples of restored underwater images.
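Once a PSF is available from any of the compared models, the restoration the abstract alludes to is classically done with frequency-domain deconvolution. The sketch below shows a basic Wiener deconvolution given a known PSF; the noise-to-signal constant K is a hypothetical tuning value, and this is a generic illustration rather than the authors' restoration code.

```python
# Wiener deconvolution of a blurred image given the water's PSF.
import numpy as np

def wiener_deconvolve(blurred: np.ndarray, psf: np.ndarray, K=0.01) -> np.ndarray:
    psf = psf / psf.sum()
    pad = np.zeros_like(blurred, dtype=float)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    # center the PSF at the origin so the deconvolution introduces no shift
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + K) * G   # Wiener estimate
    return np.real(np.fft.ifft2(F))
```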
On the simulation and mitigation of anisoplanatic optical turbulence for long range imaging
NASA Astrophysics Data System (ADS)
Hardie, Russell C.; LeMaster, Daniel A.
2017-05-01
We describe a numerical wave propagation method for simulating long-range imaging of an extended scene under anisoplanatic conditions. Our approach computes an array of point spread functions (PSFs) for a 2D grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. To validate the simulation we compare simulated outputs with the theoretical anisoplanatic tilt correlation and differential tilt variance, in addition to comparing the long- and short-exposure PSFs and the isoplanatic angle. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. The simulation tool is also used here to quantitatively evaluate a recently proposed block-matching and Wiener filtering (BMWF) method for turbulence mitigation. In this method, a block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged and processed with a Wiener filter for restoration. A novel aspect of the proposed BMWF method is that the PSF model used for restoration takes into account the level of geometric correction achieved during image registration. This way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. The BMWF method is relatively simple computationally and yet has excellent performance in comparison to state-of-the-art benchmark methods.
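The core simulation operation, blurring an ideal image with a grid of PSFs and blending the results with spatially varying weights, can be sketched compactly. Real anisoplanatic PSFs come from numerical wave propagation; Gaussian PSFs of varying width and a two-zone blend stand in for them here purely for illustration.

```python
# Spatially varying blur as a weighted sum of per-zone convolutions.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(sigma: float, size: int = 15) -> np.ndarray:
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def anisoplanatic_blur(ideal: np.ndarray, sigmas=(1.0, 2.5)) -> np.ndarray:
    h, w = ideal.shape
    # two PSF zones blended left-to-right; a real simulation uses a full 2D grid
    weight = np.tile(np.linspace(0, 1, w), (h, 1))
    blur_a = fftconvolve(ideal, gaussian_psf(sigmas[0]), mode="same")
    blur_b = fftconvolve(ideal, gaussian_psf(sigmas[1]), mode="same")
    return (1 - weight) * blur_a + weight * blur_b
```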
Rabal, Obdulia; Link, Wolfgang; Serelde, Beatriz G; Bischoff, James R; Oyarzabal, Julen
2010-04-01
Here we report the development and validation of a complete solution to manage and analyze the data produced by image-based phenotypic screening campaigns of small-molecule libraries. In one step, initial crude images are analyzed for multiple cytological features, statistical analysis is performed, and molecules that produce the desired phenotypic profile are identified. A naïve Bayes classifier integrating chemical and phenotypic spaces is built and utilized during the process to assess those images initially classified as "fuzzy", providing automated iterative feedback tuning. Simultaneously, all this information is directly annotated in a relational database containing the chemical data. This novel, fully automated method was validated by re-analyzing results from a high-content screening campaign involving 33,992 molecules used to identify inhibitors of the PI3K/Akt signaling pathway. Ninety-two percent of the confirmed hits identified by the conventional multistep analysis method were identified using this integrated one-step system, as well as 40 new hits (14.9% of the total) that were originally false negatives. Ninety-six percent of true negatives were properly recognized as well. Web-based access to the database, with customizable data retrieval and visualization tools, facilitates posterior analysis of the annotated cytological features, which allows identification of additional phenotypic profiles; thus, further analysis of the original crude images is not required.
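The key idea, a naïve Bayes classifier over a joint chemical and phenotypic feature space used to re-score "fuzzy" images, can be sketched as follows. All feature names, shapes, and values are hypothetical placeholders; the paper does not specify its exact feature encoding.

```python
# Naive Bayes over concatenated chemical fingerprints and cytological features.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
chem = rng.integers(0, 2, size=(300, 64)).astype(float)  # fingerprint bits
pheno = rng.normal(size=(300, 10))                       # cytological features
X = np.hstack([chem, pheno])                             # joint feature space
y = rng.integers(0, 2, size=300)                         # hit / non-hit labels

model = GaussianNB().fit(X, y)
fuzzy_scores = model.predict_proba(X[:5])[:, 1]          # re-score "fuzzy" images
```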
Color standardization and optimization in whole slide imaging.
Yagi, Yukako
2011-03-30
Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common reason for color variation is variance in the protocols and practices of the histology lab, the displayed color can also be affected by variation in capture parameters (for example, illumination and filters), image processing, and display factors in the digital systems themselves. We have been developing techniques for color validation and optimization along two paths. The first is based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters with colors selected especially for H&E stained slides (resembling a tiny Macbeth color chart); the specific colors of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E stained mouse embryo. Both of these slides were scanned and the displayed images were compared to a standard. The second approach is based on our previous multispectral imaging research. As a first step, the two-slide method above was used to identify inaccurate color display and its causes, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In the near future, the results of the two-slide and multispectral techniques can be combined and made widely available. We have been conducting a series of research and development projects to improve image quality and establish image quality standardization. This paper discusses one of the most important aspects of image quality: color.
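A minimal sketch of the two-slide idea: measure the nine reference colors as rendered by a given scanner, then solve for a 3x3 correction matrix mapping displayed RGB to the standard RGB in a least-squares sense. The patch values below are random placeholders, and a simple linear matrix is an assumption; the paper's actual calibration pipeline may be more elaborate.

```python
# Fit a 3x3 color correction matrix from nine reference patches.
import numpy as np

measured = np.random.rand(9, 3)    # scanner's rendering of the 9 patches (placeholder)
reference = np.random.rand(9, 3)   # standard values for the patches (placeholder)

M, *_ = np.linalg.lstsq(measured, reference, rcond=None)   # 3x3 matrix

def standardize(rgb_pixels: np.ndarray) -> np.ndarray:
    """Apply the correction to an (N, 3) array of RGB values in [0, 1]."""
    return np.clip(rgb_pixels @ M, 0.0, 1.0)
```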
In-situ Planetary Subsurface Imaging System
NASA Astrophysics Data System (ADS)
Song, W.; Weber, R. C.; Dimech, J. L.; Kedar, S.; Neal, C. R.; Siegler, M.
2017-12-01
Geophysical and seismic instruments are considered the most effective tools for studying the detailed global structures of planetary interiors. A planet's interior bears the geochemical markers of its evolutionary history, as well as its present state of activity, which has direct implications for habitability. On Earth, subsurface imaging often involves massive data collection from hundreds to thousands of geophysical sensors (seismic, acoustic, etc.) followed by transfer over wired or wireless links to a central location for post-processing and computing; this will not be possible in planetary environments due to mission constraints on mass, power, and bandwidth. Emerging opportunities for geophysical exploration of the solar system, from Venus to the icy ocean worlds of Jupiter and Saturn, dictate that subsurface imaging of the deep interior will require substantial data reduction and processing in situ. The Real-time In-situ Subsurface Imaging (RISI) technology is a mesh network that senses and processes geophysical signals. Instead of collecting data for post-processing, the mesh network performs distributed data processing and computing in situ, and generates an evolving 3D subsurface image in real time that can be transmitted under bandwidth and resource constraints. Seismic imaging algorithms (including traveltime tomography, ambient noise imaging, and microseismic imaging) have been successfully developed and validated using both synthetic and real-world terrestrial seismic data sets. The prototype hardware system has been implemented and can be extended as a general field instrumentation platform tailored for a wide variety of planetary uses, including crustal mapping, ice and ocean structure, and geothermal systems. The team is applying the RISI technology to real off-world seismic datasets. For example, the Lunar Seismic Profiling Experiment (LSPE) deployed during the Apollo 17 Moon mission consisted of four geophone instruments spaced up to 100 meters apart, which in essence forms a small-aperture seismic network. A pattern recognition technique based on hidden Markov models was able to characterize this dataset, and we are exploring how the RISI technology can be adapted to it.
Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking
Kiku, Daisuke; Okutomi, Masatoshi
2017-01-01
Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking. PMID:29194407
Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.
Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi
2017-12-01
Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.
Remote hardware-reconfigurable robotic camera
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.
2001-10-01
In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA (field programmable gate array) device that contains an architecture for real-time, low-level computer vision processing. The architecture can be reprogrammed remotely for application-specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as in software. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility of downloading a software/hardware object from the host computer into its internal context memory. System advantages are small size, low power consumption, and a library of hardware/software functionalities that can be exchanged at run time. The system has been validated with an edge detection and a motion processing architecture, which are presented in the paper. Target applications are in robotics, mobile robotics, and vision-based quality control.
The development and validation of the Physical Appearance Comparison Scale-3 (PACS-3).
Schaefer, Lauren M; Thompson, J Kevin
2018-05-21
Appearance comparison processes are implicated in the development of body-image disturbance and disordered eating. The Physical Appearance Comparison Scale-Revised (PACS-R) assesses the simple frequency of appearance comparisons; however, research has suggested that other aspects of appearance comparisons (e.g., comparison direction) may moderate the association between comparisons and their negative outcomes. In the current study, the PACS-R was revised to examine aspects of comparisons with relevance to body-image and eating outcomes. Specifically, the measure was modified to examine (a) dimensions of physical appearance relevant to men and women (i.e., weight-shape, muscularity, and overall physical appearance), (b) comparisons with proximal and distal targets, (c) upward versus downward comparisons, and (d) the acute emotional impact of comparisons. The newly revised measure, labeled the PACS-3, along with existing measures of appearance comparison, body satisfaction, eating pathology, and self-esteem, was completed by 1,533 college men and women. Exploratory and confirmatory factor analyses were conducted to examine the factor structure of the PACS-3. In addition, the reliability, convergent validity, and incremental validity of the PACS-3 scores were examined. The final PACS-3 comprises 27 items and 9 subscales: Proximal: Frequency, Distal: Frequency, Muscular: Frequency, Proximal: Direction, Distal: Direction, Muscular: Direction, Proximal: Effect, Distal: Effect, and Muscular: Effect. The PACS-3 subscale scores demonstrated good reliability and convergent validity. Moreover, the PACS-3 subscales greatly improved the prediction of body satisfaction and disordered eating relative to existing measures of appearance comparison. Overall, the PACS-3 improves upon existing scales and offers a comprehensive assessment of appearance-comparison processes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Evaluation of the deformation and corresponding dosimetric implications in prostate cancer treatment
NASA Astrophysics Data System (ADS)
Wen, Ning; Glide-Hurst, Carri; Nurushev, Teamour; Xing, Lei; Kim, Jinkoo; Zhong, Hualiang; Liu, Dezhi; Liu, Manju; Burmeister, Jay; Movsas, Benjamin; Chetty, Indrin J.
2012-09-01
The cone-beam computed tomography (CBCT) imaging modality is an integral component of image-guided adaptive radiation therapy (IGART), which uses patient-specific dynamic/temporal information for potential treatment plan modification. In this study, an offline IGART process has been implemented that consists of deformable image registration (DIR) and its validation, dose reconstruction, dose accumulation, and dose verification. This study compares the differences between planned and estimated delivered doses under an IGART framework for five patients undergoing prostate cancer radiation therapy. The dose calculation accuracy on CBCT was verified by measurements made in a Rando pelvic phantom. The accuracy of DIR on patient image sets was evaluated in three ways: landmark matching with fiducial markers, visual image evaluation, and unbalanced energy (UE); UE has previously been demonstrated to be a feasible method for validating DIR accuracy at the voxel level. The dose calculated on each CBCT image set was reconstructed and accumulated over all fractions to reflect the 'actual dose' delivered to the patient. The deformably accumulated (delivered) plans were then compared to the original (static) plans to evaluate tumor and normal tissue dose discrepancies. The results support the utility of adaptive planning, which can fully elucidate the dosimetric impact of the simulated delivered dose so as to achieve the desired tumor control and normal tissue sparing; this may be of particular importance in the context of hypofractionated radiotherapy regimens.
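The dose accumulation step described here, warping each fraction's dose onto the planning CT with the DIR displacement field and summing, can be sketched with SimpleITK. This is a generic illustration under the assumption that per-fraction dose grids and displacement fields are already available; it is not the study's implementation.

```python
# Warp per-fraction doses onto the planning CT and sum them, assuming SimpleITK.
import SimpleITK as sitk

def accumulate_dose(planning_ct, fraction_doses, displacement_fields):
    total = sitk.Image(planning_ct.GetSize(), sitk.sitkFloat32)
    total.CopyInformation(planning_ct)           # same geometry as the planning CT
    for dose, field in zip(fraction_doses, displacement_fields):
        transform = sitk.DisplacementFieldTransform(
            sitk.Cast(field, sitk.sitkVectorFloat64))
        warped = sitk.Resample(dose, planning_ct, transform,
                               sitk.sitkLinear, 0.0, sitk.sitkFloat32)
        total = total + warped                   # accumulate 'actual dose'
    return total
```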
Depth profile measurement with lenslet images of the plenoptic camera
NASA Astrophysics Data System (ADS)
Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei
2018-03-01
An approach for depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. First, these images are processed directly with a refocusing technique to obtain the depth map, which avoids the need to align and decode the plenoptic image. Then, a linear depth calibration based on the optical structure of the plenoptic camera is applied for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike the traditional method, our resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.
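The linear depth calibration step amounts to imaging a target at known distances, recording the raw relative depth the refocusing step reports, and fitting a line. The sketch below illustrates that fit; the sample values are placeholders, not the paper's measurements.

```python
# Linear depth calibration: fit depth = a * raw + b from known target distances.
import numpy as np

raw = np.array([0.12, 0.21, 0.33, 0.44, 0.55])      # refocus-based depth units
true_mm = np.array([100., 150., 200., 250., 300.])  # measured target distances (mm)

a, b = np.polyfit(raw, true_mm, 1)                  # linear calibration
depth_mm = a * 0.40 + b                             # convert a new raw value
```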
Efficient robust reconstruction of dynamic PET activity maps with radioisotope decay constraints.
Gao, Fei; Liu, Huafeng; Shi, Pengcheng
2010-01-01
Dynamic PET imaging performs a sequence of data acquisitions in order to provide visualization and quantification of physiological changes in specific tissues and organs. The reconstruction of activity maps is generally the first step in dynamic PET. State-space H-infinity approaches have proven to be a robust method for PET image reconstruction; however, they do not consider temporal constraints during the reconstruction process. In addition, state-space strategies for PET image reconstruction have been computationally prohibitive for practical usage because of the need for matrix inversion. In this paper, we present a minimax formulation of the dynamic PET imaging problem in which a radioisotope decay model is employed as a physics-based temporal constraint on the photon counts. Furthermore, a robust steady-state H-infinity filter is developed to significantly improve the computational efficiency with minimal loss of accuracy. Experiments are conducted on Monte Carlo simulated image sequences for quantitative analysis and validation.
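The temporal constraint rests on the decay law N(t) = N0 exp(-λt), with λ fixed by the isotope's half-life. A minimal sketch of fitting the amplitude N0 to measured frame totals, which yields the expected counts a frame-wise reconstruction can be constrained toward, is shown below; the frame times and counts are placeholders, and this is a simplified illustration rather than the paper's minimax formulation.

```python
# Decay-constrained expected counts per frame (18F half-life ~ 109.77 min).
import numpy as np

half_life_s = 6586.2
lam = np.log(2) / half_life_s
t = np.array([0., 300., 600., 900.])                  # frame start times (s)
counts = np.array([1.00e6, 0.97e6, 0.94e6, 0.91e6])   # measured totals (placeholder)

basis = np.exp(-lam * t)
N0 = (basis @ counts) / (basis @ basis)   # least-squares amplitude
expected = N0 * basis                     # constraint targets per frame
```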
The visual communication in the optometric scales.
Dantas, Rosane Arruda; Pagliuca, Lorita Marlena Freitag
2006-01-01
Communication through vision involves visual learning, which demands ocular integrity; hence the importance of evaluating visual acuity. The scale of images, formed by optotypes, is a method for verifying visual acuity in kindergarten children. To identify an optotype, the child needs to know the image under analysis. Given the importance of visual communication in the construction of image scales, we present a bibliographic, analytical study reflecting on the principles for constructing such tables. The drawing used as an optotype is considered a non-verbal symbolic expression of the body and/or of the environment, constructed from the individual's accumulated experience. The indiscriminate use of images is contested, since prior knowledge of the image must exist. Despite the subjectivity of the optotypes, the scales remain valid if the images are adapted to the universe of the children being examined.
Superresolved digital in-line holographic microscopy for high-resolution lensless biological imaging
NASA Astrophysics Data System (ADS)
Micó, Vicente; Zalevsky, Zeev
2010-07-01
Digital in-line holographic microscopy (DIHM) is a modern approach capable of achieving micron-range lateral and depth resolution in three-dimensional imaging. DIHM in combination with numerical image reconstruction uses an extremely simplified setup while retaining the advantages provided by holography, with enhanced capabilities derived from algorithmic digital processing. We introduce superresolved DIHM based on time and angular multiplexing of the sample's spatial frequency information, yielding the generation of a synthetic aperture (SA). The SA expands the cutoff frequency of the imaging system, allowing submicron resolution in both the transversal and axial directions. The proposed approach can be applied when imaging essentially transparent (low-concentration dilutions) and static (slow-dynamics) samples. Validation of the method on both a synthetic object (U.S. Air Force resolution test) to quantify the resolution improvement and a biological specimen (sperm cell biosample) is reported, showing the generation of high synthetic numerical aperture values while working without lenses.
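The numerical reconstruction underlying DIHM is commonly performed with angular spectrum propagation of the recorded hologram back to the object plane; the sketch below shows that standard kernel. Wavelength, pixel pitch, and propagation distance are placeholder values, and the superresolution multiplexing described in the paper is not included.

```python
# Angular spectrum back-propagation of an in-line hologram.
import numpy as np

def angular_spectrum(hologram: np.ndarray, wavelength: float,
                     pitch: float, z: float) -> np.ndarray:
    n, m = hologram.shape
    fx = np.fft.fftfreq(m, d=pitch)
    fy = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)          # evanescent components cut off
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

field = angular_spectrum(np.random.rand(512, 512), 532e-9, 2.2e-6, 1.5e-3)
intensity = np.abs(field) ** 2                   # reconstructed object plane
```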
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie
Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including different detectors, dose levels, and image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty-two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at a quarter of this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection, a power law was fitted to the data. Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC (AFROC) area decreased from 0.84 to 0.63 and the ROC area decreased from 0.91 to 0.79 (p < 0.0001). This corresponded to a 30% drop in lesion sensitivity at a NLF equal to 0.1. Detection was also sensitive to the dose used. There was no significant difference in detection between the two image processing algorithms used (p > 0.05). It was additionally found that lower threshold gold thickness from CDMAM analysis implied better cluster detection. The measured threshold gold thickness passed the acceptable limit set in the EU standards for all image qualities except half dose CR. However, calcification detection varied significantly between image qualities. This suggests that the current EU guidelines may need revising. Conclusions: Microcalcification detection was found to be sensitive to detector and dose used. Standard measurements of image quality were a good predictor of microcalcification cluster detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Ting; Kim, Sung; Goyal, Sharad
2010-01-15
Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real-time image-guided radiotherapy (IGRT) to improve the dose distribution and to reduce toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based "demons" algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint, in addition to the grayscale difference between CT and CBCT, in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency-domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also, during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, the prostate gland, seminal vesicles, bladder, and rectum were first segmented by a radiation oncologist on the planning CT images for radiotherapy planning purposes. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were used only as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and as the ground truth in validation. By registering the planning CT to the CBCT, a displacement map was generated. Segmented volumes in the CT images, deformed using the displacement field, were compared against the manual segmentations in the CBCT images to quantitatively measure the convergence of shape and volume. Other image features were also used to evaluate the overall performance of the registration. Results: The algorithm was able to complete the segmentation and registration process within 1 min, and the superimposed clinical objects achieved a volumetric similarity measure of over 90% between the reference and the registered data. Validation results also showed that the proposed registration could accurately trace the deformation inside the target volume with average errors of less than 1 mm. The method performed solidly in registering simulated images with up to 20 Hounsfield units of white noise added. A side-by-side comparison with the original demons algorithm also demonstrated improved registration performance over local pixel-based registration approaches. Conclusions: Given the strength and efficiency of the algorithm, the proposed method has significant clinical potential to accelerate and improve CBCT delineation and target tracking in online IGRT applications.