Multisensor Image Analysis System
1993-04-15
Multisensor Image Analysis System — Final Report (DTIC accession AD-A263 679). Authors: Dr. G. M. Flachs, Dr. Michael Giles, Dr. Jay Jordan, Dr. Eric ... or decision, unless so designated by other documentation. ... Report type and dates covered: FINAL. Authors: Drs. Gerald ...
Validation of Biomarkers for Prostate Cancer Prognosis
2014-12-01
and Muc1. We have also completed a project in image analysis of H & E slides with Gustavo Ayala at University of Texas. Finally, we have completed...Groups using the resource include Dr. Jeremy Squire, Dr. Gustavo Ayala, Tamara Lotan and Dr. Lidong Liu. • Porting final clinical data that will be...with a manuscript near completion. • Ongoing analysis of AZGP1 with a manuscript expected soon. • Ongoing analysis of image analysis with Gustavo Ayala
1989-08-01
Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis — Final Technical Report, December ... Subject terms: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban area.
NASA Astrophysics Data System (ADS)
Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang
2018-04-01
Synthetic aperture radar (SAR) imagery is independent of atmospheric conditions, making it an ideal image source for change detection. Existing methods directly analyze all regions in the speckle-noise-contaminated difference image, so their performance is easily affected by small noisy regions. In this paper, we propose a novel framework for saliency-guided change detection based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first utilized to obtain a difference image (DI). Then, saliency detection based on pattern and intensity distinctiveness analysis is utilized to obtain the changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyze the pixels in the changed-region candidates, and the final change map is obtained by classifying these pixels as changed or unchanged. Experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
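Since the abstract names a concrete pipeline (log-ratio difference image, then PCA features, then two-class k-means), a minimal sketch may help. It omits the paper's saliency pre-selection step; the patch size, component count and function name are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def change_map(im1, im2, patch=5, n_components=4):
    """Two-class change map from co-registered SAR intensity images
    (the paper's saliency-based candidate selection is omitted)."""
    eps = 1e-6
    di = np.abs(np.log((im1 + eps) / (im2 + eps)))  # log-ratio operator
    h, w = di.shape
    r = patch // 2
    padded = np.pad(di, r, mode='reflect')
    # one flattened patch per pixel -> feature matrix of shape (h*w, patch**2)
    feats = np.stack([padded[i:i + h, j:j + w].ravel()
                      for i in range(patch) for j in range(patch)], axis=1)
    feats = PCA(n_components=n_components).fit_transform(feats)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
    # the cluster with the larger mean log-ratio is taken as "changed"
    means = [di.ravel()[labels == k].mean() for k in (0, 1)]
    return (labels == int(np.argmax(means))).reshape(h, w)
```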
Prototype for Meta-Algorithmic, Content-Aware Image Analysis
2015-03-01
Prototype for Meta-Algorithmic, Content-Aware Image Analysis. University of Virginia, March 2015, Final Technical Report. Contract number: FA8750-12-C-0181; grant number: N/A; program element number: 62305E. ... approaches were studied in detail and their results on a sample dataset are presented. Subject terms: image analysis, computer vision, content ...
Digital image processing and analysis for activated sludge wastewater treatment.
Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed
2015-01-01
The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). These tests are conducted in the laboratory and take many hours to give the final measurement. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. Characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in this specific context. In the latter part, additional preprocessing procedures such as z-stacking and image stitching, not previously used in the context of activated sludge, are introduced. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, image-analysis-based morphological parameters and their correlation with monitoring and prediction of activated sludge are discussed. It is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.
Paediatric x-ray radiation dose reduction and image quality analysis.
Martin, L; Ruddlesden, R; Makepeace, C; Robinson, L; Mistry, T; Starritt, H
2013-09-01
Collaboration of multiple staff groups has resulted in significant reduction in the risk of radiation-induced cancer from radiographic x-ray exposure during childhood. In this study at an acute NHS hospital trust, a preliminary audit identified initial exposure factors. These were compared with European and UK guidance, leading to the introduction of new factors that were in compliance with European guidance on x-ray tube potentials. Image quality was assessed using standard anatomical criteria scoring, and visual grading characteristics analysis assessed the impact on image quality of changes in exposure factors. This analysis determined the acceptability of gradual radiation dose reduction below the European and UK guidance levels. Chest and pelvis exposures were optimised, achieving dose reduction for each age group, with 7%-55% decrease in critical organ dose. Clinicians confirmed diagnostic image quality throughout the iterative process. Analysis of images acquired with preliminary and final exposure factors indicated an average visual grading analysis result of 0.5, demonstrating equivalent image quality. The optimisation process and final radiation doses are reported for Carestream computed radiography to aid other hospitals in minimising radiation risks to children.
Chen, Jia-Mei; Li, Yan; Xu, Jun; Gong, Lei; Wang, Lin-Wei; Liu, Wen-Lou; Liu, Juan
2017-03-01
With the advance of digital pathology, image analysis has begun to show its advantages in information analysis of hematoxylin and eosin histopathology images. Generally, histological features in hematoxylin and eosin images are measured to evaluate tumor grade and prognosis for breast cancer. This review summarized recent works in image analysis of hematoxylin and eosin histopathology images for breast cancer prognosis. First, prognostic factors for breast cancer based on hematoxylin and eosin histopathology images were summarized. Then, usual procedures of image analysis for breast cancer prognosis were systematically reviewed, including image acquisition, image preprocessing, image detection and segmentation, and feature extraction. Finally, the prognostic value of image features and image feature-based prognostic models was evaluated. Moreover, we discussed the issues of current analysis, and some directions for future research.
An adhered-particle analysis system based on concave points
NASA Astrophysics Data System (ADS)
Wang, Wencheng; Guan, Fengnian; Feng, Lin
2018-04-01
Particles that adhere together interfere with image analysis in computer vision systems. In this paper, a method based on concave points is designed. First, a corner detection algorithm is adopted to obtain a rough estimate of potential concave points after image segmentation. Then, the area ratio of the candidates is computed to accurately localize the final separation points. Finally, the separation points of each particle and the neighboring pixels are used to estimate the original particles before adhesion and to provide estimated profile images. Experimental results have shown that this approach provides good results that match the human visual cognitive mechanism.
Forensic Analysis of Digital Image Tampering
2004-12-01
analysis of when each method fails, which Chapter 4 discusses. Finally, a test image containing an invisible watermark using LSB steganography is ... Figure 2.2: example of an invisible watermark using Steganography Software F5. Figure 2.3: example of copy-move image forgery [12]. ... Figure 3.11: algorithm for the JPEG block technique. Figure 3.12: "forged" image with result.
Preliminary analysis of Dione Regio, Venus: The final Magellan regional imaging gap
NASA Technical Reports Server (NTRS)
Keddie, S. T.
1993-01-01
In September 1992, the Magellan spacecraft filled the final large gap in its coverage of Venus when it imaged an area west of Alpha Regio. F-BIDRs and some test MIDRs of parts of this area were available as of late December. Dione Regio was imaged by the Arecibo observatory, and a preliminary investigation of the Magellan images supports the interpretations based on these earlier images: Dione Regio is a regional highland on which are superposed three large, very distinct volcanic edifices. The superior resolution and different viewing geometry of the Magellan images also clarified some uncertainties and revealed fascinating details about this region.
Sereshti, Hassan; Poursorkh, Zahra; Aliakbarzadeh, Ghazaleh; Zarre, Shahin; Ataolahi, Sahar
2018-01-15
The quality of saffron, a valuable food additive, can considerably affect consumers' health. In this work, a novel preprocessing strategy for image analysis of saffron thin-layer chromatographic (TLC) patterns was introduced. It comprises a series of image preprocessing techniques applied to TLC images: compression, inversion, elimination of the general baseline (using asymmetric least squares (AsLS)), removal of spot shift and concavity (by correlation optimized warping (COW)), and finally conversion to RGB chromatograms. Subsequently, unsupervised multivariate data analysis, comprising principal component analysis (PCA) and k-means clustering, was used to investigate the effect of soil salinity, as a cultivation parameter, on saffron TLC patterns. The method serves as a rapid and simple technique for obtaining chemical fingerprints of saffron TLC images. Finally, the separated TLC spots were chemically identified using high-performance liquid chromatography with diode array detection (HPLC-DAD). Accordingly, saffron quality from different areas of Iran was evaluated and classified. Copyright © 2017 Elsevier Ltd. All rights reserved.
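The AsLS baseline step mentioned above has a standard published form (Eilers and Boelens' asymmetric least squares). A minimal sketch follows, applied to a single channel chromatogram; the smoothness (lam) and asymmetry (p) values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares baseline (Eilers & Boelens): a smooth curve,
    penalized by second differences, reweighted to hug the signal from below."""
    L = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))
    P = lam * (D @ D.T)                      # second-difference smoothness penalty
    w = np.ones(L)
    for _ in range(n_iter):
        W = sparse.diags(w)
        z = spsolve((W + P).tocsc(), w * y)  # weighted penalized least squares
        w = p * (y > z) + (1 - p) * (y < z)  # asymmetric reweighting
    return z

# usage: subtract the estimated general baseline from one RGB chromatogram channel
# corrected = chromatogram - asls_baseline(chromatogram)
```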
Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis
NASA Astrophysics Data System (ADS)
Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song
2018-01-01
To address slow computation and low matching accuracy in image registration, a new image registration algorithm based on parallax constraints and clustering analysis is proposed. First, the Harris corner detection algorithm is used to extract feature points from the two images. Second, a normalized cross-correlation (NCC) function performs approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes feature-point pairs with obvious errors from the approximate matching step. Finally, the random sample consensus (RANSAC) algorithm optimizes the feature points to obtain the final matching result, realizing fast and accurate image registration. Experimental results show that the proposed algorithm improves matching accuracy while ensuring real-time performance.
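A minimal OpenCV sketch of the corner-detection, NCC-matching and RANSAC stages may clarify the pipeline; the K-means parallax-constraint filtering is omitted for brevity, and the window sizes, score threshold and function name are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def register(img1, img2, n_pts=500, win=15, search=40):
    """Harris corners + NCC patch matching + RANSAC homography
    (the paper's K-means parallax pre-filtering is omitted)."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(g1, n_pts, 0.01, 10, useHarrisDetector=True)
    if pts is None:
        return None
    src, dst = [], []
    r, s = win // 2, search // 2
    for (x, y) in pts.reshape(-1, 2).astype(int):
        tpl = g1[y - r:y + r + 1, x - r:x + r + 1]           # template around corner
        roi = g2[max(0, y - s):y + s + 1, max(0, x - s):x + s + 1]
        if tpl.shape != (win, win) or roi.shape[0] < win or roi.shape[1] < win:
            continue
        res = cv2.matchTemplate(roi, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > 0.8:                                      # keep confident NCC matches
            src.append((x, y))
            dst.append((max(0, x - s) + loc[0] + r, max(0, y - s) + loc[1] + r))
    H, _ = cv2.findHomography(np.float32(src), np.float32(dst), cv2.RANSAC, 3.0)
    return H
```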
Information granules in image histogram analysis.
Wieclawek, Wojciech
2018-04-01
A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this concept in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially medical images acquired by computed tomography (CT). Like histogram equalization, this method is based on image histogram analysis; unlike histogram equalization, it works on a selected range of pixel intensities and is controlled by two parameters. Performance is tested on anonymized clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.
Computer analysis of arteriograms
NASA Technical Reports Server (NTRS)
Selzer, R. H.; Armstrong, J. H.; Beckenbach, E. B.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.; Sanmarco, M. E.
1977-01-01
A computer system has been developed to quantify the degree of atherosclerosis in the human femoral artery. The analysis involves first scanning and digitizing angiographic film, then tracking the outline of the arterial image and finally computing the relative amount of roughness or irregularity in the vessel wall. The image processing system and method are described.
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
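The validation metric quoted above, the Dice coefficient, is simple to reproduce. A minimal sketch, assuming binary numpy masks for one automatically segmented cell and its manual delineation:

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between a binary segmentation and a reference mask:
    2|A ∩ B| / (|A| + |B|), 1.0 for perfect agreement."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0
```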
Mukherjee, Archana; Wickstrom, Eric
2009-01-01
This review briefly outlines the importance of molecular imaging, particularly imaging of endogenous gene expression for noninvasive genetic analysis of radiographic masses. The concept of antisense imaging agents and the advantages and challenges in the development of hybridization probes for in vivo imaging are described. An overview of the investigations on oncogene expression imaging is given. Finally, the need for further improvement in antisense-based imaging agents and directions to improve oncogene mRNA targeting is stated. PMID:19264436
Parallel architecture for rapid image generation and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nerheim, R.J.
1987-01-01
A multiprocessor architecture inspired by the Disney multiplane camera is proposed. For many applications, this approach produces a natural mapping of processors to objects in a scene. Such a mapping promotes parallelism and reduces the hidden-surface work with minimal interprocessor communication and low-overhead cost. Existing graphics architectures store the final picture as a monolithic entity. The architecture here stores each object's image separately. It assembles the final composite picture from component images only when the video display needs to be refreshed. This organization simplifies the work required to animate moving objects that occlude other objects. In addition, the architecture has multiple processors that generate the component images in parallel. This further shortens the time needed to create a composite picture. In addition to generating images for animation, the architecture has the ability to decompose images.
Retinal imaging analysis based on vessel detection.
Jamal, Arshad; Hazim Alkawaz, Mohammed; Rehman, Amjad; Saba, Tanzila
2017-07-01
With advances in digital imaging and computing power, computationally intelligent technologies are in high demand in ophthalmological diagnosis and treatment. In the current research, Retina Image Analysis (RIA) was developed for optometrists at the Eye Care Center in Management and Science University. This research aims to analyze the retina through vessel detection. RIA assists in the analysis of retinal images, and specialists are served with various options such as saving, processing and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in the selection of vessel segments, processing these vessels by calculating their diameter, standard deviation and length, and displaying the detected vessels on the retina. The Agile Unified Process was adopted as the development methodology. To conclude, Retina Image Analysis might help optometrists gain a better understanding when analyzing a patient's retina. Finally, the Retina Image Analysis procedure was developed using MATLAB (R2011b). Promising results are attained that are comparable to the state of the art. © 2017 Wiley Periodicals, Inc.
Future Directions for Astronomical Image Display
NASA Technical Reports Server (NTRS)
Mandel, Eric
2000-01-01
In the "Future Directions for Astronomical Image Displav" project, the Smithsonian Astrophysical Observatory (SAO) and the National Optical Astronomy Observatories (NOAO) evolved our existing image display program into fully extensible. cross-platform image display software. We also devised messaging software to support integration of image display into astronomical analysis systems. Finally, we migrated our software from reliance on Unix and the X Window System to a platform-independent architecture that utilizes the cross-platform Tcl/Tk technology.
A spectrum fractal feature classification algorithm for agriculture crops with hyper spectrum image
NASA Astrophysics Data System (ADS)
Su, Junying
2011-11-01
A fractal dimension feature analysis method in the spectrum domain is proposed for agricultural crop classification with hyperspectral images. First, a fractal dimension calculation algorithm in the spectrum domain is presented, together with a fast fractal dimension calculation algorithm using the step measurement method. Second, the hyperspectral image classification algorithm and flowchart based on fractal dimension feature analysis in the spectrum domain are presented. Finally, experimental results are given for agricultural crop classification on the FCL1 hyperspectral image set using the proposed method and SAM (spectral angle mapper). The results show that the proposed method obtains better classification results than traditional SAM feature analysis, because it can fully use the spectral information of the hyperspectral image to realize precision agricultural crop classification.
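The step-measurement (divider) estimate of a spectrum's fractal dimension can be sketched as follows; this is one common formulation, treating the per-pixel spectrum as a curve and measuring its length at increasing step sizes, and is only an assumption about the paper's exact algorithm.

```python
import numpy as np

def spectral_fractal_dimension(spectrum, steps=(1, 2, 4, 8, 16)):
    """Divider (step-measurement) fractal dimension of a 1-D spectrum,
    treated as a curve over band index; assumes comparable axis scaling.
    Length scales as L(s) ~ s^(1-D), so D = 1 - slope of log L vs log s."""
    x = np.arange(len(spectrum), dtype=float)
    lengths = []
    for k in steps:
        xs, ys = x[::k], spectrum[::k]
        lengths.append(np.hypot(np.diff(xs), np.diff(ys)).sum())
    slope = np.polyfit(np.log(steps), np.log(lengths), 1)[0]
    return 1.0 - slope
```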
The effect of input data transformations on object-based image analysis
LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.
2011-01-01
The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829
NASA Astrophysics Data System (ADS)
Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang
2018-02-01
Coherent modulation imaging, providing fast convergence and high resolution from a single diffraction pattern, is a promising technique to satisfy the urgent demand for on-line multi-parameter diagnostics with a single setup in high-power laser facilities (HPLF). However, the influence of noise on the final calculated parameters has not yet been investigated. Based on a series of simulations with twenty different sampling beams generated from the practical parameters and performance of the HPLF, a quantitative analysis based on statistical results was carried out for five different error sources. We found that detector background noise and high quantization error seriously affect the final accuracy, and that different parameters have different sensitivities to different noise sources. The simulation results and corresponding analysis indicate potential directions for further improving the accuracy of parameter diagnostics, which is critically important for formal application in the daily routines of the HPLF.
A New Quantum Gray-Scale Image Encoding Scheme
NASA Astrophysics Data System (ADS)
Naseri, Mosayeb; Abdolmaleky, Mona; Parandin, Fariborz; Fatahi, Negin; Farouk, Ahmed; Nazari, Reza
2018-02-01
In this paper, a new quantum image encoding scheme is proposed. The scheme consists of four different encoding algorithms. The idea behind the scheme is a binary key generated randomly for each pixel of the original image. The encoding algorithm applied to a pixel is then selected according to the corresponding qubit pair of the randomized binary key. The security analysis of the proposed scheme shows that it is strengthened both by the randomization of the generated binary image key and by the alteration of the gray-scale values of the image pixels using the qubits of the randomized key. Simulations confirm that the final encoded image cannot be recognized visually. Moreover, the histogram of the encoded image is flatter than that of the original, and the Shannon entropies of the final encoded images are significantly higher than the original, indicating that an attacker cannot gain any information about the encoded images. Supported by Kermanshah Branch, Islamic Azad University, Kermanshah, IRAN
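The entropy figure used in the security analysis is the Shannon entropy of the gray-level histogram; a minimal sketch of how that metric is typically computed for an 8-bit image:

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit gray-scale image histogram;
    a well-encrypted image approaches the 8-bit maximum of 8.0."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) terms contribute nothing
    return -(p * np.log2(p)).sum()
```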
Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.
Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana
2017-07-01
Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel image is extracted from the color retinal image and used to produce a Gabor feature image using GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step to highlight blood vessels. Next, the two vessel-enhanced images are converted to binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images significantly improves blood vessel extraction performance compared with using either image individually. The effectiveness of the proposed method was proven via comparative analysis with existing methods, validated on the publicly available DRIVE database.
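A minimal OpenCV sketch of the combination idea: enhance, threshold each branch automatically (Otsu here, as one automatic-thresholding choice), then merge. The black-hat enhancement, kernel sizes and Gabor parameters are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def extract_vessels(bgr):
    """Green channel + Gabor bank, each branch Otsu-thresholded, OR-combined."""
    green = bgr[:, :, 1]
    # black-hat morphology turns dark vessels into bright ridges
    enhanced = cv2.morphologyEx(
        green, cv2.MORPH_BLACKHAT,
        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15)))
    gabor = np.zeros_like(green, dtype=np.float32)
    for theta in np.arange(0, np.pi, np.pi / 8):   # orientations of the bank
        kern = cv2.getGaborKernel((15, 15), 4.0, theta, 10.0, 0.5)
        gabor = np.maximum(gabor, cv2.filter2D(enhanced, cv2.CV_32F, kern))
    gabor = cv2.normalize(gabor, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, b1 = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, b2 = cv2.threshold(gabor, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.bitwise_or(b1, b2)                  # combine the two binary maps
```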
Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model
NASA Astrophysics Data System (ADS)
Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato
2018-02-01
This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
Analysis of a New Variational Model to Restore Point-Like and Curve-Like Singularities in Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aubert, Gilles, E-mail: gaubert@unice.fr; Blanc-Feraud, Laure, E-mail: Laure.Blanc-Feraud@inria.fr; Graziani, Daniele, E-mail: Daniele.Graziani@inria.fr
2013-02-15
The paper is concerned with the analysis of a new variational model to restore point-like and curve-like singularities in biological images. To this aim we investigate the variational properties of a suitable energy which governs these pathologies. Finally in order to realize numerical experiments we minimize, in the discrete setting, a regularized version of this functional by fast descent gradient scheme.
Multivariate statistical analysis of low-voltage EDS spectrum images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, I.M.
1998-03-01
Whereas energy-dispersive X-ray spectrometry (EDS) has been used for compositional analysis in the scanning electron microscope for 30 years, the benefits of using low operating voltages for such analyses have been explored only during the last few years. This paper couples low-voltage EDS with two other emerging areas of characterization: spectrum imaging and multivariate statistical analysis. The specimen analyzed for this study was a finished Intel Pentium processor, with the polyimide protective coating stripped off to expose the final active layers.
Systems engineering analysis of five 'as-manufactured' SXI telescopes
NASA Astrophysics Data System (ADS)
Harvey, James E.; Atanassova, Martina; Krywonos, Andrey
2005-09-01
Four flight models and a spare of the Solar X-ray Imager (SXI) telescope mirrors have been fabricated. The first of these is scheduled to be launched on the NOAA GOES-N satellite on July 29, 2005. A complete systems engineering analysis of the "as-manufactured" telescope mirrors has been performed that includes diffraction effects, residual design errors (aberrations), surface scatter effects, and all of the miscellaneous errors in the mirror manufacturer's error budget tree. Finally, a rigorous analysis of mosaic detector effects has been included. SXI is a staring telescope providing full solar disc images at X-ray wavelengths. For wide-field applications such as this, a field-weighted-average measure of resolution has been modeled. Our performance predictions have allowed us to use metrology data to model the "as-manufactured" performance of the X-ray telescopes and to adjust the final focal plane location to optimize the number of spatial resolution elements in a given operational field-of-view (OFOV) for either the aerial image or the detected image. The resulting performance predictions from five separate mirrors allow us to evaluate and quantify the optical fabrication process for producing these very challenging grazing incidence X-ray optics.
A Cognitive Approach to Teaching a Graduate-Level GEOBIA Course
NASA Astrophysics Data System (ADS)
Bianchetti, Raechel A.
2016-06-01
Remote sensing image analysis training occurs both in the classroom and in the research lab. Classroom education for traditional pixel-based image analysis has been standardized across college curriculums. However, with the increasing interest in Geographic Object-Based Image Analysis (GEOBIA), there is a need to develop classroom instruction for this method of image analysis. While traditional remote sensing courses emphasize the expansion of skills and knowledge related to computer-based analysis, GEOBIA courses should examine the cognitive factors underlying visual interpretation. This paper provides an initial analysis of the development, implementation, and outcomes of a GEOBIA course that considers not only the computational methods of GEOBIA, but also the cognitive factors of expertise that such software attempts to replicate. Finally, a reflection on the first instantiation of this course is presented, along with plans for the development of an open-source repository for course materials.
Automated analysis and classification of melanocytic tumor on skin whole slide images.
Xu, Hongming; Lu, Cheng; Berendt, Richard; Jha, Naresh; Mandal, Mrinal
2018-06-01
This paper presents a computer-aided technique for automated analysis and classification of melanocytic tumor on skin whole slide biopsy images. The proposed technique consists of four main modules. First, skin epidermis and dermis regions are segmented by a multi-resolution framework. Next, epidermis analysis is performed, where a set of epidermis features reflecting nuclear morphologies and spatial distributions is computed. In parallel with epidermis analysis, dermis analysis is also performed, where dermal cell nuclei are segmented and a set of textural and cytological features are computed. Finally, the skin melanocytic image is classified into different categories such as melanoma, nevus or normal tissue by using a multi-class support vector machine (mSVM) with extracted epidermis and dermis features. Experimental results on 66 skin whole slide images indicate that the proposed technique achieves more than 95% classification accuracy, which suggests that the technique has the potential to be used for assisting pathologists on skin biopsy image analysis and classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
Bornstein, Michael M; Bingisser, Andreas C; Reichart, Peter A; Sendi, Pedram; Bosshardt, Dieter D; von Arx, Thomas
2015-06-01
The aim of this study was to evaluate the concordance of 2- and 3-dimensional radiography and histopathology in the diagnosis of periapical lesions. Patients were consecutively enrolled in this study provided that preoperative periapical radiography (PR) and cone-beam computed tomographic imaging of the tooth to be treated with apical surgery were performed. The periapical lesional tissue was histologically analyzed by 2 blinded examiners. The final histologic diagnosis was compared with the radiographic assessments of 4 blinded observers. The initial study material included 62 teeth in the same number of patients. Four lesions had to be excluded during processing, resulting in a final number of 58 evaluated cases (31 women and 27 men, mean age = 55 years). The final histologic diagnosis of the periapical lesions included 55 granulomas (94.8%) and 3 cysts (5.2%). Histologic analysis of the tissue samples from the apical lesions exhibited an almost perfect agreement between the 2 experienced investigators with an overall agreement of 94.83% (kappa = 0.8011). Radiographic assessment overestimated cysts by 28.4% (cone-beam computed tomographic imaging) and 20.7% (periapical radiography), respectively. Comparing the correlation of the radiographic diagnosis of 4 observers with the final histologic diagnosis, 2-dimensional (kappa = 0.104) and 3-dimensional imaging (kappa = 0.111) provided only minimum agreement. To establish a final diagnosis of an apical radiolucency, the tissue specimen should be evaluated histologically and specified as a granuloma (with/without epithelium) or a cyst. Analysis of 2-dimensional and 3-dimensional radiographic images alike results only in a tentative diagnosis that should be confirmed with biopsy. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Loehfelm, Thomas W; Prater, Adam B; Debebe, Tequam; Sekhar, Aarti K
2017-02-01
We digitized the radiography teaching file at Black Lion Hospital (Addis Ababa, Ethiopia) during a recent trip, using a standard digital camera and a fluorescent light box. Our goal was to photograph every radiograph in the existing library while optimizing the final image size to the maximum resolution of a high quality tablet computer, preserving the contrast resolution of the radiographs, and minimizing total library file size. A secondary important goal was to minimize the cost and time required to take and process the images. Three workers were able to efficiently remove the radiographs from their storage folders, hang them on the light box, operate the camera, catalog the image, and repack the radiographs back to the storage folder. Zoom, focal length, and film speed were fixed, while aperture and shutter speed were manually adjusted for each image, allowing for efficiency and flexibility in image acquisition. Keeping zoom and focal length fixed, which kept the view box at the same relative position in all of the images acquired during a single photography session, allowed unused space to be batch-cropped, saving considerable time in post-processing, at the expense of final image resolution. We present an analysis of the trade-offs in workflow efficiency and final image quality, and demonstrate that a few people with minimal equipment can efficiently digitize a teaching file library.
NASA Astrophysics Data System (ADS)
Huang, Wei; Ma, Chengfu; Chen, Yuhang
2014-12-01
A method for simple and reliable displacement measurement with nanoscale resolution is proposed. The measurement is realized by combining common optical microscopy imaging of a specially coded nonperiodic microstructure, namely a two-dimensional zero-reference mark (2-D ZRM), with subsequent correlation analysis of the obtained image sequence. The autocorrelation peak contrast of the ZRM code is maximized with well-developed artificial intelligence algorithms, which enables robust and accurate displacement determination. To improve the resolution, subpixel image correlation analysis is employed. Finally, we experimentally demonstrate the quasi-static and dynamic displacement characterization ability of a micro 2-D ZRM.
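A minimal sketch of sub-pixel displacement estimation from an image pair by FFT cross-correlation with a three-point parabolic peak fit; the ZRM code design and its autocorrelation optimization are not reproduced, and the parabolic refinement stands in for whatever sub-pixel scheme the authors used.

```python
import numpy as np

def subpixel_shift(ref, cur):
    """Estimate the (dy, dx) shift of `cur` relative to `ref` via circular
    cross-correlation, refined to sub-pixel precision by a parabola fit."""
    corr = np.fft.ifft2(np.fft.fft2(cur) * np.conj(np.fft.fft2(ref))).real
    corr = np.fft.fftshift(corr)                 # peak at center + shift
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def parabola(cm, c0, cp):                    # vertex offset of 3-point fit
        denom = cm - 2 * c0 + cp
        return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

    dy = py + parabola(corr[py - 1, px], corr[py, px], corr[py + 1, px])
    dx = px + parabola(corr[py, px - 1], corr[py, px], corr[py, px + 1])
    cy, cx = ref.shape[0] // 2, ref.shape[1] // 2
    return dy - cy, dx - cx
```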
[The application of stereology in radiology imaging and cell biology fields].
Hu, Na; Wang, Yan; Feng, Yuanming; Lin, Wang
2012-08-01
Stereology is an interdisciplinary method for 3D morphological study developed from mathematics and morphology. It is widely used in medical image analysis and cell biology studies. Because of its unbiased, simple, fast, reliable and non-invasive characteristics, stereology has been widely used in biomedical areas for quantitative analysis and statistics, such as histology, pathology and medical imaging. Because stereological parameters show distinct differences in different pathologies, many scholars have used stereological methods for quantitative analysis in recent years, for example in studies of the condition of cancer cells, tumor grade, disease development and patient prognosis. This paper describes the stereological concepts and estimation methods, illustrates the applications of stereology in the fields of CT images, MRI images and cell biology, and finally reflects on the universality, superiority and reliability of stereology.
Santarossa, Sara; Coyne, Paige; Lisinski, Carly; Woodruff, Sarah J
2016-11-01
The #fitspo 'tag' is a recent trend on Instagram, used on posts to motivate others towards a healthy lifestyle through exercise/eating habits. This study used a mixed-methods approach consisting of text and network analysis via the Netlytic program (N = 10,000 #fitspo posts) and content analysis of #fitspo images (N = 122) to examine author and image characteristics. Results suggest that #fitspo posts may motivate through appearance-mediated themes, as the largest content categories (based on the associated text) were 'feeling good' and 'appearance'. Furthermore, #fitspo posts may create peer influence/support, as personal (as opposed to non-personal) accounts were associated with higher popularity of images (i.e. number of likes/followers). Finally, most images contained posed individuals with some degree of objectification.
Medical image classification based on multi-scale non-negative sparse coding.
Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar
2017-11-01
With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a medical image classification algorithm based on multi-scale non-negative sparse coding. First, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Second, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of the medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier performs the medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
Applications of independent component analysis in SAR images
NASA Astrophysics Data System (ADS)
Huang, Shiqi; Cai, Xinhua; Hui, Weihua; Xu, Ping
2009-07-01
The detection of faint, small and hidden targets in synthetic aperture radar (SAR) images is still an issue for automatic target recognition (ATR) systems. How to effectively separate these targets from the complex background is the aim of this paper. Independent component analysis (ICA) can enhance SAR image targets and improve the signal-to-clutter ratio (SCR), which benefits the detection and recognition of faint targets. Therefore, this paper proposes a new SAR image target detection algorithm based on ICA. In the experiments, the fast ICA (FICA) algorithm is utilized. Finally, real SAR image data are used to test the method. The experimental results verify that the algorithm is feasible, and that it can improve the SCR of SAR images and increase the detection rate for faint small targets.
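The abstract does not spell out how FICA is arranged over a single SAR image; one common arrangement (an assumption here, not the paper's method) learns independent components from random image patches, whose activations then serve as target-enhancing features:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import extract_patches_2d

def ica_components(sar, patch=(8, 8), n_components=16, n_patches=5000):
    """Learn independent components from random patches of a SAR image;
    projecting patches onto them yields contrast-enhanced feature maps."""
    X = extract_patches_2d(sar.astype(float), patch,
                           max_patches=n_patches, random_state=0)
    X = X.reshape(len(X), -1)
    X -= X.mean(axis=0)                       # ICA assumes centered data
    ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
    sources = ica.fit_transform(X)            # per-patch independent activations
    return ica, sources
```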
Optical design and system characterization of an imaging microscope at 121.6 nm
NASA Astrophysics Data System (ADS)
Gao, Weichuan; Finan, Emily; Kim, Geon-Hee; Kim, Youngsik; Milster, Thomas D.
2018-03-01
We present the optical design and system characterization of an imaging microscope prototype at 121.6 nm. System engineering processes are demonstrated through the construction of a Schwarzschild microscope objective, including tolerance analysis, fabrication, alignment, and testing. Further improvements on the as-built system with a correction phase plate are proposed and analyzed. Finally, the microscope assembly and the imaging properties of the prototype are demonstrated.
NASA Astrophysics Data System (ADS)
Sheppard, Adrian; Latham, Shane; Middleton, Jill; Kingston, Andrew; Myers, Glenn; Varslot, Trond; Fogden, Andrew; Sawkins, Tim; Cruikshank, Ron; Saadatfar, Mohammad; Francois, Nicolas; Arns, Christoph; Senden, Tim
2014-04-01
This paper reports on recent advances at the micro-computed tomography facility at the Australian National University. Since 2000 this facility has been a significant centre for developments in imaging hardware and associated software for image reconstruction, image analysis and image-based modelling. In 2010 a new instrument was constructed that utilises theoretically-exact image reconstruction based on helical scanning trajectories, allowing higher cone angles and thus better utilisation of the available X-ray flux. We discuss the technical hurdles that needed to be overcome to allow imaging with cone angles in excess of 60°. We also present dynamic tomography algorithms that enable the changes between one moment and the next to be reconstructed from a sparse set of projections, allowing higher speed imaging of time-varying samples. Researchers at the facility have also created a sizeable distributed-memory image analysis toolkit with capabilities ranging from tomographic image reconstruction to 3D shape characterisation. We show results from image registration and present some of the new imaging and experimental techniques that it enables. Finally, we discuss the crucial question of image segmentation and evaluate some recently proposed techniques for automated segmentation.
Automated processing of zebrafish imaging data: a survey.
Mikut, Ralf; Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A; Kausler, Bernhard X; Ledesma-Carbayo, María J; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine
2013-09-01
Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.
Automated rice leaf disease detection using color image analysis
NASA Astrophysics Data System (ADS)
Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.
2011-06-01
In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, usually done as a manual eyeball exercise, is important for devising good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from the test rice leaf image using histogram intersection between the test and healthy rice leaf images. The outlier is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. These clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
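A minimal sketch of the histogram-intersection step between a test leaf image and a healthy reference; the bin count and per-channel averaging are illustrative assumptions. Regions whose similarity falls below a chosen cutoff would be passed on to the K-means stage.

```python
import numpy as np

def histogram_intersection(test_img, ref_img, bins=32):
    """Mean per-channel histogram intersection of two RGB images in [0, 255];
    1.0 for identical color distributions, lower for outlier content."""
    sims = []
    for c in range(3):
        h1, _ = np.histogram(test_img[..., c], bins=bins, range=(0, 256))
        h2, _ = np.histogram(ref_img[..., c], bins=bins, range=(0, 256))
        h1 = h1 / h1.sum()                      # normalize to probabilities
        h2 = h2 / h2.sum()
        sims.append(np.minimum(h1, h2).sum())   # overlap of the two histograms
    return float(np.mean(sims))
```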
NASA Astrophysics Data System (ADS)
Sadeghipour, Negar; Davis, Scott C.; Tichauer, Kenneth M.
2018-02-01
Dynamic fluorescence imaging approaches can be used to estimate the concentration of cell surface receptors in vivo. Kinetic models are used to generate the final estimation by taking the targeted imaging agent concentration as a function of time. However, tissue absorption and scattering properties cause the final readout signal to be on a different scale than the real fluorescent agent concentration. In paired-agent imaging approaches, simultaneous injection of a suitable control imaging agent with a targeted one can account for non-specific uptake and retention of the targeted agent. Additionally, the signal from the control agent can be a normalizing factor to correct for tissue optical property differences. In this study, the kinetic model used for paired-agent imaging analysis (i.e., simplified reference tissue model) is modified and tested in simulation and experimental data in a way that accounts for the scaling correction within the kinetic model fit to the data to ultimately extract an estimate of the targeted biomarker concentration.
Hamit, Murat; Yun, Weikang; Yan, Chuanbo; Kutluk, Abdugheni; Fang, Yang; Alip, Elzat
2015-06-01
Image feature extraction is an important part of image processing and an important field of research and application of image processing technology. Uygur medicine is a branch of Chinese traditional medicine that is receiving increasing research attention, but large amounts of Uygur medicine data have not been fully utilized. In this study, we extracted the color histogram feature of images of herbal and zooid medicines of Xinjiang Uygur. First, we performed preprocessing, including image color enhancement, size normalization and color space transformation. Then we extracted the color histogram feature and analyzed it with statistical methods. Finally, we evaluated the classification ability of the features by Bayes discriminant analysis. Experimental results showed that high accuracy for Uygur medicine image classification is obtained using the color histogram feature. This study should help content-based medical image retrieval for Xinjiang Uygur medicine.
Prescott, Jeffrey William
2013-02-01
The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, which is a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.
Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis.
Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué
2015-10-01
In this paper a new methodology for diagnosing skin cancer on images of dermatologic spots using image processing is presented. Currently skin cancer is one of the most frequent diseases in humans. This methodology is based on Fourier spectral analysis using filters such as the classic, inverse and k-law nonlinear filters. The sample images were obtained by a medical specialist, and a new spectral technique was developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%.
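The k-law nonlinear filter named above has a standard form in Fourier optics: the spectrum's magnitude is raised to a power k while its phase is preserved. A minimal sketch follows; the paper's spectral-index computation on top of this is not reproduced.

```python
import numpy as np

def k_law_filter(img, k=0.3):
    """k-law nonlinear processing of the Fourier spectrum: |F|^k with phase
    kept (k=1 classic matched filtering, k=0 phase-only, k=-1 inverse)."""
    F = np.fft.fft2(img.astype(float))
    mag, phase = np.abs(F), np.angle(F)
    Fk = (mag ** k) * np.exp(1j * phase)   # nonlinearly reweighted spectrum
    return np.fft.ifft2(Fk)
```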
Saha, Tanumoy; Rathmann, Isabel; Galic, Milos
2017-07-11
Filopodia are dynamic, finger-like cellular protrusions associated with migration and cell-cell communication. In order to better understand the complex signaling mechanisms underlying filopodial initiation, elongation and subsequent stabilization or retraction, it is crucial to determine the spatio-temporal protein activity in these dynamic structures. To analyze protein function in filopodia, we recently developed a semi-automated tracking algorithm that adapts to filopodial shape-changes, thus allowing parallel analysis of protrusion dynamics and relative protein concentration along the whole filopodial length. Here, we present a detailed step-by-step protocol for optimized cell handling, image acquisition and software analysis. We further provide instructions for the use of optional features during image analysis and data representation, as well as troubleshooting guidelines for all critical steps along the way. Finally, we also include a comparison of the described image analysis software with other programs available for filopodia quantification. Together, the presented protocol provides a framework for accurate analysis of protein dynamics in filopodial protrusions using image analysis software.
ERIC Educational Resources Information Center
Metcalf, Richard M.
Although there has been previous research concerned with image size, brightness, and contrast in projection standards, the work has lacked careful conceptualization. In this study, size was measured in terms of the visual angle subtended by the material, brightness was stated in foot-lamberts, and contrast was defined as the ratio of the…
A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners.
Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh
2013-01-01
Digital foot scanners have been developed in recent years to provide anthropometrists with digital images of the insole together with pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining the gray-level spatial correlation (GLSC) histogram and Shanbhag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of the GLSC and the transform function of the membership values. The resulting thresholded binary images undergo anthropometric measurements, taking into account the scale factor from pixel size to metric scale. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, our experimental setup described in the paper. Running computation time and the effects of the GLSC parameters are investigated in the simulation results.
NASA Astrophysics Data System (ADS)
Chávez, G. Moreno; Sarocchi, D.; Santana, E. Arce; Borselli, L.
2015-12-01
The study of grain size distribution is fundamental for understanding sedimentological environments. Through these analyses, clast erosion, transport and deposition processes can be interpreted and modeled. However, grain size distribution analysis can be difficult in some outcrops due to the number and complexity of the arrangement of clasts and matrix and their physical size. Despite various technological advances, it is almost impossible to get the full grain size distribution (blocks to sand grain size) with a single method or instrument of analysis. For this reason, development in this area continues to be fundamental. In recent years, various methods of particle size analysis by automatic image processing have been developed because of their potential advantages over classical ones: speed and detailed final information content (virtually for each analyzed particle). In this framework, we have developed a novel algorithm and software for grain size distribution analysis, based on color image segmentation using an entropy-controlled quadratic Markov measure field algorithm and on the Rosiwal method of counting intersections between clasts and linear transects in the images. We tested the novel algorithm on different sedimentary deposit types from 14 varieties of sedimentological environments. The results of the new algorithm were compared with grain counts performed manually by experts using the same Rosiwal method. The new algorithm has the same accuracy as the classical manual counting process, but the application of this innovative methodology is much easier and dramatically less time-consuming. The final productivity of the new software for analysis of clast deposits after recording field outcrop images can be increased significantly.
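Once the Markov segmentation has labeled clasts against matrix, the Rosiwal step reduces to counting intercept chords along linear transects. A minimal sketch under that assumption (the transect count is illustrative, not the paper's value):

```python
import numpy as np

def rosiwal_intercepts(labels, n_transects=20):
    """Chord lengths of clast intercepts along horizontal transects of a
    segmented image (`labels`: 0 = matrix, >0 = clast); the chord-length
    population estimates the grain size distribution."""
    h = labels.shape[0]
    rows = np.linspace(0, h - 1, n_transects).astype(int)
    lengths = []
    for r in rows:
        line = (labels[r] > 0).astype(int)
        # rising/falling edges delimit runs of consecutive clast pixels
        edges = np.diff(np.concatenate(([0], line, [0])))
        starts = np.where(edges == 1)[0]
        ends = np.where(edges == -1)[0]
        lengths.extend(ends - starts)
    return np.asarray(lengths)
```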
Performance characterization of image and video analysis systems at Siemens Corporate Research
NASA Astrophysics Data System (ADS)
Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael
2000-06-01
There has been a significant increase in commercial products using image analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation and public security. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating these improvements into faster deployment of image analysis systems is the lack of characterization of the limits of algorithms and of how they affect total system performance. The research community has realized the need for performance analysis, and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have been on statistical modeling and characterization of modules and systems. The emphasis is on both white-box and black-box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper is on performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.
Electrophoresis gel image processing and analysis using the KODAK 1D software.
Pizzonia, J
2001-06-01
The present article reports on the performance of the KODAK 1D Image Analysis Software for the acquisition of information from electrophoresis experiments and highlights the utility of several mathematical functions for subsequent image processing, analysis, and presentation. Digital images of Coomassie-stained polyacrylamide protein gels containing molecular weight standards and ethidium bromide stained agarose gels containing DNA mass standards are acquired using the KODAK Electrophoresis Documentation and Analysis System 290 (EDAS 290). The KODAK 1D software is used to optimize lane and band identification using features such as isomolecular weight lines. Mathematical functions for mass standard representation are presented, and two methods for estimation of unknown band mass are compared. Given the progressive transition of electrophoresis data acquisition and daily reporting in peer-reviewed journals to digital formats ranging from 8-bit systems such as EDAS 290 to more expensive 16-bit systems, the utility of algorithms such as Gaussian modeling, which can correct geometric aberrations such as clipping due to signal saturation common at lower bit depth levels, is discussed. Finally, image-processing tools that can facilitate image preparation for presentation are demonstrated.
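As an illustration of the Gaussian-modeling idea for correcting clipped bands, the hedged sketch below fits a Gaussian to a band's intensity profile while ignoring saturated samples, so the clipped peak is restored from the unclipped flanks. This is our own construction; the KODAK 1D internals are not described in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma):
    return amplitude * np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def restore_clipped_band(profile: np.ndarray, saturation: float = 255.0):
    """Fit a Gaussian to a 1D band profile, excluding clipped (saturated) pixels."""
    x = np.arange(profile.size)
    ok = profile < saturation            # keep only unsaturated samples
    p0 = (profile[ok].max(), x[ok][np.argmax(profile[ok])], profile.size / 10)
    params, _ = curve_fit(gaussian, x[ok], profile[ok], p0=p0)
    return gaussian(x, *params)          # reconstructed, unclipped profile
```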
Graph-based urban scene analysis using symbolic data
NASA Astrophysics Data System (ADS)
Moissinac, Henri; Maitre, Henri; Bloch, Isabelle
1995-07-01
A framework is presented for the interpretation of an urban landscape based on the analysis of aerial pictures. The method is designed to use a priori knowledge provided by a geographic map in order to improve the image analysis stage. A coherent final interpretation of the studied area is proposed. It relies on a graph-based data structure to model the urban landscape, and on global uncertainty management to evaluate the final confidence we can have in the presented results. This structure and uncertainty management reflect the hierarchy of the available data and of the interpretation levels.
Roguev, Assen; Ryan, Colm J; Xu, Jiewei; Colson, Isabelle; Hartsuiker, Edgar; Krogan, Nevan
2018-02-01
This protocol describes computational analysis of genetic interaction screens, ranging from data capture (plate imaging) to downstream analyses. Plate imaging approaches using both digital camera and office flatbed scanners are included, along with a protocol for the extraction of colony size measurements from the resulting images. A commonly used genetic interaction scoring method, calculation of the S-score, is discussed. These methods require minimal computer skills, but some familiarity with MATLAB and Linux/Unix is a plus. Finally, an outline for using clustering and visualization software for analysis of resulting data sets is provided. © 2018 Cold Spring Harbor Laboratory Press.
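For the S-score, a worked sketch may help. One commonly cited formulation, following Collins et al., is a modified t-statistic comparing experimental and control colony sizes; treat the exact variance handling below as an assumption, since the protocol's implementation is not reproduced in the abstract:

```python
import numpy as np

def s_score(exp_sizes, ctrl_sizes, min_sd=0.1):
    """Modified t-statistic for a gene pair's colony sizes (sketch).

    exp_sizes:  replicate colony sizes for the double mutant.
    ctrl_sizes: replicate colony sizes for the control.
    min_sd:     assumed floor on the standard deviations to stabilise
                low-variance pairs.
    """
    exp_sizes, ctrl_sizes = np.asarray(exp_sizes), np.asarray(ctrl_sizes)
    sd_e = max(exp_sizes.std(ddof=1), min_sd)
    sd_c = max(ctrl_sizes.std(ddof=1), min_sd)
    n_e, n_c = len(exp_sizes), len(ctrl_sizes)
    return (exp_sizes.mean() - ctrl_sizes.mean()) / np.sqrt(sd_e**2 / n_e + sd_c**2 / n_c)
```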
Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza
2015-01-01
This paper presents a new procedure for automatic extraction of the blood vessels and the optic disk (OD) in fundus fluorescein angiograms (FFA). To extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and at different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained with a simple iterative region growing algorithm, which merges the centerline images with the contents of images resulting from a modified top-hat transform followed by bit-plane slicing. After the blood vessels are extracted from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image using multi-structure-element morphology and modification of the FDCT coefficients. Then, a Canny edge detector and the Hough transform are applied to the reconstructed image to extract the boundaries of the candidate regions. In the next step, information on the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance-regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, comprising 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction.
NASA Astrophysics Data System (ADS)
Perner, Petra
2017-03-01
Molecular image-based techniques are widely used in medicine to detect specific diseases. Diagnosis from a patient's appearance is an important issue, and analysis of the eye in particular plays an important role in detecting specific diseases. Standardizing these tasks with an automatic system is a new and challenging field for machine vision. Compared to iris recognition, iris diagnosis places much higher demands on image acquisition and on the interpretation of the iris. Iris diagnosis (iridology) is the investigation and analysis of the colored part of the eye, the iris, to discover factors that play an important role in the prevention and treatment of illnesses, as well as in the preservation of optimum health. An automatic system would pave the way for a much wider use of iris diagnosis both for diagnosing illnesses and for individual health protection. With this paper, we describe our work towards an automatic iris diagnosis system. We describe image acquisition and the problems associated with it, and explain different approaches to image acquisition and preprocessing. We describe the image analysis method for detecting the iris and give the meta-model for image interpretation. Based on this model, we show the many image analysis tasks involved, ranging from image-object feature analysis and spatial image analysis to color image analysis. Our first results for recognition of the iris are given: we describe how the pupil and unwanted lamp reflections are detected, and explain how orange and blue spots in the iris are recognized and matched against the topological map of the iris. Finally, we give an outlook on further work.
Morphological feature extraction for the classification of digital images of cancerous tissues.
Thiran, J P; Macq, B
1996-10-01
This paper presents a new method for automatic recognition of cancerous tissues from an image of a microscopic section. Based on analysis of the shape and size of the observed cells, the method provides the physician with nonsubjective numerical values for four criteria of malignancy. The approach is based on mathematical morphology, and more specifically on the use of geodesy: this technique is used first to remove background noise from the image, and then to segment the nuclei of the cells and analyze their shape, size, and texture. From the values of the extracted criteria, an automatic classification of the image (cancerous or not) is finally performed.
Image processing and analysis of Saturn's rings
NASA Technical Reports Server (NTRS)
Yagi, G. M.; Jepsen, P. L.; Garneau, G. W.; Mosher, J. A.; Doyle, L. R.; Lorre, J. J.; Avis, C. C.; Korsmo, E. P.
1981-01-01
Processing of Voyager image data of Saturn's rings at JPL's Image Processing Laboratory is described. A software system to navigate the flight images, facilitate feature tracking, and to project the rings has been developed. This system has been used to make measurements of ring radii and to measure the velocities of the spoke features in the B-Ring. A projected ring movie to study the development of these spoke features has been generated. Finally, processing to facilitate comparison of the photometric properties of Saturn's rings at various phase angles is described.
NASA Astrophysics Data System (ADS)
Khondok, Piyoros; Sakulkalavek, Aparporn; Suwansukho, Kajpanya
2018-03-01
Simplified yet powerful image processing procedures to separate paddy of the KHAW DOK MALI 105 (Thai jasmine rice) variety from paddy of the RD6 sticky rice variety are proposed. The procedures consist of image thresholding, image chain coding and curve fitting using a polynomial function. From the fit, three parameters of each variety are calculated: perimeter, area, and eccentricity. Finally, the overall parameters are combined using principal component analysis. The results show that these procedures can significantly separate the two varieties.
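The shape-feature and PCA steps admit a compact illustration. The sketch below is a hedged stand-in for the described procedure (scikit-image's regionprops replaces the paper's chain-coding and curve-fitting route to perimeter, area and eccentricity, and all names are assumptions):

```python
import numpy as np
from skimage.measure import label, regionprops
from sklearn.decomposition import PCA

def grain_features(binary_image: np.ndarray) -> np.ndarray:
    """Per-grain shape features from a thresholded paddy image (sketch)."""
    feats = []
    for region in regionprops(label(binary_image)):
        feats.append([region.perimeter, region.area, region.eccentricity])
    return np.asarray(feats)

def pca_scores(mask_jasmine, mask_sticky):
    """Combine the three shape features into one discriminating score per grain.

    mask_jasmine / mask_sticky: assumed thresholded images of each variety.
    """
    features = np.vstack([grain_features(mask_jasmine), grain_features(mask_sticky)])
    features = (features - features.mean(0)) / features.std(0)   # standardise
    return PCA(n_components=1).fit_transform(features)
```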
Theory, Image Simulation, and Data Analysis of Chemical Release Experiments
NASA Technical Reports Server (NTRS)
Wescott, Eugene M.
1994-01-01
The final phase of Grant NAG6-1 involved analysis of physics of chemical releases in the upper atmosphere and analysis of data obtained on previous NASA sponsored chemical release rocket experiments. Several lines of investigation of past chemical release experiments and computer simulations have been proceeding in parallel. This report summarizes the work performed and the resulting publications. The following topics are addressed: analysis of the 1987 Greenland rocket experiments; calculation of emission rates for barium, strontium, and calcium; the CRIT 1 and 2 experiments (Collisional Ionization Cross Section experiments); image calibration using background stars; rapid ray motions in ionospheric plasma clouds; and the NOONCUSP rocket experiments.
Electron Microscopy and Image Analysis for Selected Materials
NASA Technical Reports Server (NTRS)
Williams, George
1999-01-01
This particular project was completed in collaboration with the metallurgical diagnostics facility. The objective of this research had four major components. First, we required training in the operation of the environmental scanning electron microscope (ESEM) for imaging of selected materials including biological specimens. The types of materials range from cyanobacteria and diatoms to cloth, metals, sand, composites and other materials. Second, to obtain training in surface elemental analysis technology using energy dispersive x-ray (EDX) analysis, and in the preparation of x-ray maps of these same materials. Third, to provide training for the staff of the metallurgical diagnostics and failure analysis team in the area of image processing and image analysis technology using NIH Image software. Finally, we were to assist in the sample preparation, observing, imaging, and elemental analysis for Mr. Richard Hoover, one of NASA MSFC's solar physicists and Marshall's principal scientist for the agency-wide virtual Astrobiology Institute. These materials have been collected from various places around the world including the Fox Tunnel in Alaska, Siberia, Antarctica, ice core samples from near Lake Vostok, thermal vents in the ocean floor, hot springs and many others. We were successful in our efforts to obtain high-quality, high-resolution images of various materials including selected biological ones. Surface analyses (EDX) and x-ray maps were easily prepared with this technology. We also discovered and used some applications for NIH Image software in the metallurgical diagnostics facility.
Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors.
Dutton, Neale A W; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K
2016-07-20
SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed.
SE Great Basin Play Fairway Analysis
Adam Brandt
2015-11-15
Within this submission are multiple .tif images with accompanying metadata of magnetotelluric conductor occurrence, fault critical stress composite risk segment (CRS), permeability CRS, Quaternary mafic extrusions, Quaternary fault density, and Quaternary rhyolite maps. Each of these contributed to a final play fairway analysis (PFA) for the SE Great Basin study area.
NASA Astrophysics Data System (ADS)
Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.
2018-04-01
Traditional change detection algorithms depend mainly on the spectral information of image patches and fail to effectively mine and fuse the complementary strengths of multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature fusion change detection algorithm for remote sensing images. First, image objects are obtained by multi-scale segmentation. Then, a color histogram and a linear gradient histogram are calculated for each object. The Earth Mover's Distance (EMD) operator is used to measure the color distance and the edge-line feature distance between corresponding objects from different periods, and an adaptive weighting method combines the color feature distance and the edge-line distance into an object heterogeneity measure. Finally, curvature analysis of the heterogeneity histogram yields the patch-level change detection result. The experimental results show that the method can fully fuse color and edge-line features, thus improving the accuracy of change detection.
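The EMD-based heterogeneity measure can be sketched directly. For 1D histograms the Earth Mover's Distance reduces to the Wasserstein-1 distance, so the illustration below uses scipy's implementation; the fixed weight stands in for the paper's adaptive weighting, whose details the abstract does not give, and all names are assumptions:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def object_heterogeneity(hist_color_t1, hist_color_t2,
                         hist_edge_t1, hist_edge_t2,
                         w_color=0.5):
    """EMD-based heterogeneity between one object's features at two dates (sketch).

    Each argument is a 1D histogram (color or gradient) of the same object
    at time 1 or time 2; w_color balances the two feature distances.
    """
    bins_c = np.arange(len(hist_color_t1))
    d_color = wasserstein_distance(bins_c, bins_c, hist_color_t1, hist_color_t2)
    bins_e = np.arange(len(hist_edge_t1))
    d_edge = wasserstein_distance(bins_e, bins_e, hist_edge_t1, hist_edge_t2)
    return w_color * d_color + (1.0 - w_color) * d_edge
```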
NPS assessment of color medical image displays using a monochromatic CCD camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-10-01
This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B uniform color patterns were shown on the display under study and images were taken using a high-resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and dark-screen images. Finally, the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, and also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
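The NPS computation on the synthetic intensity images follows the standard definition: mean-subtract repeated captures of a uniform pattern, Fourier-transform the residual noise, and average the squared magnitude. The sketch below illustrates this under assumed inputs (the stack of synthetic images and the camera pixel pitch); it is not the authors' implementation:

```python
import numpy as np

def noise_power_spectrum(uniform_images, pixel_pitch_mm: float):
    """2D NPS estimate from repeated captures of a uniform pattern (sketch).

    uniform_images: stack (N, H, W) of synthetic intensity images of the same
                    uniform screen; the ensemble mean removes fixed structure.
    pixel_pitch_mm: physical pitch of the camera pixels on the display.
    """
    stack = np.asarray(uniform_images, dtype=float)
    noise = stack - stack.mean(axis=0)          # keep only the fluctuation part
    n, h, w = noise.shape
    dft = np.fft.fftshift(np.fft.fft2(noise), axes=(-2, -1))
    # Standard normalisation: NPS = (dx * dy / (Nx * Ny)) * <|FFT(noise)|^2>
    nps = (np.abs(dft) ** 2).mean(axis=0) * (pixel_pitch_mm ** 2) / (h * w)
    return nps
```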
New development of the image matching algorithm
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqiang; Feng, Zhao
2018-04-01
To study image matching algorithms, the four elements of such an algorithm are described: similarity measurement, feature space, search space and search strategy. Four common indexes for evaluating image matching algorithms are also described: matching accuracy, matching efficiency, robustness and universality. The paper then describes the principles of image matching based on gray values, on features, on frequency-domain analysis, on neural networks and on semantic recognition, and analyzes their characteristics and latest research achievements. Finally, the development trend of image matching algorithms is discussed. This study is significant for algorithm improvement, new algorithm design and algorithm selection in practice.
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
Polarization imaging detection technology provides multi-dimensional polarization information in addition to the traditional intensity image, thus improving the probability of target detection and recognition. Research on fusing polarization images of targets in turbid media helps to obtain high-quality images. Based on laser polarization imaging at visible wavelengths, linear polarization intensities were obtained by rotating the angle of a polaroid, and the polarization parameters of targets were acquired in turbid media with concentrations ranging from 5% to 10%. Image fusion processing techniques are then introduced; the main research concerns processing the acquired polarization images with different polarization image fusion methods, discussing several fusion methods with superior performance for turbid media, and giving the treatment effects and data tables. Pixel-level, feature-level and decision-level fusion algorithms are applied to the DOLP (degree of linear polarization) images. The results show that as the polarization angle increases, the polarization image becomes more and more blurred and its quality deteriorates, while the fused image shows clearly improved contrast over a single image. Finally, the reasons for the increase in image contrast and the role of polarized light are analyzed.
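The DOLP images at the heart of the fusion experiments come from standard Stokes-parameter estimates. The sketch below is a minimal illustration of that textbook relation, not the authors' code; the four-angle acquisition scheme is an assumption:

```python
import numpy as np

def dolp_image(i0, i45, i90, i135):
    """Degree of linear polarization from four polarizer orientations (sketch).

    i0..i135: intensity images captured with the polarizer at 0, 45, 90 and
    135 degrees. Standard Stokes estimates:
        S0 = (I0 + I45 + I90 + I135) / 2
        S1 = I0 - I90
        S2 = I45 - I135
    """
    s0 = (i0 + i45 + i90 + i135) / 2.0
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # avoid divide-by-zero
```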
Automated three-dimensional quantification of myocardial perfusion and brain SPECT.
Slomka, P J; Radau, P; Hurwitz, G A; Dey, D
2001-01-01
To allow automated and objective reading of nuclear medicine tomography, we have developed a set of tools for clinical analysis of myocardial perfusion tomography (PERFIT) and Brain SPECT/PET (BRASS). We exploit algorithms for image registration and use three-dimensional (3D) "normal models" for individual patient comparisons to composite datasets on a "voxel-by-voxel basis" in order to automatically determine the statistically significant abnormalities. A multistage, 3D iterative inter-subject registration of patient images to normal templates is applied, including automated masking of the external activity before final fit. In separate projects, the software has been applied to the analysis of myocardial perfusion SPECT, as well as brain SPECT and PET data. Automatic reading was consistent with visual analysis; it can be applied to the whole spectrum of clinical images, and aid physicians in the daily interpretation of tomographic nuclear medicine images.
Jamaludin, Juliza; Rahim, Ruzairi Abdul; Fazul Rahiman, Mohd Hafiz; Mohd Rohani, Jemmy
2018-04-01
Optical tomography (OPT) is a method of capturing a cross-sectional image from data obtained by sensors distributed around the periphery of the analyzed system. The system is based on measuring the final attenuation or absorption of light after it crosses the measured objects. The number of sensor views affects the result of image reconstruction: a high number of sensor views per projection gives high image quality. This research presents an application of a charge-coupled device (CCD) linear sensor and a laser diode in an OPT system. Experiments detecting solid and transparent objects in crystal-clear water were conducted, and two numbers of sensor views, 160 and 320, were evaluated for reconstructing the images. The image reconstruction algorithm used was filtered linear back projection. Comparison of the simulated and experimental image results shows that 320 views give a smaller area error than 160 views, suggesting that a higher number of views yields higher-resolution image reconstruction.
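The back projection step can be illustrated compactly. The sketch below shows plain (unfiltered) linear back projection with a sensitivity matrix, as a hedged stand-in for the filtered variant used in the paper; the matrix construction from the source-detector geometry is taken as given:

```python
import numpy as np

def linear_back_projection(sensitivity: np.ndarray, measurements: np.ndarray):
    """Reconstruct a cross-section by linear back projection (sketch).

    sensitivity:  (n_measurements, n_pixels) matrix mapping each image pixel to
                  each source-detector path (built from the system geometry).
    measurements: (n_measurements,) attenuation values from the sensor views.
    """
    image = sensitivity.T @ measurements          # back-project onto pixels
    image /= sensitivity.sum(axis=0) + 1e-12      # normalise by path coverage
    return image
```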
New approach to gallbladder ultrasonic images analysis and lesions recognition.
Bodzioch, Sławomir; Ogiela, Marek R
2009-03-01
This paper presents a new approach to gallbladder ultrasonic image processing and analysis aimed at detecting disease symptoms on the processed images. First, the paper presents a new method of filtering gallbladder contours from USG images. A major stage in this filtration is segmenting and sectioning off the areas occupied by the organ; in most cases this procedure is based on filtration that plays a key role in the process of diagnosing pathological changes. Unfortunately, ultrasound images are among the most troublesome to analyze owing to the echogenic inconsistency of the structures under observation. The paper provides an algorithm for the holistic extraction of gallbladder image contours, based on rank filtration and on the analysis of histogram sections of the examined organs. The second part concerns detecting lesion symptoms of the gallbladder. Automating a process of diagnosis always comes down to developing algorithms that analyze the object of such diagnosis and verify the occurrence of symptoms related to a given affection; usually the final stage is to make a diagnosis based on the detected symptoms. This last stage can be carried out either through dedicated expert systems or through a more classic pattern analysis approach, such as using rules to determine the illness based on the detected symptoms. The paper discusses pattern analysis algorithms for gallbladder image interpretation aimed at classifying the most frequent illness symptoms of this organ.
Development of Land Analysis System display modules
NASA Technical Reports Server (NTRS)
Gordon, Douglas; Hollaren, Douglas; Huewe, Laurie
1986-01-01
The Land Analysis System (LAS) display modules were developed to allow a user to interactively display, manipulate, and store image and image related data. To help accomplish this task, these modules utilize the Transportable Applications Executive and the Display Management System software to interact with the user and the display device. The basic characteristics of a display are outlined and some of the major modifications and additions made to the display management software are discussed. Finally, all available LAS display modules are listed along with a short description of each.
Tavares, Ana P M; Silva, Rui P; Amaral, António L; Ferreira, Eugénio C; Xavier, Ana M R B
2014-02-01
An image analysis technique was applied to identify morphological changes of pellets from the white-rot fungus Trametes versicolor in agitated submerged cultures during the production of exopolysaccharide (EPS) or ligninolytic enzymes. Batch tests with four different experimental conditions were carried out. Two different culture media were used, namely yeast medium or Trametes defined medium, and the addition of ligninolytic inducers such as xylidine or pulp and paper industrial effluent was evaluated. Laccase activity, EPS production, and final biomass contents were determined for the batch assays, and pellet morphology was assessed by image analysis techniques. The data obtained allowed the choice of metabolic pathway to be established according to the experimental conditions: laccase production in the Trametes defined medium, or EPS production in the rich yeast medium experiments. Furthermore, the image processing and analysis methodology allowed a better comprehension of the physiological phenomena with respect to the corresponding morphological stages of the pellets.
Retinal status analysis method based on feature extraction and quantitative grading in OCT images.
Fu, Dongmei; Tong, Hejun; Zheng, Shuang; Luo, Ling; Gao, Fulin; Minar, Jiri
2016-07-22
Optical coherence tomography (OCT) is widely used in ophthalmology for viewing the morphology of the retina, which is important for disease detection and for assessing therapeutic effect. The diagnosis of retinal diseases is based primarily on the subjective analysis of OCT images by trained ophthalmologists. This paper describes an automatic OCT image analysis method for computer-aided disease diagnosis, based on feature extraction and quantitative grading; it is a critical part of eye fundus diagnosis. The study analyzed 300 OCT images acquired by an Optovue Avanti RTVue XR (Optovue Corp., Fremont, CA). Firstly, a normal retinal reference model based on retinal boundaries was constructed. Subsequently, two kinds of quantitative methods, based on geometric features and on morphological features, were proposed; the method obtains parameters and features associated with retinal morphology, and quantitative analysis and evaluation of these features are combined with the reference model to judge abnormality in a target image. The paper puts forward a retinal abnormality grading decision-making method, used in the actual analysis and evaluation of multiple OCT images, and shows the detailed analysis process on four retinal OCT images with different degrees of abnormality. The final grading results verify that the analysis method can distinguish abnormal severity and lesion regions. In a simulation on 150 test images, the analysis of retinal status achieved a sensitivity of 0.94 and a specificity of 0.92. The proposed method can speed up the diagnostic process, objectively evaluate retinal status, and provide a reference for disease diagnosis.
Ice Growth Measurements from Image Data to Support Ice Crystal and Mixed-Phase Accretion Testing
NASA Technical Reports Server (NTRS)
Struk, Peter M.; Lynch, Christopher J.
2012-01-01
This paper describes the imaging techniques as well as the analysis methods used to measure the ice thickness and growth rate in support of ice-crystal icing tests performed at the National Research Council of Canada (NRC) Research Altitude Test Facility (RATFac). A detailed description of the camera setup, which involves both still and video cameras, as well as the analysis methods using the NASA Spotlight software, are presented. Two cases, one from two different test entries, showing significant ice growth are analyzed in detail describing the ice thickness and growth rate which is generally linear. Estimates of the bias uncertainty are presented for all measurements. Finally some of the challenges related to the imaging and analysis methods are discussed as well as methods used to overcome them.
Oregon Cascades Play Fairway Analysis: Raster Datasets and Models
Adam Brandt
2015-11-15
This submission includes maps of the spatial distribution of basaltic and felsic rocks in the Oregon Cascades. It also includes a final Play Fairway Analysis (PFA) model, with the heat and permeability composite risk segments (CRS) supplied separately. Metadata for each raster dataset can be found within the zip files, in the TIF images.
Guided filter and principal component analysis hybrid method for hyperspectral pansharpening
NASA Astrophysics Data System (ADS)
Qu, Jiahui; Li, Yunsong; Dong, Wenqian
2018-01-01
Hyperspectral (HS) pansharpening aims to generate a fused HS image with high spectral and spatial resolution through integrating an HS image with a panchromatic (PAN) image. A guided filter (GF) and principal component analysis (PCA) hybrid HS pansharpening method is proposed. First, the HS image is interpolated and the PCA transformation is performed on the interpolated HS image. The first principal component (PC1) channel concentrates on the spatial information of the HS image. Different from the traditional PCA method, the proposed method sharpens the PAN image and utilizes the GF to obtain the spatial information difference between the HS image and the enhanced PAN image. Then, in order to reduce spectral and spatial distortion, an appropriate tradeoff parameter is defined and the spatial information difference is injected into the PC1 channel through multiplying by this tradeoff parameter. Once the new PC1 channel is obtained, the fused image is finally generated by the inverse PCA transformation. Experiments performed on both synthetic and real datasets show that the proposed method outperforms other several state-of-the-art HS pansharpening methods in both subjective and objective evaluations.
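A hedged sketch of the scheme may clarify the data flow. The guided filter below follows He et al.'s box-filter formulation, and the injection step reflects one plausible reading of the abstract; the histogram matching of PAN to PC1, the fixed tradeoff value and all names are assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA

def guided_filter(guide, src, radius=8, eps=1e-4):
    """Edge-preserving guided filter (He et al.); box means via uniform_filter."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size)
    mean_g, mean_s = mean(guide), mean(src)
    cov_gs = mean(guide * src) - mean_g * mean_s
    var_g = mean(guide * guide) - mean_g ** 2
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return mean(a) * guide + mean(b)

def gf_pca_pansharpen(hs_upsampled, pan, tradeoff=0.8):
    """GF-PCA hybrid pansharpening (sketch of the scheme described above).

    hs_upsampled: (H, W, B) hyperspectral cube interpolated to the PAN grid.
    pan:          (H, W) panchromatic image, assumed matched in range to PC1.
    tradeoff:     injection weight balancing spectral vs. spatial distortion.
    """
    h, w, b = hs_upsampled.shape
    pca = PCA(n_components=b)
    pcs = pca.fit_transform(hs_upsampled.reshape(-1, b)).reshape(h, w, b)
    pc1 = pcs[..., 0]
    detail = guided_filter(pan, pc1) - pc1      # spatial difference via the GF
    pcs[..., 0] = pc1 + tradeoff * detail       # inject into the PC1 channel only
    return pca.inverse_transform(pcs.reshape(-1, b)).reshape(h, w, b)
```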
Large-scale retrieval for medical image analytics: A comprehensive review.
Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting
2018-01-01
Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced at ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of handling the huge amounts of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, covering a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
An efficient classification method based on principal component and sparse representation.
Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang
2016-01-01
As an important application of optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrixes, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the test and reconstructed images. Experiments carried out on a palmprint database show that this method has better robustness against position and illumination changes of palmprint images and achieves a higher palmprint recognition rate.
Intensity-Based Registration for Lung Motion Estimation
NASA Astrophysics Data System (ADS)
Cao, Kunlin; Ding, Kai; Amelon, Ryan E.; Du, Kaifang; Reinhardt, Joseph M.; Raghavan, Madhavan L.; Christensen, Gary E.
Image registration plays an important role within pulmonary image analysis. The task of registration is to find the spatial mapping that brings two images into alignment. Registration algorithms designed for matching 4D lung scans or two 3D scans acquired at different inflation levels can catch the temporal changes in position and shape of the region of interest. Accurate registration is critical to post-analysis of lung mechanics and motion estimation. In this chapter, we discuss lung-specific adaptations of intensity-based registration methods for 3D/4D lung images and review approaches for assessing registration accuracy. Then we introduce methods for estimating tissue motion and studying lung mechanics. Finally, we discuss methods for assessing and quantifying specific volume change, specific ventilation, strain/stretch information and lobar sliding.
NASA Astrophysics Data System (ADS)
Peer, Regina; Peer, Siegfried; Sander, Heike; Marsolek, Ingo; Koller, Wolfgang; Pappert, Dirk; Hierholzer, Johannes
2002-05-01
If new technology is introduced into medical practice, it must prove that it makes a difference. However, traditional approaches to outcome analysis have failed to show a direct benefit of PACS on patient care, and the economic benefits are still debated. A participatory process analysis was performed to compare workflow in a film-based hospital and in a PACS environment. This included direct observation of work processes, interviews with involved staff, structural analysis, and discussion of the observations with staff members. After definition of common structures, strong and weak workflow steps were evaluated. With a common workflow structure in both hospitals, benefits of PACS were revealed in workflow steps related to image reporting, with simultaneous image access for ICU physicians and radiologists, archiving of images, and image and report distribution. However, PACS alone cannot cover the complete process of 'radiography for intensive care', from the ordering of an image to the provision of the final product (image plus report). Interference of the electronic workflow with analogue process steps, such as paper-based ordering, reduces the potential benefits of PACS. In this regard, workflow modeling proved very helpful for the evaluation of the complex work processes linking radiology and the ICU.
Medicine: The final frontier in cancer diagnosis
NASA Astrophysics Data System (ADS)
Leachman, Sancy A.; Merlino, Glenn
2017-01-01
A computer, trained to classify skin cancers using image analysis alone, can now identify certain cancers as successfully as can skin-cancer doctors. What are the implications for the future of medical diagnosis? See Letter p.115
Super-Resolution Reconstruction of Remote Sensing Images Using Multifractal Analysis
Hu, Mao-Gui; Wang, Jin-Feng; Ge, Yong
2009-01-01
Satellite remote sensing (RS) is an important contributor to Earth observation, providing various kinds of imagery every day, but low spatial resolution remains a critical bottleneck in a lot of applications, restricting higher spatial resolution analysis (e.g., intra-urban). In this study, a multifractal-based super-resolution reconstruction method is proposed to alleviate this problem. The multifractal characteristic is common in Nature. The self-similarity or self-affinity presented in the image is useful to estimate details at larger and smaller scales than the original. We first look for the presence of multifractal characteristics in the images. Then we estimate parameters of the information transfer function and noise of the low resolution image. Finally, a noise-free, spatial resolution-enhanced image is generated by a fractal coding-based denoising and downscaling method. The empirical case shows that the reconstructed super-resolution image performs well in detail enhancement. This method is not only useful for remote sensing in investigating Earth, but also for other images with multifractal characteristics. PMID:22291530
Karbasi, Salman; Arianpour, Ashkan; Motamedi, Nojan; Mellette, William M; Ford, Joseph E
2015-06-10
Imaging fiber bundles can map the curved image surface formed by some high-performance lenses onto flat focal plane detectors. The relative alignment between the focal plane array pixels and the quasi-periodic fiber-bundle cores can impose an undesirable space variant moiré pattern, but this effect may be greatly reduced by flat-field calibration, provided that the local responsivity is known. Here we demonstrate a stable metric for spatial analysis of the moiré pattern strength, and use it to quantify the effect of relative sensor and fiber-bundle pitch, and that of the Bayer color filter. We measure the thermal dependence of the moiré pattern, and the achievable improvement by flat-field calibration at different operating temperatures. We show that a flat-field calibration image at a desired operating temperature can be generated using linear interpolation between white images at several fixed temperatures, comparing the final image quality with an experimentally acquired image at the same temperature.
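The temperature-interpolated flat-field calibration can be sketched as per-pixel linear interpolation between bracketing white images, followed by the usual flat-field division. This is our own minimal illustration under assumed inputs and names, not the authors' code:

```python
import numpy as np

def white_image_at(temp_c, calib_temps, calib_whites):
    """Interpolate a flat-field (white) image for an arbitrary temperature.

    calib_temps:  sorted sequence of temperatures with recorded white images.
    calib_whites: matching sequence of (H, W) white images.
    """
    whites = np.asarray(calib_whites, dtype=float)
    i = np.clip(np.searchsorted(calib_temps, temp_c), 1, len(calib_temps) - 1)
    t0, t1 = calib_temps[i - 1], calib_temps[i]
    w = (temp_c - t0) / (t1 - t0)
    # Per-pixel linear interpolation between the two bracketing white images.
    return (1 - w) * whites[i - 1] + w * whites[i]

def flat_field(raw, white):
    """Divide out the fiber-bundle moire pattern using the white image."""
    return raw / np.maximum(white, 1e-9) * white.mean()
```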
Implementation of the Pan-STARRS Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Fang, Julia; Aspin, C.
2007-12-01
Pan-STARRS, or Panoramic Survey Telescope and Rapid Response System, is a wide-field imaging facility that combines small mirrors with gigapixel cameras. It surveys the entire available sky several times a month, which ultimately requires large amounts of data to be processed and stored right away. Accordingly, the Image Processing Pipeline (IPP) is the collection of software tools responsible for the primary image analysis for Pan-STARRS. It includes data registration, basic image analysis such as obtaining master images and detrending the exposures, mosaic calibration when applicable, and lastly, image summing and differencing. In this paper I present my work on installing IPP 2.1 and 2.2 on a Linux machine, running the Simtest (simulated data used to verify the installation), and finally applying the IPP to two different sets of UH 2.2m Tek data. This work was conducted through a Research Experience for Undergraduates (REU) position at the University of Hawaii's Institute for Astronomy and funded by the NSF.
Contrast in Terahertz Images of Archival Documents—Part II: Influence of Topographic Features
NASA Astrophysics Data System (ADS)
Bardon, Tiphaine; May, Robert K.; Taday, Philip F.; Strlič, Matija
2017-04-01
We investigate the potential of terahertz time-domain imaging in reflection mode to reveal archival information in documents in a non-invasive way. In particular, this study explores the parameters and signal processing tools that can be used to produce well-contrasted terahertz images of topographic features commonly found in archival documents, such as indentations left by a writing tool, as well as sieve lines. While the amplitude of the waveforms at a specific time delay can provide the most contrasted and legible images of topographic features on flat paper or parchment sheets, this parameter may not be suitable for documents that have a highly irregular surface, such as water- or fire-damaged documents. For analysis of such documents, cross-correlation of the time-domain signals can instead yield images with good contrast. Analysis of the frequency-domain representation of terahertz waveforms can also provide well-contrasted images of topographic features, with improved spatial resolution when utilising high-frequency content. Finally, we point out some of the limitations of these means of analysis for extracting information relating to topographic features of interest from documents.
NASA Astrophysics Data System (ADS)
Georgiou, Harris
2009-10-01
Medical informatics and the application of modern signal processing to assist the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. At a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended to multi-classifier models under the scope of game theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational cost for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.
Sensitivity analysis for future space missions with segmented telescopes for high-contrast imaging
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; Pueyo, Laurent; Sauvage, Jean-François; Mazoyer, Johan; Soummer, Remi; Fusco, Thierry; Sivaramakrishnan, Anand
2018-01-01
The detection and analysis of biomarkers on Earth-like planets using direct imaging will require both high-contrast imaging and spectroscopy at very close angular separation (a 10^10 star-to-planet flux ratio at a few 0.1"). This goal can only be achieved with large telescopes in space to overcome atmospheric turbulence, often combined with a coronagraphic instrument with wavefront control. Large segmented space telescopes such as those studied for the LUVOIR mission will generate segment-level instabilities and cophasing errors in addition to local mirror surface errors and other aberrations of the overall optical system. These effects contribute directly to the degradation of the final image quality and contrast. We present an analytical model that produces coronagraphic images of a segmented-pupil telescope in the presence of segment phasing aberrations expressed as Zernike polynomials. This model relies on a pair-based projection of the segmented pupil and provides results that match an end-to-end simulation with an rms error on the final contrast of ~3%. The analytical model can be applied to both static and dynamic modes, in either monochromatic or broadband light. It retires the need for the end-to-end Monte-Carlo simulations that would otherwise be required to build a rigorous error budget, by enabling quasi-instantaneous analytical evaluations. The ability to invert the analytical model directly provides constraints and tolerances on all segment-level phasing errors and aberrations.
Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.
Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki
2017-12-09
Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
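The link between the disparity map and the initial transmission map can be made concrete under the usual haze model I = J*t + A*(1 - t) with t = exp(-beta * depth). The sketch below illustrates that standard relation only; the paper's iterative refinement is not reproduced, and the focal length, baseline and scattering coefficient are assumed inputs:

```python
import numpy as np

def transmission_from_disparity(disparity, focal_px, baseline_m, beta=0.8):
    """Initial transmission map from a stereo disparity map (sketch).

    Depth from stereo geometry:  depth = f * B / disparity.
    Haze model (Koschmieder):    t(x) = exp(-beta * depth(x)).
    """
    depth = focal_px * baseline_m / np.maximum(disparity, 1e-6)
    return np.exp(-beta * depth)

def defog(image, transmission, atmospheric_light, t_min=0.1):
    """Invert the haze model I = J*t + A*(1 - t) to recover the scene J."""
    t = np.maximum(transmission, t_min)[..., None]
    return (image - atmospheric_light) / t + atmospheric_light
```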
NASA Astrophysics Data System (ADS)
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed as the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with the macular edema and age-related macular degeneration), which demonstrated its effectiveness.
Matsumoto, Atsushi; Miyazaki, Naoyuki; Takagi, Junichi; Iwasaki, Kenji
2017-03-23
In this study, we develop an approach termed "2D hybrid analysis" for building atomic models by image matching from electron microscopy (EM) images of biological molecules. The key advantage is that it is applicable to flexible molecules, which are difficult to analyze by the 3DEM approach. In the proposed approach, first, many atomic models with different conformations are built by computer simulation. Then, simulated EM images are generated from each atomic model. Finally, they are compared with the experimental EM image. Two kinds of models are used as simulated EM images: the negative stain model and the simple projection model. Although the former is more realistic, the latter is adopted to perform faster computations. The use of the negative stain model enables decomposition of the averaged EM images into multiple projection images, each of which originated from a different conformation or orientation. We apply this approach to EM images of integrin to obtain the distribution of conformations, from which the pathway of the protein's conformational change is deduced.
Failure prediction in ceramic composites using acoustic emission and digital image correlation
NASA Astrophysics Data System (ADS)
Whitlow, Travis; Jones, Eric; Przybyla, Craig
2016-02-01
The objective of the work performed here was to develop a methodology for linking in-situ detection of localized matrix cracking to the final failure location in continuous fiber reinforced CMCs. First, the initiation and growth of matrix cracking are measured and triangulated via acoustic emission (AE) detection. High amplitude events at relatively low static loads can be associated with initiation of large matrix cracks. When there is a localization of high amplitude events, a measurable effect on the strain field can be observed. Full field surface strain measurements were obtained using digital image correlation (DIC). An analysis using the combination of the AE and DIC data was able to predict the final failure location.
2004-02-04
KENNEDY SPACE CENTER, FLA. - Armando Oliu, Final Inspection Team lead for the Shuttle program, speaks to reporters about the aid the Image Analysis Lab is giving the FBI in a kidnapping case. Oliu oversees the image lab that is using an advanced SGI® TP9500 data management system to review the tape of the kidnapping in progress in Sarasota, Fla. KSC installed the new $3.2 million system in preparation for Return to Flight of the Space Shuttle fleet. The lab is studying the Sarasota kidnapping video to provide any new information possible to law enforcement officers. KSC is joining NASA’s Marshall Space Flight Center in Alabama in reviewing the tape.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-27
... considered, but were eliminated from detailed analysis include: conventional mining (whether by open pit or... Agencywide Documents and Management System (ADAMS), which provides text and image files of the NRC's public...
Implementation of a General Real-Time Visual Anomaly Detection System Via Soft Computing
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steve; Ferrell, Bob; Steinrock, Todd (Technical Monitor)
2001-01-01
The intelligent visual system detects anomalies or defects in real time under normal lighting operating conditions. The application is basically a learning machine that integrates fuzzy logic (FL), artificial neural network (ANN), and genetic algorithm (GA) schemes to process the image, run the learning process, and finally detect the anomalies or defects. The system acquires the image, performs segmentation to separate the object being tested from the background, preprocesses the image using fuzzy reasoning, performs the final segmentation using fuzzy reasoning techniques to retrieve regions with potential anomalies or defects, and finally retrieves them using a learning model built via ANN and GA techniques. FL provides a powerful framework for knowledge representation and overcomes the uncertainty and vagueness typically found in image analysis. ANN provides learning capabilities, and GA leads to robust learning results. An application prototype currently runs on a regular PC under Windows NT, and preliminary work has been performed to build an embedded version with multiple image processors. The application prototype is being tested at the Kennedy Space Center (KSC), Florida, to visually detect anomalies along slide basket cables utilized by the astronauts to evacuate the NASA Shuttle launch pad in an emergency. The potential applications of this anomaly detection system in an open environment are quite wide. Another current, potentially viable application at NASA is in detecting anomalies of the NASA Space Shuttle Orbiter's radiator panels.
Wen, Yintang; Zhang, Zhenda; Zhang, Yuyan; Sun, Dongtao
2017-01-01
A coplanar electrode array sensor is established for imaging in composite-material adhesive-layer defect detection. The sensor is based on the capacitive edge effect, which makes the capacitance data considerably weak and susceptible to environmental noise. The inverse problem of coplanar array electrical capacitance tomography (C-ECT) is ill-conditioned, so a small error in the capacitance data can seriously affect the quality of the reconstructed images. In order to achieve a stable image reconstruction process, a redundancy analysis method for the capacitance data is proposed, based on contribution rate and anti-interference capability. According to the redundancy analysis, the capacitance data are divided into valid and invalid data. When the image is reconstructed from valid data only, the sensitivity matrix needs to be changed accordingly; singular value decomposition (SVD) is used to evaluate the effectiveness of the resulting sensitivity map. Finally, two-dimensional (2D) and three-dimensional (3D) images are reconstructed by the Tikhonov regularization method. Compared with images reconstructed from the raw capacitance data, the stability of the image reconstruction process is improved while the quality of the reconstructed images is not degraded. As a result, much invalid data need not be collected, and the data acquisition time can be reduced. PMID:29295537
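The Tikhonov step has a closed form worth spelling out. The sketch below is a generic illustration of Tikhonov-regularized ECT reconstruction, not the authors' implementation; the regularization parameter value is an assumption:

```python
import numpy as np

def tikhonov_reconstruction(sensitivity, capacitance, lam=1e-3):
    """Tikhonov-regularized image reconstruction for ECT (sketch).

    Solves  min ||S g - c||^2 + lam ||g||^2  in closed form:
        g = (S^T S + lam I)^(-1) S^T c
    sensitivity: (n_measurements, n_pixels) sensitivity matrix S.
    capacitance: (n_measurements,) normalised capacitance vector c.
    """
    s = np.asarray(sensitivity, dtype=float)
    c = np.asarray(capacitance, dtype=float)
    n = s.shape[1]
    return np.linalg.solve(s.T @ s + lam * np.eye(n), s.T @ c)
```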
[Object Separation from Medical X-Ray Images Based on ICA].
Li, Yan; Yu, Chun-yu; Miao, Ya-jian; Fei, Bin; Zhuang, Feng-yun
2015-03-01
X-ray medical images can reveal diseased tissue and have important reference value for medical diagnosis. To address the noise, poor gray-level contrast, and aliasing of overlapping organs in traditional X-ray images, this paper proposes a method combining multi-spectrum X-ray imaging with an independent component analysis (ICA) algorithm to separate the target object. First, image de-noising preprocessing based on independent component analysis and sparse code shrinkage ensures the accuracy of target extraction. Then, according to the proportion of each organ in the images, the aliasing thickness matrix of each pixel is isolated. Finally, independent component analysis obtains the convergence matrix and reconstructs the target object using blind separation theory. It was found that when the number of convergence iterations exceeds 40, the target objects separate successfully according to a subjective evaluation standard, and when the scale amplitudes lie in the interval [25, 45], the target images have high contrast and low distortion. A three-dimensional plot of the peak signal-to-noise ratio (PSNR) shows that different convergence iteration counts and amplitudes have a considerable influence on image quality. The contrast and edge information of the experimental images achieve the best effect with 85 convergence iterations and an amplitude of 35 in the ICA algorithm.
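The blind-separation step can be sketched with an off-the-shelf ICA. A minimal example using scikit-learn's FastICA, assuming each multi-spectrum acquisition is flattened into one row of a mixture matrix (the data layout and the number of sources are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_objects(X, n_sources=3, max_iter=200):
    """X: (n_spectra, n_pixels) matrix, one flattened multi-spectrum X-ray
    image per row; rows are mixtures of the aliased organs."""
    ica = FastICA(n_components=n_sources, max_iter=max_iter, random_state=0)
    S = ica.fit_transform(X.T).T   # (n_sources, n_pixels) independent components
    return S  # reshape each row back to the image size to view a separated object
```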
Raman Imaging in Cell Membranes, Lipid-Rich Organelles, and Lipid Bilayers.
Syed, Aleem; Smith, Emily A
2017-06-12
Raman-based optical imaging is a promising analytical tool for noninvasive, label-free chemical imaging of lipid bilayers and cellular membranes. Imaging using spontaneous Raman scattering suffers from a low intensity that hinders its use in some cellular applications. However, developments in coherent Raman imaging, surface-enhanced Raman imaging, and tip-enhanced Raman imaging have enabled video-rate imaging, excellent detection limits, and nanometer spatial resolution, respectively. After a brief introduction to these commonly used Raman imaging techniques for cell membrane studies, this review discusses selected applications of these modalities for chemical imaging of membrane proteins and lipids. Finally, recent developments in chemical tags for Raman imaging and their applications in the analysis of selected cell membrane components are summarized. Ongoing developments toward improving the temporal and spatial resolution of Raman imaging and small-molecule tags with strong Raman scattering cross sections continue to expand the utility of Raman imaging for diverse cell membrane studies.
Performance analysis of panoramic infrared systems
NASA Astrophysics Data System (ADS)
Furxhi, Orges; Driggers, Ronald G.; Holst, Gerald; Krapels, Keith
2014-05-01
Panoramic imagers are becoming more commonplace in the visible part of the spectrum. These imagers are often used in the real estate market, extreme sports, teleconferencing, and security applications. Infrared panoramic imagers, on the other hand, are not as common, and only a few have been demonstrated. A panoramic image can be formed in several ways: using pan and stitch, distributed apertures, or omnidirectional optics. When omnidirectional optics are used, the detected image is a warped view of the world that is mapped onto the focal plane array in a donut shape. The final image on the display is the mapping of this omnidirectional donut-shaped image back to a panoramic world view. In this paper we analyze the performance of uncooled thermal panoramic imagers that use omnidirectional optics, focusing on range performance.
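The donut-to-panorama mapping is a polar-to-Cartesian resampling. A minimal numpy sketch, with the donut center, inner and outer radii, and output size all as assumed parameters:

```python
import numpy as np

def unwarp_donut(img, cx, cy, r_in, r_out, out_w=1440, out_h=180):
    """Map a donut-shaped omnidirectional gray image to a panoramic strip.

    (cx, cy) is the donut center, r_in/r_out the inner/outer radii (pixels).
    Each output column is an azimuth angle; each row a radius between the rims.
    """
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    r = np.linspace(r_in, r_out, out_h)
    rr, tt = np.meshgrid(r, theta, indexing='ij')
    x = (cx + rr * np.cos(tt)).astype(int).clip(0, img.shape[1] - 1)
    y = (cy + rr * np.sin(tt)).astype(int).clip(0, img.shape[0] - 1)
    return img[y, x]  # nearest-neighbour sampling; interpolate for production use
```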
Lippolis, Giuseppe; Edsjö, Anders; Helczynski, Leszek; Bjartell, Anders; Overgaard, Niels Chr
2013-09-05
Prostate cancer is one of the leading causes of cancer-related deaths. For diagnosis, predicting the outcome of the disease, and assessing potential new biomarkers, pathologists and researchers routinely analyze histological samples. Morphological and molecular information may be integrated by aligning microscopic histological images in a multiplex fashion. This process is usually time-consuming and results in intra- and inter-user variability. The aim of this study is to investigate the feasibility of using modern image analysis methods for automated alignment of microscopic images from differently stained adjacent paraffin sections from prostatic tissue specimens. Tissue samples, obtained from biopsy or radical prostatectomy, were sectioned and stained with either hematoxylin & eosin (H&E), immunohistochemistry for p63 and AMACR, or Time Resolved Fluorescence (TRF) for androgen receptor (AR). Image pairs were aligned allowing for translation, rotation and scaling. The registration was performed automatically by first detecting landmarks in both images using the scale-invariant feature transform (SIFT), followed by the well-known RANSAC protocol for finding point correspondences, and finally a Procrustes fit. The registration results were evaluated using both visual and quantitative criteria as defined in the text. Three experiments were carried out. First, images of consecutive tissue sections stained with H&E and p63/AMACR were successfully aligned in 85 of 88 cases (96.6%). The failures occurred in 3 out of 13 cores with highly aggressive cancer (Gleason score ≥ 8). Second, TRF and H&E image pairs were aligned correctly in 103 out of 106 cases (97%). The third experiment considered the alignment of image pairs with the same staining (H&E) coming from a stack of 4 sections. The success rate for alignment dropped from 93.8% in adjacent sections to 22% for sections furthest away. The proposed method is both reliable and fast and therefore well suited for automatic segmentation and analysis of specific areas of interest, combining morphological information with protein expression data from three consecutive tissue sections. Finally, the performance of the algorithm seems to be largely unaffected by the Gleason grade of the prostate tissue samples examined, at least up to Gleason score 7.
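The landmark-plus-RANSAC pipeline maps naturally onto OpenCV. A hedged sketch, assuming grayscale uint8 inputs; note that cv2.estimateAffinePartial2D fits a similarity transform (translation, rotation, uniform scale) over the RANSAC inliers, standing in here for the paper's Procrustes fit:

```python
import cv2
import numpy as np

def align_sections(img_fixed, img_moving):
    """Align two adjacent-section images allowing translation, rotation, scaling."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_fixed, None)
    k2, d2 = sift.detectAndCompute(img_moving, None)
    # Lowe-style ratio test on the two nearest matches
    matches = cv2.BFMatcher().knnMatch(d2, d1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([k2[m.queryIdx].pt for m in good])
    dst = np.float32([k1[m.trainIdx].pt for m in good])
    # RANSAC keeps the consistent correspondences and fits the similarity transform
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return cv2.warpAffine(img_moving, M, img_fixed.shape[:2][::-1])
```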
Vegetation survey in Amazonia using LANDSAT data. [Brazil
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Shimabukuro, Y. E.; Dossantos, J. R.; Deaquino, L. C. S.
1982-01-01
Automatic Image-100 analysis of LANDSAT data was performed using the MAXVER classification algorithm. In the pilot area, four vegetation units were mapped automatically, in addition to the areas occupied by agricultural activities. The Image-100 classification results, together with a soil map and information from RADAR images, permitted the establishment of a final legend with six classes: semi-deciduous tropical forest, lowland evergreen tropical forest, secondary vegetation, tropical forest of humid areas, predominant pastureland, and flood plains. Two water types were identified based on their sediments, indicating different geological and geomorphological aspects.
Delage, B; Giroud, F; Monet, J D; Ekindjian, O G; Cals, M J
1999-06-01
Rheumatoid arthritic (RA) and osteoarthritic (OA) synovial cells in culture differ in their metabolic and proliferative behaviour. To assess links between these properties and nuclear changes, we used image analysis to study chromatin texture, together with nuclear morphometry and densitometry, of OA and RA cells in primary culture. The chromatin pattern on the third day (D3) was heterogeneous and granular with chromatin clumps, whereas at the final stage (D11) of culture a homogeneous and finely granular chromatin texture was observed. This evolution indicates global chromatin decondensation. These characteristics were more marked for RA than for OA nuclei. At each culture time, RA nuclei could be discriminated with high confidence from OA nuclei using parameters evaluating the organization of the chromatin texture. Nuclear image analysis is thus a useful tool for investigating synovial cell biology.
Corneal topography with high-speed swept source OCT in clinical examination
Karnowski, Karol; Kaluzny, Bartlomiej J.; Szkulmowski, Maciej; Gora, Michalina; Wojtkowski, Maciej
2011-01-01
We present the applicability of high-speed swept source (SS) optical coherence tomography (OCT) for quantitative evaluation of corneal topography. A high-speed OCT device operating at 108,000 lines/s permits dense 3D imaging of the anterior segment within less than a quarter of a second, minimizing the influence of motion artifacts on the final images and topographic analysis. The swept laser performance was specially adapted to meet the imaging depth requirements. For the first time to our knowledge, the results of a quantitative corneal analysis based on SS OCT are presented for clinical pathologies such as keratoconus, a cornea with a superficial postinfectious scar, and a cornea 5 months after penetrating keratoplasty. Additionally, a comparison with widely used commercial systems, a Placido-based topographer and a Scheimpflug imaging-based topographer, is demonstrated. PMID:21991558
MRI Segmentation of the Human Brain: Challenges, Methods, and Applications
Despotović, Ivana
2015-01-01
Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121
NASA Astrophysics Data System (ADS)
Wu, Kaizhi; Zhang, Xuming; Chen, Guangxie; Weng, Fei; Ding, Mingyue
2013-10-01
Images acquired in free breathing using contrast-enhanced ultrasound exhibit a periodic motion that must be compensated for if accurate quantification of hepatic perfusion is to be performed. In this work, we present an algorithm that compensates for the respiratory motion by combining principal component analysis (PCA) and block matching. The respiratory kinetics of the ultrasound hepatic perfusion image sequences are first extracted using the PCA method. Then, the optimal phase of the obtained respiratory kinetics is detected after normalizing the motion amplitude, and the corresponding image subsequences of the original sequences are determined. The image subsequences are registered by block matching, using cross-correlation as the similarity measure. Finally, the motion-compensated contrast images are acquired using position mapping, and the algorithm is evaluated by comparing the time-intensity curves (TICs) extracted from the original image sequences and the compensated subsequences. Quantitative comparisons demonstrated that the average fitting error estimated over the regions of interest (ROIs) was reduced from 10.9278 +/- 6.2756 to 5.1644 +/- 3.3431 after compensation.
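The registration step is a standard exhaustive block search with normalized cross-correlation as the similarity measure. A minimal sketch (block position, search radius, and the epsilon guard are illustrative assumptions):

```python
import numpy as np

def best_shift(ref_block, search_img, y0, x0, radius=10):
    """Find the integer displacement of ref_block inside search_img by
    maximizing normalized cross-correlation over a (2*radius+1)^2 window."""
    h, w = ref_block.shape
    ref = (ref_block - ref_block.mean()) / (ref_block.std() + 1e-9)
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = search_img[y0 + dy:y0 + dy + h, x0 + dx:x0 + dx + w]
            if cand.shape != ref_block.shape:
                continue  # window fell outside the image
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            score = (ref * cand).mean()
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx
```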
Bae, Youngwoo; Son, Taeyoon; Nelson, J. Stuart; Kim, Jae-Hong; Choi, Eung Ho; Jung, Byungjo
2010-01-01
Background/Purpose: Digital color image analysis is currently considered a routine procedure in dermatology. In our previous study, a multimodal facial color imaging modality (MFCIM), which provides conventional, parallel- and cross-polarization, and fluorescent color images, was introduced for objective evaluation of various facial skin lesions. This study introduces a commercial version of MFCIM, DermaVision-PRO, for routine clinical use in dermatology and demonstrates its dermatological feasibility for cross-evaluation of skin lesions. Methods/Results: Sample images of subjects with actinic keratosis or non-melanoma skin cancers were obtained at four different imaging modes. Various image analysis methods were applied to cross-evaluate the skin lesions and, finally, extract valuable diagnostic information. DermaVision-PRO is potentially a useful tool as an objective macroscopic imaging modality for quick prescreening and cross-evaluation of facial skin lesions. Conclusion: DermaVision-PRO may be utilized as a useful tool for cross-evaluation of widely distributed facial skin lesions and efficient database management of patient information. PMID:20923462
Kawata, Masaaki; Sato, Chikara
2007-06-01
In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images derived from a huge number of raw images is key to high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. This newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.
Optical perception for detection of cutaneous T-cell lymphoma by multi-spectral imaging
NASA Astrophysics Data System (ADS)
Hsiao, Yu-Ping; Wang, Hsiang-Chen; Chen, Shih-Hua; Tsai, Chung-Hung; Yang, Jen-Hung
2014-12-01
In this study, the spectrum of each picture element of the patient's skin image was obtained by multi-spectral imaging technology. Spectra of normal and pathological skin were collected from 15 patients. Principal component analysis and principal component scores of the skin spectra were employed to distinguish the spectral characteristics of different diseases. Finally, skin regions with suspected cutaneous T-cell lymphoma (CTCL) lesions were successfully predicted by evaluation and classification of the spectra of pathological skin. The sensitivity and specificity of this technique were 89.65% and 95.18% after the analysis of about 109 patients. The probabilities of atopic dermatitis and psoriasis being misinterpreted as CTCL were 5.56% and 4.54%, respectively.
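A hedged sketch of the spectral pipeline: PCA scores of the per-pixel skin spectra feed a classifier. The abstract does not name the final classifier, so the linear discriminant used here is an assumption:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pc_score_classifier(spectra, labels, n_pcs=5):
    """spectra: (n_pixels, n_bands) reflectance spectra; labels: 0 = normal,
    1 = CTCL lesion. Returns the fitted PCA and classifier."""
    pca = PCA(n_components=n_pcs).fit(spectra)
    scores = pca.transform(spectra)          # principal component scores
    clf = LinearDiscriminantAnalysis().fit(scores, labels)
    return pca, clf  # apply pca.transform then clf.predict to new skin spectra
```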
NASA Astrophysics Data System (ADS)
Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.
2012-05-01
The present study proposes a fully integrated, semi-automatic and near real-time mode-operated image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with the center frequencies around: 100 GHz and 300 GHz. The quality control of aeronautics composite multi-layered materials and structures using Non-Destructive Testing is the main focus of this work. Image processing is applied on the 3-D images to extract useful information. The data is processed by extracting areas of interest. The detected areas are subjected to image analysis for more particular investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lance, C.; Eather, R.
1993-09-30
A low-light-level monochromatic imaging system was designed and fabricated which was optimized to detect and record optical emissions associated with high-power rf heating of the ionosphere. The instrument is capable of detecting very low intensities, of the order of 1 Rayleigh, from typical ionospheric atomic and molecular emissions. This is achieved through co-adding of ON images during heater pulses and subtraction of OFF (background) images between pulses. Images can be displayed and analyzed in real time and stored on optical disc for later analysis. Full image processing software is provided, customized for this application, which uses menu or mouse user interaction.
Research on image complexity evaluation method based on color information
NASA Astrophysics Data System (ADS)
Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo
2017-11-01
In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three grades: low, medium, and high. Image features are then extracted, and finally a function is established between the complexity value and the color characteristic model. The experimental results show that this evaluation method can objectively reconstruct the complexity of the image from the image features, that the results are in good agreement with human visual perception of complexity, and that the color image complexity value has a certain reference value.
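The paper defines its own function between the complexity value and a color characteristic model; as a stand-in illustration of a color-information complexity measure, the entropy of the hue histogram is one simple choice (the measure itself is our assumption, not the paper's):

```python
import numpy as np
import cv2

def color_complexity(bgr_img, bins=32):
    """Shannon entropy of the hue histogram as a crude color-complexity value;
    higher entropy means color is spread over more distinct hues."""
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins before the log
    return float(-(p * np.log2(p)).sum())     # ranges from 0 to log2(bins)
```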
MRT letter: Guided filtering of image focus volume for 3D shape recovery of microscopic objects.
Mahmood, Muhammad Tariq
2014-12-01
In this letter, a shape from focus (SFF) method is proposed that utilizes guided image filtering to enhance the image focus volume efficiently. First, the image focus volume is computed using a conventional focus measure. Then each layer of the focus volume is filtered using guided filtering. In this work, the all-in-focus image, which can be obtained from the initial focus volume, is used as the guidance image. Finally, an improved depth map is obtained from the filtered focus volume by maximizing the focus measure along the optical axis. The proposed SFF method is efficient and provides better depth maps. The improved performance is highlighted by several experiments using image sequences of simulated and real microscopic objects. The comparative analysis demonstrates the effectiveness of the proposed SFF method. © 2014 Wiley Periodicals, Inc.
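A compact sketch of the method's two ingredients: a guided filter (He et al.) applied layer-by-layer to the focus volume with the all-in-focus image as guidance, followed by an argmax along the optical axis. The window radius and eps are assumed defaults:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=7, eps=1e-3):
    """He et al.'s guided filter: edge-preserving smoothing of p guided by I."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mI, mp = mean(I), mean(p)
    cov_Ip = mean(I * p) - mI * mp
    var_I = mean(I * I) - mI * mI
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return mean(a) * I + mean(b)

def depth_from_focus(volume, guide):
    """volume: (n_frames, H, W) focus measures; guide: all-in-focus image in [0, 1].
    Returns the frame index of maximum (filtered) focus per pixel."""
    filtered = np.stack([guided_filter(guide, layer) for layer in volume])
    return filtered.argmax(axis=0)
```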
Optically gated beating-heart imaging
Taylor, Jonathan M.
2014-01-01
The constant motion of the beating heart presents an obstacle to clear optical imaging, especially 3D imaging, in small animals where direct optical imaging would otherwise be possible. Gating techniques exploit the periodic motion of the heart to computationally “freeze” this movement and overcome motion artifacts. Optically gated imaging represents a recent development of this, where image analysis is used to synchronize acquisition with the heartbeat in a completely non-invasive manner. This article will explain the concept of optical gating, discuss a range of different implementation strategies and their strengths and weaknesses. Finally we will illustrate the usefulness of the technique by discussing applications where optical gating has facilitated novel biological findings by allowing 3D in vivo imaging of cardiac myocytes in their natural environment of the beating heart. PMID:25566083
Optimization of a Biometric System Based on Acoustic Images
Izquierdo Fuente, Alberto; Del Val Puente, Lara; Villacorta Calvo, Juan J.; Raboso Mateos, Mariano
2014-01-01
On the basis of an acoustic biometric system that captures 16 acoustic images of a person at 4 frequencies and 4 positions, a study was carried out to improve the performance of the system. In a first stage, an analysis to determine which images provide more information to the system was carried out, showing that a set of 12 images allows the system to obtain results equivalent to using all 16 images. Finally, optimization techniques were used to obtain the set of weights associated with each acoustic image that maximizes the performance of the biometric system. These results significantly improve the performance of the preliminary system while reducing the acquisition time and computational burden, since the number of acoustic images was reduced. PMID:24616643
The Pan-STARRS PS1 Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Magnier, E.
The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It is also responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and to occasionally use the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.
Parallel processing considerations for image recognition tasks
NASA Astrophysics Data System (ADS)
Simske, Steven J.
2011-01-01
Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows (as diverse as optical character recognition [OCR], document classification, and barcode reading) to parallel pipelines. This can substantially decrease the time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be subdivided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach; examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
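Parallel processing by image, the simplest category above, is a one-liner with a process pool. A minimal Python sketch; analyze() is a hypothetical stand-in for whatever single-image task is being run:

```python
from multiprocessing import Pool

def analyze(path):
    # hypothetical stand-in for the real per-image task
    # (OCR, document classification, barcode reading, ...)
    return len(path)

def process_one(path):
    return path, analyze(path)

if __name__ == "__main__":
    paths = ["page_%04d.png" % i for i in range(1000)]
    with Pool() as pool:              # one worker process per CPU core by default
        results = dict(pool.map(process_one, paths))
```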
Stable image acquisition for mobile image processing applications
NASA Astrophysics Data System (ADS)
Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker
2015-02-01
Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance for their users. Their performance as well as their versatility increases over time. This creates the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images, requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated; it is triggered depending on the alignment of the device and the object as well as the image quality that can be achieved under consideration of motion and environmental effects.
NASA Astrophysics Data System (ADS)
Danala, Gopichandh; Wang, Yunzhi; Thai, Theresa; Gunderson, Camille; Moxley, Katherine; Moore, Kathleen; Mannel, Robert; Liu, Hong; Zheng, Bin; Qiu, Yuchen
2017-03-01
Predicting metastatic tumor response to chemotherapy at an early stage is critically important for improving the efficacy of clinical trials testing new chemotherapy drugs. However, using the current response evaluation criteria in solid tumors (RECIST) guidelines yields only a limited accuracy in predicting tumor response. In order to address this clinical challenge, we applied a Radiomics approach to develop a new quantitative image analysis scheme, aiming to accurately assess the tumor response to new chemotherapy treatment for advanced ovarian cancer patients. A retrospective dataset containing 57 patients was assembled, each of which has two sets of CT images: pre-therapy and 4-6 week follow-up CT images. A Radiomics-based image analysis scheme was then applied to these images, composed of three steps. First, the tumors depicted on the CT images were segmented by a hybrid tumor segmentation scheme. Then, a total of 115 features were computed from the segmented tumors, which can be grouped as 1) volume-based features; 2) density-based features; and 3) wavelet features. Finally, an optimal feature cluster was selected based on single-feature performance, and an equal-weighted fusion rule was applied to generate the final predicting score. The results demonstrated that the single feature achieved an area under the receiver operating characteristic curve (AUC) of 0.838+/-0.053. This investigation demonstrates that the Radiomics approach may have potential in the development of a high-accuracy predicting model for early-stage prognostic assessment of ovarian cancer patients.
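The abstract specifies an equal-weighted fusion of the selected features but not the normalization; a sketch assuming z-score normalization before averaging:

```python
import numpy as np

def fused_score(feature_matrix):
    """Equal-weighted fusion of the selected features: z-score each feature
    column across patients, then average into one predicting score per patient."""
    X = np.asarray(feature_matrix, dtype=float)      # (n_patients, n_features)
    z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
    return z.mean(axis=1)
```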
NASA Astrophysics Data System (ADS)
Gao, Lingyu; Li, Xinghua; Guo, Qianrui; Quan, Jing; Hu, Zhengyue; Su, Zhikun; Zhang, Dong; Liu, Peilu; Li, Haopeng
2018-01-01
The internal structure of an off-axis three-mirror system is typically complex. Mirror installation errors in assembly affect the imaging line-of-sight and degrade the image quality. Because of the complexity of the optical path in an off-axis three-mirror optical system, a straightforward theoretical analysis of the variations of the imaging line-of-sight is extremely difficult. In order to simplify the theoretical analysis, an equivalent single-mirror system is proposed in this paper. The mathematical model of the single-mirror system is established and accurate expressions for the imaging coordinates are derived. Using the simulation software ZEMAX, both the off-axis three-mirror model and the single-mirror model are established. By adjusting the position of the mirror and simulating the line-of-sight rotation of the optical system, the variations of the imaging coordinates are clearly observed. The final simulation results are: in the off-axis three-mirror system, the sensitivity of the imaging coordinate to the rotation of the line-of-sight is approximately 30 um/″; in the single-mirror system it is 31.5 um/″. Compared with the simulation results of the off-axis three-mirror model, the 5% relative error of the single-mirror model satisfies the requirement of equivalent analysis and verifies its validity. This paper presents a new method to analyze how mirror installation errors in the off-axis three-mirror system influence the imaging line-of-sight. Moreover, the off-axis three-mirror model is equivalent to the single-mirror model in theoretical analysis.
Quantum watermarking scheme through Arnold scrambling and LSB steganography
NASA Astrophysics Data System (ADS)
Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping
2017-09-01
Based on the NEQR representation of quantum images, a new quantum gray-scale image watermarking scheme is proposed through Arnold scrambling and least significant bit (LSB) steganography. The sizes of the carrier image and the watermark image are assumed to be 2n × 2n and n × n, respectively. Firstly, a classical n × n watermark image with 8-bit gray scale is expanded to a 2n × 2n image with 2-bit gray scale. Secondly, through the module of PA-MOD N, the expanded watermark image is scrambled to a meaningless image by the Arnold transform. Then, the expanded scrambled image is embedded into the carrier image by the LSB steganography method. Finally, a time complexity analysis is given. The simulation experiment results show that our quantum circuit has lower time complexity, and the proposed watermarking scheme is superior to others.
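A classical (non-quantum) simulation of the two building blocks is straightforward. The Arnold cat map below uses the standard (x, y) -> (x + y, x + 2y) mod N form, and the embedding overwrites the two least significant bits of the 8-bit carrier with the 2-bit scrambled watermark; both details are our reading of the scheme, not verified against the paper's circuits:

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat map scrambling of a square N x N image."""
    N = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scr = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                scr[(x + y) % N, (x + 2 * y) % N] = out[x, y]
        out = scr
    return out

def embed_lsb(carrier, watermark_2bit):
    """Replace the two least significant bits of the 8-bit carrier with the
    2-bit (values 0..3) scrambled watermark of the same size."""
    return (carrier & 0b11111100) | (watermark_2bit & 0b11)
```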
A survey of MRI-based medical image analysis for brain tumor studies
NASA Astrophysics Data System (ADS)
Bauer, Stefan; Wiest, Roland; Nolte, Lutz-P.; Reyes, Mauricio
2013-07-01
MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines.
Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.
Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A
2011-04-01
Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique. Copyright © 2011 Elsevier Inc. All rights reserved.
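The core classification statistic can be sketched directly: a pooled sample covariance from the artery and vein ROI feature samples, then the Mahalanobis distance of each voxel's feature vector to the two class means (the variable layout is an assumption):

```python
import numpy as np

def pooled_covariance(X_art, X_vein):
    """Pooled sample covariance of artery and vein feature samples (rows = samples)."""
    n1, n2 = len(X_art), len(X_vein)
    S1 = np.cov(X_art, rowvar=False)
    S2 = np.cov(X_vein, rowvar=False)
    return ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)

def mahalanobis(x, mu, S):
    """Mahalanobis distance of feature vector x to class mean mu."""
    d = x - mu
    return float(np.sqrt(d @ np.linalg.solve(S, d)))

# classify a voxel by the smaller distance to the artery and vein ROI means
```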
Petralia, Giuseppe; Musi, Gennaro; Padhani, Anwar R; Summers, Paul; Renne, Giuseppe; Alessi, Sarah; Raimondi, Sara; Matei, Deliu V; Renne, Salvatore L; Jereczek-Fossa, Barbara A; De Cobelli, Ottavio; Bellomi, Massimo
2015-02-01
To investigate whether use of multiparametric magnetic resonance (MR) imaging-directed intraoperative frozen-section (IFS) analysis during nerve-sparing robot-assisted radical prostatectomy reduces the rate of positive surgical margins. This retrospective analysis of prospectively acquired data was approved by an institutional ethics committee, and the requirement for informed consent was waived. Data were reviewed for 134 patients who underwent preoperative multiparametric MR imaging (T2 weighted, diffusion weighted, and dynamic contrast-material enhanced) and nerve-sparing robot-assisted radical prostatectomy, during which IFS analysis was used, and secondary resections were performed when IFS results were positive for cancer. Control patients (n = 134) matched for age, prostate-specific antigen level, and stage were selected from a pool of 322 patients who underwent nerve-sparing robot-assisted radical prostatectomy without multiparametric MR imaging and IFS analysis. Rates of positive surgical margins were compared by means of the McNemar test, and a multivariate conditional logistic regression model was used to estimate the odds ratio of positive surgical margins for patients who underwent MR imaging and IFS analysis compared with control subjects. Eighteen patients who underwent MR imaging and IFS analysis underwent secondary resections, and 13 of these patients were found to have negative surgical margins at final pathologic examination. Positive surgical margins were found less frequently in the patients who underwent MR imaging and IFS analysis than in control patients (7.5% vs 18.7%, P = .01). When the differences in risk factors are taken into account, patients who underwent MR imaging and IFS had one-seventh the risk of having positive surgical margins relative to control patients (adjusted odds ratio: 0.15; 95% confidence interval: 0.04, 0.61). The significantly lower rate of positive surgical margins compared with that in control patients provides preliminary evidence of the positive clinical effect of multiparametric MR imaging-directed IFS analysis for patients who undergo prostatectomy. © RSNA, 2014.
Proceedings of the Airborne Imaging Spectrometer Data Analysis Workshop
NASA Technical Reports Server (NTRS)
Vane, G. (Editor); Goetz, A. F. H. (Editor)
1985-01-01
The Airborne Imaging Spectrometer (AIS) Data Analysis Workshop was held at the Jet Propulsion Laboratory on April 8 to 10, 1985. It was attended by 92 people who heard reports on 30 investigations currently under way using AIS data that have been collected over the past two years. Written summaries of 27 of the presentations are in these Proceedings. Many of the results presented at the Workshop are preliminary because most investigators have been working with this fundamentally new type of data for only a relatively short time. Nevertheless, several conclusions can be drawn from the Workshop presentations concerning the value of imaging spectrometry to Earth remote sensing. First, work with AIS has shown that direct identification of minerals through high spectral resolution imaging is a reality for a wide range of materials and geological settings. Second, there are strong indications that high spectral resolution remote sensing will enhance the ability to map vegetation species. There are also good indications that imaging spectrometry will be useful for biochemical studies of vegetation. Finally, there are a number of new data analysis techniques under development which should lead to more efficient and complete information extraction from imaging spectrometer data. The results of the Workshop indicate that as experience is gained with this new class of data, and as new analysis methodologies are developed and applied, the value of imaging spectrometry should increase.
Imbibition of wheat seeds: Application of image analysis
NASA Astrophysics Data System (ADS)
Lev, Jakub; Blahovec, Jiří
2017-10-01
Image analysis is widely used for monitoring seeds during germination, and it is often the final phase of germination that receives the greatest attention. However, the initial phase of germination (the so-called imbibition) also exhibits interesting behaviour. This work shows that image analysis has significant potential in studying imbibition. Herein, a total of 120 seeds were analysed during germination tests, and information about seed size and shape was stored and analysed. It was found that the imbibition can be divided into two newly defined parts. The first one (`abrupt imbibition') consists mainly of the swelling of the seed embryo part and lasts approximately one hour. The second one, referred to as `main imbibition', consists mainly of spatial expansion caused by imbibition in the other parts of the seed. The results presented are supported by the evolution of the seed cross-sectional area and shape parameters, and by direct observation.
Sisco, Edward; Demoranville, Leonard T; Gillen, Greg
2013-09-10
The feasibility of using C60(+) cluster primary ion bombardment secondary ion mass spectrometry (C60(+) SIMS) for analyzing the chemical composition of fingerprints is evaluated. It was found that C60(+) SIMS could be used to detect and image the spatial localization of a number of sebaceous and eccrine components in fingerprints. These analyses were also found not to be hindered by the use of common latent print powder development techniques. Finally, monitoring the depth distribution of fingerprint constituents was found to be possible, a capability which has not been shown using other chemical imaging techniques. This paper illustrates a number of strengths and potential weaknesses of C60(+) SIMS as an additional or complementary technique for the chemical analysis of fingerprints. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Sparse models for correlative and integrative analysis of imaging and genetic data
Lin, Dongdong; Cao, Hongbao; Calhoun, Vince D.
2014-01-01
The development of advanced medical imaging technologies and high-throughput genomic measurements has enhanced our ability to understand their interplay as well as their relationship with human behavior by integrating these two types of datasets. However, the high dimensionality and heterogeneity of these datasets presents a challenge to conventional statistical methods; there is a high demand for the development of both correlative and integrative analysis approaches. Here, we review our recent work on developing sparse representation based approaches to address this challenge. We show how sparse models are applied to the correlation and integration of imaging and genetic data for biomarker identification. We present examples on how these approaches are used for the detection of risk genes and classification of complex diseases such as schizophrenia. Finally, we discuss future directions on the integration of multiple imaging and genomic datasets including their interactions such as epistasis. PMID:25218561
NASA Astrophysics Data System (ADS)
Li, Zhenjiang; Wang, Weilan
2018-04-01
Thangka is a treasure of Tibetan culture. In its digital preservation, most current research focuses on the content of Thangka images, not the fabrication process. For silk embroidered Thangka of "Guo Tang", there are two craft methods, namely weave embroidery and pile embroidery. The local texture of weave embroidered Thangka is rough, while that of pile embroidered Thangka is smoother. In order to distinguish these two fabrication processes from images, an effective color-block segmentation algorithm is designed first, and the obtained color blocks contain the local texture patterns of the Thangka image. Second, the local texture features of the color blocks are extracted and screened. Finally, the selected features are analyzed experimentally. The experimental analysis shows that the proposed features reflect well the difference between the weave embroidery and pile embroidery methods.
A versatile atomic force microscope integrated with a scanning electron microscope.
Kreith, J; Strunz, T; Fantner, E J; Fantner, G E; Cordill, M J
2017-05-01
A versatile atomic force microscope (AFM), which can be installed in a scanning electron microscope (SEM), is introduced. The flexible design of the instrument enables correlated analysis for different experimental configurations, such as AFM imaging directly after nanoindentation in vacuum. In order to demonstrate the capabilities of the specially designed AFM installed inside a SEM, slip steps emanating around nanoindents in single crystalline brass were examined. This example showcases how the combination of AFM and SEM imaging can be utilized for quantitative dislocation analysis through the measurement of the slip step heights without the hindrance of oxide formation. Finally, an in situ nanoindentation technique is introduced, illustrating the use of AFM imaging during indentation experiments to examine plastic deformation occurring under the indenter tip. The mechanical indentation data are correlated to the SEM and AFM images to estimate the number of dislocations emitted to the surface.
Intershot Analysis of Flows in DIII-D
NASA Astrophysics Data System (ADS)
Meyer, W. H.; Allen, S. L.; Samuell, C. M.; Howard, J.
2016-10-01
Analysis of the DIII-D flow diagnostic data requires demodulation of interference images and inversion of the resulting line-integrated emissivity and flow (phase) images. Four response matrices are pre-calculated: the emissivity line integral and the line integrals of the scalar product of the lines-of-sight with the orthogonal unit vectors of parallel flow. Equilibrium data determine the relative weights of the component matrices used in the final flow inversion matrix. Serial processing has been used for the 800x600 pixel image of the lower divertor viewing flow camera. The full cross-section viewing camera will require parallel processing of its 2160x2560 pixel image. We will discuss using a POSIX thread pool and a Tesla K40c GPU in the processing of this data. Prepared by LLNL under Contract DE-AC52-07NA27344. This material is based upon work supported by the U.S. DOE, Office of Science, Fusion Energy Sciences.
A Novel Method for Block Size Forensics Based on Morphological Operations
NASA Astrophysics Data System (ADS)
Luo, Weiqi; Huang, Jiwu; Qiu, Guoping
Passive forensics analysis aims to find out how multimedia data is acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. The experimental results evaluated on over 1300 nature images show the effectiveness of our proposed method. Compared with existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
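A hedged sketch of the pre-MLE stages: a horizontal second-difference profile plays the role of the cross-differential filter, and candidate block sizes are scored by the on-grid versus off-grid energy ratio (this scoring rule is a simplification of the paper's morphological cleaning and MLE steps):

```python
import numpy as np

def estimate_block_size(img, candidates=range(2, 33)):
    """Score candidate block sizes from the periodicity of second differences;
    boundary columns of the true block grid carry more energy than interior ones."""
    f = img.astype(float)
    # second difference at column x: |2 f(x) - f(x-1) - f(x+1)|, summed over rows
    d = np.abs(2 * f[:, 1:-1] - f[:, :-2] - f[:, 2:]).sum(axis=0)
    col = np.arange(1, d.size + 1)          # image column index of each entry
    scores = {}
    for B in candidates:
        on, off = d[col % B == 0], d[col % B != 0]
        if on.size and off.size:
            scores[B] = on.mean() / (off.mean() + 1e-9)
    # note: integer multiples of the true size also score high; the paper's
    # MLE step resolves this ambiguity
    return max(scores, key=scores.get)
```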
Effect of resin infiltration on white spot lesions after debonding orthodontic brackets.
Hammad, Shaza M; El Banna, Mai; El Zayat, Inas; Mohsen, Mohamed Abdel
2012-02-01
To evaluate the effect of application of a resin infiltration material on masking white spot lesions (WSLs) after bracket removal, 18 patients were divided into two groups of nine patients each by a visual score based on the extent of demineralization, according to the classification of the WSLs: Group 1, visible WSLs without surface disruption, and Group 2, WSLs with a roughened surface but not requiring restoration. Three successive photographs were taken of every patient: immediately after bracket removal, 1 week after oral hygiene measures, and after Icon material application. The JPEG images were imported into image analysis software (ImageJ version 1.33u for Windows XP, US National Institutes of Health), which converted the images into histograms of gray scale (0 to 255). Initial and final images were compared for the percentage of the WSL area masked. For both groups, a statistically significant difference at P<0.05 was obtained: for WSLs in Group 1, the mean gray-scale values for the initial and final photographs were 126.091 +/- 13.452 and 221.268 +/- 9.350, respectively (Wilcoxon's signed rank test, P = 0.038); for WSLs in Group 2, the means were 95.585 +/- 20.973 and 155.612 +/- 31.203, respectively (Wilcoxon's signed rank test, P = 0.029).
Scattering Removal for Finger-Vein Image Restoration
Yang, Jinfeng; Zhang, Ben; Shi, Yihua
2012-01-01
Finger-vein recognition has received increased attention recently. However, the finger-vein images are always captured in poor quality. This certainly makes finger-vein feature representation unreliable, and further impairs the accuracy of finger-vein recognition. In this paper, we first give an analysis of the intrinsic factors causing finger-vein image degradation, and then propose a simple but effective image restoration method based on scattering removal. To give a proper description of finger-vein image degradation, a biological optical model (BOM) specific to finger-vein imaging is proposed according to the principle of light propagation in biological tissues. Based on BOM, the light scattering component is sensibly estimated and properly removed for finger-vein image restoration. Finally, experimental results demonstrate that the proposed method is powerful in enhancing the finger-vein image contrast and in improving the finger-vein image matching accuracy. PMID:22737028
Yeh, Rong-Guan; Lin, Chung-Wu; Abbod, Maysam F.; Shieh, Jiann-Shing
2012-01-01
A detrended fluctuation analysis (DFA) method is applied to image analysis. A two-dimensional (2D) DFA algorithm is proposed for recharacterizing images of lymph sections. Burkitt lymphoma (BL) and diffuse large B-cell lymphoma (DLBCL) have significantly different 5-year survival rates after multiagent chemotherapy, so distinguishing BL from DLBCL is very important. In this study, eighteen BL images were classified as group A, which have one to five cytogenetic changes. Ten BL images were classified as group B, which have more than five cytogenetic changes. Both group A and group B BLs are aggressive lymphomas, which grow very fast and require more intensive chemotherapy. Finally, ten DLBCL images were classified as group C. The short-term correlation exponent α1 values of DFA for groups A, B, and C were 0.370 ± 0.033, 0.382 ± 0.022, and 0.435 ± 0.053, respectively. The α1 value of the BL images was significantly lower (P < 0.05) than that of the DLBCL images, but there was no difference between group A and group B BLs. Hence, the α1 value based on the DFA concept can clearly distinguish BL and DLBCL images. PMID:23365623
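For readers unfamiliar with 2D DFA, a compact version of the exponent computation (Gu-Zhou style plane detrending; the scale set and fitting details are illustrative choices, not the paper's exact configuration):

```python
import numpy as np

def dfa2d_alpha(img, scales=(4, 8, 16, 32)):
    """Two-dimensional DFA exponent. For each scale s the image is tiled into
    s x s blocks; in each block the cumulative-sum surface is detrended by a
    least-squares plane and the RMS residual is recorded. alpha is the slope
    of log F(s) versus log s."""
    x = img.astype(float)
    Fs = []
    for s in scales:
        res = []
        for i in range(0, x.shape[0] - s + 1, s):
            for j in range(0, x.shape[1] - s + 1, s):
                block = x[i:i + s, j:j + s]
                u = block.cumsum(axis=0).cumsum(axis=1)   # profile surface
                ii, jj = np.mgrid[0:s, 0:s]
                A = np.column_stack([ii.ravel(), jj.ravel(), np.ones(s * s)])
                coef, *_ = np.linalg.lstsq(A, u.ravel(), rcond=None)
                res.append(np.mean((u.ravel() - A @ coef) ** 2))
        Fs.append(np.sqrt(np.mean(res)))
    slope, _ = np.polyfit(np.log(scales), np.log(Fs), 1)
    return slope
```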
NASA Astrophysics Data System (ADS)
Lemaitre, P.; Brunel, M.; Rondeau, A.; Porcheron, E.; Gréhan, G.
2015-12-01
According to changes in aircraft certifications rules, instrumentation has to be developed to alert the flight crews of potential icing conditions. The technique developed needs to measure in real time the amount of ice and liquid water encountered by the plane. Interferometric imaging offers an interesting solution: It is currently used to measure the size of regular droplets, and it can further measure the size of irregular particles from the analysis of their speckle-like out-of-focus images. However, conventional image processing needs to be speeded up to be compatible with the real-time detection of icing conditions. This article presents the development of an optimised algorithm to accelerate image processing. The algorithm proposed is based on the detection of each interferogram with the use of the gradient pair vector method. This method is shown to be 13 times faster than the conventional Hough transform. The algorithm is validated on synthetic images of mixed phase clouds, and finally tested and validated in laboratory conditions. This algorithm should have important applications in the size measurement of droplets and ice particles for aircraft safety, cloud microphysics investigation, and more generally in the real-time analysis of triphasic flows using interferometric particle imaging.
Zhang, Xiaolei; Liu, Fei; He, Yong; Li, Xiaoli
2012-01-01
Hyperspectral imaging in the visible and near infrared (VIS-NIR) region was used to develop a novel method for discriminating different varieties of commodity maize seeds. Firstly, hyperspectral images of 330 samples of six varieties of maize seeds were acquired using a hyperspectral imaging system in the 380–1,030 nm wavelength range. Secondly, principal component analysis (PCA) and kernel principal component analysis (KPCA) were used to explore the internal structure of the spectral data. Thirdly, three optimal wavelengths (523, 579 and 863 nm) were selected by implementing PCA directly on each image. Then four textural variables including contrast, homogeneity, energy and correlation were extracted from gray level co-occurrence matrix (GLCM) of each monochromatic image based on the optimal wavelengths. Finally, several models for maize seeds identification were established by least squares-support vector machine (LS-SVM) and back propagation neural network (BPNN) using four different combinations of principal components (PCs), kernel principal components (KPCs) and textural features as input variables, respectively. The recognition accuracy achieved in the PCA-GLCM-LS-SVM model (98.89%) was the most satisfactory one. We conclude that hyperspectral imaging combined with texture analysis can be implemented for fast classification of different varieties of maize seeds. PMID:23235456
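The four GLCM variables are available off the shelf. A sketch with scikit-image, assuming an 8-bit monochromatic band image at one of the selected wavelengths (averaging over four angles is our choice; the abstract does not state it):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in skimage < 0.19

def glcm_features(band_img):
    """Contrast, homogeneity, energy and correlation from an 8-bit (uint8)
    band image, averaged over four directions at distance 1."""
    glcm = graycomatrix(band_img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return [float(graycoprops(glcm, p).mean())
            for p in ("contrast", "homogeneity", "energy", "correlation")]
```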
2004-02-04
KENNEDY SPACE CENTER, FLA. - Armando Oliu, Final Inspection Team lead for the Shuttle program, speaks to reporters about the aid the Image Analysis Lab is giving the FBI in a kidnapping case. Behind him at right is Mike Rein, External Affairs division chief. Oliu oversees the image lab that is using an advanced SGI® TP9500 data management system to review the tape of the kidnapping in progress in Sarasota, Fla. KSC installed the new $3.2 million system in preparation for Return to Flight of the Space Shuttle fleet. The lab is studying the Sarasota kidnapping video to provide any new information possible to law enforcement officers. KSC is joining NASA’s Marshall Space Flight Center in Alabama in reviewing the tape.
Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators
Bai, Xiangzhi
2015-01-01
The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229
BahadarKhan, Khan; A Khaliq, Amir; Shahid, Muhammad
2016-01-01
Diabetic retinopathy (DR) harms the retinal blood vessels in the eye, causing visual deficiency. The appearance and structure of blood vessels in retinal images play an essential part in the diagnosis of eye diseases. We propose a computationally light unsupervised automated technique, with promising results, for the detection of retinal vasculature using a morphological Hessian-based approach and region-based Otsu thresholding. Contrast-limited adaptive histogram equalization (CLAHE) and morphological filters are used for enhancement and to remove low-frequency noise or geometrical objects, respectively. The Hessian matrix and eigenvalue approach is used in a modified form at two different scales to extract wide- and thin-vessel enhanced images separately. Otsu thresholding is then applied in a novel way to classify vessel and non-vessel pixels in both enhanced images. Finally, postprocessing steps are used to eliminate unwanted regions or segments, non-vessel pixels, disease abnormalities, and noise, to obtain the final segmented image. The proposed technique has been analyzed on the openly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases, along with ground truth data that has been precisely marked by experts. PMID:27441646
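A rough scikit-image analogue of the pipeline, with the built-in Frangi vesselness standing in for the paper's modified Hessian measure and a global Otsu replacing the region-based variant; the two sigma ranges for thin and wide vessels are assumptions:

```python
import numpy as np
from skimage import exposure, filters

def segment_vessels(green_channel):
    """CLAHE enhancement, Hessian-based vesselness at two scale ranges,
    then Otsu thresholding of each enhanced image."""
    g = exposure.equalize_adapthist(green_channel)            # CLAHE
    thin = filters.frangi(g, sigmas=np.arange(0.5, 2.0, 0.5)) # thin vessels
    wide = filters.frangi(g, sigmas=np.arange(2.0, 5.0, 1.0)) # wide vessels
    masks = [v > filters.threshold_otsu(v) for v in (thin, wide)]
    return masks[0] | masks[1]   # union of thin- and wide-vessel pixels
```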
Lytro camera technology: theory, algorithms, performance analysis
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio
2013-03-01
The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, considering the Lytro camera as a black box, and uses our interpretation of Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed as the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with the macular edema and age-related macular degeneration), which demonstrated its effectiveness. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Genotype-phenotype association study via new multi-task learning model
Huo, Zhouyuan; Shen, Dinggang
2018-01-01
Research on the associations between genetic variations and imaging phenotypes is developing with the advance in high-throughput genotype and brain image techniques. Regression analysis of single nucleotide polymorphisms (SNPs) and imaging measures as quantitative traits (QTs) has been proposed to identify the quantitative trait loci (QTL) via multi-task learning models. Recent studies consider the interlinked structures within SNPs and imaging QTs through group lasso, e.g. ℓ2,1-norm, leading to better predictive results and insights of SNPs. However, group sparsity is not enough for representing the correlation between multiple tasks and ℓ2,1-norm regularization is not robust either. In this paper, we propose a new multi-task learning model to analyze the associations between SNPs and QTs. We suppose that low-rank structure is also beneficial to uncover the correlation between genetic variations and imaging phenotypes. Finally, we conduct regression analysis of SNPs and QTs. Experimental results show that our model is more accurate in prediction than compared methods and presents new insights of SNPs. PMID:29218896
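A plausible form of such an objective, written in LaTeX (our hedged reconstruction, not the authors' exact model): with SNP matrix X, imaging QTs Y, and weight matrix W,

    \min_{W} \; \tfrac{1}{2}\,\lVert XW - Y \rVert_F^2
              \;+\; \gamma_1 \lVert W \rVert_{2,1}
              \;+\; \gamma_2 \lVert W \rVert_{*}

where the l2,1-norm encourages row-wise (SNP-wise) group sparsity and the nuclear norm encourages the low-rank structure that couples the imaging tasks.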
On the Usage of GPUs for Efficient Motion Estimation in Medical Image Sequences
Thiyagalingam, Jeyarajan; Goodman, Daniel; Schnabel, Julia A.; Trefethen, Anne; Grau, Vicente
2011-01-01
Images are ubiquitous in biomedical applications from basic research to clinical practice. With the rapid increase in resolution, dimensionality of the images and the need for real-time performance in many applications, computational requirements demand proper exploitation of multicore architectures. Towards this, GPU-specific implementations of image analysis algorithms are particularly promising. In this paper, we investigate the mapping of an enhanced motion estimation algorithm to novel GPU-specific architectures, the resulting challenges and benefits therein. Using a database of three-dimensional image sequences, we show that the mapping leads to substantial performance gains, up to a factor of 60, and can provide near-real-time experience. We also show how architectural peculiarities of these devices can be best exploited in the benefit of algorithms, most specifically for addressing the challenges related to their access patterns and different memory configurations. Finally, we evaluate the performance of the algorithm on three different GPU architectures and perform a comprehensive analysis of the results. PMID:21869880
Final Report 2007: DOE-FG02-87ER60561
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kilbourn, Michael R
2007-04-26
This project involved a multi-faceted approach to the improvement of techniques used in Positron Emission Tomography (PET), from radiochemistry to image processing and data analysis. New methods for radiochemical syntheses were examined, new radiochemicals prepared for evaluation and eventual use in human PET studies, and new pre-clinical methods examined for validation of biochemical parameters in animal studies. The value of small animal PET imaging in measuring small changes of in vivo biochemistry was examined and directly compared to traditional tissue sampling techniques. In human imaging studies, the ability to perform single experimental sessions utilizing two overlapping injections of radiopharmaceuticals was tested, and it was shown that valid biochemical measures for both radiotracers can be obtained through careful pharmacokinetic modeling of the PET emission data. Finally, improvements in reconstruction algorithms for PET data from small animal PET scanners were realized and these have been implemented in commercial releases. Together, the project represented an integrated effort to improve and extend all basic science aspects of PET imaging at both the animal and human level.
Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis
Peng, Zhenyun; Zhang, Yaohui
2014-01-01
Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and presentation of the hair region is one of the key components for automatic synthesis of human facial caricature. In this paper, an automatic hair detection algorithm for the application of automatic synthesis of facial caricature based on a single image is proposed. Firstly, hair regions in training images are labeled manually, and then the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is further optimized according to the graph cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments proved that with our proposed hair segmentation algorithm the facial caricatures are vivid and satisfying. PMID:24592182
TESS Data Processing and Quick-look Pipeline
NASA Astrophysics Data System (ADS)
Fausnaugh, Michael; Huang, Xu; Glidden, Ana; Guerrero, Natalia; TESS Science Office
2018-01-01
We describe the data analysis procedures and pipelines for the Transiting Exoplanet Survey Satellite (TESS). We briefly review the processing pipeline developed and implemented by the Science Processing Operations Center (SPOC) at NASA Ames, including pixel/full-frame image calibration, photometric analysis, pre-search data conditioning, transiting planet search, and data validation. We also describe data-quality diagnostic analyses and photometric performance assessment tests. Finally, we detail a "quick-look pipeline" (QLP) that has been developed by the MIT branch of the TESS Science Office (TSO) to provide a fast and adaptable routine to search for planet candidates in the 30 minute full-frame images.
Characterization of fiber diameter using image analysis
NASA Astrophysics Data System (ADS)
Baheti, S.; Tunak, M.
2017-10-01
Due to their high surface area and porosity, the applications of nanofibers have increased in recent years. In the production process, determination of the average fiber diameter and fiber orientation is crucial for quality assessment. The objective of the present study was to compare the relative performance of different methods discussed in the literature for estimation of fiber diameter. In this work, the automated fiber diameter analysis methods available in the literature were implemented and validated based on simulated images of known fiber diameter. Finally, all methods were compared for their reliable and accurate estimation of fiber diameter in electrospun nanofiber membranes based on the obtained mean and standard deviation.
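One common distance-transform formulation of this measurement, sketched in Python (an illustrative assumption on our part, not necessarily one of the methods compared in the study): skeletonize a binary fiber mask and read local fiber radii from the Euclidean distance transform along the skeleton.

    import numpy as np
    from scipy.ndimage import distance_transform_edt
    from skimage.morphology import skeletonize

    def fiber_diameters(binary_mask, pixel_size_nm=1.0):
        # The distance transform gives, at each skeleton pixel, the
        # distance to the nearest background pixel, i.e. the local radius.
        dist = distance_transform_edt(binary_mask)
        skel = skeletonize(binary_mask)
        diameters = 2.0 * dist[skel] * pixel_size_nm
        return diameters.mean(), diameters.std()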
Two-dimensional PCA-based human gait identification
NASA Astrophysics Data System (ADS)
Chen, Jinyan; Wu, Rongteng
2012-11-01
It is necessary to recognize people automatically through visual surveillance for public security reasons. Human gait based identification focuses on recognizing a person automatically from walking video using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper a two-dimensional Principal Component Analysis and temporal-space analysis based human gait identification method is proposed. Using background estimation and image subtraction we can get a binary image sequence from the surveillance video. By comparing the difference of two adjacent images in the gait image sequence, we can get a difference binary image sequence. Every binary difference image indicates the body moving mode during a person's walk. We use the following steps to extract the temporal-space features from the difference binary image sequence: projecting one difference image onto the Y axis or X axis gives two vectors; projecting every difference image in the sequence onto the Y axis or X axis gives two matrices. These two matrices characterize the style of one walk. Then Two-Dimensional Principal Component Analysis (2DPCA) is used to transform these two matrices to two vectors while at the same time keeping the maximum separability. Finally the similarity of two human gait image sequences is calculated as the Euclidean distance between the two vectors. The performance of our method is illustrated using the CASIA Gait Database.
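A minimal NumPy sketch of the temporal-space feature extraction (our simplification: it keeps the axis projections and the final Euclidean comparison but omits the 2DPCA transform):

    import numpy as np

    def gait_projection_features(binary_seq):
        # binary_seq: (T, H, W) array of silhouette images.
        diff = np.abs(np.diff(binary_seq.astype(np.int8), axis=0))  # (T-1, H, W)
        proj_y = diff.sum(axis=2)   # each row: profile along the Y (row) axis
        proj_x = diff.sum(axis=1)   # each row: profile along the X (column) axis
        return proj_y, proj_x

    def gait_distance(feat_a, feat_b):
        # Euclidean distance between flattened projection matrices
        # (assumes both sequences were normalized to the same shape).
        return np.linalg.norm(feat_a.ravel() - feat_b.ravel())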
NASA Astrophysics Data System (ADS)
Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin
2017-01-01
We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
An Integrative Object-Based Image Analysis Workflow for Uav Images
NASA Astrophysics Data System (ADS)
Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong
2016-06-01
In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts with the definition of an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing the post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.
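The initial over-segmentation step can be reproduced with scikit-image's SLIC implementation; in the sketch below the segment count and compactness are assumed parameters and 'mosaic.png' is a placeholder for the panoramic image.

    from skimage import io
    from skimage.segmentation import slic

    image = io.imread('mosaic.png')   # placeholder path
    # Label image of super-pixels: the initial partition for the BPT.
    superpixels = slic(image, n_segments=2000, compactness=10.0, start_label=1)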
Improvement of automatic hemorrhage detection methods using brightness correction on fundus images
NASA Astrophysics Data System (ADS)
Hatanaka, Yuji; Nakagawa, Toshiaki; Hayashi, Yoshinori; Kakogawa, Masakatsu; Sawada, Akira; Kawase, Kazuhide; Hara, Takeshi; Fujita, Hiroshi
2008-03-01
We have been developing several automated methods for detecting abnormalities in fundus images. The purpose of this study is to improve our automated hemorrhage detection method to help diagnose diabetic retinopathy. We propose a new method for preprocessing and false positive elimination in the present study. The brightness of the fundus image was changed by a nonlinear curve with brightness values of the hue saturation value (HSV) space. In order to emphasize brown regions, gamma correction was performed on each red, green, and blue-bit image. Subsequently, the histograms of each red, green, and blue-bit image were extended. After that, the hemorrhage candidates were detected. The brown regions indicated hemorrhages and blood vessels, and their candidates were detected using density analysis. We removed the large candidates such as blood vessels. Finally, false positives were removed by using a 45-feature analysis. To evaluate the new method for the detection of hemorrhages, we examined 125 fundus images, including 35 images with hemorrhages and 90 normal images. The sensitivity and specificity for the detection of abnormal cases were 80% and 88%, respectively. These results indicate that the new method may effectively improve the performance of our computer-aided diagnosis system for hemorrhages.
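A minimal sketch of the per-channel preprocessing in Python/NumPy (the gamma value is an assumption; the nonlinear brightness curve in HSV space described above is omitted here):

    import numpy as np

    def enhance_fundus(rgb, gamma=1.5):
        # Per-channel gamma correction (to emphasize brown regions),
        # then a linear histogram stretch of each R, G, B channel.
        img = rgb.astype(np.float64) / 255.0
        img = img ** gamma                        # assumed gamma value
        lo = img.min(axis=(0, 1), keepdims=True)  # per-channel stretch to [0, 1]
        hi = img.max(axis=(0, 1), keepdims=True)
        img = (img - lo) / np.maximum(hi - lo, 1e-12)
        return (img * 255).astype(np.uint8)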
Deep Learning in Nuclear Medicine and Molecular Imaging: Current Perspectives and Future Directions.
Choi, Hongyoon
2018-04-01
Recent advances in deep learning have impacted various scientific and industrial fields. Due to the rapid application of deep learning in biomedical data, molecular imaging has also started to adopt this technique. In this regard, it is expected that deep learning will potentially affect the roles of molecular imaging experts as well as clinical decision making. This review firstly offers a basic overview of deep learning particularly for image data analysis to give knowledge to nuclear medicine physicians and researchers. Because of the unique characteristics and distinctive aims of various types of molecular imaging, deep learning applications can be different from other fields. In this context, the review deals with current perspectives of deep learning in molecular imaging particularly in terms of development of biomarkers. Finally, future challenges of deep learning application for molecular imaging and future roles of experts in molecular imaging will be discussed.
Method and apparatus for detecting a desired behavior in digital image data
Kegelmeyer, Jr., W. Philip
1997-01-01
A method for detecting stellate lesions in digitized mammographic image data includes the steps of prestoring a plurality of reference images, calculating a plurality of features for each of the pixels of the reference images, and creating a binary decision tree from features of randomly sampled pixels from each of the reference images. Once the binary decision tree has been created, a plurality of features, preferably including an ALOE feature (analysis of local oriented edges), are calculated for each of the pixels of the digitized mammographic data. Each of these plurality of features of each pixel are input into the binary decision tree and a probability is determined, for each of the pixels, corresponding to the likelihood of the presence of a stellate lesion, to create a probability image. Finally, the probability image is spatially filtered to enforce local consensus among neighboring pixels and the spatially filtered image is output.
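A hedged sketch of this pipeline using scikit-learn's decision tree as a stand-in for the patent's binary decision tree (feature extraction, including the ALOE feature, is left abstract; the smoothing window size is an assumption):

    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.tree import DecisionTreeClassifier

    def probability_image(train_feats, train_labels, image_feats, shape):
        # train_feats: (n_sampled_pixels, n_features) from reference images
        # train_labels: 1 where the sampled pixel lies in a lesion, else 0
        #               (both classes must be present in the sample)
        # image_feats: (H*W, n_features) per-pixel features of the test image
        tree = DecisionTreeClassifier().fit(train_feats, train_labels)
        prob = tree.predict_proba(image_feats)[:, 1].reshape(shape)
        # Spatial filtering enforces local consensus among neighboring pixels.
        return uniform_filter(prob, size=5)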
Steganalysis based on reducing the differences of image statistical characteristics
NASA Astrophysics Data System (ADS)
Wang, Ran; Niu, Shaozhang; Ping, Xijian; Zhang, Tao
2018-04-01
Compared with the process of embedding, the image contents make a more significant impact on the differences of image statistical characteristics. This makes image steganalysis a classification problem with larger within-class scatter distances and smaller between-class scatter distances. As a result, the steganalysis features become inseparable due to the differences of image statistical characteristics. In this paper, a new steganalysis framework which can reduce the differences of image statistical characteristics caused by various content and processing methods is proposed. The given images are segmented into several sub-images according to texture complexity. Steganalysis features are separately extracted from each subset with the same or close texture complexity to build a classifier. The final steganalysis result is obtained through a weighted fusion process. The theoretical analysis and experimental results demonstrate the validity of the framework.
NASA Technical Reports Server (NTRS)
1972-01-01
This document is Volume 2 of the three-volume Final Report for the four-band Multispectral Scanner System (MSS). It contains the results of an analysis of pictures of actual outdoor scenes imaged by the engineering model MSS for spectral response, resolution, noise, and video correction. Also included are the results of engineering tests on the MSS for reflectance and saturation from clouds. Finally, two panoramic pictures of Yosemite National Park are provided.
Emerging imaging tools for use with traumatic brain injury research.
Hunter, Jill V; Wilde, Elisabeth A; Tong, Karen A; Holshouser, Barbara A
2012-03-01
This article identifies emerging neuroimaging measures considered by the inter-agency Pediatric Traumatic Brain Injury (TBI) Neuroimaging Workgroup. This article attempts to address some of the potential uses of more advanced forms of imaging in TBI as well as highlight some of the current considerations and unresolved challenges of using them. We summarize emerging elements likely to gain more widespread use in the coming years, because of 1) their utility in diagnosis, prognosis, and understanding the natural course of degeneration or recovery following TBI, and potential for evaluating treatment strategies; 2) the ability of many centers to acquire these data with scanners and equipment that are readily available in existing clinical and research settings; and 3) advances in software that provide more automated, readily available, and cost-effective analysis methods for large scale data image analysis. These include multi-slice CT, volumetric MRI analysis, susceptibility-weighted imaging (SWI), diffusion tensor imaging (DTI), magnetization transfer imaging (MTI), arterial spin tag labeling (ASL), functional MRI (fMRI), including resting state and connectivity MRI, MR spectroscopy (MRS), and hyperpolarization scanning. However, we also include brief introductions to other specialized forms of advanced imaging that currently do require specialized equipment, for example, single photon emission computed tomography (SPECT), positron emission tomography (PET), electroencephalography (EEG), and magnetoencephalography (MEG)/magnetic source imaging (MSI). Finally, we identify some of the challenges that users of the emerging imaging CDEs may wish to consider, including quality control, performing multi-site and longitudinal imaging studies, and MR scanning in infants and children.
Prasad, Keerthana; Winter, Jan; Bhat, Udayakrishna M; Acharya, Raviraja V; Prabhu, Gopalakrishna K
2012-08-01
This paper describes the development of a decision support system for diagnosis of malaria using color image analysis. A hematologist has to study around 100 to 300 microscopic views of Giemsa-stained thin blood smear images to detect malaria parasites, evaluate the extent of infection and identify the species of the parasite. The proposed algorithm picks up the suspicious regions and detects the parasites in images of all the views. The subimages representing all these parasites are put together to form a composite image which can be sent over a communication channel to obtain the opinion of a remote expert for accurate diagnosis and treatment. We demonstrate the use of the proposed technique as a decision support system by developing an Android application which facilitates communication with a remote expert for the final confirmation of the decision on treatment of malaria. Our algorithm detects around 96% of the parasites with a false positive rate of 20%. The Spearman correlation r was 0.88 with a confidence interval of 0.838 to 0.923, p<0.0001.
NASA Astrophysics Data System (ADS)
Preusker, Frank; Scholten, Frank; Matz, Klaus-Dieter; Roatsch, Thomas; Willner, Konrad; Hviid, Stubbe; Knollenberg, Jörg; Kührt, Ekkehard; Sierks, Holger
2015-04-01
The European Space Agency's Rosetta spacecraft is equipped with the OSIRIS imaging system which consists of a wide-angle and a narrow-angle camera (WAC and NAC). After the approach phase, Rosetta was inserted into a descent trajectory to comet 67P/Churyumov-Gerasimenko (C-G) in early August 2014. Until early September, OSIRIS acquired several hundred NAC images of C-G's surface at different scales (from ~5 m/pixel during approach to ~0.9 m/pixel during descent). In that one-month observation period, the surface was imaged several times within different mapping sequences. With the comet's rotation period of ~12.4 h and the low spacecraft velocity (< 1 m/s), the entire NAC dataset provides multiple NAC stereo coverage, adequate for stereo-photogrammetric (SPG) analysis towards the derivation of 3D surface models. We constrained the OSIRIS NAC images with our stereo requirements (15° < stereo angles < 45°, incidence angles < 85°, emission angles < 45°, differences in illumination < 10°, scale better than 5 m/pixel) and extracted about 220 NAC images that provide at least triple stereo image coverage for the entire illuminated surface in about 250 independent multi-stereo image combinations. For each image combination we determined tie points by multi-image matching in order to set up a 3D control network and a dense surface point cloud for the precise reconstruction of C-G's shape. The control point network defines the input for a stereo-photogrammetric least squares adjustment. Based on the statistical analysis of adjustments we first refined C-G's rotational state (pole orientation and rotational period) and its behavior over time. Based upon this description of the orientation of C-G's body-fixed reference frame, we derived corrections for the nominal navigation data (pointing and position) within a final stereo-photogrammetric block adjustment where the mean 3D point accuracy of more than 100 million surface points has been improved from ~10 m to the sub-meter range. We finally applied point filtering and interpolation techniques to these surface 3D points and show the resulting SPG-based 3D surface model with a lateral sampling rate of about 2 m.
NASA Astrophysics Data System (ADS)
Brun, F.; Intranuovo, F.; Mohammadi, S.; Domingos, M.; Favia, P.; Tromba, G.
2013-07-01
The technique used to produce a 3D tissue engineering (TE) scaffold is of fundamental importance in order to guarantee its proper morphological characteristics. An accurate assessment of the resulting structural properties is therefore crucial in order to evaluate the effectiveness of the produced scaffold. Synchrotron radiation (SR) computed microtomography (μ-CT) combined with further image analysis seems to be one of the most effective techniques for this aim. However, a quantitative assessment of the morphological parameters directly from the reconstructed images is a non-trivial task. This study considers two different poly(ε-caprolactone) (PCL) scaffolds fabricated with a conventional technique (Solvent Casting Particulate Leaching, SCPL) and an additive manufacturing (AM) technique (BioCell Printing), respectively. With the first technique it is possible to produce scaffolds with random, non-regular, rounded pore geometry. The AM technique instead is able to produce scaffolds with square-shaped interconnected pores of regular dimension. Therefore, the final morphology of the AM scaffolds can be predicted and the resulting model can be used for the validation of the applied imaging and image analysis protocols. We report here an SR μ-CT image analysis approach that is able to effectively and accurately reveal the differences in the pore- and throat-size distributions as well as the connectivity of both AM and SCPL scaffolds.
Leonard, Annemarie K; Loughran, Elizabeth A; Klymenko, Yuliya; Liu, Yueying; Kim, Oleg; Asem, Marwa; McAbee, Kevin; Ravosa, Matthew J; Stack, M Sharon
2018-01-01
This chapter highlights methods for visualization and analysis of extracellular matrix (ECM) proteins, with particular emphasis on collagen type I, the most abundant protein in mammals. Protocols described range from advanced imaging of complex in vivo matrices to simple biochemical analysis of individual ECM proteins. The first section of this chapter describes common methods to image ECM components and includes protocols for second harmonic generation, scanning electron microscopy, and several histological methods of ECM localization and degradation analysis, including immunohistochemistry, Trichrome staining, and in situ zymography. The second section of this chapter details both a common transwell invasion assay and a novel live imaging method to investigate cellular behavior with respect to collagen and other ECM proteins of interest. The final section consists of common electrophoresis-based biochemical methods that are used in analysis of ECM proteins. Use of the methods described herein will enable researchers to gain a greater understanding of the role of ECM structure and degradation in development and matrix-related diseases such as cancer and connective tissue disorders. © 2018 Elsevier Inc. All rights reserved.
A Bio Medical Waste Identification and Classification Algorithm Using Mltrp and Rvm.
Achuthan, Aravindan; Ayyallu Madangopal, Vasumathi
2016-10-01
We aimed to extract histogram features for texture analysis and to classify the types of Bio Medical Waste (BMW) for garbage disposal and management. The given BMW image was preprocessed by using the median filtering technique, which efficiently reduced the noise in the image. After that, the histogram features of the filtered image were extracted with the help of the proposed Modified Local Tetra Pattern (MLTrP) technique. Finally, the Relevance Vector Machine (RVM) was used to classify the BMW into human body parts, plastics, cotton and liquids. The BMW images were collected from the garbage image dataset for analysis. The performance of the proposed BMW identification and classification system was evaluated in terms of sensitivity, specificity, classification rate and accuracy with the help of MATLAB. When compared to the existing techniques, the proposed techniques provided better results. This work proposes a new texture analysis and classification technique for BMW management and disposal. It can be used in many real-time applications such as hospital and healthcare management systems for proper BMW disposal.
Redies, Christoph; Groß, Franziska
2013-01-01
Frames provide a visual link between artworks and their surround. We asked how image properties change as an observer zooms out from viewing a painting alone, to viewing the painting with its frame and, finally, the framed painting in its museum environment (museum scene). To address this question, we determined three higher-order image properties that are based on histograms of oriented luminance gradients. First, complexity was measured as the sum of the strengths of all gradients in the image. Second, we determined the self-similarity of histograms of the oriented gradients at different levels of spatial analysis. Third, we analyzed how much gradient strength varied across orientations (anisotropy). Results were obtained for three art museums that exhibited paintings from three major periods of Western art. In all three museums, the mean complexity of the frames was higher than that of the paintings or the museum scenes. Frames thus provide a barrier of complexity between the paintings and their exterior. By contrast, self-similarity and anisotropy values of images of framed paintings were intermediate between the images of the paintings and the museum scenes, i.e., the frames provided a transition between the paintings and their surround. We also observed differences between the three museums that may reflect modified frame usage in different art periods. For example, frames in the museum for 20th century art tended to be smaller and less complex than in the other two museums that exhibit paintings from earlier art periods (13th–18th century and 19th century, respectively). Finally, we found that the three properties did not depend on the type of reproduction of the paintings (photographs in museums, scans from books or images from the Google Art Project). To the best of our knowledge, this study is the first to investigate the relation between frames and paintings by measuring physically defined, higher-order image properties. PMID:24265625
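The complexity and anisotropy measures can be sketched directly from histograms of oriented luminance gradients (Python/NumPy; the bin count and the use of variance for anisotropy are our assumptions, consistent with the description above):

    import numpy as np

    def gradient_properties(gray, n_bins=16):
        gy, gx = np.gradient(gray.astype(np.float64))
        strength = np.hypot(gx, gy)
        orientation = np.arctan2(gy, gx) % np.pi   # orientation mod 180 degrees
        complexity = strength.sum()                # sum of all gradient strengths
        hist, _ = np.histogram(orientation, bins=n_bins,
                               range=(0, np.pi), weights=strength)
        anisotropy = hist.var()   # variation of strength across orientations
        return complexity, anisotropy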
SuperSegger: robust image segmentation, analysis and lineage tracking of bacterial cells.
Stylianidou, Stella; Brennan, Connor; Nissen, Silas B; Kuwada, Nathan J; Wiggins, Paul A
2016-11-01
Many quantitative cell biology questions require fast yet reliable automated image segmentation to identify and link cells from frame-to-frame, and characterize the cell morphology and fluorescence. We present SuperSegger, an automated MATLAB-based image processing package well-suited to quantitative analysis of high-throughput live-cell fluorescence microscopy of bacterial cells. SuperSegger incorporates machine-learning algorithms to optimize cellular boundaries and automated error resolution to reliably link cells from frame-to-frame. Unlike existing packages, it can reliably segment microcolonies with many cells, facilitating the analysis of cell-cycle dynamics in bacteria as well as cell-contact mediated phenomena. This package has a range of built-in capabilities for characterizing bacterial cells, including the identification of cell division events, mother, daughter and neighbouring cells, and computing statistics on cellular fluorescence, the location and intensity of fluorescent foci. SuperSegger provides a variety of postprocessing data visualization tools for single cell and population level analysis, such as histograms, kymographs, frame mosaics, movies and consensus images. Finally, we demonstrate the power of the package by analyzing lag phase growth with single cell resolution. © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Ambekar Ramachandra Rao, Raghu; Mehta, Monal R.; Toussaint, Kimani C., Jr.
2010-02-01
We demonstrate the use of Fourier transform-second-harmonic generation (FT-SHG) imaging of collagen fibers as a means of performing quantitative analysis of obtained images of selected spatial regions in porcine trachea, ear, and cornea. Two quantitative markers, preferred orientation and maximum spatial frequency are proposed for differentiating structural information between various spatial regions of interest in the specimens. The ear shows consistent maximum spatial frequency and orientation as also observed in its real-space image. However, there are observable changes in the orientation and minimum feature size of fibers in the trachea indicating a more random organization. Finally, the analysis is applied to a 3D image stack of the cornea. It is shown that the standard deviation of the orientation is sensitive to the randomness in fiber orientation. Regions with variations in the maximum spatial frequency, but with relatively constant orientation, suggest that maximum spatial frequency is useful as an independent quantitative marker. We emphasize that FT-SHG is a simple, yet powerful, tool for extracting information from images that is not obvious in real space. This technique can be used as a quantitative biomarker to assess the structure of collagen fibers that may change due to damage from disease or physical injury.
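A naive way to recover a preferred orientation from the Fourier domain, sketched in Python/NumPy (our illustrative reconstruction, not the authors' implementation; the binning and the 90-degree rotation convention are assumptions):

    import numpy as np

    def preferred_orientation(image, n_bins=180):
        # Spectral power concentrates perpendicular to the dominant
        # fiber direction, so we histogram power by spectral angle.
        f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
        power = np.abs(f) ** 2
        h, w = power.shape
        yy, xx = np.mgrid[:h, :w]
        angle = np.arctan2(yy - h // 2, xx - w // 2) % np.pi
        hist, edges = np.histogram(angle, bins=n_bins,
                                   range=(0, np.pi), weights=power)
        i = np.argmax(hist)
        theta_spec = 0.5 * (edges[i] + edges[i + 1])
        return (theta_spec + np.pi / 2) % np.pi   # fiber direction in radians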
Light Field Imaging Based Accurate Image Specular Highlight Removal
Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo
2016-01-01
Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity by the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods based on our light field dataset together with the Stanford light field archive verifies the effectiveness of our proposed algorithm. PMID:27253083
Monte Carlo simulation of PET/MR scanner and assessment of motion correction strategies
NASA Astrophysics Data System (ADS)
Işın, A.; Uzun Ozsahin, D.; Dutta, J.; Haddani, S.; El-Fakhri, G.
2017-03-01
Positron Emission Tomography is widely used in three-dimensional imaging of metabolic body function and in tumor detection. Important research efforts are made to improve this imaging modality, and powerful simulators such as GATE are used to test and develop methods for this purpose. PET requires an acquisition time on the order of a few minutes. Therefore, because of natural patient movements such as respiration, the image quality can be adversely affected, which drives scientists to develop motion compensation methods to improve the image quality. The goal of this study is to evaluate various image reconstruction methods with a GATE simulation of a PET acquisition of the torso area. The obtained results show the need to compensate natural respiratory movements in order to obtain an image with similar quality to the reference image. Improvements are still possible in the applied motion field extraction algorithms. Finally, a statistical analysis should confirm the obtained results.
Griffing, Lawrence R
2018-01-01
In this chapter, approaches to the image analysis of the choreography of the plant endoplasmic reticulum (ER) labeled with fluorescent fusion proteins ("stars," if you wish) are presented. The approaches include the analyses of those parts of the ER that are attached through membrane contact sites to moving or nonmoving partners (other "stars"). Image analysis is also used to understand the nature of the tubular polygonal network, the hallmark of this organelle, and how the polygons change over time due to tubule sliding or motion. Furthermore, the remodeling polygons of the ER interact with regions of fundamentally different topology, the ER cisternae, and image analysis can be used to separate the tubules from the cisternae. ER cisternae, like polygons and tubules, can be motile or stationary. To study which parts are attached to nonmoving partners, such as domains of the ER that form membrane contact sites with the plasma membrane/cell wall, an image analysis approach called persistency mapping has been used. To study the domains of the ER that are moving rapidly and streaming through the cell, the image analysis of optic flow has been used. However, optic flow approaches confuse the movement of the ER itself with the movement of proteins within the ER. As an overall measure of ER dynamics, optic flow approaches are of value, but their limitation as to what exactly is "flowing" needs to be specified. Finally, there are important imaging approaches that directly address the movement of fluorescent proteins within the ER lumen or in the membrane of the ER. Of these, fluorescence recovery after photobleaching (FRAP), inverse FRAP (iFRAP), and single particle tracking approaches are described.
Saliency-Guided Change Detection of Remotely Sensed Images Using Random Forest
NASA Astrophysics Data System (ADS)
Feng, W.; Sui, H.; Chen, X.
2018-04-01
Studies based on object-based image analysis (OBIA), representing the paradigm shift in change detection (CD), have achieved remarkable progress in the last decade. Their aim has been to develop more intelligent interpretation and analysis methods. The prediction effect and performance stability of random forest (RF), a relatively new machine learning algorithm, are better than those of many single predictors and integrated forecasting methods. In this paper, we present a novel CD approach for high-resolution remote sensing images, which incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search of interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and the regions are subjected to fuzzy c-means (FCM) clustering to obtain the pixel-level pre-classification result, which can be used as a prerequisite for superpixel-based analysis. Third, on the basis of the optimal segmentation and pixel-level pre-classification results, different super-pixel change possibilities are calculated. Furthermore, the changed and unchanged super-pixels that serve as the training samples are automatically selected. The spectral features and Gabor features of each super-pixel are extracted. Finally, superpixel-based CD is implemented by applying RF based on these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in the accuracy of CD, and also confirm the feasibility and effectiveness of the proposed approach.
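The final classification step could look like the following scikit-learn sketch (a minimal stand-in, assuming the feature matrices have already been built from the spectral and Gabor descriptors; the forest size is an assumption):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def superpixel_change_detection(feats_train, labels_train, feats_all):
        # feats_train: (n_train, n_features) spectral + Gabor features
        # labels_train: 1 = changed, 0 = unchanged (from pre-classification)
        # feats_all:   (n_superpixels, n_features)
        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        rf.fit(feats_train, labels_train)
        return rf.predict(feats_all)   # per-super-pixel change labels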
Spectral imaging: principles and applications.
Garini, Yuval; Young, Ian T; McNamara, George
2006-08-01
Spectral imaging extends the capabilities of biological and clinical studies to simultaneously study multiple features such as organelles and proteins qualitatively and quantitatively. Spectral imaging combines two well-known scientific methodologies, namely spectroscopy and imaging, to provide a new advantageous tool. The need to measure the spectrum at each point of the image requires combining dispersive optics with the more common imaging equipment, and introduces constraints as well. The principles of spectral imaging and a few representative applications are described. Spectral imaging analysis is necessary because the complex data structure cannot be analyzed visually. A few of the algorithms are discussed with emphasis on their usage for different experimental modes (fluorescence and bright field). Finally, spectral imaging, like any method, should be evaluated in light of its advantages for specific applications, a selection of which is described. Spectral imaging is a relatively new technique and its full potential is yet to be exploited. Nevertheless, several applications have already shown its potential. (c) 2006 International Society for Analytical Cytology.
[Imaging Mass Spectrometry in Histopathologic Analysis].
Yamazaki, Fumiyoshi; Seto, Mitsutoshi
2015-04-01
Matrix-assisted laser desorption/ionization (MALDI)-imaging mass spectrometry (IMS) enables visualization of the distribution of a range of biomolecules by integrating biochemical information from mass spectrometry with positional information from microscopy. IMS identifies a target molecule. In addition, IMS enables global analysis of biomolecules containing unknown molecules by detecting the ratio of the molecular weight to electric charge without any target, which makes it possible to identify novel molecules. IMS generates data on the distribution of lipids and small molecules in tissues, which is difficult to visualize with either conventional counter-staining or immunohistochemistry. In this review, we firstly introduce the principle of imaging mass spectrometry and recent advances in the sample preparation method. Secondly, we present findings regarding biological samples, especially pathological ones. Finally, we discuss the limitations and problems of the IMS technique and clinical application, such as in drug development.
NASA Astrophysics Data System (ADS)
Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling
2018-01-01
We proposed a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extracted a number of raw patches from a given noisy image and took the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimate of the noise level. Next, the final estimate was directly obtained with a nonlinear mapping (rectification) function that was trained on representative noisy images corrupted with different known noise levels. Compared with state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
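The first (preliminary) step of the estimator can be sketched in NumPy as follows (the trained rectification function of the second step is omitted; patch size and patch count are assumptions):

    import numpy as np

    def naive_noise_level(image, patch=7, n_patches=5000, seed=0):
        # Smallest eigenvalue of the covariance of random raw patches
        # approximates the noise variance in flat image directions.
        rng = np.random.default_rng(seed)
        h, w = image.shape
        ys = rng.integers(0, h - patch, n_patches)
        xs = rng.integers(0, w - patch, n_patches)
        patches = np.stack([image[y:y + patch, x:x + patch].ravel()
                            for y, x in zip(ys, xs)])
        cov = np.cov(patches, rowvar=False)
        sigma2 = np.linalg.eigvalsh(cov)[0]   # smallest eigenvalue
        return np.sqrt(max(sigma2, 0.0))      # preliminary noise std dev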
Effectiveness of different final irrigation protocols in removing debris in flattened root canals.
Nadalin, Michele Regina; Perez, Danyel Elias da Cruz; Vansan, Luiz Pascoal; Paschoala, Cristina; Souza-Neto, Manoel Damião; Saquy, Paulo César
2009-01-01
This study evaluated in vitro the capacity of debris removal from the apical third of flattened root canals, using different final irrigation protocols. Thirty human mandibular central incisors with a mesiodistal flattened root were prepared using rotary instrumentation by Endo-Flare 25.12 and Hero 642 30.06, 35.02, 40.02 files, irrigated with 2 mL of 1% NaOCl after each file. The specimens were randomly distributed into 5 groups according to the final irrigation of root canals: Group I: 10 mL of distilled water (control), Group II: 10 mL of 1% NaOCl for 8 min, Group III: 2 mL of 1% NaOCl for 2 min (repeated 4 times), Group IV: 10 mL of 2.5% NaOCl for 8 min, and Group V: 10 mL of 2.5% NaOCl for 2 min (repeated 4 times). The apical thirds of the specimens were subjected to histological processing and 6-μm cross-sections were obtained and stained with hematoxylin-eosin. The specimens were examined under optical microscopy at ×40 magnification and the images were subjected to morphometric analysis using the Scion image-analysis software. The total area of the root canal and the area with debris were measured in square millimeters. Analysis of variance showed no statistically significant difference (p>0.05) among the groups GI (2.39 ± 3.59), GII (2.91 ± 2.21), GIII (0.73 ± 1.36), GIV (0.95 ± 0.84) and GV (0.51 ± 0.22). In conclusion, the final irrigation protocols evaluated in this study using the Luer syringe presented similar performance in the removal of debris from the apical third of flattened root canals.
Integrated thermal disturbance analysis of optical system of astronomical telescope
NASA Astrophysics Data System (ADS)
Yang, Dehua; Jiang, Zibo; Li, Xinnan
2008-07-01
During operation, an astronomical telescope undergoes thermal disturbance, which is even more serious in a solar telescope and may cause degradation of image quality. This drives careful investigation of thermal loads, and measures are applied to assess their effect on final image quality during the design phase. Integrated modeling analysis accelerates the search for a comprehensively optimal design scheme through software simulation. In this paper, we focus on the Finite Element Analysis (FEA) software ANSYS for thermal disturbance analysis and the optical design software ZEMAX for optical system design. The integrated model based on ANSYS and ZEMAX is first briefed from an overview point of view. Afterwards, we discuss the establishment of the thermal model. A complete power series polynomial in the spatial coordinates is introduced to represent the temperature field analytically. We also borrow the linear interpolation technique derived from shape functions in finite element theory to interface the thermal model with the structural model, and further to apply the temperatures onto structural model nodes. Thereby, the thermal loads are transferred with as high fidelity as possible. Data interface and communication between the two software packages are discussed, mainly for mirror surfaces and hence for optical figure representation and transformation. We compare and comment on the two different methods, Zernike polynomials and power series expansion, for representing and transferring a deformed optical surface to ZEMAX. Additionally, these methods applied to surfaces with non-circular apertures are discussed. At the end, an optical telescope with a parabolic primary mirror of 900 mm in diameter is analyzed to illustrate the above discussion. A Finite Element Model of the most relevant parts of the telescope is generated in ANSYS with necessary structural simplification and equivalence. Thermal analysis is performed, and the resulting positions and figures of the optics are retrieved and transferred to ZEMAX; thus the final image quality is evaluated under thermal disturbance.
Color model comparative analysis for breast cancer diagnosis using H and E stained images
NASA Astrophysics Data System (ADS)
Li, Xingyu; Plataniotis, Konstantinos N.
2015-03-01
Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and to classify color histopathology images. Different from grayscale image analysis of magnetic resonance imaging or X-ray, colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Though color information is widely used in histopathology work, to date there are few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and the stain-dependent H and E decomposition model) in breast cancer diagnosis. Particularly, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in evaluation. The evaluation methodologies include feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. As for the reasons behind the varying performance of color spaces, our analysis via mutual information estimation demonstrates that color components in the H and E model are less dependent, and thus most feature discriminative power is collected in one channel instead of spreading out among channels as in other color spaces.
NASA Astrophysics Data System (ADS)
Ojima, Nobutoshi; Okiyama, Natsuko; Okaguchi, Saya; Tsumura, Norimichi; Nakaguchi, Toshiya; Hori, Kimihiko; Miyake, Yoichi
2005-04-01
In the cosmetics industry, skin color is very important because skin color gives a direct impression of the face. In particular, many people suffer from melanin pigmentation such as liver spots and freckles. However, it is very difficult to evaluate melanin pigmentation using conventional colorimetric values because these values contain information on various skin chromophores simultaneously. Therefore, it is necessary to extract information of the chromophore of individual skins independently as density information. The isolation of the melanin component image based on independent component analysis (ICA) from a single skin image was reported in 2003. However, this technique has not developed a quantification method for melanin pigmentation. This paper introduces a quantification method based on the ICA of a skin color image to isolate melanin pigmentation. The image acquisition system we used consists of commercially available equipment such as digital cameras and lighting sources with polarized light. The images taken were analyzed using ICA to extract the melanin component images, and Laplacian of Gaussian (LOG) filter was applied to extract the pigmented area. As a result, for skin images including those showing melanin pigmentation and acne, the method worked well. Finally, the total amount of extracted area had a strong correspondence to the subjective rating values for the appearance of pigmentation. Further analysis is needed to recognize the appearance of pigmentation concerning the size of the pigmented area and its spatial gradation.
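A minimal sketch of the ICA step (Python with scikit-learn; treating the optical density of the R and G channels as the mixed signals is our assumption, and the order of the recovered components is not guaranteed, so the melanin component must be identified afterwards, e.g. by its spatial pattern):

    import numpy as np
    from sklearn.decomposition import FastICA

    def separate_chromophores(rgb):
        # Convert to optical density (log domain), R and G channels only.
        img = np.clip(rgb.astype(np.float64) / 255.0, 1e-4, 1.0)
        density = -np.log(img[..., :2])
        X = density.reshape(-1, 2)
        ica = FastICA(n_components=2, random_state=0)
        S = ica.fit_transform(X)   # two independent chromophore components
        return S.reshape(rgb.shape[0], rgb.shape[1], 2)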
Intermediate Templates Guided Groupwise Registration of Diffusion Tensor Images
Jia, Hongjun; Yap, Pew-Thian; Wu, Guorong; Wang, Qian; Shen, Dinggang
2010-01-01
Registration of a population of diffusion tensor images (DTIs) is one of the key steps in medical image analysis, and it plays an important role in the statistical analysis of white matter related neurological diseases. However, pairwise registration with respect to a pre-selected template may not give precise results if the selected template deviates significantly from the distribution of images. To cater for more accurate and consistent registration, a novel framework is proposed for groupwise registration with the guidance from one or more intermediate templates determined from the population of images. Specifically, we first use a Euclidean distance, defined as a combinative measure based on the FA map and ADC map, for gauging the similarity of each pair of DTIs. A fully connected graph is then built with each node denoting an image and each edge denoting the distance between a pair of images. The root template image is determined automatically as the image with the overall shortest path length to all other images on the minimum spanning tree (MST) of the graph. Finally, a sequence of registration steps is applied to progressively warping each image towards the root template image with the help of intermediate templates distributed along its path to the root node on the MST. Extensive experimental results using diffusion tensor images of real subjects indicate that registration accuracy and fiber tract alignment are significantly improved, compared with the direct registration from each image to the root template image. PMID:20851197
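The root-template selection can be sketched in a few lines of Python with SciPy (a hedged illustration of the described MST criterion, not the authors' code):

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

    def root_template(dist):
        # dist: (n, n) symmetric matrix of FA/ADC-based Euclidean distances.
        mst = minimum_spanning_tree(dist)
        tree = mst + mst.T   # make the tree undirected
        path_len = shortest_path(tree, directed=False)
        # Root: image with the shortest total MST path length to all others.
        return int(np.argmin(path_len.sum(axis=1)))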
NASA Technical Reports Server (NTRS)
Garvin, J. B.; Mouginis-Mark, P. J.; Head, J. W.
1981-01-01
A data collection and analysis scheme developed for the interpretation of rock morphology from lander images is reviewed with emphasis on rock population characterization techniques. Data analysis techniques are also discussed in the context of identifying key characteristics of a rock that place it in a single category with similar rocks. Actual rock characteristics observed from Viking and Venera lander imagery are summarized. Finally, some speculations regarding the block fields on Mars and Venus are presented.
CT scan range estimation using multiple body parts detection: let PACS learn the CT image content.
Wang, Chunliang; Lundström, Claes
2016-02-01
The aim of this study was to develop an efficient CT scan range estimation method that is based on the analysis of image data itself instead of metadata analysis. This makes it possible to quantitatively compare the scan range of two studies. In our study, 3D stacks are first projected to 2D coronal images via a ray casting-like process. Trained 2D body part classifiers are then used to recognize different body parts in the projected image. The detected candidate regions go into a structure grouping process to eliminate false-positive detections. Finally, the scale and position of the patient relative to the projected figure are estimated based on the detected body parts via a structural voting. The start and end lines of the CT scan are projected to a standard human figure. The position readout is normalized so that the bottom of the feet represents 0.0, and the top of the head is 1.0. Classifiers for 18 body parts were trained using 184 CT scans. The final application was tested on 136 randomly selected heterogeneous CT scans. Ground truth was generated by asking two human observers to mark the start and end positions of each scan on the standard human figure. When compared with the human observers, the mean absolute error of the proposed method is 1.2% (max: 3.5%) and 1.6% (max: 5.4%) for the start and end positions, respectively. We proposed a scan range estimation method using multiple body parts detection and relative structure position analysis. In our preliminary tests, the proposed method delivered promising results.
Quantum image encryption based on restricted geometric and color transformations
NASA Astrophysics Data System (ADS)
Song, Xian-Hua; Wang, Shen; Abd El-Latif, Ahmed A.; Niu, Xia-Mu
2014-08-01
A novel encryption scheme for quantum images based on restricted geometric and color transformations is proposed. The new strategy combines efficient permutation and diffusion properties for quantum image encryption. The core idea of the permutation stage is to scramble the codes of the pixel positions through restricted geometric transformations. Then, a new quantum diffusion operation is implemented on the permutated quantum image based on restricted color transformations. The encryption keys of the two stages are generated by two sensitive chaotic maps, which ensure the security of the scheme. The final step, measurement, is based on a probabilistic model. Statistical analyses of the experiments demonstrate significant improvements in favor of the proposed approach.
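The abstract does not name the two chaotic maps used for key generation; as a hedged illustration of how a sensitive chaotic key stream can be produced classically, a logistic map (a common choice in chaos-based encryption, not necessarily the authors') might look like this:

```python
def logistic_keystream(x0, r=3.99, n=256, skip=1000):
    """Generate n pseudo-random bytes from a logistic map x -> r*x*(1-x).

    x0 in (0, 1) and r act as the sensitive key; a transient of `skip`
    iterations is discarded before sampling, a standard practice in
    chaos-based encryption schemes.
    """
    x = x0
    for _ in range(skip):
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)
```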
Topics in the two-dimensional sampling and reconstruction of images. [in remote sensing
NASA Technical Reports Server (NTRS)
Schowengerdt, R.; Gray, S.; Park, S. K.
1984-01-01
Mathematical analysis of image sampling and interpolative reconstruction is summarized and extended to two dimensions for application to data acquired from satellite sensors such as the Thematic Mapper and SPOT. It is shown that sample-scene phase influences the reconstruction of sampled images, adds a considerable blur to the average system point spread function, and decreases the average system modulation transfer function. It is also determined that the parametric bicubic interpolator with alpha = -0.5 is more radiometrically accurate than the conventional bicubic interpolator with alpha = -1, at no additional cost. Finally, the parametric bicubic interpolator is found to be suitable for adaptive implementation by relating the alpha parameter to the local frequency content of an image.
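The parametric cubic interpolator referred to here is commonly written as the Keys cubic convolution kernel, parameterized by alpha; a sketch with alpha = -0.5 as the radiometrically preferred setting:

```python
import numpy as np

def cubic_kernel(x, alpha=-0.5):
    """Parametric cubic convolution kernel; alpha = -0.5 is the variant
    found more radiometrically accurate than the conventional alpha = -1."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    near = x <= 1
    far = (x > 1) & (x < 2)
    out[near] = (alpha + 2) * x[near]**3 - (alpha + 3) * x[near]**2 + 1
    out[far] = alpha * (x[far]**3 - 5 * x[far]**2 + 8 * x[far] - 4)
    return out
```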
Image analysis by integration of disparate information
NASA Technical Reports Server (NTRS)
Lemoigne, Jacqueline
1993-01-01
Image analysis often starts with some preliminary segmentation which provides a representation of the scene needed for further interpretation. Segmentation can be performed in several ways, categorized as pixel-based, edge-based, and region-based. Each of these approaches is affected differently by various factors, and the final result may be improved by integrating several or all of these methods, thus taking advantage of their complementary nature. In this paper, we propose an approach that integrates pixel-based and edge-based results by utilizing an iterative relaxation technique. This approach has been implemented on a massively parallel computer and tested on remotely sensed imagery from the Landsat Thematic Mapper (TM) sensor.
Medical imaging: examples of clinical applications
NASA Astrophysics Data System (ADS)
Meinzer, H. P.; Thorn, M.; Vetter, M.; Hassenpflug, P.; Hastenteufel, M.; Wolf, I.
Clinical routine is currently producing a multitude of diagnostic digital images but only a few are used in therapy planning and treatment. Medical imaging is involved in both diagnosis and therapy. Using a computer, existing 2D images can be transformed into interactive 3D volumes and results from different modalities can be merged. Furthermore, it is possible to calculate functional areas that were not visible in the primary images. This paper presents examples of clinical applications that are integrated into clinical routine and are based on medical imaging fundamentals. In liver surgery, the importance of virtual planning is increasing because surgery is still the only possible curative procedure. Visualisation and analysis of heart defects are also gaining in significance due to improved surgery techniques. Finally, an outlook is provided on future developments in medical imaging using navigation to support the surgeon's work. The paper intends to give an impression of the wide range of medical imaging that goes beyond the mere calculation of medical images.
Kirk, R.L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E.M.; Gaddis, L.R.; Johnson, J. R.; Soderblom, L.A.; Ward, A.W.; Smith, P.H.; Britt, D.T.
1999-01-01
This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ~10^3 features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ~3 × 10^5 closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used. Copyright 1999 by the American Geophysical Union.
Kikuchi, K; Masuda, Y; Yamashita, T; Sato, K; Katagiri, C; Hirao, T; Mizokami, Y; Yaguchi, H
2016-08-01
Facial skin pigmentation is one of the most prominent visible features of skin aging and often affects perception of health and beauty. To date, facial pigmentation has been evaluated using various image analysis methods developed for the cosmetic and esthetic fields. However, existing methods cannot provide precise information on pigmented spots, such as variations in size, color shade, and distribution pattern. The purpose of this study is the development of image evaluation methods to analyze individual pigmented spots and acquire detailed information on their age-related changes. To characterize the individual pigmented spots within a cheek image, we established a simple object-counting algorithm. First, we captured cheek images using an original imaging system equipped with an illumination unit and a high-resolution digital camera. The acquired images were converted into melanin concentration images using compensation formulae. Next, the melanin images were converted into binary images, which were then subjected to noise reduction. Finally, we calculated parameters such as the melanin concentration, quantity, and size of individual pigmented spots using a connected-components labeling algorithm, which assigns a unique label to each separate group of connected pixels. The cheek image analysis was evaluated in 643 female Japanese subjects. Through manual evaluation of the cheek images, we confirmed that the proposed method was sufficiently sensitive to measure the melanin concentration and the numbers and sizes of individual pigmented spots. The image analysis results for the 643 Japanese women indicated clear relationships between age and changes in the pigmented spots. We have developed a new quantitative evaluation method for individual pigmented spots in facial skin. This method facilitates the analysis of the characteristics of various pigmented facial spots and is directly applicable to the fields of dermatology, pharmacology, and esthetic cosmetology. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
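The counting step maps naturally onto standard connected-components labeling; a hedged sketch using scikit-image (the study's own implementation, threshold, and noise-reduction settings are not given in the abstract):

```python
import numpy as np
from skimage import measure

def analyze_spots(melanin, threshold, min_area=5):
    """Label connected pixel groups in a binarized melanin-concentration
    image and report per-spot size and mean concentration."""
    binary = melanin > threshold
    labels = measure.label(binary, connectivity=2)   # 8-connected grouping
    spots = []
    for region in measure.regionprops(labels, intensity_image=melanin):
        if region.area >= min_area:                  # simple noise rejection
            spots.append({"area_px": region.area,
                          "mean_melanin": region.mean_intensity})
    return spots
```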
Research on Wide-field Imaging Technologies for Low-frequency Radio Array
NASA Astrophysics Data System (ADS)
Lao, B. Q.; An, T.; Chen, X.; Wu, X. C.; Lu, Y.
2017-09-01
Wide-field imaging with low-frequency radio telescopes is subject to a number of difficult problems. One particularly pernicious problem is the non-coplanar baseline effect: it leads to distortion of the final image when the phase term in the w direction, the so-called w-term, is ignored. The image degradation is amplified for telescopes with a wide field of view. This paper summarizes and analyzes several w-term correction methods and their technical principles. Their advantages and disadvantages are analyzed by comparing their computational cost and complexity. We conduct simulations with two of these methods, faceting and w-projection, based on the configuration of the first-phase Square Kilometre Array (SKA) low-frequency array. The resulting images are also compared with the two-dimensional Fourier transform method. The results show that the image quality and correctness obtained from both faceting and w-projection are better than those of the two-dimensional Fourier transform method in wide-field imaging. The image quality and run time as affected by the number of facets and w steps have been evaluated; the results indicate that the number of facets and w steps must be chosen appropriately. Finally, we analyze the effect of data size on the run time of faceting and w-projection. The results show that faceting and w-projection need to be optimized before processing massive amounts of data. This work initiates the analysis of wide-field imaging techniques and their application to existing and future low-frequency arrays, and fosters their application in broader fields.
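For reference, the w-term that these methods correct is the extra phase factor on each visibility; a sketch of the w-dependent phase screen used in w-projection (sign conventions vary between implementations):

```python
import numpy as np

def w_phase_screen(l, m, w):
    """Phase screen applied in w-projection:
    G_w(l, m) = exp(-2j*pi * w * (sqrt(1 - l^2 - m^2) - 1)).

    l, m are direction cosines; the term vanishes for w = 0 or at the
    phase centre (l = m = 0), which is why narrow fields can ignore it.
    """
    n = np.sqrt(1.0 - l**2 - m**2)
    return np.exp(-2j * np.pi * w * (n - 1.0))
```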
Emerging Imaging Tools for Use with Traumatic Brain Injury Research
Wilde, Elisabeth A.; Tong, Karen A.; Holshouser, Barbara A.
2012-01-01
This article identifies emerging neuroimaging measures considered by the inter-agency Pediatric Traumatic Brain Injury (TBI) Neuroimaging Workgroup. This article attempts to address some of the potential uses of more advanced forms of imaging in TBI as well as highlight some of the current considerations and unresolved challenges of using them. We summarize emerging elements likely to gain more widespread use in the coming years, because of 1) their utility in diagnosis, prognosis, and understanding the natural course of degeneration or recovery following TBI, and potential for evaluating treatment strategies; 2) the ability of many centers to acquire these data with scanners and equipment that are readily available in existing clinical and research settings; and 3) advances in software that provide more automated, readily available, and cost-effective methods for large-scale image data analysis. These include multi-slice CT, volumetric MRI analysis, susceptibility-weighted imaging (SWI), diffusion tensor imaging (DTI), magnetization transfer imaging (MTI), arterial spin tag labeling (ASL), functional MRI (fMRI), including resting state and connectivity MRI, MR spectroscopy (MRS), and hyperpolarization scanning. However, we also include brief introductions to other specialized forms of advanced imaging that currently do require specialized equipment, for example, single photon emission computed tomography (SPECT), positron emission tomography (PET), electroencephalography (EEG), and magnetoencephalography (MEG)/magnetic source imaging (MSI). Finally, we identify some of the challenges that users of the emerging imaging CDEs may wish to consider, including quality control, performing multi-site and longitudinal imaging studies, and MR scanning in infants and children. PMID:21787167
Computer-aided, multi-modal, and compression diffuse optical studies of breast tissue
NASA Astrophysics Data System (ADS)
Busch, David Richard, Jr.
Diffuse Optical Tomography and Spectroscopy permit measurement of important physiological parameters non-invasively through ~10 cm of tissue. I have applied these techniques in measurements of human breast and breast cancer. My thesis integrates three loosely connected themes in this context: multi-modal breast cancer imaging, automated data analysis of breast cancer images, and microvascular hemodynamics of breast under compression. As per the first theme, I describe construction, testing, and the initial clinical usage of two generations of imaging systems for simultaneous diffuse optical and magnetic resonance imaging. The second project develops a statistical analysis of optical breast data from many spatial locations in a population of cancers to derive a novel optical signature of malignancy; I then apply this data-derived signature for localization of cancer in additional subjects. Finally, I construct and deploy diffuse optical instrumentation to measure blood content and blood flow during breast compression; besides optics, this research has implications for any method employing breast compression, e.g., mammography.
Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu
2015-01-01
It is necessary to detect the target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called Multiresolution Monogenic Signal Analysis (MMSA) is applied to GPR images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image. The amplitude component enhances the target reflection and suppresses the direct wave and reflected wave to a large extent. Then we use a region of interest extraction method to separate genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146
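The region-of-interest step rests on the normalized variance of the amplitude component; a minimal sketch, assuming a square sliding window (the window size is illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def normalized_variance_map(amplitude, win=15):
    """Local variance / local mean^2 of the amplitude component;
    genuine target reflections give higher values than clutter."""
    mean = uniform_filter(amplitude, win)
    mean_sq = uniform_filter(amplitude**2, win)
    var = np.maximum(mean_sq - mean**2, 0.0)
    return var / (mean**2 + 1e-12)   # epsilon guards near-empty regions
```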
A novel approach to segmentation and measurement of medical image using level set methods.
Chen, Yao-Tien
2017-06-01
The study proposes a novel approach for segmentation and visualization, plus value-added surface area and volume measurements, for brain medical image analysis. The proposed method comprises edge detection and Bayesian-based level set segmentation, surface and volume rendering, and surface area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or the whole brain). Two extensions based on edge detection and the Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of the medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated using techniques of linear algebra and surface integration. Experimental results are reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
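The measurement step can be sketched with stock routines: a marching-cubes mesh for surface area and voxel counting for volume. The paper uses a modified marching cubes and surface integration, so this is a simplified stand-in:

```python
import numpy as np
from skimage import measure

def surface_area_and_volume(mask, spacing=(1.0, 1.0, 1.0)):
    """Surface area and volume of a binary 3D object of interest.

    Surface area comes from a marching-cubes mesh; volume from counting
    voxels scaled by the voxel size.
    """
    verts, faces, _, _ = measure.marching_cubes(mask.astype(float), 0.5,
                                                spacing=spacing)
    area = measure.mesh_surface_area(verts, faces)
    volume = mask.sum() * np.prod(spacing)
    return area, volume
```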
Imaging Intratumor Heterogeneity: Role in Therapy Response, Resistance, and Clinical Outcome
O’Connor, James P.B.; Rose, Chris J.; Waterton, John C.; Carano, Richard A.D.; Parker, Geoff J.M.; Jackson, Alan
2014-01-01
Tumors exhibit genomic and phenotypic heterogeneity which has prognostic significance and may influence response to therapy. Imaging can quantify the spatial variation in architecture and function of individual tumors through quantifying basic biophysical parameters such as density or MRI signal relaxation rate; through measurements of blood flow, hypoxia, metabolism, cell death and other phenotypic features; and through mapping the spatial distribution of biochemical pathways and cell signaling networks. These methods can establish whether one tumor is more or less heterogeneous than another and can identify sub-regions with differing biology. In this article we review the image analysis methods currently used to quantify spatial heterogeneity within tumors. We discuss how analysis of intratumor heterogeneity can provide benefit over more simple biomarkers such as tumor size and average function. We consider how imaging methods can be integrated with genomic and pathology data, rather than be developed in isolation. Finally, we identify the challenges that must be overcome before measurements of intratumoral heterogeneity can be used routinely to guide patient care. PMID:25421725
THE TIME EVOLUTION OF HH 1 FROM FOUR EPOCHS OF HST IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raga, A. C.; Esquivel, A.; Reipurth, B.
We present an analysis of four epochs of Hα and [S ii] λλ 6716/6731 Hubble Space Telescope (HST) images of HH 1. For determining proper motions, we explore a new method based on the analysis of spatially degraded images obtained by convolving the images with wavelet functions of chosen widths. With this procedure, we are able to generate maps of proper motion velocities along and across the outflow axis, as well as (angularly integrated) proper motion velocity distributions. From the four available epochs, we find the time evolution of the velocities, intensities, and spatial distribution of the line emission. We find that over the last two decades HH 1 shows a clear acceleration. Also, the Hα and [S ii] intensities first dropped and then recovered in the more recent (2014) images. Finally, we show a comparison between the two available HST epochs of [O iii] λ 5007 (1994 and 2014), in which we see a clear drop in the value of the [O iii]/Hα ratio.
Imaging intratumor heterogeneity: role in therapy response, resistance, and clinical outcome.
O'Connor, James P B; Rose, Chris J; Waterton, John C; Carano, Richard A D; Parker, Geoff J M; Jackson, Alan
2015-01-15
Tumors exhibit genomic and phenotypic heterogeneity, which has prognostic significance and may influence response to therapy. Imaging can quantify the spatial variation in architecture and function of individual tumors through quantifying basic biophysical parameters such as CT density or MRI signal relaxation rate; through measurements of blood flow, hypoxia, metabolism, cell death, and other phenotypic features; and through mapping the spatial distribution of biochemical pathways and cell signaling networks using PET, MRI, and other emerging molecular imaging techniques. These methods can establish whether one tumor is more or less heterogeneous than another and can identify subregions with differing biology. In this article, we review the image analysis methods currently used to quantify spatial heterogeneity within tumors. We discuss how analysis of intratumor heterogeneity can provide benefit over more simple biomarkers such as tumor size and average function. We consider how imaging methods can be integrated with genomic and pathology data, instead of being developed in isolation. Finally, we identify the challenges that must be overcome before measurements of intratumoral heterogeneity can be used routinely to guide patient care. ©2014 American Association for Cancer Research.
Kim, Kio; Habas, Piotr A.; Rajagopalan, Vidya; Scott, Julia A.; Corbett-Detig, James M.; Rousseau, Francois; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin
2012-01-01
A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multi-slice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types. PMID:21511561
Kim, Kio; Habas, Piotr A; Rajagopalan, Vidya; Scott, Julia A; Corbett-Detig, James M; Rousseau, Francois; Barkovich, A James; Glenn, Orit A; Studholme, Colin
2011-09-01
A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multislice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types.
NASA Astrophysics Data System (ADS)
Sukawattanavijit, Chanika; Srestasathiern, Panu
2017-10-01
Land Use and Land Cover (LULC) information is significant for observing and evaluating environmental change. LULC classification using remotely sensed data is a technique popularly employed at global and local scales, particularly in urban areas, which have diverse land cover types that are essential components of the urban terrain and ecosystem. At present, object-based image analysis (OBIA) is becoming widely popular for land cover classification using high-resolution images. COSMO-SkyMed SAR data was fused with THAICHOTE (namely, THEOS: Thailand Earth Observation Satellite) optical data for object-based land cover classification. This paper presents a comparison between object-based and pixel-based approaches to image fusion. The per-pixel method, support vector machines (SVM), was applied to the fused image based on principal component analysis (PCA). The object-based classification was applied to the fused images to separate land cover classes using a nearest neighbor (NN) classifier. Finally, accuracy was assessed by comparing the land cover classifications generated from the fused image dataset and the THAICHOTE image. The object-based classification of fused COSMO-SkyMed and THAICHOTE images demonstrated the best classification accuracies, well over 85%. These results show that object-based data fusion provides higher land cover classification accuracy than per-pixel data fusion.
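PCA-based fusion of this kind is often implemented by component substitution; a hedged sketch (the paper's exact fusion recipe is not spelled out in the abstract):

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_fuse(optical, sar):
    """Component-substitution fusion of co-registered images.

    optical: (H, W, B) multispectral array; sar: (H, W) SAR intensity.
    The SAR band is variance-matched to PC1 before substitution so the
    inverse transform stays radiometrically balanced.
    """
    h, w, b = optical.shape
    pca = PCA(n_components=b)
    pcs = pca.fit_transform(optical.reshape(-1, b))
    s = sar.reshape(-1).astype(float)
    s = (s - s.mean()) / (s.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = s                      # replace PC1 with the matched SAR band
    return pca.inverse_transform(pcs).reshape(h, w, b)
```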
Thomas, Anchu Rachel; Velmurugan, Natanasabapathy; Smita, Surendran; Jothilatha, Sundaramurthy
2014-10-01
The purpose of this study was to evaluate the canal isthmus debridement efficacy of a new modified EndoVac (Discus Dental, Culver City, CA) irrigation protocol in comparison with the EndoVac, passive ultrasonic irrigation (PUI), and conventional needle irrigation in the mesial roots of mandibular molars. The mesial roots of 64 extracted mandibular molars, mounted in resin using Kuttler's endodontic cube and sectioned at 2 and 4 mm from the working length, were randomly divided into 4 groups (n = 16): group 1: Max-I-Probe (Dentsply Tulsa Dental, York, PA), group 2: EndoVac (EVI), group 3: modified EndoVac, and group 4: PUI. The specimens were reassembled and instrumented. A standard irrigation protocol was used during cleaning and shaping, followed by final irrigation with the 4 irrigation/agitation techniques. Images of the isthmus region were taken before and after cleaning and shaping and after final irrigation. The percentage reduction of debris in the isthmus region was calculated using the software program ImageJ (v1.43; National Institutes of Health, Bethesda, MD). Intergroup analysis was performed using the Kruskal-Wallis and Mann-Whitney U tests. Intragroup analysis was performed using the Friedman and Wilcoxon signed rank tests. The level of significance was set at P < .05. Intragroup analysis revealed a statistically significant difference in the percentage reduction of debris after cleaning and shaping and after the final irrigation protocol in all groups (P < .001). The final irrigation protocol produced significantly cleaner canal isthmuses in all groups (P < .001). On intergroup analysis, the modified EVI group performed significantly better than the other groups. The EVI and PUI groups performed better than the Max-I-Probe group; there was no statistically significant difference between the EVI and PUI groups. Canal isthmuses were significantly cleaner with the modified EndoVac irrigation technique than with the other irrigation systems. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Thalgott, Mark; Düwel, Charlotte; Rauscher, Isabel; Heck, Matthias M; Haller, Bernhard; Gafita, Andrei; Gschwend, Jürgen E; Schwaiger, Markus; Maurer, Tobias; Eiber, Matthias
2018-05-24
Our aim was to assess the diagnostic potential of one-stop-shop prostate-specific membrane antigen ligand positron emission tomography/magnetic resonance imaging (68Ga-PSMA-11 PET/MRI) compared with preoperative staging nomograms in patients with high-risk prostate cancer (PC). Methods: A total of 102 patients underwent 68Ga-PSMA-11 PET/MRI before intended radical prostatectomy (RP) with lymph node dissection. Preoperative variables determined the probabilities of lymph node metastases (LNM), extracapsular extension (ECE), and seminal vesicle involvement (SVI) using the Memorial Sloan-Kettering Cancer Center (MSKCC) nomogram and Partin tables. Receiver operating characteristic (ROC) analyses were performed to determine the best discriminatory cutoffs. On a cohort basis, positivity rates of imaging and nomograms were compared with pathological prevalence. On a patient basis, sensitivity, specificity, and the areas under the curves (AUCs) were calculated. Finally, the full concordance of each method with the postoperative T- and N-stage was determined. Results: 73 patients were finally analysed. On a cohort basis, the MSKCC nomogram positivity rate (39.7%) was most concordant with the pathological prevalence of LNM (34.3%), compared with the Partin tables (14.1%) and imaging (20.6%). The prevalence of ECE (72.6%) was best predicted by the MSKCC nomogram and imaging (83.6% each), compared with the Partin tables (38.4%). For the prevalence of SVI (45.2%), imaging (47.9%) performed superior to the MSKCC nomogram (37.6%) and Partin tables (19.3%). On a patient basis, AUCs for LNM, ECE, and SVI did not differ significantly between tests (p>0.05). Imaging revealed a high specificity (100%) for LNM and a sensitivity (60%) comparable to the MSKCC nomogram (68%) and Partin tables (60%). For ECE, imaging revealed the highest sensitivity (94.3%) compared with the MSKCC nomogram (66%) and Partin tables (71.1%). For SVI, the sensitivity and specificity of imaging and the MSKCC nomogram were comparable (81.5% and 80% vs. 87.9% and 75%). The rate of concordance with the final pTN-stage was 60.3% for imaging, 52.1% for the MSKCC nomogram, and 39.7% for the Partin tables. Conclusion: In our analysis, preoperative one-stop-shop 68Ga-PSMA-11 PET/MRI performs at least as well as nomograms for T- and N-stage prediction in high-risk PC patients. Despite the improved prediction of the full final stage and the yield of additional anatomical information, the use of 68Ga-PSMA-11 PET/MRI warrants further prospective evaluation. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-28
... Uranium Recovery Project, located in the Pumpkin Buttes Uranium Mining District within the Powder River.... Alternatives that were considered, but were eliminated from detailed analysis, include conventional mining and... an Agencywide Documents and Management System (ADAMS), which provides text and image files of the NRC...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korte, Andrew R
This thesis presents efforts to improve the methodology of matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI-MSI) as a method for the analysis of metabolites from plant tissue samples. The first chapter consists of a general introduction to the technique of MALDI-MSI, and the sixth and final chapter provides a brief summary and an outlook on future work.
On board processor development for NASA's spaceborne imaging radar with system-on-chip technology
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi
2004-01-01
This paper reports preliminary results of a study of an on-board spaceborne SAR processor. It consists of a processing requirement analysis, functional specifications, and an implementation with system-on-chip technology. Finally, a minimum version of this on-board processor, designed for performance evaluation and partial demonstration, is illustrated.
Quantum realization of the bilinear interpolation method for NEQR.
Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou
2017-05-31
In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, a kind of geometric transformation, has been widely studied and applied in classical image processing; however, a quantum version had not previously existed. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits for bilinear interpolation of NEQR images, including scaling up and scaling down, are given using multi-control NOT operations, a special add-one operation, the reverse parallel adder, parallel subtractor, multiplier, and division operations. Finally, the complexity of the quantum network circuit is analysed in terms of basic quantum gates. Simulation results show that an image scaled up using bilinear interpolation is clearer and less distorted than one produced by nearest-neighbour interpolation.
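For readers unfamiliar with the classical operation being mapped onto quantum circuits, plain bilinear scaling in its classical form looks like this (no quantum machinery involved):

```python
import numpy as np

def bilinear_scale(img, new_h, new_w):
    """Classical bilinear interpolation: each output pixel is a weighted
    average of its four nearest input neighbours."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    dy = (ys - y0)[:, None]; dx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - dx) + img[y0][:, x1] * dx
    bot = img[y1][:, x0] * (1 - dx) + img[y1][:, x1] * dx
    return top * (1 - dy) + bot * dy
```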
Feature and contrast enhancement of mammographic image based on multiscale analysis and morphology.
Wu, Shibin; Yu, Shaode; Yang, Yuhan; Xie, Yaoqin
2013-01-01
A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transforms and mathematical morphology. First of all, the Laplacian-Gaussian pyramid operator is applied to decompose the mammogram into subband images at different scales. The detail (high-frequency) subimages are then equalized by contrast limited adaptive histogram equalization (CLAHE), while the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian-Gaussian pyramid coefficients modified at one or more levels by CLAHE and mathematical morphology, respectively. The enhanced image is then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, the signal-to-noise ratio (SNR), and the contrast improvement index (CII).
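A simplified sketch of the two named ingredients, a Laplacian pyramid split with CLAHE applied to the detail bands, using scikit-image; the morphology step on the low-pass band and the global nonlinear operator are omitted here:

```python
import numpy as np
from skimage import exposure, transform

def enhance_mammogram(img, levels=3, clip=0.01):
    """Laplacian-pyramid split with CLAHE on each detail band, then
    reconstruction. img: float image in [0, 1]. A simplified stand-in
    for the paper's pipeline."""
    gauss = [img]
    for _ in range(levels):
        gauss.append(transform.pyramid_reduce(gauss[-1]))
    recon = gauss[-1]
    for lo, hi in zip(reversed(gauss[:-1]), reversed(gauss[1:])):
        detail = lo - transform.resize(hi, lo.shape)   # Laplacian band
        # equalize_adapthist (CLAHE) needs values in [0, 1]: rescale,
        # equalize, then restore the band's original range.
        lo_d, hi_d = detail.min(), detail.max()
        norm = (detail - lo_d) / (hi_d - lo_d + 1e-12)
        detail = exposure.equalize_adapthist(norm, clip_limit=clip) \
                 * (hi_d - lo_d) + lo_d
        recon = transform.resize(recon, lo.shape) + detail
    return np.clip(recon, 0.0, 1.0)
```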
Feature and Contrast Enhancement of Mammographic Image Based on Multiscale Analysis and Morphology
Wu, Shibin; Xie, Yaoqin
2013-01-01
A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transforms and mathematical morphology. First of all, the Laplacian-Gaussian pyramid operator is applied to decompose the mammogram into subband images at different scales. The detail (high-frequency) subimages are then equalized by contrast limited adaptive histogram equalization (CLAHE), while the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian-Gaussian pyramid coefficients modified at one or more levels by CLAHE and mathematical morphology, respectively. The enhanced image is then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, the signal-to-noise ratio (SNR), and the contrast improvement index (CII). PMID:24416072
NASA Astrophysics Data System (ADS)
Sebesta, Mikael; Egelberg, Peter J.; Langberg, Anders; Lindskov, Jens-Henrik; Alm, Kersti; Janicke, Birgit
2016-03-01
Live-cell imaging enables studying dynamic cellular processes that cannot be visualized in fixed-cell assays. An increasing number of scientists in academia and the pharmaceutical industry are choosing live-cell analysis over, or in addition to, traditional fixed-cell assays. We have developed the time-lapse, label-free imaging cytometer HoloMonitor M4. HoloMonitor M4 helps researchers overcome inherent disadvantages of fluorescent analysis, specifically the effects of chemical labels or genetic modifications, which can alter cellular behavior. Additionally, label-free analysis is simple and eliminates the costs associated with staining procedures. The underlying technology is based on digital off-axis holography. While multiple alternatives exist for this type of analysis, we prioritized our developments to achieve the following: a) an all-inclusive system - hardware and sophisticated cytometric analysis software; b) ease of use, enabling utilization of the instrumentation by expert- and entry-level researchers alike; c) validated quantitative assay end-points tracked over time, such as optical path length shift, optical volume, and multiple derived imaging parameters; d) reliable digital autofocus; e) robust long-term operation in the incubator environment; f) high throughput and walk-away capability; and finally g) data management suitable for single- and multi-user networks. We provide examples of HoloMonitor applications in label-free cell viability measurements and monitoring of cell cycle phase distribution.
Automatic pelvis segmentation from x-ray images of a mouse model
NASA Astrophysics Data System (ADS)
Al Okashi, Omar M.; Du, Hongbo; Al-Assam, Hisham
2017-05-01
The automatic detection and quantification of skeletal structures has a variety of applications for biological research. Accurate segmentation of the pelvis from X-ray images of mice in a high-throughput project such as the Mouse Genomes Project not only saves time and cost but also helps achieve an unbiased quantitative analysis within the phenotyping pipeline. This paper proposes an automatic solution for pelvis segmentation based on structural and orientation properties of the pelvis in X-ray images. The solution consists of three stages: pre-processing the image to extract the pelvis area, initial pelvis mask preparation, and final pelvis segmentation. Experimental results on a set of 100 X-ray images showed consistent performance of the algorithm. The automated solution overcomes the weaknesses of a manual annotation procedure, where intra- and inter-observer variations cannot be avoided.
Rodrigues, Pedro L.; Rodrigues, Nuno F.; Duque, Duarte; Granja, Sara; Correia-Pinto, Jorge; Vilaça, João L.
2014-01-01
Background. Regulating mechanisms of branching morphogenesis of fetal rat lung explants have been an essential tool for molecular research. This work presents a new methodology to accurately quantify the epithelium, outer contour, and peripheral airway buds of lung explants during cellular development from microscopic images. Methods. The outer contour was defined using an adaptive and multiscale threshold algorithm whose level was automatically calculated based on an entropy maximization criterion. The inner lung epithelium was defined by a clustering procedure that groups small image regions according to the minimum description length principle and local statistical properties. Finally, the number of peripheral buds was counted as the number of branch ends in a skeletonized image of the inner lung epithelium. Results. The time for lung branching morphometric analysis was reduced by 98% compared with the manual method. The best results were obtained in the first two days of cellular development, with smaller standard deviations. No significant differences were found between the automatic and manual results on any culture day. Conclusions. The proposed method introduces a series of advantages related to its intuitive use and accuracy, making the technique suitable for images with different lighting characteristics and allowing a reliable comparison between different researchers. PMID:25250057
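The bud-counting step, skeletonizing the epithelium and counting branch ends, can be sketched as follows, assuming branch ends are skeleton pixels with exactly one 8-connected skeleton neighbour:

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def count_peripheral_buds(epithelium_mask):
    """Skeletonize the inner-epithelium mask and count branch ends."""
    skel = skeletonize(epithelium_mask > 0)
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0                     # count neighbours, not the pixel
    neighbours = convolve(skel.astype(int), kernel, mode="constant")
    return int(np.sum(skel & (neighbours == 1)))
```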
Face sketch recognition based on edge enhancement via deep learning
NASA Astrophysics Data System (ADS)
Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong
2017-11-01
In this paper, we address the face sketch recognition problem. Firstly, we utilize the eigenface algorithm to convert an input image into a synthesized sketch face image. Subsequently, considering the low-level vision problems in the synthesized face sketch image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual quality. Specifically, we use a lightweight super-resolution structure to learn a residual mapping instead of directly mapping feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we adopt the LDA (linear discriminant analysis) algorithm to perform face sketch recognition on the synthesized face images both before and after super-resolution. Extensive experiments on the CUHK face sketch database (CUFS) demonstrate that the recognition rate of the SVM (support vector machine) algorithm improves from 65% to 69% and that of the LDA algorithm improves from 69% to 75%. What is more, the synthesized face images after super-resolution not only better describe image details such as hair, nose, and mouth, but also effectively improve recognition accuracy.
The algorithm of motion blur image restoration based on PSF half-blind estimation
NASA Astrophysics Data System (ADS)
Chen, Da-Ke; Lin, Zhe
2011-08-01
A novel algorithm for motion-blurred image restoration, based on half-blind PSF estimation with the Hough transform, is introduced on the basis of a full analysis of the TDICCD camera principle, addressing the problem that using a vertical uniform linear motion estimate as the initial PSF value in the IBD algorithm leads to restoration distortion. Firstly, the mathematical model of image degradation is established using prior information from multi-frame images, and two parameters with crucial influence on PSF estimation (motion blur length and angle) are then determined accordingly. Finally, the restored image is obtained through multiple iterations on the initial PSF estimate in the Fourier domain, the initial value having been obtained by the above method. Experimental results show that the proposed algorithm not only effectively solves the image distortion caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detail characteristics of the original image.
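A sketch of the two building blocks implied above: constructing a linear-motion PSF from the Hough-estimated blur length and angle, and a frequency-domain Wiener restoration standing in for the paper's iterative refinement (a circular shift by the PSF centre may remain in the output):

```python
import numpy as np

def motion_psf(length, angle_deg, size=64):
    """Linear-motion PSF from the two estimated parameters:
    blur length (pixels) and blur angle (degrees)."""
    psf = np.zeros((size, size))
    c, a = size // 2, np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, int(length) * 4):
        y = int(round(c + t * np.sin(a)))
        x = int(round(c + t * np.cos(a)))
        if 0 <= y < size and 0 <= x < size:
            psf[y, x] = 1.0
    return psf / psf.sum()

def wiener_deconv(blurred, psf, k=0.01):
    """Frequency-domain Wiener restoration with noise-to-signal ratio k."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H)**2 + k) * G
    return np.real(np.fft.ifft2(F))
```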
Murat, Sema; Kamburoğlu, Kıvanç; Kılıç, Cenk; Ozen, Tuncer; Gurbuz, Ayhan
2014-02-01
The present study compared the use of cone beam computerized tomography (CBCT) images and intra-oral radiographs in the placement of final implant drills in terms of nerve damage to cadaver mandibles. Twelve cadaver hemimandibles obtained from 6 cadavers were used. Right hemimandibles were imaged using peri-apical radiography and left hemimandibles using CBCT, and the images obtained were used in treatment planning for the placement of implant drills (22 for each modality, for a total of 44 final drills). Specimens were dissected, and the distances between the apex of the final implant drill and the inferior alveolar neurovascular bundle and incisive nerve were measured using a digital calliper. Nerves were assessed as damaged or not damaged, and the Chi-square test was used to compare nerve damage between modalities (P < 0.05). Nerve damage occurred with 7 final drills placed based on peri-apical radiography (31.8%) and 1 final drill placed using CBCT images (4.5%). The difference in nerve damage between imaging modalities was statistically significant (P = 0.023), with CBCT outperforming intraoral film in the placement of final implant drills ex vivo. In order to prevent nerve damage, CBCT is recommended as the principal imaging modality for pre-implant assessment.
Pohjonen, Hanna; Ross, Peeter; Blickman, Johan G; Kamman, Richard
2007-01-01
Emerging technologies are transforming the workflows in healthcare enterprises. Computing grids and handheld mobile/wireless devices are providing clinicians with enterprise-wide access to all patient data and analysis tools on a pervasive basis. In this paper, emerging technologies are presented that provide computing grids and streaming-based access to image and data management functions, and system architectures that enable pervasive computing on a cost-effective basis. Finally, the implications of such technologies are investigated regarding the positive impacts on clinical workflows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kudo, Takuya; Inoue, Tomoya; Kita, Takashi
2008-10-01
The self-assembly process of InAs/GaAs quantum dots has been investigated by analyzing reflection high-energy electron diffraction chevron images, which reflect the crystal facet structure surrounding the islands. The chevron image shows dramatic changes during island formation. From the temporal evolution of the chevron tail structure, the self-assembly process has been found to consist of four steps. The initial islands do not show distinct facet structures. Then, the island surface is covered by high-index facets, and this is followed by the formation of stable low-index facets. Finally, a flow of In atoms from the islands occurs, which contributes to flattening the wetting layer. Furthermore, we have investigated the island shape evolution during growth of the GaAs capping layer using the same real-time analysis technique.
High resolution earth observation from geostationary orbit by optical aperture synthesis
NASA Astrophysics Data System (ADS)
Mesrine, M.; Thomas, E.; Garin, S.; Blanc, P.; Alis, C.; Cassaing, F.; Laubier, D.
2017-11-01
In this paper, we describe Optical Aperture Synthesis (OAS) imaging instrument concepts studied by Alcatel Alenia Space under a CNES R&T contract in terms of technical feasibility. First, the methodology for selecting the aperture configuration is proposed, based on the definition and quantification of image quality criteria adapted to an OAS instrument for direct imaging of extended objects. The following section presents, for each interferometer type (Michelson and Fizeau), the corresponding optical configurations compatible with a large field of view from geostationary orbit. These optical concepts take into account the constraints imposed by the foreseen resolution and the implementation of the co-phasing functions. The fourth section is dedicated to the analysis of the co-phasing methodologies, from configuration deployment to fine stabilization during observation. Finally, we present a trade-off analysis allowing selection of the concept with respect to the mission specification and the constraints related to instrument accommodation under the launcher shroud and in-orbit deployment.
Paparelli, Laura; Corthout, Nikky; Pavie, Benjamin; Annaert, Wim; Munck, Sebastian
2016-01-01
The spatial distribution of proteins within the cell affects their capability to interact with other molecules and directly influences cellular processes and signaling. At the plasma membrane, multiple factors drive protein compartmentalization into specialized functional domains, leading to the formation of clusters in which intermolecular interactions are facilitated. Therefore, quantifying protein distributions is a necessity for understanding their regulation and function. The recent advent of super-resolution microscopy has opened up the possibility of imaging protein distributions at the nanometer scale. In parallel, new spatial analysis methods have been developed to quantify distribution patterns in super-resolution images. In this chapter, we provide an overview of super-resolution microscopy and summarize the factors influencing protein arrangements on the plasma membrane. Finally, we highlight methods for analyzing clusterization of plasma membrane proteins, including examples of their applications.
An excitation wavelength-scanning spectral imaging system for preclinical imaging
NASA Astrophysics Data System (ADS)
Leavesley, Silas; Jiang, Yanan; Patsekin, Valery; Rajwa, Bartek; Robinson, J. Paul
2008-02-01
Small-animal fluorescence imaging is a rapidly growing field, driven by applications in cancer detection and pharmaceutical therapies. However, the practical use of this imaging technology is limited by image-quality issues related to autofluorescence background from animal tissues, as well as attenuation of the fluorescence signal due to scatter and absorption. To combat these problems, spectral imaging and analysis techniques are being employed to separate the fluorescence signal from background autofluorescence. To date, these technologies have focused on detecting the fluorescence emission spectrum at a fixed excitation wavelength. We present an alternative to this technique: an imaging spectrometer that detects the fluorescence excitation spectrum at a fixed emission wavelength. The advantages of this approach include increased available information for discrimination of fluorescent dyes, decreased optical radiation dose to the animal, and the ability to scan a continuous wavelength range instead of sampling discrete wavelengths. This excitation-scanning imager utilizes an acousto-optic tunable filter (AOTF), with supporting optics, to scan the excitation spectrum. Advanced image acquisition and analysis software has also been developed for classification and unmixing of the spectral image sets. Filtering has been implemented in a single-pass configuration with a bandwidth (full width at half maximum) of 16 nm at a 550 nm central diffracted wavelength. We have characterized AOTF filtering over a wide range of incident light angles, much wider than has been previously reported in the literature, and we show how changes in incident light angle can be used to attenuate AOTF side lobes and alter bandwidth. A new parameter, the in-band to out-of-band ratio, was defined to assess the quality of the filtered excitation light. Additional parameters were measured to allow objective characterization of the AOTF and the imager as a whole; this is necessary for comparing the excitation-scanning imager to other spectral and fluorescence imaging technologies. The effectiveness of the hyperspectral imager was tested by imaging and analysis of mice with injected fluorescent dyes. Finally, a discussion of the optimization of spectral fluorescence imagers is given, relating the effects of filter quality on the fluorescence images collected and on the analysis outcome.
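The unmixing step can be illustrated with per-pixel non-negative least squares against known excitation spectra; a sketch under that assumption (the imager's actual analysis software is not described at this level in the abstract):

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixelwise(cube, endmembers):
    """Per-pixel non-negative least-squares unmixing of a spectral image
    cube (H, W, B) against known excitation spectra (B, E), e.g. dye and
    autofluorescence signatures."""
    h, w, b = cube.shape
    e = endmembers.shape[1]
    abundances = np.zeros((h * w, e))
    for i, spectrum in enumerate(cube.reshape(-1, b)):
        abundances[i], _ = nnls(endmembers, spectrum)
    return abundances.reshape(h, w, e)
```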
Analysing magnetism using scanning SQUID microscopy.
Reith, P; Renshaw Wang, X; Hilgenkamp, H
2017-12-01
Scanning superconducting quantum interference device microscopy (SSM) is a scanning probe technique that images local magnetic flux, which allows for mapping of magnetic fields with high field and spatial accuracy. Many studies involving SSM have been published in the last few decades, using SSM to make qualitative statements about magnetism. However, quantitative analysis using SSM has received less attention. In this work, we discuss several aspects of interpreting SSM images and methods to improve quantitative analysis. First, we analyse the spatial resolution and how it depends on several factors. Second, we discuss the analysis of SSM scans and the information obtained from the SSM data. Using simulations, we show how signals evolve as a function of changing scan height, SQUID loop size, magnetization strength, and orientation. We also investigated 2-dimensional autocorrelation analysis to extract information about the size, shape, and symmetry of magnetic features. Finally, we provide an outlook on possible future applications and improvements.
Analysing magnetism using scanning SQUID microscopy
NASA Astrophysics Data System (ADS)
Reith, P.; Renshaw Wang, X.; Hilgenkamp, H.
2017-12-01
Scanning superconducting quantum interference device microscopy (SSM) is a scanning probe technique that images local magnetic flux, which allows for mapping of magnetic fields with high field and spatial accuracy. Many studies involving SSM have been published in the last few decades, using SSM to make qualitative statements about magnetism. However, quantitative analysis using SSM has received less attention. In this work, we discuss several aspects of interpreting SSM images and methods to improve quantitative analysis. First, we analyse the spatial resolution and how it depends on several factors. Second, we discuss the analysis of SSM scans and the information obtained from the SSM data. Using simulations, we show how signals evolve as a function of changing scan height, SQUID loop size, magnetization strength, and orientation. We also investigated 2-dimensional autocorrelation analysis to extract information about the size, shape, and symmetry of magnetic features. Finally, we provide an outlook on possible future applications and improvements.
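The 2-dimensional autocorrelation analysis mentioned in both records can be computed efficiently via the Wiener-Khinchin theorem; a minimal sketch:

```python
import numpy as np

def autocorrelation_2d(flux):
    """2D autocorrelation of an SSM flux map; the shape and symmetry of
    the central peak reflect the size, shape, and symmetry of the
    underlying magnetic features."""
    f = flux - flux.mean()
    power = np.abs(np.fft.fft2(f))**2      # Wiener-Khinchin: ACF = IFFT(|FFT|^2)
    acf = np.real(np.fft.ifft2(power))
    acf = np.fft.fftshift(acf)             # centre the zero-lag peak
    return acf / acf.max()
```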
NASA Astrophysics Data System (ADS)
Zhang, Rui; Xin, Binjie
2016-08-01
Yarn density is considered a fundamental structural parameter for the quality evaluation of woven fabrics. The conventional yarn density measurement method is based on one-side analysis. In this paper, a novel density measurement method is developed for yarn-dyed woven fabrics based on a dual-side fusion technique. Firstly, a laboratory dual-side imaging system is established to acquire both face-side and back-side images of the woven fabric, and the affine transform is used for alignment and fusion of the dual-side images. Then, the color images of the woven fabrics are transformed from the RGB to the CIE-Lab color space, and the intensity information of the image, extracted from the L component, is used for texture fusion and analysis. Subsequently, three image fusion methods are developed and utilized to merge the dual-side images: the weighted average method, the wavelet transform method, and the Laplacian pyramid blending method. The fusion efficacy of each method is evaluated by three evaluation indicators, and the best is selected for reconstruction of the complete fabric texture. Finally, the yarn density of the fused image is measured based on the fast Fourier transform, and the yarn alignment image can be reconstructed using the inverse fast Fourier transform. Our experimental results show that the accuracy of density measurement using the proposed method is close to 99.44% compared with the traditional method, and the robustness of the proposed method is better than that of conventional analysis methods.
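The density readout from the fused image rests on locating the dominant spatial frequency; a sketch for a 1D intensity profile taken perpendicular to the yarns (profile extraction and pixel calibration are assumed):

```python
import numpy as np

def yarn_density(profile, pixels_per_cm):
    """Estimate yarns per centimetre from a fused-image intensity profile:
    the dominant FFT peak gives the yarn repeat frequency."""
    p = profile - profile.mean()
    spectrum = np.abs(np.fft.rfft(p))
    freqs = np.fft.rfftfreq(len(p), d=1.0)   # cycles per pixel
    peak = np.argmax(spectrum[1:]) + 1       # skip the DC bin
    return freqs[peak] * pixels_per_cm       # yarns per cm
```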
Region of interest extraction based on multiscale visual saliency analysis for remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan
2015-01-01
Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually prior knowledge-based and depend on classification, segmentation, and a global searching solution, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which approximates the visual perception of the human eye. We first extract the intensity, orientation, and color features of the image using different methods: the visual attention mechanism is used to extract the intensity feature using a difference of Gaussians template; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method is proposed that addresses the different contributions of each feature map, calculating the weight of each feature image for combining them into the final saliency map. Qualitative and quantitative experimental results for the MVS model, compared with those of other models, show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.
NASA Astrophysics Data System (ADS)
Aghaei, A.
2017-12-01
Digital imaging and modeling of rocks, and subsequent simulation of physical phenomena in digitally constructed rock models, are becoming an integral part of core analysis workflows. One of the inherent limitations of image-based analysis, at any given scale, is image resolution. This limitation becomes more evident when the rock has multiple scales of porosity, as in carbonates and tight sandstones. Multi-scale imaging and construction of hybrid models that encompass images acquired at multiple scales and resolutions have been proposed as a solution to this problem. In this study, we investigate the effect of image resolution and unresolved porosity on petrophysical and two-phase flow properties calculated from images. A helical X-ray micro-CT scanner with a high cone-angle is used to acquire digital rock images that are free of geometric distortion. To remove subjectivity from the analyses, a semi-automated image processing technique is used to process and segment the acquired data into multiple phases. Direct and pore-network-based models are used to simulate physical phenomena and obtain absolute permeability, formation factor, and two-phase flow properties such as relative permeability and capillary pressure. The effect of image resolution on each property is investigated. Finally, a hybrid network model incorporating images at multiple resolutions is built and used for simulations; the results from the hybrid model are compared against results from the model built at the highest resolution and from laboratory tests.
An efficient method for automatic morphological abnormality detection from human sperm images.
Ghasemian, Fatemeh; Mirroshandel, Seyed Abolghasem; Monji-Azad, Sara; Azarnia, Mahnaz; Zahiri, Ziba
2015-12-01
Sperm morphology analysis (SMA) is an important factor in the diagnosis of human male infertility. This study presents an automatic algorithm for sperm morphology analysis (to detect malformation) using images of human sperm cells. The SMA method detects and analyzes different parts of the human sperm. First, SMA removes image noise and substantially enhances image contrast. It then recognizes the different parts of the sperm (e.g., head, tail) and analyzes the size and shape of each part. Finally, the algorithm classifies each sperm as normal or abnormal. Malformations in the head, midpiece, and tail of a sperm can be detected by the SMA method. In contrast to other similar methods, the SMA method can work with low-resolution, non-stained images. Furthermore, an image collection created for the SMA is also described in this study. This benchmark consists of 1457 sperm images from 235 patients and is known as the human sperm morphology analysis dataset (HSMA-DS). The proposed algorithm was tested on HSMA-DS. The experimental results show the high ability of SMA to detect morphological deformities from sperm images; in this study, the SMA algorithm achieved above 90% accuracy in the sperm abnormality detection task. Another advantage of the proposed method is its low computation time (less than 9 s), so the expert can quickly decide whether to choose the analyzed sperm or select another one. Automatic and fast analysis of human sperm morphology can be useful during intracytoplasmic sperm injection, helping embryologists to select the best sperm in real time. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
New Trends of Emerging Technologies in Digital Pathology.
Bueno, Gloria; Fernández-Carrobles, M Milagro; Deniz, Oscar; García-Rojo, Marcial
2016-01-01
The future paradigm of pathology will be digital. Instead of conventional microscopy, a pathologist will perform a diagnosis through interacting with images on computer screens and performing quantitative analysis. The fourth generation of virtual slide telepathology systems, so-called virtual microscopy and whole-slide imaging (WSI), has allowed for the storage and fast dissemination of image data in pathology and other biomedical areas. These novel digital imaging modalities encompass high-resolution scanning of tissue slides and derived technologies, including automatic digitization and computational processing of whole microscopic slides. Moreover, automated image analysis with WSI can extract specific diagnostic features of diseases and quantify individual components of these features to support diagnoses and provide informative clinical measures of disease. Therefore, the challenge is to apply information technology and image analysis methods to exploit the new and emerging digital pathology technologies effectively in order to process and model all the data and information contained in WSI. The final objective is to support the complex workflow from specimen receipt to anatomic pathology report transmission, that is, to improve diagnosis both in terms of pathologists' efficiency and with new information. This article reviews the main concerns about and novel methods of digital pathology discussed at the latest workshop in the field carried out within the European project AIDPATH (Academia and Industry Collaboration for Digital Pathology). © 2016 S. Karger AG, Basel.
Spatial/Spectral Identification of Endmembers from AVIRIS Data using Mathematical Morphology
NASA Technical Reports Server (NTRS)
Plaza, Antonio; Martinez, Pablo; Gualtieri, J. Anthony; Perez, Rosa M.
2001-01-01
During the last several years, a number of airborne and satellite hyperspectral sensors have been developed or improved for remote sensing applications. Imaging spectrometry allows the detection of materials, objects and regions in a particular scene with a high degree of accuracy. Hyperspectral data typically consist of hundreds of thousands of spectra, so the analysis of this information is a key issue. Mathematical morphology theory is a widely used nonlinear technique for image analysis and pattern recognition. Although it is especially well suited to segment binary or grayscale images with irregular and complex shapes, its application in the classification/segmentation of multispectral or hyperspectral images has been quite rare. In this paper, we discuss a new completely automated methodology to find endmembers in the hyperspectral data cube using mathematical morphology. The extension of classic morphology to the hyperspectral domain allows us to integrate spectral and spatial information in the analysis process. In Section 3, some basic concepts about mathematical morphology and the technical details of our algorithm are provided. In Section 4, the accuracy of the proposed method is tested by its application to real hyperspectral data obtained with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Some details about these data and reference results, obtained by well-known endmember extraction techniques, are provided in Section 2. Finally, in Section 5 we present the main conclusions at which we have arrived.
Kinoshita, Manabu; Sakai, Mio; Arita, Hideyuki; Shofuda, Tomoko; Chiba, Yasuyoshi; Kagawa, Naoki; Watanabe, Yoshiyuki; Hashimoto, Naoya; Fujimoto, Yasunori; Yoshimine, Toshiki; Nakanishi, Katsuyuki; Kanemura, Yonehiro
2016-01-01
Reports have suggested that tumor textures presented on T2-weighted images correlate with the genetic status of glioma. Therefore, development of an image analysis framework capable of objective and high-throughput image texture analysis for large-scale image data collection is needed. The current study aimed to address the development of such a framework by introducing two novel parameters for image textures on T2-weighted images, i.e., Shannon entropy and Prewitt filtering. Data from twenty-two WHO grade 2 and 28 grade 3 glioma patients whose pre-surgical MRI and IDH1 mutation status were available were collected. Heterogeneous lesions showed statistically higher Shannon entropy than homogeneous lesions (p = 0.006), and ROC curve analysis showed that Shannon entropy on T2WI was a reliable indicator for discrimination of homogeneous and heterogeneous lesions (p = 0.015, AUC = 0.73). Lesions with well-defined borders exhibited statistically higher Edge mean and Edge median values under Prewitt filtering than those with vague lesion borders (p = 0.0003 and p = 0.0005, respectively). ROC curve analysis also showed that both Edge mean and median values were promising indicators for discrimination of lesions with vague and well-defined borders, and both performed in a comparable manner (p = 0.0002, AUC = 0.81 and p < 0.0001, AUC = 0.83, respectively). Finally, IDH1 wild-type gliomas showed statistically lower Shannon entropy on T2WI than IDH1-mutated gliomas (p = 0.007), but no difference was observed between IDH1 wild-type and mutated gliomas in Edge median values under Prewitt filtering. The current study introduced two image metrics that reflect lesion texture described on T2WI. These two metrics were validated against the readings of a neuro-radiologist who was blinded to the results. This observation will facilitate further use of this technique in future large-scale image analysis of glioma.
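Both metrics are simple to compute from a tumor ROI; a minimal sketch is given below. It assumes the ROI is supplied as a 2-D intensity array; the function names and bin count are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import prewitt

def shannon_entropy(roi, bins=256):
    """Shannon entropy of the intensity histogram inside a tumor ROI."""
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                     # drop empty bins (log of zero undefined)
    return -np.sum(p * np.log2(p))

def prewitt_edge_stats(roi):
    """Mean and median gradient magnitude after Prewitt filtering,
    analogous to the Edge mean / Edge median parameters."""
    roi = roi.astype(float)
    gx, gy = prewitt(roi, axis=1), prewitt(roi, axis=0)
    mag = np.hypot(gx, gy)
    return mag.mean(), np.median(mag)
```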
Retina vascular network recognition
NASA Astrophysics Data System (ADS)
Tascini, Guido; Passerini, Giorgio; Puliti, Paolo; Zingaretti, Primo
1993-09-01
The analysis of morphological and structural modifications of the retinal vascular network is an interesting investigation method in the study of diabetes and hypertension. Normally this analysis is carried out by qualitative evaluations, according to standardized criteria, though medical research attaches great importance to quantitative analysis of vessel color, shape and dimensions. The paper describes a system which automatically segments and recognizes the ocular fundus circulation and microcirculation network, and extracts a set of features related to morphometric aspects of vessels. For this class of images, classical segmentation methods appear weak. We propose a computer vision system in which the segmentation and recognition phases are strictly connected. The system is hierarchically organized in four modules. First, the Image Enhancement Module (IEM) applies a set of custom image enhancements to remove blur and to prepare data for the subsequent segmentation and recognition processes. Second, the Papilla Border Analysis Module (PBAM) automatically recognizes the number, position and local diameter of blood vessels departing from the optical papilla. Then the Vessel Tracking Module (VTM) analyses vessels by comparing the results of body and edge tracking, and detects branches and crossings. Finally, the Feature Extraction Module evaluates the PBAM and VTM output data and extracts some numerical indexes. The algorithms used appear robust and have been successfully tested on various ocular fundus images.
A Novel Hyperspectral Microscopic Imaging System for Evaluating Fresh Degree of Pork.
Xu, Yi; Chen, Quansheng; Liu, Yan; Sun, Xin; Huang, Qiping; Ouyang, Qin; Zhao, Jiewen
2018-04-01
This study proposed a rapid microscopic examination method for pork freshness evaluation using a self-assembled hyperspectral microscopic imaging (HMI) system together with feature extraction algorithms and pattern recognition methods. Pork samples were stored for 0 to 5 days, and the freshness of the samples was divided into three levels determined by total volatile basic nitrogen (TVB-N) content. Hyperspectral microscopic images of the samples were acquired by the HMI system and processed by the following steps for further analysis. First, characteristic hyperspectral microscopic images were extracted using principal component analysis (PCA), and texture features were then selected based on the gray-level co-occurrence matrix (GLCM). Next, the dimensionality of the feature data was reduced by Fisher discriminant analysis (FDA) for building the classification model. Finally, compared with the linear discriminant analysis (LDA) and support vector machine (SVM) models, the back-propagation artificial neural network (BP-ANN) model obtained the best freshness classification, with a 100% accuracy rating on the extracted data. The results confirm that the fabricated HMI system combined with multivariate algorithms is able to evaluate the freshness of pork accurately at the microscopic level, which plays an important role in animal food quality control.
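The PCA-plus-GLCM feature stage can be sketched as follows. This is a minimal sketch, assuming scikit-image ≥ 0.19 (for `graycomatrix`/`graycoprops`) and a cube laid out as (rows, cols, bands); the quantization level, distances, angles, and property list are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.feature import graycomatrix, graycoprops

def hmi_features(cube, levels=64):
    """Reduce a (rows, cols, bands) hyperspectral cube to its first
    principal-component image, then extract GLCM texture features.
    A simplified stand-in for the published PCA + GLCM stage."""
    h, w, b = cube.shape
    pc1 = PCA(n_components=1).fit_transform(cube.reshape(-1, b)).reshape(h, w)
    # Quantize the PC1 image to `levels` gray levels for the GLCM.
    q = np.digitize(pc1, np.linspace(pc1.min(), pc1.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 2], levels=levels,
                        symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```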
Research on hyperspectral dynamic scene and image sequence simulation
NASA Astrophysics Data System (ADS)
Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei
2016-10-01
This paper presents a simulation method for hyperspectral dynamic scenes and image sequences for hyperspectral equipment evaluation and target detection algorithms. Because of its high spectral resolution, strong band continuity, anti-interference and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in many areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment at lower development cost and with a shorter development period. Meanwhile, visual simulation can produce a large amount of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a generation method for digital scenes. By building multiple sensor models for different bands and different bandwidths, hyperspectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, have been simulated. The final dynamic scenes are highly realistic and run in real time, at frame rates up to 100 Hz. By saving all the scene gray data from the same viewpoint, an image sequence is obtained. The analysis results show that, whether in the infrared bands or the visible band, the grayscale variations of the simulated hyperspectral images are consistent with the theoretical analysis results.
NASA Astrophysics Data System (ADS)
Feng, Judy J.; Ip, Horace H.; Cheng, Shuk H.
2004-05-01
Many grey-level thresholding methods based on histograms or other image statistics, such as maximum entropy, have been proposed in the past. However, most methods based on statistical analysis of the images take little account of the morphology of the objects of interest, which can provide important cues for finding the optimum threshold, especially for organisms with distinctive textural morphologies such as vasculature or neural networks in medical imaging. In this paper, we propose a novel method for thresholding fluorescent vasculature image series recorded with a confocal scanning laser microscope. After extracting the basic orientation of the vessel slice inside each sub-region partitioned from the images, we analyze the intensity profiles perpendicular to the vessel orientation to obtain a reasonable initial threshold for each region. The threshold values of neighbouring regions, both in the x-y plane and in the optical direction, are then referenced to obtain the final threshold of each region, which makes the whole stack of images more continuous. The resulting images are characterized by suppression of both noise and non-interest tissues conglutinated to vessels, while improving vessel connectivity and edge definition. The value of the method for thresholding fluorescence images of biological objects is demonstrated by a comparison of the results of 3D vascular reconstruction.
Hemorrhage detection in MRI brain images using images features
NASA Astrophysics Data System (ADS)
Moraru, Luminita; Moldovanu, Simona; Bibicu, Dorin; Stratulat (Visan), Mirela
2013-11-01
Abnormalities appear frequently on Magnetic Resonance Images (MRI) of the brain in elderly patients presenting either stroke or cognitive impairment. Detection of brain hemorrhage lesions in MRI is an important but very time-consuming task. This research aims to develop a method to extract brain tissue features from T2-weighted MR images of the brain using a selection of the most valuable texture features in order to discriminate between normal and affected areas of the brain. Due to the textural similarity between normal and affected areas in brain MR images, these operations are very challenging. A trauma may cause microstructural changes which are not necessarily perceptible by visual inspection but can be detected by texture analysis. The proposed analysis is developed in five steps: i) in the pre-processing step, the de-noising operation is performed using Daubechies wavelets; ii) the original images are transformed into image features using first-order descriptors; iii) the regions of interest (ROIs) are cropped from the image features following the axial symmetry with respect to the mid-sagittal plane; iv) the variation in the measured features is quantified using two descriptors of the co-occurrence matrix, namely energy and homogeneity; v) finally, the significance of the image features is analyzed using the t-test, with p-values computed for each pair of features in order to measure their efficacy.
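Steps iv) and v) translate directly into a few lines of code; the sketch below computes co-occurrence energy and homogeneity per ROI and compares normal against affected ROIs with a two-sample t-test. It is a minimal sketch assuming 8-bit ROIs and scikit-image ≥ 0.19; the distance/angle choices and function names are illustrative.

```python
import numpy as np
from scipy import stats
from skimage.feature import graycomatrix, graycoprops

def roi_energy_homogeneity(roi_uint8):
    """Energy and homogeneity of the co-occurrence matrix for one ROI,
    as in step iv) of the pipeline (illustrative GLCM parameters)."""
    glcm = graycomatrix(roi_uint8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return (graycoprops(glcm, "energy")[0, 0],
            graycoprops(glcm, "homogeneity")[0, 0])

def feature_significance(normal_rois, affected_rois):
    """Step v): two-sample t-test per feature; returns two p-values."""
    f_norm = np.array([roi_energy_homogeneity(r) for r in normal_rois])
    f_aff = np.array([roi_energy_homogeneity(r) for r in affected_rois])
    return [stats.ttest_ind(f_norm[:, k], f_aff[:, k]).pvalue for k in (0, 1)]
```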
Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert
2009-03-10
In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, for the example of a variable three-step algorithm, a theoretical analysis on statistical errors, digitalization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out. In a second step the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.
Stripe nonuniformity correction for infrared imaging system based on single image optimization
NASA Astrophysics Data System (ADS)
Hua, Weiping; Zhao, Jufeng; Cui, Guangmang; Gong, Xiaoli; Ge, Peng; Zhang, Jiang; Xu, Zhihai
2018-06-01
Infrared imaging is often disturbed by stripe nonuniformity noise. Scene-based correction methods can effectively reduce the impact of stripe noise. In this paper, a stripe nonuniformity correction method based on a differential constraint is proposed. First, the gray-level distribution of the stripe nonuniformity is analyzed, and a penalty function is constructed from the difference between the horizontal and vertical gradients. With the weight function, the penalty function is optimized to obtain the corrected image. Compared with other single-frame approaches, experiments show that the proposed method performs better in both subjective and objective analysis, and does less damage to edges and details. Meanwhile, the proposed method runs faster. We also discuss the differences between the proposed idea and multi-frame methods. Finally, the method has been successfully applied in a hardware system.
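For orientation, the sketch below shows a much-simplified scene-based destriping: each column's stripe bias is modeled as a constant offset estimated against a horizontally smoothed copy of the image. This is a stand-in under stated assumptions (column-oriented stripes, additive offsets), not the paper's weighted gradient-penalty optimization.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def destripe_columns(img, smooth=31):
    """Simplified scene-based destriping for column-oriented stripe noise.
    Assumes each column carries an additive constant offset; the offset is
    estimated as the median deviation from a horizontally smoothed copy."""
    img = img.astype(float)
    ref = uniform_filter1d(img, size=smooth, axis=1)   # horizontal smoothing
    offsets = np.median(img - ref, axis=0)             # per-column stripe bias
    return img - offsets[None, :]
```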
Preparing Colorful Astronomical Images III: Cosmetic Cleaning
NASA Astrophysics Data System (ADS)
Frattare, L. M.; Levay, Z. G.
2003-12-01
We present cosmetic cleaning techniques for use with mainstream graphics software (Adobe Photoshop) to produce presentation-quality images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope when producing photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to discuss the treatment of various detector-attributed artifacts such as cosmic rays, chip seams, gaps, optical ghosts, diffraction spikes and the like. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to final presentation images. Other pixel-to-pixel applications such as filter smoothing and global noise reduction will be discussed.
Diffraction imaging for in situ characterization of double-crystal X-ray monochromators
Stoupin, Stanislav; Liu, Zunping; Heald, Steve M.; ...
2015-10-30
In this paper, imaging of the Bragg-reflected X-ray beam is proposed and validated as an in situ method for characterization of the performance of double-crystal monochromators under the heat load of intense synchrotron radiation. A sequence of images is collected at different angular positions on the reflectivity curve of the second crystal and analyzed. The method provides rapid evaluation of the wavefront of the exit beam, which relates to local misorientation of the crystal planes along the beam footprint on the thermally distorted first crystal. The measured misorientation can be directly compared with the results of finite element analysis. Finally, the imaging method offers an additional insight into the local intrinsic crystal quality over the footprint of the incident X-ray beam.
Chaotic Image Encryption Algorithm Based on Bit Permutation and Dynamic DNA Encoding.
Zhang, Xuncai; Han, Feng; Niu, Ying
2017-01-01
Exploiting the fact that chaos is sensitive to initial conditions and pseudorandom, combined with the inherent and unique information-processing ability of the spatial configurations of the DNA molecule, a novel image encryption algorithm based on bit permutation and dynamic DNA encoding is proposed here. The algorithm first uses Keccak to calculate the hash value for a given DNA sequence as the initial value of a chaotic map; second, it uses a chaotic sequence to scramble the image pixel locations, and a butterfly network is used to implement the bit permutation. Then, the image is dynamically coded into a DNA matrix, and an algebraic operation is performed with the DNA sequence to realize the substitution of the pixels, which further improves the security of the encryption. Finally, the confusion and diffusion properties of the algorithm are further enhanced by the operation on the DNA sequence and the ciphertext feedback. The results of the experiment and security analysis show that the algorithm not only has a large key space and strong sensitivity to the key but can also effectively resist attacks such as statistical and exhaustive analysis.
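The permutation-then-diffusion skeleton shared by such schemes can be sketched in a few lines. The code below scrambles pixel positions with a logistic-map-driven permutation and diffuses values by XOR with a chaos-derived keystream; it deliberately omits the paper's Keccak hashing, butterfly network, and DNA coding, assumes an 8-bit grayscale image, and is for illustration only, not a secure cipher.

```python
import numpy as np

def logistic_sequence(x0, n, mu=3.99):
    """Logistic-map chaotic sequence; highly sensitive to the seed x0."""
    xs = np.empty(n)
    for i in range(n):
        x0 = mu * x0 * (1.0 - x0)
        xs[i] = x0
    return xs

def encrypt(img, x0=0.3141592653589793):
    """Permutation (chaos-sorted pixel order) then diffusion (XOR keystream)."""
    flat = img.ravel()                                    # uint8 assumed
    perm = np.argsort(logistic_sequence(x0, flat.size))   # permutation step
    key = (logistic_sequence(x0 / 2, flat.size) * 256).astype(np.uint8)
    cipher = flat[perm] ^ key                             # diffusion step
    return cipher.reshape(img.shape), perm, key

def decrypt(cipher, perm, key):
    flat = cipher.ravel() ^ key      # undo diffusion
    out = np.empty_like(flat)
    out[perm] = flat                 # invert the permutation
    return out.reshape(cipher.shape)
```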
Meeting Report: Tissue-based Image Analysis.
Saravanan, Chandra; Schumacher, Vanessa; Brown, Danielle; Dunstan, Robert; Galarneau, Jean-Rene; Odin, Marielle; Mishra, Sasmita
2017-10-01
Quantitative image analysis (IA) is a rapidly evolving area of digital pathology. Although not a new concept, the quantification of histological features on photomicrographs used to be cumbersome, resource-intensive, and limited to specialists and specialized laboratories. Recent technological advances like highly efficient automated whole slide digitizer (scanner) systems, innovative IA platforms, and the emergence of pathologist-friendly image annotation and analysis systems mean that quantification of features on histological digital images will become increasingly prominent in pathologists' daily professional lives. The added value of quantitative IA in pathology includes confirmation of equivocal findings noted by a pathologist, increasing the sensitivity of feature detection, quantification of signal intensity, and improving efficiency. There is no denying that quantitative IA is part of the future of pathology; however, there are also several potential pitfalls when trying to estimate volumetric features from limited 2-dimensional sections. This continuing education session on quantitative IA offered a broad overview of the field; a hands-on toxicologic pathologist experience with IA principles, tools, and workflows; a discussion on how to apply basic stereology principles in order to minimize bias in IA; and finally, a reflection on the future of IA in the toxicologic pathology field.
Automatic Identification and Quantification of Extra-Well Fluorescence in Microarray Images.
Rivera, Robert; Wang, Jie; Yu, Xiaobo; Demirkan, Gokhan; Hopper, Marika; Bian, Xiaofang; Tahsin, Tasnia; Magee, D Mitchell; Qiu, Ji; LaBaer, Joshua; Wallstrom, Garrick
2017-11-03
In recent studies involving NAPPA microarrays, extra-well fluorescence is used as a key measure for identifying disease biomarkers because there is evidence to support that it is better correlated with strong antibody responses than statistical analysis involving intraspot intensity. Because this feature is not well quantified by traditional image analysis software, identification and quantification of extra-well fluorescence is performed manually, which is both time-consuming and highly susceptible to variation between raters. A system that could automate this task efficiently and effectively would greatly improve the process of data acquisition in microarray studies, thereby accelerating the discovery of disease biomarkers. In this study, we experimented with different machine learning methods, as well as novel heuristics, for identifying spots exhibiting extra-well fluorescence (rings) in microarray images and assigning each ring a grade of 1-5 based on its intensity and morphology. The sensitivity of our final system for identifying rings was found to be 72% at 99% specificity and 98% at 92% specificity. Our system performs this task significantly faster than a human, while maintaining high performance, and therefore represents a valuable tool for microarray image analysis.
Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel
2017-01-01
We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very compute-demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. The cooperative execution using the CPUs and the Phi available in each node with smart task assignment strategies resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725
Brain tumor classification using AFM in combination with data mining techniques.
Huml, Marlene; Silye, René; Zauner, Gerald; Hutterer, Stephan; Schilcher, Kurt
2013-01-01
Although the classification of astrocytic tumors is standardized by the WHO grading system, which is mainly based on microscopy-derived histomorphological features, there is great interobserver variability. The main causes are thought to be the complexity of morphological details varying from tumor to tumor and from patient to patient, variations in technical histopathological procedures such as staining protocols, and finally the individual experience of the diagnosing pathologist. Thus, to raise astrocytoma grading to a more objective standard, this paper proposes a methodology based on atomic force microscopy (AFM) derived images made from histopathological samples in combination with data mining techniques. By comparing AFM images with corresponding light microscopy images of the same area, the progressive formation of cavities due to cell necrosis was identified as a typical morphological marker for a computer-assisted analysis. Using genetic programming as a tool for feature analysis, the best model achieved 94.74% classification accuracy in distinguishing grade II tumors from grade IV ones. With modern image analysis techniques, AFM may thus become an important tool in astrocytic tumor diagnosis. In this way, patients suffering from grade II tumors, which carry a lower risk of malignant transformation, can be identified unambiguously and would benefit from early adjuvant therapies.
Shoepe, Todd C; Cavedon, Dana K; Derian, Joseph M; Levy, Celine S; Morales, Amy
2015-01-01
Anatomical education is a dynamic field where developments in the implementation of constructive, situated-learning show promise in improving student achievement. The purpose of this study was to examine the effectiveness of an individualized, technology heavy project in promoting student performance in a combined anatomy and physiology laboratory course. Mixed-methods research was used to compare two cohorts of anatomy laboratories separated by the adoption of a new laboratory atlas project, which were defined as preceding (PRE) and following the adoption of the Anatomical Teaching and Learning Assessment Study (ATLAS; POST). The ATLAS project required the creation of a student-generated, photographic atlas via acquisition of specimen images taken with tablet technology and digital microscope cameras throughout the semester. Images were transferred to laptops, digitally labeled and photo edited weekly, and compiled into a digital book using Internet publishing freeware for final project submission. An analysis of covariance confirmed that student final examination scores were improved (P < 0.05) following the implementation of the laboratory atlas project (PRE, n = 75; POST, n = 90; means ± SE; 74.9 ± 0.9 versus 78.1 ± 0.8, respectively) after controlling for cumulative student grade point average. Analysis of questionnaires collected (n = 68) from the post group suggested students identified with atlas objectives, appreciated the comprehensive value in final examination preparation, and the constructionism involved, but recommended alterations in assignment logistics and the format of the final version. Constructionist, comprehensive term-projects utilizing student-preferred technologies could be used to improve performance toward student learning outcomes. © 2014 American Association of Anatomists.
Gianfrancesco, Anthony G; Tselev, Alexander; Baddorf, Arthur P; Kalinin, Sergei V; Vasudevan, Rama K
2015-11-13
The controlled growth of epitaxial films of complex oxides requires an atomistic understanding of key parameters determining final film morphology, such as termination dependence on adatom diffusion, and height of the Ehrlich-Schwoebel (ES) barrier. Here, through an in situ scanning tunneling microscopy study of mixed-terminated La5/8Ca3/8MnO3 (LCMO) films, we image adatoms and observe pile-up at island edges. Image analysis allows determination of the population of adatoms at the edge of islands and fractions on A-site and B-site terminations. A simple Monte-Carlo model, simulating the random walk of adatoms on a sinusoidal potential landscape using Boltzmann statistics is used to reproduce the experimental data, and provides an estimate of the ES barrier as ∼0.18 ± 0.04 eV at T = 1023 K, similar to those of metal adatoms on metallic surfaces. These studies highlight the utility of in situ imaging, in combination with basic Monte-Carlo methods, in elucidating the factors which control the final film growth in complex oxides.
Early Change in Stroke Size Performs Best in Predicting Response to Therapy.
Simpkins, Alexis Nétis; Dias, Christian; Norato, Gina; Kim, Eunhee; Leigh, Richard
2017-01-01
Reliable imaging biomarkers of response to therapy in acute stroke are needed. The final infarct volume and percent of early reperfusion have been used for this purpose. Early fluctuation in stroke size is a recognized phenomenon, but its utility as a biomarker for response to therapy has not been established. This study examined the clinical relevance of early change in stroke volume and compared it with the final infarct volume and percent of early reperfusion in identifying early neurologic improvement (ENI). Acute stroke patients, enrolled between 2013 and 2014 with serial magnetic resonance imaging (MRI) scans (pretreatment baseline, 2 h post, and 24 h post), who received thrombolysis were included in the analysis. Early change in stroke volume, infarct volume at 24 h on diffusion, and percent of early reperfusion were calculated from the baseline and 2 h MRI scans and compared. ENI was defined as a ≥4-point decrease in the National Institutes of Health Stroke Scale within 24 h. Logistic regression models and receiver operating characteristic (ROC) analysis were used to compare the efficacy of the 3 imaging biomarkers. Serial MRIs of 58 acute stroke patients were analyzed. Early change in stroke volume was significantly associated with ENI by logistic regression analysis (OR 0.93, p = 0.048) and remained significant after controlling for stroke size and severity (OR 0.90, p = 0.032). Thus, for every 1 mL increase in stroke volume, there was a 10% decrease in the odds of ENI, while for every 1 mL decrease in stroke volume, there was a 10% increase in the odds of ENI. Neither infarct volume at 24 h nor percent of early reperfusion was significantly associated with ENI by logistic regression. ROC analysis identified early change in stroke volume as the only biomarker of the 3 that performed significantly differently than chance (p = 0.03). Early fluctuations in stroke size may represent a more reliable biomarker for response to therapy than the more traditional measures of final infarct volume and percent of early reperfusion. © 2017 S. Karger AG, Basel.
What Is A Picture Archiving And Communication System (PACS)?
NASA Astrophysics Data System (ADS)
Marceau, Carla
1982-01-01
A PACS is a digital system for acquiring, storing, moving and displaying picture or image information. It is an alternative to film jackets that has been made possible by recent breakthroughs in computer technology: telecommunications, local area nets and optical disks. The fundamental concept of the digital representation of image information is introduced. It is shown that freeing images from a material representation on film or paper leads to a dramatic increase in flexibility in our use of the images. The ultimate goal of a medical PACS system is a radiology department without film jackets. The inherent nature of digital images and the power of the computer allow instant free "copies" of images to be made and thrown away. These copies can be transmitted to distant sites in seconds, without the "original" ever leaving the archives of the radiology department. The result is a radiology department with much freer access to patient images and greater protection against lost or misplaced image information. Finally, images in digital form can be treated as data for the computer in image processing, which includes enhancement, reconstruction and even computer-aided analysis.
NASA Technical Reports Server (NTRS)
Gordon, H. R.; Evans, R. H.
1993-01-01
In a recent paper Eckstein and Simpson describe what they believe to be serious difficulties and/or errors with the CZCS (Coastal Zone Color Scanner) processing algorithms based on their analysis of seven images. Here we point out that portions of their analysis, particularly those dealing with multiple scattered Rayleigh radiance, are incorrect. We also argue that other problems they discuss have already been addressed in the literature. Finally, we suggest that many apparent artifacts in CZCS-derived pigment fields are likely to be due to inadequacies in the sensor band set or to poor radiometric stability, both of which will be remedied with the next generation of ocean color sensors.
RICH detectors: Analysis methods and their impact on physics
NASA Astrophysics Data System (ADS)
Križan, Peter
2017-12-01
The paper discusses the importance of particle identification in particle physics experiments, and reviews the impact of ring imaging Cherenkov (RICH) counters in experiments that are currently running, or are under construction. Several analysis methods are discussed that are needed to calibrate a RICH counter, and to align its components with the rest of the detector. Finally, methods are reviewed on how to employ the collected data to efficiently separate one particle species from the other.
Presence of pro-tobacco messages on the Web.
Hong, Traci; Cody, Michael J
2002-01-01
Ignored in the finalized Master Settlement Agreement (National Association of Attorneys General, 1998), the unmonitored, unregulated World Wide Web (Web) can operate as a major vehicle for delivering pro-tobacco messages, images, and products to millions of young consumers. A content analysis of 318 randomly sampled pro-tobacco Web sites revealed that tobacco has a pervasive presence on the Web, especially on e-commerce sites and sites featuring hobbies, recreation, and "fetishes." Products can be ordered online on nearly 50% of the sites, but only 23% of the sites included underage verification. Further, only 11% of these sites contain health warnings. Instead, pro-tobacco sites frequently associate smoking with "glamorous" and "alternative" lifestyles, and with images of young males and young (thin, attractive) females. Finally, many of the Web sites offered interactive site features that are potentially appealing to young Web users. Recommendations for future research and counterstrategies are discussed.
Analysis Of The IJCNN 2011 UTL Challenge
2012-01-13
large datasets from various application domains: handwriting recognition, image recognition, video processing, text processing, and ecology. The goal...validation and final evaluation sets consist of 4096 examples each. [table fragment: Dataset / Domain / Features / Sparsity / Devel. / Transf. — AVICENNA / Handwriting / 120 / 0% / 150205...] documents [3]. Transfer learning methods could accelerate the application of handwriting recognizers to historical manuscripts by reducing the need for
Molecular imaging in neuroendocrine tumors: molecular uptake mechanisms and clinical results.
Koopmans, Klaas P; Neels, Oliver N; Kema, Ido P; Elsinga, Philip H; Links, Thera P; de Vries, Elisabeth G E; Jager, Pieter L
2009-09-01
Neuroendocrine tumors can originate almost everywhere in the body and comprise a great variety of subtypes. This paper focuses on molecular imaging methods using nuclear medicine techniques in neuroendocrine tumors, coupling the molecular uptake mechanisms of radiotracers with clinical results. A non-systematic review is presented of receptor-based and metabolic imaging methods. Receptor-based imaging covers the molecular backgrounds of somatostatin, vasoactive intestinal peptide (VIP), bombesin and cholecystokinin (CCK) receptors and their link with nuclear imaging. Imaging methods based on specific metabolic properties include meta-iodobenzylguanidine (MIBG) and pentavalent dimercaptosuccinic acid (DMSA-V) scintigraphy as well as more modern positron emission tomography (PET)-based methods using radiolabeled analogues of amino acids, glucose, dihydroxyphenylalanine (DOPA), dopamine and tryptophan. Diagnostic sensitivities are presented for each imaging method and for each neuroendocrine tumor subtype. Finally, a forest plot analysis of diagnostic performance is presented for each tumor type in order to provide a comprehensive overview for clinical use.
Tang, Jian; Jiang, Xiaoliang
2017-01-01
Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. First, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means of this objective function include a multiplicative factor that estimates the bias field in the transformed domain. The bias field prior is then fully exploited, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
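The local-entropy weight at the heart of this formulation can be computed directly; a minimal sketch is shown below, assuming a grayscale image scaled to [0, 1] and scikit-image's rank entropy filter. The neighbourhood radius and the `local_entropy_weight` name are illustrative.

```python
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def local_entropy_weight(gray):
    """Local Shannon entropy of the grey-level distribution in each
    pixel's neighbourhood, usable as the weight in an LGDF-style
    energy. Radius 5 is an illustrative choice; input in [0, 1]."""
    ent = entropy(img_as_ubyte(gray), disk(5))
    return ent / ent.max()        # normalize so weights lie in [0, 1]
```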
Human body region enhancement method based on Kinect infrared imaging
NASA Astrophysics Data System (ADS)
Yang, Lei; Fan, Yubo; Song, Xiaowei; Cai, Wenjing
2016-10-01
To effectively improve the low contrast of the human body region in infrared images, a combination of several enhancement methods is utilized. First, an Optimal Contrast-Tone Mapping (OCTM) method with multiple iterations is applied to the infrared images acquired by Kinect to balance the contrast of low-luminosity images and improve overall contrast. Second, to enhance the human body region further, a level set algorithm is employed to improve the contour edges of the human body region. Finally, to further improve the human body region in the infrared images, Laplacian pyramid decomposition is adopted to enhance the contour-improved human body region. Meanwhile, the background area outside the human body region is processed by bilateral filtering to improve the overall effect. With theoretical analysis and experimental verification, the results show that the proposed method can effectively enhance the human body region of such infrared images.
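The final stage, detail enhancement via a Laplacian pyramid, can be sketched with OpenCV as below. This is a simplified stand-in, assuming an 8-bit image and uniform band amplification; the `gain` parameter and function name are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

def laplacian_pyramid_boost(img, levels=3, gain=1.5):
    """Boost detail by amplifying Laplacian-pyramid bands during
    reconstruction; a simplified stand-in for the paper's final
    enhancement stage applied to the body region."""
    img = img.astype(np.float32)
    gauss = [img]
    for _ in range(levels):                     # build the Gaussian pyramid
        gauss.append(cv2.pyrDown(gauss[-1]))
    out = gauss[-1]
    for g in reversed(gauss[:-1]):              # reconstruct coarse-to-fine
        up = cv2.pyrUp(out, dstsize=(g.shape[1], g.shape[0]))
        out = up + gain * (g - up)              # amplified Laplacian band
    return np.clip(out, 0, 255).astype(np.uint8)
```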
The image of mathematics held by Irish post-primary students
NASA Astrophysics Data System (ADS)
Lane, Ciara; Stynes, Martin; O'Donoghue, John
2014-08-01
The image of mathematics held by Irish post-primary students was examined and a model for the image found was constructed. Initially, a definition for 'image of mathematics' was adopted with image of mathematics hypothesized as comprising attitudes, beliefs, self-concept, motivation, emotions and past experiences of mathematics. Research focused on students studying ordinary level mathematics for the Irish Leaving Certificate examination - the final examination for students in second-level or post-primary education. Students were aged between 15 and 18 years. A questionnaire was constructed with both quantitative and qualitative aspects. The questionnaire survey was completed by 356 post-primary students. Responses were analysed quantitatively using Statistical Package for the Social Sciences (SPSS) and qualitatively using the constant comparative method of analysis and by reviewing individual responses. Findings provide an insight into Irish post-primary students' images of mathematics and offer a means for constructing a theoretical model of image of mathematics which could be beneficial for future research.
Imaging nanoscale lattice variations by machine learning of x-ray diffraction microscopy data
Laanait, Nouamane; Zhang, Zhan; Schlepütz, Christian M.
2016-08-09
In this paper, we present a novel methodology based on machine learning to extract lattice variations in crystalline materials, at the nanoscale, from an x-ray Bragg diffraction-based imaging technique. By employing a full-field microscopy setup, we capture real space images of materials, with imaging contrast determined solely by the x-ray diffracted signal. The data sets that emanate from this imaging technique are a hybrid of real space information (image spatial support) and reciprocal lattice space information (image contrast), and are intrinsically multidimensional (5D). By a judicious application of established unsupervised machine learning techniques and multivariate analysis to this multidimensional data cube, we show how to extract features that can be ascribed physical interpretations in terms of common structural distortions, such as lattice tilts and dislocation arrays. Finally, we demonstrate this 'big data' approach to x-ray diffraction microscopy by identifying structural defects present in an epitaxial ferroelectric thin-film of lead zirconate titanate.
Park, Young-Jae; Lee, Jin-Moo; Yoo, Seung-Yeon; Park, Young-Bae
2016-04-01
To examine whether the color parameters of tongue inspection (TI) using a digital camera were reliable and valid, and to examine which color parameters serve as predictors of symptom patterns in terms of East Asian medicine (EAM). Two hundred female subjects' tongue substances were photographed with a megapixel digital camera. Together with the photographs, the subjects were asked to complete Yin deficiency, Phlegm pattern, and Cold-Heat pattern questionnaires. Using three sets of digital imaging software, each digital image was exposure- and white-balance-corrected, and finally the L* (luminance), a* (red-green balance), and b* (yellow-blue balance) values of the tongues were calculated. To examine the intra- and inter-rater reliabilities and criterion validity of the color analysis method, three raters were asked to calculate color parameters for 20 digital image samples. Finally, four hierarchical regression models were formed. Color parameters showed good or excellent reliability (0.627-0.887 for intra-class correlation coefficients) and significant criterion validity (0.523-0.718 for Spearman's correlation). In the hierarchical regression models, age was a significant predictor of Yin deficiency (β = 0.192), and the b* value of the tip of the tongue was a determinant predictor of the Yin deficiency, Phlegm, and Heat patterns (β = -0.212, -0.172, and -0.163). Luminance (L*) was predictive of the Yin deficiency (β = -0.172) and Cold (β = 0.173) patterns. Our results suggest that color analysis of the tongue using the L*a*b* system is reliable and valid, and that color parameters partially serve as symptom pattern predictors in EAM practice.
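Extracting L*a*b* statistics from a segmented tongue region is straightforward; a minimal sketch follows. It assumes an RGB photograph and a boolean region mask (e.g., the tongue tip) obtained elsewhere; `lab_region_means` is an illustrative name.

```python
from skimage import color

def lab_region_means(rgb, mask):
    """Mean L* (luminance), a* (red-green) and b* (yellow-blue) inside a
    region mask, e.g. the tip of the tongue. The mask is assumed to come
    from manual or automatic tongue segmentation, which is not shown."""
    lab = color.rgb2lab(rgb)          # expects an RGB uint8 or float image
    return [lab[..., k][mask].mean() for k in range(3)]
```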
High-throughput high-volume nuclear imaging for preclinical in vivo compound screening.
Macholl, Sven; Finucane, Ciara M; Hesterman, Jacob; Mather, Stephen J; Pauplis, Rachel; Scully, Deirdre; Sosabowski, Jane K; Jouannot, Erwan
2017-12-01
Preclinical single-photon emission computed tomography (SPECT)/CT imaging studies are hampered by low throughput, hence are found typically within small volume feasibility studies. Here, imaging and image analysis procedures are presented that allow profiling of a large volume of radiolabelled compounds within a reasonably short total study time. Particular emphasis was put on quality control (QC) and on fast and unbiased image analysis. 2-3 His-tagged proteins were simultaneously radiolabelled by 99mTc-tricarbonyl methodology and injected intravenously (20 nmol/kg; 100 MBq; n = 3) into patient-derived xenograft (PDX) mouse models. Whole-body SPECT/CT images of 3 mice simultaneously were acquired 1, 4, and 24 h post-injection, extended to 48 h and/or by 0-2 h dynamic SPECT for pre-selected compounds. Organ uptake was quantified by automated multi-atlas and manual segmentations. Data were plotted automatically, quality controlled and stored on a collaborative image management platform. Ex vivo uptake data were collected semi-automatically and analysis performed as for imaging data. >500 single animal SPECT images were acquired for 25 proteins over 5 weeks, eventually generating >3500 ROI and >1000 items of tissue data. SPECT/CT images clearly visualized uptake in tumour and other tissues even at 48 h post-injection. Intersubject uptake variability was typically 13% (coefficient of variation, COV). Imaging results correlated well with ex vivo data. The large data set of tumour, background and systemic uptake/clearance data from 75 mice for 25 compounds allows identification of compounds of interest. The number of animals required was reduced considerably by longitudinal imaging compared to dissection experiments. All experimental work and analyses were accomplished within 3 months, expected to be compatible with drug development programmes. QC along all workflow steps, blinding of the imaging contract research organization to compound properties and automation provide confidence in the data set. Additional ex vivo data were useful as a control but could be omitted from future studies in the same centre. For even larger compound libraries, radiolabelling could be expedited and the number of imaging time points adapted to increase weekly throughput. Multi-atlas segmentation could be expanded via SPECT/MRI; however, this would require an MRI-compatible mouse hotel. Finally, analysis of nuclear images of radiopharmaceuticals in clinical trials may benefit from the automated analysis procedures developed.
Non-destructive, high-content analysis of wheat grain traits using X-ray micro computed tomography.
Hughes, Nathan; Askew, Karen; Scotson, Callum P; Williams, Kevin; Sauze, Colin; Corke, Fiona; Doonan, John H; Nibau, Candida
2017-01-01
Wheat is one of the most widely grown crops in temperate climates, for food and animal feed. In order to meet the demands of the predicted population increase in an ever-changing climate, wheat production needs to increase dramatically. Spike and grain traits are critical determinants of final yield, and grain uniformity is a commercially desired trait, but their analysis is laborious and often requires destructive harvest. One of the current challenges is to develop an accurate, non-destructive method for spike and grain trait analysis capable of handling large populations. In this study we describe the development of a robust method for the accurate extraction and measurement of spike and grain morphometric parameters from images acquired by X-ray micro-computed tomography (μCT). The image analysis pipeline developed automatically identifies the plant material of interest in μCT images, performs image analysis, and extracts morphometric data. As a proof of principle, this integrated methodology was used to analyse the spikes of a population of wheat plants subjected to high temperatures under two different water regimes. Temperature has a negative effect on spike height and grain number, with the middle of the spike being the most affected region. The data also confirmed that increased grain volume was correlated with the decrease in grain number under mild stress. Being able to quickly measure plant phenotypes in a non-destructive manner is crucial to advancing our understanding of gene function and the effects of the environment. We report on the development of an image analysis pipeline capable of accurately and reliably extracting spike and grain traits from crops without the loss of positional information. This methodology, applied here to wheat spikes, can be readily extended to other economically important crop species.
NASA Astrophysics Data System (ADS)
Kharazmi, Pegah; Lui, Harvey; Stoecker, William V.; Lee, Tim
2015-03-01
Vascular structures are among the most important features in the diagnosis and assessment of skin disorders. The presence and clinical appearance of vascular structures in skin lesions is a discriminating factor among different skin diseases. In this paper, we address the problem of segmenting vascular patterns in dermoscopy images. Our proposed method is composed of three parts. First, based on the biological properties of human skin, we decompose the skin image into melanin and hemoglobin components using independent component analysis of skin color images. The relative quantities and pure color densities of each component are then estimated. Subsequently, we obtain three reference vectors of mean RGB values for normal skin, pigmented skin and blood vessels from the hemoglobin component, by averaging over 100000 pixels of each group outlined by an expert. Based on Euclidean distance thresholding, we generate a mask image that extracts the red regions of the skin. The Frangi vesselness measure is then applied to the extracted red areas to enhance tubular structures, and finally Otsu's thresholding is applied to segment the vascular structures and obtain a binary vessel mask image. The algorithm was implemented on a set of 50 dermoscopy images. To evaluate the performance of our method, we artificially extended some of the existing vessels in our dermoscopy data set and measured the algorithm's ability to segment the newly added vessel pixels. A sensitivity of 95% and a specificity of 87% were achieved.
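The last two steps map directly onto standard library calls; a minimal sketch is below, assuming the hemoglobin-derived red region is available as a grayscale array with a boolean mask of valid pixels. Whether vessels appear bright or dark in that map determines the `black_ridges` setting, which is an assumption here.

```python
from skimage.filters import frangi, threshold_otsu

def vessel_mask(red_region, in_mask):
    """Frangi vesselness enhancement followed by Otsu thresholding,
    mirroring the paper's final two steps. `red_region` is the
    hemoglobin-derived red area (grayscale); pixels outside `in_mask`
    are ignored. black_ridges=False assumes vessels are brighter than
    their surroundings in this map."""
    v = frangi(red_region, black_ridges=False)
    t = threshold_otsu(v[in_mask])      # threshold inside the red areas only
    return (v > t) & in_mask            # binary vessel mask
```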
Boix, Macarena; Cantó, Begoña
2013-04-01
Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with this method. To that end, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate noise and prepare the image for suitable segmentation. In the wavelet denoising step, we determine the wavelet that yields the segmentation with the largest area inside the cell. We study different wavelet families and conclude that the wavelet db1 is the best; it can also serve for later work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.
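Although the authors worked in MATLAB, the denoising step is easy to reproduce; the sketch below soft-thresholds the detail coefficients of a db1 (Haar) decomposition using PyWavelets. The universal threshold and the two-level decomposition are illustrative choices, not the paper's exact settings.

```python
import numpy as np
import pywt

def db1_denoise(img, level=2):
    """Soft-threshold the detail coefficients of a db1 (Haar) wavelet
    decomposition; a minimal sketch of the denoising step that precedes
    segmentation."""
    coeffs = pywt.wavedec2(img.astype(float), "db1", level=level)
    # Noise estimate from the finest diagonal detail band (robust MAD).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))     # universal threshold
    out = [coeffs[0]] + [tuple(pywt.threshold(d, thr, "soft") for d in lvl)
                         for lvl in coeffs[1:]]
    return pywt.waverec2(out, "db1")
```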
Artificial intelligence in radiology.
Hosny, Ahmed; Parmar, Chintan; Quackenbush, John; Schwartz, Lawrence H; Aerts, Hugo J W L
2018-05-17
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.
Sensitivity analysis for high-contrast missions with segmented telescopes
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; Sauvage, Jean-François; Pueyo, Laurent; Fusco, Thierry; Soummer, Rémi; N'Diaye, Mamadou; St. Laurent, Kathryn
2017-09-01
Segmented telescopes enable large-aperture space telescopes for the direct imaging and spectroscopy of habitable worlds. However, the increased complexity of their aperture geometry, due to their central obstruction, support structures, and segment gaps, makes high-contrast imaging very challenging. In this context, we present an analytical model that will enable us to establish a comprehensive error budget to evaluate the constraints on the segments and the influence of the error terms on the final image and contrast. Indeed, the target contrast of 10^10 needed to image Earth-like planets imposes drastic conditions, both in terms of segment alignment and telescope stability. Although space telescopes operate in a friendlier environment than ground-based telescopes, remaining vibrations and resonant modes of the segments can still deteriorate the contrast. In this communication, we develop and validate the analytical model and compare its outputs to images produced by end-to-end simulations.
An Unsupervised Approach for Extraction of Blood Vessels from Fundus Images.
Dash, Jyotiprava; Bhoi, Nilamani
2018-04-26
Pathological disorders may arise from small changes in retinal blood vessels that may later lead to blindness. Hence, accurate segmentation of blood vessels is a challenging task for pathological analysis. This paper offers an unsupervised recursive method for the extraction of blood vessels from ophthalmoscope images. First, a vessel-enhanced image is generated with the help of gamma correction and contrast-limited adaptive histogram equalization (CLAHE). Next, the vessels are extracted iteratively by applying an adaptive thresholding technique. Finally, the vessel-segmented image is produced by applying a morphological cleaning operation. Evaluations are conducted on the publicly available Digital Retinal Images for Vessel Extraction (DRIVE) and Child Heart And Health Study in England (CHASE_DB1) databases using nine different measurements. The proposed method achieves average accuracies of 0.957 and 0.952 on the DRIVE and CHASE_DB1 databases, respectively.
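The enhancement step (gamma correction followed by CLAHE) can be sketched with OpenCV as below; the gamma value, tile size, and use of the green channel are assumptions, and the paper's iterative adaptive thresholding is not reproduced.

```python
import cv2
import numpy as np

img = cv2.imread("fundus.png")                    # hypothetical file name
green = img[:, :, 1]                              # vessels contrast best in green

gamma = 1.2                                       # assumed gamma value
lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
corrected = cv2.LUT(green, lut)                   # gamma correction via LUT

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(corrected)                 # vessel-enhanced image
```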
A novel underwater dam crack detection and classification approach based on sonar images
Shi, Pengfei; Fan, Xinnan; Ni, Jianjun; Khan, Zubair; Li, Min
2017-01-01
Underwater dam crack detection and classification based on sonar images is a challenging task because underwater environments are complex and because cracks are quite random and diverse in nature. Furthermore, obtainable sonar images are of low resolution. To address these problems, a novel underwater dam crack detection and classification approach based on sonar imagery is proposed. First, the sonar images are divided into image blocks. Second, a clustering analysis of a 3-D feature space is used to obtain the crack fragments. Third, the crack fragments are connected using an improved tensor voting method. Fourth, a minimum spanning tree is used to obtain the crack curve. Finally, an improved evidence theory combined with fuzzy rule reasoning is proposed to classify the cracks. Experimental results show that the proposed approach is able to detect underwater dam cracks and classify them accurately and effectively under complex underwater environments. PMID:28640925
Apostolou, N; Papazoglou, Th; Koutsouris, D
2006-01-01
Image fusion is the process of combining information from multiple sensors. It is a useful tool implemented in the treatment-planning programme of Gamma Knife radiosurgery. In this paper we evaluate advanced image fusion algorithms on the Matlab platform using head images. We implement nine grayscale image fusion methods in Matlab: average, principal component analysis (PCA), discrete wavelet transform (DWT), Laplacian pyramid, filter-subtract-decimate (FSD) pyramid, contrast pyramid, gradient pyramid, morphological pyramid and a shift-invariant discrete wavelet transform (SIDWT) method. We test these methods qualitatively and quantitatively. The quantitative criteria we use are the Root Mean Square Error (RMSE), the Mutual Information (MI), the Standard Deviation (STD), the Entropy (H), the Difference Entropy (DH) and the Cross Entropy (CEN). The qualitative criteria are natural appearance, brilliance contrast, presence of complementary features and enhancement of common features. Finally we make clinically useful suggestions.
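For concreteness, a minimal numpy sketch of the simplest fusion rule (averaging) and two of the quantitative criteria named above (RMSE and entropy) follows; `a` and `b` stand for co-registered grayscale head images.

```python
import numpy as np

def fuse_average(a, b):
    # pixel-wise average fusion of two co-registered images
    return (a.astype(float) + b.astype(float)) / 2.0

def rmse(ref, fused):
    return np.sqrt(np.mean((ref.astype(float) - fused) ** 2))

def entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))
```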
Hugelier, Siewert; Vitale, Raffaele; Ruckebusch, Cyril
2018-03-01
This article explores smoothing with edge-preserving properties as a spatial constraint for the resolution of hyperspectral images with multivariate curve resolution-alternating least squares (MCR-ALS). For each constrained component image (distribution map), irrelevant spatial details and noise are smoothed away by applying an L1- or L0-norm penalized least squares regression, which preserves large intensity changes between adjacent pixels. The feasibility of the constraint is demonstrated on three different case studies, in which the objects under investigation are spatially clearly defined but have significant spectral overlap. This spectral overlap is detrimental to obtaining a good resolution, so additional spatial information should be provided. The final results show that the spatial constraint enables better image (map) abstraction, artifact removal, and better interpretation of the results obtained, compared to a classical MCR-ALS analysis of hyperspectral images.
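As an illustrative stand-in for that penalized regression (not the authors' exact implementation), total-variation denoising applies an L1-type penalty on differences of adjacent pixels and could constrain each distribution map inside the MCR-ALS loop:

```python
# Edge-preserving smoothing of one resolved component image (distribution map).
import numpy as np
from skimage.restoration import denoise_tv_chambolle

dist_map = np.random.rand(64, 64)                      # stand-in component map
smoothed = denoise_tv_chambolle(dist_map, weight=0.1)  # larger weight = smoother
```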
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.
2017-06-01
Computer holography has made notable progress in recent years. The point-based and slice-based methods are the chief algorithms for generating holograms in holographic display. Although both methods have been validated numerically and optically, the differences in imaging quality between them have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms generated by point-based Fresnel zone plates (PB-FZP), the point-based Fresnel diffraction algorithm (PB-FDA) and the slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formula and hologram generation with the three methods are demonstrated. In order to suppress speckle noise, sequential phase-only holograms are generated in our work. Numerically and experimentally reconstructed images are also exhibited. By comparing the imaging quality, the merits and drawbacks of the three methods are analyzed, and conclusions are drawn.
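The point-based Fresnel zone plate idea can be sketched in a few lines of numpy: each object point adds a quadratic-phase zone plate at the hologram plane, and the argument of the summed field gives a phase-only hologram. All parameter values below are illustrative, not the paper's.

```python
import numpy as np

wl, z, pitch, N = 532e-9, 0.2, 8e-6, 512         # wavelength, depth, pixel pitch, size
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)

points = [(0.0, 0.0), (5e-4, -3e-4)]             # object point positions (m)
field = np.zeros((N, N), dtype=complex)
for px, py in points:
    r2 = (X - px) ** 2 + (Y - py) ** 2
    field += np.exp(1j * np.pi * r2 / (wl * z))  # paraxial Fresnel zone plate

phase_hologram = np.angle(field)                 # phase-only hologram
```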
Chapter 14: Electron Microscopy on Thin Films for Solar Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Manuel; Abou-Ras, Daniel; Nichterwitz, Melanie
2016-07-22
This chapter overviews the various techniques applied in scanning electron microscopy (SEM) and transmission electron microscopy (TEM), and highlights their possibilities and also their limitations. It presents the various imaging and analysis techniques applied on a scanning electron microscope. The chapter shows that imaging divides into that making use of secondary electrons (SEs) and of backscattered electrons (BSEs), resulting in different contrasts in the images and thus providing information on compositions, microstructures, and surface potentials. Whenever aiming for imaging and analyses at scales down to the angstrom range, TEM and its related techniques are the appropriate tools. In many cases, SEM techniques also provide access to various material properties of the individual layers, without requiring specimen preparation as time-consuming as that for TEM. Finally, the chapter addresses cross-sectional specimen preparation for electron microscopy; the preparation indeed determines the quality of imaging and analyses.
Object localization in handheld thermal images for fireground understanding
NASA Astrophysics Data System (ADS)
Vandecasteele, Florian; Merci, Bart; Jalalvand, Azarakhsh; Verstockt, Steven
2017-05-01
Despite the broad application of handheld thermal imaging cameras in firefighting, their usage is mostly limited to subjective interpretation by the person carrying the device. As a remedy to this limitation, object localization and classification mechanisms could assist fireground understanding and help with the automated localization, characterization and spatio-temporal (spreading) analysis of the fire. Automated understanding of thermal images can enrich conventional knowledge-based firefighting techniques by providing information from data- and sensing-driven approaches. In this work, transfer learning is applied to multi-label convolutional neural network architectures for object localization and recognition in monocular visual, infrared and multispectral dynamic images. Furthermore, the possibility of analyzing fire scene images is studied and the current limitations are discussed. Finally, the understanding of room configuration (i.e., object locations) for indoor localization in reduced-visibility environments and the linking with Building Information Models (BIM) are investigated.
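A hedged sketch of such a transfer-learning setup with PyTorch is shown below: a pretrained backbone whose final layer is replaced for multi-label output (sigmoid plus binary cross-entropy). The class count and the stand-in data are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

num_objects = 10                                  # assumed number of object labels
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_objects)  # multi-label head

criterion = nn.BCEWithLogitsLoss()                # one sigmoid per label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(4, 3, 224, 224)              # stand-in (pre-processed) frames
targets = torch.randint(0, 2, (4, num_objects)).float()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```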
Rhee, H; Thomas, P; Shepherd, B; Gustafson, S; Vela, I; Russell, P J; Nelson, C; Chung, E; Wood, G; Malone, G; Wood, S; Heathcote, P
2016-10-01
Positron emission tomography using ligands targeting prostate specific membrane antigen has recently been introduced. Positron emission tomography imaging with (68)Ga-PSMA-HBED-CC has been shown to detect metastatic prostate cancer lesions at a high rate. In this study we compare multiparametric magnetic resonance imaging and prostate specific membrane antigen positron emission tomography of the prostate with whole mount ex vivo prostate histopathology to determine the true sensitivity and specificity of these imaging modalities for detecting and locating tumor foci within the prostate. In a prospective clinical trial setting 20 patients with localized prostate cancer and a planned radical prostatectomy were recruited. All patients underwent multiparametric magnetic resonance imaging and positron emission tomography before surgery, and whole mount histopathology slides were directly compared to the images. European Society of Urogenital Radiology guidelines for reporting magnetic resonance imaging were used as a template for regional units of analysis. The uropathologist and radiologists were blinded to individual components of the study, and the final correlation was performed by visual and deformable registration analysis. A total of 50 clinically significant lesions were identified from the whole mount histopathological analysis. Based on regional analysis the sensitivity, specificity, positive predictive value and negative predictive value for multiparametric magnetic resonance imaging were 44%, 94%, 81% and 76%, respectively. With prostate specific membrane antigen positron emission tomography the sensitivity, specificity, positive predictive value and negative predictive value were 49%, 95%, 85% and 88%, respectively. Prostate specific membrane antigen positron emission tomography yielded a higher specificity and positive predictive value. A significant proportion of cancers are potentially missed and underestimated by both imaging modalities. Prostate specific membrane antigen positron emission tomography may be used in addition to multiparametric magnetic resonance imaging to help improve local staging in those patients undergoing retropubic radical prostatectomy. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
High-resolution lithospheric imaging with seismic interferometry
NASA Astrophysics Data System (ADS)
Ruigrok, Elmer; Campman, Xander; Draganov, Deyan; Wapenaar, Kees
2010-10-01
In recent years, there has been an increase in the deployment of relatively dense arrays of seismic stations. The availability of spatially densely sampled global and regional seismic data has stimulated the adoption of industry-style imaging algorithms applied to converted- and scattered-wave energy from distant earthquakes, leading to relatively high-resolution images of the lower crust and upper mantle. We use seismic interferometry to extract reflection responses from the coda of transmitted energy from distant earthquakes. In theory, higher-resolution images can be obtained when migrating reflections obtained with seismic interferometry rather than with conversions, traditionally used in lithospheric imaging methods. Moreover, reflection data allow the straightforward application of algorithms previously developed in exploration seismology. In particular, the availability of reflection data allows us to extract from it a velocity model using standard multichannel data-processing methods. However, the success of our approach relies mainly on a favourable distribution of earthquakes. In this paper, we investigate how the quality of the reflection response obtained with interferometry is influenced by the distribution of earthquakes and the complexity of the transmitted wavefields. Our analysis shows that a reasonable reflection response could be extracted if (1) the array is approximately aligned with an active zone of earthquakes, (2) different phase responses are used to gather adequate angular illumination of the array and (3) the illumination directions are properly accounted for during processing. We illustrate our analysis using a synthetic data set with similar illumination and source-side reverberation characteristics as field data recorded during the 2000-2001 Laramie broad-band experiment. Finally, we apply our method to the Laramie data, retrieving reflection data. We extract a 2-D velocity model from the reflections and use this model to migrate the data. On the final reflectivity image, we observe a discontinuity in the reflections. We interpret this discontinuity as the Cheyenne Belt, a suture zone between Archean and Proterozoic terranes.
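The interferometric core of such methods is a correlate-and-stack operation; the toy numpy sketch below, on synthetic stand-in records, shows only the shape of that computation, not the authors' processing chain.

```python
import numpy as np
from scipy.signal import fftconvolve

def crosscorrelate(a, b):
    return fftconvolve(a, b[::-1], mode="full")   # correlation via convolution

nsrc, nt = 20, 2048
rng = np.random.default_rng(0)
station_a = rng.standard_normal((nsrc, nt))       # coda recorded at station A
station_b = rng.standard_normal((nsrc, nt))       # coda recorded at station B

# stacking correlations over earthquakes approximates the inter-station response
stack = sum(crosscorrelate(a, b) for a, b in zip(station_a, station_b))
virtual_trace = stack / nsrc
```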
Backhausen, Lea L.; Herting, Megan M.; Buse, Judith; Roessner, Veit; Smolka, Michael N.; Vetter, Nora C.
2016-01-01
In structural magnetic resonance imaging, motion artifacts are common, especially when not scanning healthy young adults. It has been shown that motion affects the analysis with automated image-processing techniques (e.g., FreeSurfer), which can bias results. Several developmental and adult studies have found reduced gray-matter volume and thickness due to motion artifacts. Thus, quality control is necessary to ensure an acceptable level of quality and to define exclusion criteria for images (i.e., to determine the participants with the most severe artifacts). However, information about the quality control workflow and image exclusion procedure is largely lacking in the current literature, and the existing rating systems differ. Here, we propose a stringent workflow of quality control steps during and after acquisition of T1-weighted images, which enables researchers dealing with populations that are typically affected by motion artifacts to enhance data quality and maximize sample sizes. As an underlying aim, we established a thorough quality control rating system for T1-weighted images and applied it to the analysis of developmental clinical data using the automated processing pipeline FreeSurfer. This hands-on workflow and quality control rating system will aid researchers in minimizing motion artifacts in the final data set, and therefore enhance the quality of structural magnetic resonance imaging studies. PMID:27999528
NASA Astrophysics Data System (ADS)
Fallahpour, Mojtaba Behzad; Dehghani, Hamid; Jabbar Rashidi, Ali; Sheikhi, Abbas
2018-05-01
Target recognition is one of the most important issues in the interpretation of synthetic aperture radar (SAR) images. Modelling, analysis, and recognition of the effects of influential parameters in SAR can provide a better understanding of SAR imaging systems, and therefore facilitate the interpretation of the produced images. Influential parameters in SAR images can be divided into five general categories: radar, radar platform, channel, imaging region, and processing section, each of which has different physical, structural, hardware, and software sub-parameters with clear roles in the final images. In this paper, for the first time, a behaviour library is extracted that includes the effects of polarisation, incidence angle, and target shape, as radar and imaging-region sub-parameters, on SAR images. This library shows that the pattern created for each of the cylindrical, conical, and cubic shapes is unique, and owing to these unique properties such shapes can be recognised in SAR images. This capability is applied to data acquired with the Canadian RADARSAT1 satellite.
Comparison of eye imaging pattern recognition using neural network
NASA Astrophysics Data System (ADS)
Bukhari, W. M.; Syed A., M.; Nasir, M. N. M.; Sulaima, M. F.; Yahaya, M. S.
2015-05-01
The appeal of an eye recognition system is that it can automatically identify and verify a human subject from digital images or a video source. The eye has various characteristics, such as the colour of the iris, the size of the pupil and the shape of the eye. This study presents the analysis, design and implementation of a system for recognition of eye images. All eye images captured from the webcam in RGB format must pass through several processing steps before they can serve as input for the pattern recognition process. The results show that the final values of the weights and biases, after training is completed on six eye images per subject, are memorized by the neural network and serve as reference values for the testing stage. The targets are classified into five types for five subjects. The system can then recognize a subject's eye image based on the targets set during the training process; when the values of a new eye image and an eye image in the database are almost equal, the eye image is considered matched.
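A minimal stand-in for this train-then-match scheme, using scikit-learn's MLPClassifier as the back-propagation network (image sizes, subject count and data are placeholders):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

n_subjects, imgs_per_subject, n_pixels = 5, 6, 32 * 32
rng = np.random.default_rng(1)
X = rng.random((n_subjects * imgs_per_subject, n_pixels))  # flattened eye images
y = np.repeat(np.arange(n_subjects), imgs_per_subject)     # subject labels

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000).fit(X, y)
# the trained weights and biases stored in `net` play the role of the
# memorized reference values used at testing time
print(net.predict(X[:1]))
```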
Aksamija, Goran; Mulabdic, Adi; Rasic, Ismar; Muhovic, Samir; Gavric, Igor
2011-01-01
Polytrauma is defined as an injury affecting at least two different organ systems or body regions, with at least one life-threatening injury. Given the multilevel model of care for polytrauma patients within KCUS, weaknesses in the management of this category of patients are inevitable. The aims were to determine the dynamics of existing procedures in the treatment of polytrauma patients on admission to KCUS and, based on statistical analysis of the applied variables, to determine and define the factors that influence the final outcome of treatment and their mutual relationships, which may help eliminate the flaws in the approach to the problem. The study was based on 263 polytrauma patients. Parametric and non-parametric statistical methods were used. Basic statistics were calculated; based on the calculated parameters, multicorrelation analysis, image analysis, discriminant analysis and multifactorial analysis were used for the final achievement of the research objectives. From the universe of variables for this study we selected a sample of n = 25 variables, of which the first two are modular while the others (n = 23) belong to the common measurement space and are defined in this paper as a system of variables for methods, procedures and assessments of polytrauma patients. After the multicorrelation analysis, and since the image analysis gave reliable measurement results, we proceeded to the analysis of eigenvalues, that is, to defining the factors that provide information on how the existing organizational model solves the problem and on its correlation with treatment outcome. The study singled out the essential factors that determine the current organizational model of care, which may affect the treatment and better outcome of polytrauma patients. This analysis revealed the maximal correlative relationships between these practices and contributed to the development of guidelines defined by the isolated factors.
On-orbit point spread function estimation for THEOS imaging system
NASA Astrophysics Data System (ADS)
Khetkeeree, Suphongsa; Liangrocapart, Sompong
2018-03-01
In this paper, we present two approaches for net point spread function (net-PSF) estimation of the Thailand Earth Observation System (THEOS) imaging system. In the first approach, we estimate the net-PSF by employing the specification information of the satellite. The analytic model of the net-PSF is based on a simple model of a push-broom imaging system, consisting of a scanner, optical system, detector and electronics system; the mathematical PSF model of each component is formulated in the spatial domain. In the second approach, images of specific targets from the THEOS imaging system are analyzed to determine the net-PSF. For the panchromatic imaging system, images of the checkerboard target at Salon de Provence airport are analyzed with the slant-edge method. For the multispectral imaging system, new man-made targets are proposed: pier bridges in Lamchabang, Chonburi, Thailand, a site with many bridges of various widths and orientations. The pulse method is used to analyze the images of these bridges for estimating the net-PSF. Finally, the full widths at half maximum (FWHMs) of the net-PSFs from both approaches are compared. The results show that both approaches coincide and that all modulation transfer functions (MTFs) at Nyquist are better than the requirement. However, the FWHM of the multispectral system deviates more than that of the panchromatic system, because the targets were not specially constructed for estimating the characteristics of the satellite imaging system.
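In one dimension, the slant-edge idea reduces to differentiating an edge profile to get the line spread function and Fourier-transforming it for the MTF; the synthetic sketch below illustrates only that skeleton, not the THEOS processing.

```python
import numpy as np

edge = np.tanh((np.arange(256) - 128) / 2.0)   # synthetic blurred edge (ESF)
lsf = np.gradient(edge)                        # line spread function
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                  # normalize to 1 at zero frequency

fwhm_px = int((lsf >= lsf.max() / 2).sum())    # crude FWHM of the LSF in pixels
print("FWHM (pixels):", fwhm_px)
```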
On a Mathematical Theory of Coded Exposure
2014-08-01
formulae that give the MSE and SNR of the final crisp image. Assumes the Shannon-Whittaker framework that (i) requires band-limited (with a fre...represents the ideal crisp image, i.e., the image that one would observe if there were no noise whatsoever, no motion, with a perfect optical system...discrete. In addition, the image obtained by a coded exposure camera must undergo a deconvolution to get the final crisp image. Note that the
Chen, Feng; Suzuki, Yasuhiro; Nagai, Nobuo; Peeters, Ronald; Marchal, Guy; Ni, Yicheng
2005-01-30
The purpose of the present animal experiment was to determine whether source images from dynamic susceptibility contrast-enhanced perfusion-weighted imaging (DSC-PWI) at a 1.5T MR scanner, performed early after photochemically induced thrombosis (PIT) of the middle cerebral artery (MCA), can predict the final cerebral infarct size in a rat stroke model. Fifteen rats were subjected to PIT of the proximal MCA. T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and contrast-enhanced PWI were obtained at 1 h and 24 h after MCA occlusion. The relative lesion size (RLS), defined as lesion volume/brain volume x 100%, was measured on the MR images and compared with the final RLS on the gold-standard triphenyl tetrazolium chloride (TTC) staining at 24 h. One hour after MCA occlusion, the RLS with DSC-PWI was 24.9 +/- 6.3%, significantly larger than the 17.6 +/- 4.8% with DWI (P < 0.01). At 24 h, the final RLS on TTC was 24.3 +/- 4.8%, comparable to the 25.1 +/- 3.5%, 24.6 +/- 3.6% and 27.9 +/- 6.8% with T2WI, DWI and DSC-PWI, respectively (P > 0.05). The fact that at 1 h after MCA occlusion only the displayed perfusion deficit was similar to the final infarct size on TTC (P > 0.05) suggests that early source images from DSC-PWI at a 1.5T MR scanner can noninvasively predict the final infarct size in rat models of stroke.
NASA Astrophysics Data System (ADS)
Fu, Yan; Guo, Pei-yuan; Xiang, Ling-zi; Bao, Man; Chen, Xing-hai
2013-08-01
With the gradual maturation of hyperspectral imaging technology, its application to nondestructive detection and recognition of meat has become a current research focus. In this paper, marine and freshwater fish are studied: through pre-processing and feature extraction of the collected spectral curve data, combined with BP and LVQ network structures, a predictive model of hyperspectral image data for marine and freshwater fish is initially established, finally realizing qualitative analysis and identification of marine and freshwater fish quality. The results of this study show that hyperspectral imaging technology combined with BP and LVQ artificial neural network models can be used for the detection and identification of marine and freshwater fish. Hyperspectral data acquisition can be carried out without any pretreatment of the samples; thus, hyperspectral imaging is a lossless, high-accuracy and rapid method for detecting fish quality. In this study, only 30 samples were used for exploratory qualitative identification; although promising results were achieved, we will further increase the sample size to undertake quantitative identification and verify the feasibility of this approach.
Validity and reliability of a scale to measure genital body image.
Zielinski, Ruth E; Kane-Low, Lisa; Miller, Janis M; Sampselle, Carolyn
2012-01-01
Women's body image dissatisfaction extends to body parts usually hidden from view--their genitals. The ability to measure genital body image is limited by the lack of valid and reliable questionnaires. We subjected a previously developed questionnaire, the Genital Self Image Scale (GSIS), to psychometric testing using a variety of methods. Five experts determined the content validity of the scale. Then, using four participant groups, factor analysis was performed to determine construct validity and to identify factors. Further construct validity was established using the contrasting-groups approach. Internal consistency and test-retest reliability were determined. Twenty-one of 29 items were considered content valid. Two items were added based on expert suggestions. Factor analysis resulted in four factors, identified as Genital Confidence, Appeal, Function, and Comfort. The revised scale (GSIS-20) included 20 items explaining 59.4% of the variance. Women indicating an interest in genital cosmetic surgery exhibited significantly lower scores on the GSIS-20 than those who did not. The final 20-item scale exhibited internal reliability across all sample groups as well as test-retest reliability. The GSIS-20 provides a measure of genital body image demonstrating reliability and validity across several populations of women.
NASA Astrophysics Data System (ADS)
Wu, Chia-Hua; Lee, Suiang-Shyan; Lin, Ja-Chen
2017-06-01
This all-in-one hiding method creates two transparencies that offer several decoding options: visual decoding with or without translation flipping, and computer decoding. In visual decoding, two less-important (or fake) binary secret images S1 and S2 can be revealed. S1 is viewed by directly stacking the two transparencies; S2 is viewed by flipping one transparency and translating the other to a specified coordinate before stacking. Finally, important/true secret files can be decrypted by a computer using information extracted from the transparencies. The encoding process that hides this information includes translated-flip visual cryptography, block types, polynomial-style sharing, and a linear congruential generator. If a thief obtained both transparencies, which are stored in distinct places, he would still need to find the values of the keys used in computer decoding after viewing S1 and/or S2 by stacking. More likely, the thief would just try every other kind of stacking and finally give up looking for more secrets, since computer decoding is entirely different from stacking decoding. Unlike traditional image hiding, which uses images as host media, our method hides fine gray-level images in binary transparencies; thus, our host media are the transparencies themselves. Comparisons and analysis are provided.
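For background, a generic (2,2) visual-cryptography construction (not the paper's translated-flip scheme) is sketched below: every secret pixel expands into identical or complementary 2x2 sub-pixel blocks, so stacking reveals the image.

```python
import numpy as np

patterns = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]  # 1 = opaque
rng = np.random.default_rng(2)

def make_shares(secret):                         # secret: binary array, 1 = black
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            # white pixel: same block (half black when stacked);
            # black pixel: complementary block (fully black when stacked)
            s2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
    return s1, s2

secret = rng.integers(0, 2, (16, 16))
t1, t2 = make_shares(secret)
stacked = np.maximum(t1, t2)                     # stacking transparencies = OR
```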
NASA Astrophysics Data System (ADS)
Bethmann, F.; Jepping, C.; Luhmann, T.
2013-04-01
This paper reports on a method for the generation of synthetic image data for almost arbitrary static or dynamic 3D scenarios. Image data generation is based on pre-defined 3D objects, object textures, camera orientation data and camera imaging properties. The procedure does not aim at photo-realistic images rendered with the complex imaging and reflection models used by common computer-graphics programs. In contrast, the method is designed with the main emphasis on geometrically correct synthetic images without radiometric impact. The calculation process includes photogrammetric distortion models, hence cameras with arbitrary geometric imaging characteristics can be applied. Consequently, image sets can be created that are consistent with mathematical photogrammetric models, to be used as sub-pixel accurate data for the assessment of high-precision photogrammetric processing methods. The paper first describes the process of image simulation, covering colour value interpolation, MTF/PSF and so on. Subsequently, the geometric quality of the synthetic images is evaluated with ellipse operators. Finally, simulated image sets are used to investigate matching and tracking algorithms as developed at IAPG for deformation measurement in car safety testing.
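A minimal sketch of the geometric core, for one point: pinhole projection followed by a Brown-style radial distortion term. The parameter values are illustrative; the paper's full simulation also covers colour interpolation and MTF/PSF.

```python
import numpy as np

f, cx, cy = 1200.0, 512.0, 384.0          # focal length and principal point (px)
k1 = -2.5e-7                              # radial distortion coefficient (assumed)

def project(point_cam):
    X, Y, Z = point_cam                   # 3D point in camera coordinates
    x, y = f * X / Z, f * Y / Z           # ideal pinhole projection
    r2 = x * x + y * y
    d = 1 + k1 * r2                       # radial distortion factor
    return cx + x * d, cy + y * d         # distorted pixel coordinates

print(project(np.array([0.1, -0.05, 2.0])))
```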
Efficient Workflows for Curation of Heterogeneous Data Supporting Modeling of U-Nb Alloy Aging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Logan Timothy; Hackenberg, Robert Errol
These are slides from a presentation summarizing a graduate research associate's summer project. The following topics are covered in these slides: data challenges in materials, aging in U-Nb Alloys, Building an Aging Model, Different Phase Trans. in U-Nb, the Challenge, Storing Materials Data, Example Data Source, Organizing Data: What is a Schema?, What does a "XML Schema" look like?, Our Data Schema: Nice and Simple, Storing Data: Materials Data Curation System (MDCS), Problem with MDCS: Slow Data Entry, Getting Literature into MDCS, Staging Data in Excel Document, Final Result: MDCS Records, Analyzing Image Data, Process for Making TTT Diagram, Bottleneck Number 1: Image Analysis, Fitting a TTP Boundary, Fitting a TTP Curve: Comparable Results, How Does it Compare to Our Data?, Image Analysis Workflow, Curating Hardness Records, Hardness Data: Two Key Decisions, Before Peak Age? - Automation, Interactive Viz, Which Transformation?, Microstructure-Informed Model, Tracking the Entire Process, General Problem with Property Models, Pinyon: Toolkit for Managing Model Creation, Tracking Individual Decisions, Jupyter: Docs and Code in One File, Hardness Analysis Workflow, Workflow for Aging Models, and conclusions.
Measurement of RBC agglutination with microscopic cell image analysis in a microchannel chip.
Cho, Chi Hyun; Kim, Ju Yeon; Nyeck, Agnes E; Lim, Chae Seung; Hur, Dae Sung; Chung, Chanil; Chang, Jun Keun; An, Seong Soo A; Shin, Sehyun
2014-01-01
Since Landsteiner's discovery of the ABO blood groups, RBC agglutination has been one of the most important immunohematologic techniques for ABO and RhD blood grouping. The conventional RBC agglutination grading system for RhD blood typing relies on macroscopic reading, followed by the assignment of a grade ranging from (-) to (4+) to the degree of red blood cell clumping. With the new scoring method introduced in this report, however, microscopically captured cell images of agglutinated RBCs, placed in a microchannel chip, are used for analysis. The pixel counts of the cell images first allow the differentiation of agglutinated and non-agglutinated red blood cells. The ratio of agglutinated RBCs per total RBC count (CRAT) is then calculated from 90 captured images. During the trial, it was observed that the agglutinated group's CRAT was significantly higher (3.77-0.003) than that of the normal control (0). Based on these results, it was established that the microchannel method was more suitable for discriminating between agglutinated RBCs and non-agglutinated RhD-negative cells, and thus more reliable for grading RBC agglutination, than the conventional method.
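An illustrative pixel-count version of the CRAT idea, with scikit-image connected components (the size cutoff is an assumption, not the paper's calibration):

```python
import numpy as np
from skimage import measure

binary = np.random.rand(256, 256) > 0.7        # stand-in thresholded cell image
labels = measure.label(binary)                 # connected RBC regions
sizes = np.array([r.area for r in measure.regionprops(labels)])

single_cell_area = 50                          # assumed typical RBC pixel count
agglutinated = sizes[sizes > 3 * single_cell_area].sum()
crat = agglutinated / max(sizes.sum(), 1)      # agglutinated / total RBC pixels
```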
Setup and use of a two-laser multiphoton microscope for multichannel intravital fluorescence imaging
Entenberg, David; Wyckoff, Jeffrey; Gligorijevic, Bojana; Roussos, Evanthia T; Verkhusha, Vladislav V; Pollard, Jeffrey W; Condeelis, John
2014-01-01
Characterizing biological mechanisms dependent upon the interaction of many cell types in vivo requires both multiphoton microscope systems capable of expanding the number and types of fluorophores that can be imaged simultaneously while removing the wavelength and tunability restrictions of existing systems, and enhanced software for extracting critical cellular parameters from voluminous 4D data sets. We present a procedure for constructing a two-laser multiphoton microscope that extends the wavelength range of excitation light, expands the number of simultaneously usable fluorophores and markedly increases signal to noise via ‘over-clocking’ of detection. We also utilize a custom-written software plug-in that simplifies the quantitative tracking and analysis of 4D intravital image data. We begin by describing the optics, hardware, electronics and software required, and finally the use of the plug-in for analysis. We demonstrate the use of the setup and plug-in by presenting data collected via intravital imaging of a mouse model of breast cancer. The procedure may be completed in ~24 h. PMID:21959234
Performance Analysis of Visible Light Communication Using CMOS Sensors.
Do, Trong-Hop; Yoo, Myungsik
2016-02-29
This paper elucidates the fundamentals of visible light communication systems that use the rolling shutter mechanism of CMOS sensors. All related information involving different subjects, such as photometry, camera operation, photography and image processing, are studied in tandem to explain the system. Then, the system performance is analyzed with respect to signal quality and data rate. To this end, a measure of signal quality, the signal to interference plus noise ratio (SINR), is formulated. Finally, a simulation is conducted to verify the analysis.
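The SINR itself is the usual ratio of signal power to interference-plus-noise power; a generic sketch (not the paper's rolling-shutter derivation):

```python
import numpy as np

p_signal, p_interference, p_noise = 1.0e-6, 2.0e-7, 5.0e-8  # watts (assumed)
sinr = p_signal / (p_interference + p_noise)
print("SINR = %.1f dB" % (10 * np.log10(sinr)))
```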
Sandquist, Mary K; Clee, Mark S; Patel, Smruti K; Howard, Kelli A; Yunger, Toni; Nagaraj, Usha D; Jones, Blaise V; Fei, Lin; Vadivelu, Sudhakar; Wong, Hector R
2017-07-01
This study was intended to describe and correlate the neuroimaging findings in pediatric patients after sepsis. Retrospective chart review. Single tertiary care PICU. Patients admitted to Cincinnati Children's Hospital Medical Center with a discharge diagnosis of sepsis or septic shock between 2004 and 2013 were crossmatched with patients who underwent neuroimaging during the same time period. All neuroimaging studies that occurred during or subsequent to a septic event were reviewed, and all new imaging findings were recorded and classified. As many patients experienced multiple septic events and/or had multiple neuroimaging studies after sepsis, our statistical analysis utilized the most recent or "final" imaging study available for each patient so that only brain imaging findings that persisted were included. A total of 389 children with sepsis and 1,705 concurrent or subsequent neuroimaging studies were included in the study. Median age at first septic event was 3.4 years (interquartile range, 0.7-11.5). Median time from first sepsis event to final neuroimaging was 157 days (interquartile range, 10-1,054). The most common indications for final imaging were follow-up (21%), altered mental status (18%), and fever/concern for infection (15%). Sixty-three percentage (n = 243) of final imaging studies demonstrated abnormal findings, the most common of which were volume loss (39%) and MRI signal and/or CT attenuation abnormalities (21%). On multivariable logistic regression, highest Pediatric Risk of Mortality score and presence of oncologic diagnosis/organ transplantation were independently associated with any abnormal final neuroimaging study findings (odds ratio, 1.032; p = 0.048 and odds ratio, 1.632; p = 0.041), although early timing of neuroimaging demonstrated a negative association (odds ratio, 0.606; p = 0.039). The most common abnormal finding of volume loss was independently associated with highest Pediatric Risk of Mortality score (odds ratio, 1.037; p = 0.016) and oncologic diagnosis/organ transplantation (odds ratio, 2.207; p = 0.001) and was negatively associated with early timing of neuroimaging (odds ratio, 0.575; p = 0.037). The majority of pediatric patients with sepsis and concurrent or subsequent neuroimaging have abnormal neuroimaging findings. The implications of this high incidence for long-term neurologic outcomes and follow-up require further exploration.
Fast and accurate face recognition based on image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2017-05-01
Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Typical reports from the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computation demands, whereas the PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face match is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
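The matching step can be sketched with Pillow's JPEG encoder: compress the probe, the gallery image, and their concatenation, and compare compressed sizes. The exact composite-ratio formula is the paper's, so a simple variant is shown.

```python
import io
import numpy as np
from PIL import Image

def jpeg_size(arr):
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG")
    return buf.getbuffer().nbytes

def composite_ratio(probe, gallery):
    mixed = np.hstack([probe, gallery])          # form the mixed image
    return (jpeg_size(probe) + jpeg_size(gallery)) / jpeg_size(mixed)

rng = np.random.default_rng(3)
a = rng.integers(0, 255, (64, 64), dtype=np.uint8)
print(composite_ratio(a, a))   # a larger ratio indicates a better match
```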
Remote Sensing Soil Moisture Analysis by Unmanned Aerial Vehicles Digital Imaging
NASA Astrophysics Data System (ADS)
Yeh, C. Y.; Lin, H. R.; Chen, Y. L.; Huang, S. Y.; Wen, J. C.
2017-12-01
In recent years, remote sensing analysis has been applied to research on climate change, environment monitoring, geology, hydro-meteorology, and so on. However, traditional methods for surveying the spatial distribution of surface soil moisture over wide areas may require substantial resources besides the high cost. In the past, remote sensing estimated soil moisture through shortwave, thermal infrared, or infrared satellite imagery, which requires considerable resources, labor, and money. Therefore, digital image color is used here to establish a multiple linear regression model and find the relationship between surface soil color and soil moisture. In this study, we use an Unmanned Aerial Vehicle (UAV) to take aerial photos of fallow farmland. Simultaneously, we take surface soil samples from the top 0-5 cm, and the soil is baked at 110°C for 24 hr. The software ImageJ 1.48 is applied to analyze the digital images and decompose the hue into Red, Green, and Blue (R, G, B) values. Correlation analysis is performed between the image hue and the surface soil moisture at each sampling point. After image and soil moisture analysis, we use the R, G, B values and soil moisture to establish a multiple regression that estimates the spatial distribution of surface soil moisture. In the results, we compare the measured soil moisture with the estimated soil moisture; the coefficient of determination (R2) reaches 0.5-0.7. Uncertainties in the field test, such as sun illumination, sun exposure angle, and even shadow, affect the result; therefore, an R2 of 0.5-0.7 reflects a good outcome for an in-situ test that estimates soil moisture from digital images. Based on these outcomes, using digital images from a UAV to estimate surface soil moisture is acceptable. However, further investigations need to collect more than ten days of data (four times a day) to verify the relation between image hue and soil moisture for a reliable moisture estimation model, and it is better to use a digital single-lens reflex camera to prevent image deformation and obtain better auto exposure. Keywords: soil, moisture, remote sensing
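The regression step amounts to an ordinary multiple linear regression of moisture on the R, G, B values; a minimal sketch on synthetic stand-in data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
rgb = rng.random((30, 3)) * 255          # mean R, G, B at each sampling point
moisture = 40 - 0.05 * rgb[:, 0] + rng.normal(0, 2, 30)  # synthetic truth (%)

model = LinearRegression().fit(rgb, moisture)
print("R^2 =", model.score(rgb, moisture))   # cf. the 0.5-0.7 reported above
```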
NASA Astrophysics Data System (ADS)
Yu, Yali; Wang, Mengxia; Lima, Dimas
2018-04-01
In order to develop a novel alcoholism detection method, we propose a magnetic resonance imaging (MRI)-based computer vision approach. We first use contrast equalization to increase the contrast of brain slices. Then, we perform a Haar wavelet transform and principal component analysis. Finally, we use a back-propagation neural network (BPNN) as the classification tool. Our method yields a sensitivity of 81.71±4.51%, a specificity of 81.43±4.52%, and an accuracy of 81.57±2.18%. The Haar wavelet gives better performance than the db4 and sym3 wavelets.
Hyperspectral Image Classification using a Self-Organizing Map
NASA Technical Reports Server (NTRS)
Martinez, P.; Gualtieri, J. A.; Aguilar, P. L.; Perez, R. M.; Linaje, M.; Preciado, J. C.; Plaza, A.
2001-01-01
The use of hyperspectral data to determine the abundance of constituents in a certain portion of the Earth's surface relies on the capability of imaging spectrometers to provide a large amount of information at each pixel of a scene. Today, hyperspectral imaging sensors are capable of generating unprecedented volumes of radiometric data. The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), for example, routinely produces image cubes with 224 spectral bands. This undoubtedly opens a wide range of new possibilities, but the analysis of such a massive amount of information is not an easy task. In fact, most of the existing algorithms devoted to analyzing multispectral images are not applicable in the hyperspectral domain because of the size and high dimensionality of the images. The application of neural networks to perform unsupervised classification of hyperspectral data has been tested by several authors, and also by us in previous work, where we also focused on analyzing the intrinsic capability of neural networks to parallelize the whole hyperspectral unmixing process. The results shown in this work indicate that neural network models are able to find clusters of closely related hyperspectral signatures, and thus can be used as a powerful tool to achieve the desired classification. The present work discusses the possibility of using a Self-Organizing Map (SOM) neural network to perform unsupervised classification of hyperspectral images. Sections 3 and 4 describe the topology of the proposed neural network and the training algorithm, respectively. Section 5 provides the results we have obtained after applying the proposed methodology to real hyperspectral data, described in section 2. Different parameters of the learning stage have been varied in order to obtain a detailed description of their influence on the final results. Finally, section 6 provides the conclusions at which we have arrived.
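A minimal numpy SOM training loop for spectral vectors is sketched below (a generic SOM, not the exact network of the paper; grid size, rates and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n_bands, grid = 224, (10, 10)                        # AVIRIS-like band count
weights = rng.random(grid + (n_bands,))              # one weight vector per node
data = rng.random((1000, n_bands))                   # stand-in pixel spectra

for t, x in enumerate(data):
    lr = 0.5 * np.exp(-t / 500)                      # decaying learning rate
    d = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), grid)    # best-matching unit
    ii, jj = np.indices(grid)
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * 2.0 ** 2))
    weights += lr * h[..., None] * (x - weights)     # neighborhood update
```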
Localization of optic disc and fovea in retinal images using intensity based line scanning analysis.
Kamble, Ravi; Kokare, Manesh; Deshmukh, Girish; Hussin, Fawnizu Azmadi; Mériaudeau, Fabrice
2017-08-01
Accurate detection of diabetic retinopathy (DR) depends mainly on the identification of retinal landmarks such as the optic disc and fovea. Existing methods suffer from limited accuracy and high computational complexity. To address these issues, this paper presents a novel approach for fast and accurate localization of the optic disc (OD) and fovea using one-dimensional scanned intensity profile analysis. The proposed method effectively utilizes both time- and frequency-domain information for OD localization. The final OD center is located using signal peak-valley detection in the time domain and discontinuity detection in the frequency domain. The fovea center is then located using signal valley analysis with the help of the detected OD location. Experiments were conducted on the MESSIDOR dataset, where the OD was successfully located in 1197 out of 1200 images (99.75%) and the fovea in 1196 out of 1200 images (99.66%), with an average computation time of 0.52 s. A large-scale evaluation was carried out extensively on nine publicly available databases. The proposed method localizes the OD and fovea together quickly and accurately compared with other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
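The time-domain part of the line-scan idea reduces to peak finding on a 1-D intensity profile; a simplified sketch (the paper additionally uses frequency-domain discontinuity detection):

```python
import numpy as np
from scipy.signal import find_peaks

img = np.random.rand(584, 565)            # stand-in grayscale fundus image
profile = img.mean(axis=0)                # column-wise intensity profile
peaks, _ = find_peaks(profile, distance=50, prominence=0.01)
# the optic disc is typically the brightest band, so take the strongest peak
od_x = peaks[np.argmax(profile[peaks])] if peaks.size else int(np.argmax(profile))
```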
GUIDOS: tools for the assessment of pattern, connectivity, and fragmentation
NASA Astrophysics Data System (ADS)
Vogt, Peter
2013-04-01
Pattern, connectivity, and fragmentation can be considered pillars of the quantitative analysis of digital landscape images. The free software toolbox GUIDOS (http://forest.jrc.ec.europa.eu/download/software/guidos) includes a variety of dedicated methodologies for the quantitative assessment of these features. Amongst others, Morphological Spatial Pattern Analysis (MSPA) is used for an intuitive description of image pattern structures and the automatic detection of connectivity pathways. GUIDOS includes tools for the detection and quantitative assessment of key nodes and links, for defining connectedness in raster images, and for setting up appropriate input files for an enhanced network analysis using Conefor Sensinode. Finally, fragmentation is usually defined from a species point of view, but a generic and quantifiable indicator is needed to measure fragmentation and its changes; some preliminary results for different conceptual approaches will be shown for a sample dataset. Complemented by pre- and post-processing routines and a complete GIS environment, the portable GUIDOS Toolbox may facilitate a holistic assessment in risk assessment studies, landscape planning, and conservation/restoration policies. Alternatively, individual analysis components may contribute to or enhance studies conducted with other software packages in landscape ecology.
Optical design and optical properties of a VUV spectrographic imager for ICON mission
NASA Astrophysics Data System (ADS)
Loicq, Jerome; Kintziger, Christian; Mazzoli, Alexandra; Miller, Tim; Chou, Cathy; Frey, Harald U.; Immel, Thomas J.; Mende, Stephen B.
2016-07-01
In the frame of the ICON (Ionospheric Connection Explorer) mission of NASA led by UC Berkeley, CSL and SSL Berkeley have jointly designed a new far-UV spectro-imager. The instrument is based on a Czerny-Turner spectrograph coupled with two back imagers. The whole field of view covers +/- 12° vertically and +/- 9° horizontally. The instrument is surmounted by a rotating mirror that adjusts the horizontal field-of-view pointing by +/- 30°. The instrument has been optimized to meet the scientific imaging and spectral requirements; the optimization philosophy and related analysis are presented in this paper. The PSF, distortion map and spectral properties are described. A tolerance study and alignment cases were performed to prove that the instrument can be built and aligned. Finally, stray light and out-of-band properties are discussed.
Carriles, Ramón; Schafer, Dawn N.; Sheetz, Kraig E.; Field, Jeffrey J.; Cisek, Richard; Barzda, Virginijus; Sylvester, Anne W.; Squier, Jeffrey A.
2009-01-01
We review the current state of multiphoton microscopy. In particular, the requirements and limitations associated with high-speed multiphoton imaging are considered. A description of the different scanning technologies such as line scan, multifoci approaches, multidepth microscopy, and novel detection techniques is given. The main nonlinear optical contrast mechanisms employed in microscopy are reviewed, namely, multiphoton excitation fluorescence, second harmonic generation, and third harmonic generation. Techniques for optimizing these nonlinear mechanisms through a careful measurement of the spatial and temporal characteristics of the focal volume are discussed, and a brief summary of photobleaching effects is provided. Finally, we consider three new applications of multiphoton microscopy: nonlinear imaging in microfluidics as applied to chemical analysis and the use of two-photon absorption and self-phase modulation as contrast mechanisms applied to imaging problems in the medical sciences. PMID:19725639
Song, Yang; Cai, Weidong; Feng, David Dagan; Chen, Mei
2013-01-01
Automated segmentation of cell nuclei in microscopic images is critical to high-throughput analysis of the ever-increasing amount of data. Although cell nuclei are generally visually distinguishable for humans, automated segmentation faces challenges when there is significant intensity inhomogeneity among cell nuclei or in the background. In this paper, we propose an effective method for automated cell nucleus segmentation using a three-step approach: it first obtains an initial segmentation by extracting salient regions in the image, then reduces false positives using inter-region feature discrimination, and finally refines the boundaries of the cell nuclei using intra-region contrast information. This method has been evaluated on two publicly available datasets of fluorescence microscopic images with 4009 cells, and has achieved superior performance compared to popular state-of-the-art methods using established metrics.
Superpixel Based Factor Analysis and Target Transformation Method for Martian Minerals Detection
NASA Astrophysics Data System (ADS)
Wu, X.; Zhang, X.; Lin, H.
2018-04-01
The factor analysis and target transformation (FATT) technique is an effective method to test for the presence of a particular mineral on the Martian surface. It has been used with both thermal-infrared (Thermal Emission Spectrometer, TES) and near-infrared (Compact Reconnaissance Imaging Spectrometer for Mars, CRISM) hyperspectral data. FATT derives a set of orthogonal eigenvectors from a mixed system and typically selects the first 10 eigenvectors to least-squares fit the library mineral spectra. However, minerals present in only a few pixels will be ignored because their spectral features are weak compared with the full image signatures. Here, we propose a superpixel-based FATT method to detect mineral distributions on Mars. The simple linear iterative clustering (SLIC) algorithm is used to partition the CRISM image into multiple connected, spectrally homogeneous regions, enhancing weak signatures by increasing their proportion in the mixed system. A least-squares fit is used in the target transformation and performed for each region iteratively. Finally, the distribution of the specific minerals in the image is obtained, where a fitting residual below a threshold indicates presence and otherwise absence. We validate our method by identifying carbonates in a well-analysed CRISM image of Nili Fossae on Mars. Our experimental results indicate that the proposed method works well on both simulated and real data sets.
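A sketch of the superpixel-plus-FATT skeleton with scikit-image SLIC is given below; the shapes, parameters and random stand-in data are assumptions, and the presence threshold is the paper's own choice.

```python
import numpy as np
from skimage.segmentation import slic

cube = np.random.rand(64, 64, 40)             # stand-in hyperspectral cube
segments = slic(cube, n_segments=100, compactness=0.1, channel_axis=2)

library_spectrum = np.random.rand(40)         # candidate mineral spectrum
for label in np.unique(segments):
    region = cube[segments == label]          # pixels x bands for one superpixel
    basis = np.linalg.svd(region, full_matrices=False)[2][:10]  # top eigenvectors
    coef, *_ = np.linalg.lstsq(basis.T, library_spectrum, rcond=None)
    residual = np.linalg.norm(basis.T @ coef - library_spectrum)
    # a residual below a chosen threshold indicates the mineral is present
```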
The on-site quality-assurance system for Hyper Suprime-Cam: OSQAH
NASA Astrophysics Data System (ADS)
Furusawa, Hisanori; Koike, Michitaro; Takata, Tadafumi; Okura, Yuki; Miyatake, Hironao; Lupton, Robert H.; Bickerton, Steven; Price, Paul A.; Bosch, James; Yasuda, Naoki; Mineo, Sogo; Yamada, Yoshihiko; Miyazaki, Satoshi; Nakata, Fumiaki; Koshida, Shintaro; Komiyama, Yutaka; Utsumi, Yousuke; Kawanomoto, Satoshi; Jeschke, Eric; Noumaru, Junichi; Schubert, Kiaina; Iwata, Ikuru; Finet, Francois; Fujiyoshi, Takuya; Tajitsu, Akito; Terai, Tsuyoshi; Lee, Chien-Hsiu
2018-01-01
We have developed an automated quick data analysis system for data quality assurance (QA) for Hyper Suprime-Cam (HSC). The system was commissioned in 2012-2014, and has been offered for general observations, including the HSC Subaru Strategic Program, since 2014 March. The system provides observers with data quality information, such as seeing, sky background level, and sky transparency, based on quick analysis as data are acquired. Quick-look images and validation of image focus are also provided through an interactive web application. The system is responsible for the automatic extraction of QA information from acquired raw data into a database, to assist with observation planning, assess progress of all observing programs, and monitor long-term efficiency variations of the instrument and telescope. Enhancements of the system are being planned to facilitate final data analysis, to improve the HSC archive, and to provide legacy products for astronomical communities.
The magnifying glass - A feature space local expansion for visual analysis [and image enhancement]
NASA Technical Reports Server (NTRS)
Juday, R. D.
1981-01-01
The Magnifying Glass Transformation (MGT) technique is proposed, as a multichannel spectral operation yielding visual imagery which is enhanced in a specified spectral vicinity, guided by the statistics of training samples. An application example is that in which the discrimination among spectral neighbors within an interactive display may be increased without altering distant object appearances or overall interpretation. A direct histogram specification technique is applied to the channels within the multispectral image so that a subset of the spectral domain occupies an increased fraction of the domain. The transformation is carried out by obtaining the training information, establishing the condition of the covariance matrix, determining the influenced solid, and initializing the lookup table. Finally, the image is transformed.
A Critical and Comparative Review of Fluorescent Tools for Live-Cell Imaging.
Specht, Elizabeth A; Braselmann, Esther; Palmer, Amy E
2017-02-10
Fluorescent tools have revolutionized our ability to probe biological dynamics, particularly at the cellular level. Fluorescent sensors have been developed on several platforms, utilizing either small-molecule dyes or fluorescent proteins, to monitor proteins, RNA, DNA, small molecules, and even cellular properties, such as pH and membrane potential. We briefly summarize the impressive history of tool development for these various applications and then discuss the most recent noteworthy developments in more detail. Particular emphasis is placed on tools suitable for single-cell analysis and especially live-cell imaging applications. Finally, we discuss prominent areas of need in future fluorescent tool development-specifically, advancing our capability to analyze and integrate the plethora of high-content data generated by fluorescence imaging.
Structural imaging in premanifest and manifest Huntington disease.
Scahill, Rachael I; Andre, Ralph; Tabrizi, Sarah J; Aylward, Elizabeth H
2017-01-01
Huntington disease (HD) neuropathology has a devastating effect on brain structure and consequently brain function; neuroimaging provides a means to assess these effects in gene carriers. In this chapter we first outline the unique utility of structural imaging in understanding HD and discuss some of the acquisition and analysis techniques currently available. We review the existing literature to summarize what we know so far about structural brain changes across the spectrum of disease from premanifest through to manifest disease. We then consider how these neuroimaging findings relate to patient function and nonimaging biomarkers, and can be used to predict disease onset. Finally we review the utility of imaging measures for assessment of treatment efficacy in clinical trials. Copyright © 2017 Elsevier B.V. All rights reserved.
Range image segmentation using Zernike moment-based generalized edge detector
NASA Technical Reports Server (NTRS)
Ghosal, S.; Mehrotra, R.
1992-01-01
The authors propose a novel Zernike moment-based generalized step edge detection method that can be used for segmenting range and intensity images. A generalized step edge detector is developed to identify different kinds of edges in range images; the resulting edge maps are thinned and linked to provide the final segmentation. A generalized edge is modeled in terms of five parameters: orientation, two slopes, the step jump at the edge location, and the background gray level. Two complex and two real Zernike moment-based masks are required to determine all parameters of the edge model. Theoretical noise analysis shows that these operators are quite noise tolerant, and experimental results demonstrate the edge-based segmentation technique.
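The sketch below shows the mask-construction step under stated assumptions: discrete Zernike moment masks are sampled on the unit disk and convolved with the image to produce per-pixel moments. Only low-order masks are shown, the normalization is up to a constant, and the mapping from moments to the five edge parameters follows the paper's derivations, which are not reproduced here.

```python
# Build sampled Zernike masks V*_nm over the unit disk and convolve with an
# image; the complex moment A11 gives edge orientation per pixel.
import numpy as np
from scipy.ndimage import convolve

def zernike_mask(n, m, size=7):
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    xs, ys = (x - c) / c, (y - c) / c
    rho, theta = np.hypot(xs, ys), np.arctan2(ys, xs)
    R = {(0, 0): np.ones_like(rho),          # radial polynomials, low orders only
         (1, 1): rho,
         (2, 0): 2 * rho**2 - 1}[(n, abs(m))]
    V = R * np.exp(-1j * m * theta)          # conjugate of V_nm
    return np.where(rho <= 1.0, V, 0) * (n + 1) / np.pi

img = np.random.rand(64, 64)                 # stand-in range image
m11 = zernike_mask(1, 1)
A11 = convolve(img, m11.real) + 1j * convolve(img, m11.imag)
A20 = convolve(img, zernike_mask(2, 0).real)
edge_orientation = np.angle(A11)             # direction of the edge normal
```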
45,X/46,X dic (Y) mosaicism in a phenotypic male.
Batstone, P J; Faed, M J; Jung, R T; Gosden, J
1991-01-01
Cytogenetic analysis, confirmed by in situ hybridisation studies, showed a mosaic 45,X/46,X dic (Y) (q12) karyotype in a 14 year old boy who was initially diagnosed as having Noonan's syndrome. He made an early response to recombinant growth hormone; this suggests that this treatment may improve final height. PMID:2001114
Convolution Algebra for Fluid Modes with Finite Energy
1992-04-01
[Extraction fragments from the report's front matter and table of contents: a signals and systems analysis evaluating the initial condition (input) of a system given its final condition (output) and its impulse response; a chapter on images corrupted with Gaussian blur; a chapter on deblurring with Hermite-Rodriguez wavelets; a figure of the letter "T" diffused for t = 12 and corrupted by additive noise at SNR = 1.]
Analyzing x-ray hotspot images with Ince-Gaussian modes
NASA Astrophysics Data System (ADS)
Kruse, Michael; Field, John; Nora, Ryan; Benedetti, Robin; Khan, Shahab; Ma, Tammy; Peterson, Luc; Spears, Brian
2017-10-01
X-ray images at the National Ignition Facility (NIF) provide important metrics regarding the shape of the hotspot along a given line-of-sight. The 17% contour from peak brightness is usually used to infer the size of the hotspot as well as to determine shape perturbations quantified through the Legendre coefficients P2 and P4. Unfortunately, features that lie inside the contour, such as those that could arise from tent or fill-tube perturbations, are not easily captured. An analysis that takes into account the two-dimensional nature of the x-ray image is desirable. Ince-Gaussian (Ince, for short) modes offer such an analysis and could provide a new way to encode and understand the images recorded at NIF. The Ince modes are the solutions to the paraxial wave equation expressed in elliptical coordinates and thus form an orthonormal basis. Due to their elliptical nature they are suitable for decomposing images that have a non-zero P2 or P4 coefficient. We show that the Ince modes can be used to uncover structure that is missed by the contour analysis and how the modes aid in compressing images produced in large ensemble calculations. Finally, a comparison is made to the Zernike modes, which form an orthonormal basis on a circular disk. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-734741.
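The decomposition machinery itself is simple to sketch: project the image onto a precomputed orthonormal mode basis and reconstruct from the leading coefficients. Hermite-Gaussian modes are used below as an easily evaluated stand-in, since evaluating Ince-Gaussian modes in elliptic coordinates is more involved; the projection and compression steps are the same once the basis is swapped.

```python
# Project a stand-in "hotspot" image onto an orthonormal 2D mode basis and
# reconstruct from the coefficients (illustrating encoding/compression).
import numpy as np
from numpy.polynomial.hermite import hermval

def hg_mode(n, m, x):
    """2D Hermite-Gaussian mode on grid x, normalized numerically."""
    cx = np.zeros(n + 1); cx[n] = 1.0
    cy = np.zeros(m + 1); cy[m] = 1.0
    fx = hermval(x, cx) * np.exp(-x**2 / 2)
    fy = hermval(x, cy) * np.exp(-x**2 / 2)
    mode = np.outer(fy, fx)
    return mode / np.sqrt((mode**2).sum())

x = np.linspace(-4, 4, 128)
basis = [hg_mode(n, m, x) for n in range(6) for m in range(6)]
image = np.exp(-(x[None, :]**2 + 2 * x[:, None]**2))    # elliptical stand-in hotspot
coeffs = [(b * image).sum() for b in basis]             # projections onto the basis
compressed = sum(c * b for c, b in zip(coeffs, basis))  # low-order reconstruction
```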
Object Segmentation and Ground Truth in 3D Embryonic Imaging.
Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C
2016-01-01
Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.
Ultrahigh resolution retinal imaging by visible light OCT with longitudinal achromatization
Chong, Shau Poh; Zhang, Tingwei; Kho, Aaron; Bernucci, Marcel T.; Dubra, Alfredo; Srinivasan, Vivek J.
2018-01-01
Chromatic aberrations are an important design consideration in high resolution, high bandwidth, refractive imaging systems that use visible light. Here, we present a fiber-based spectral/Fourier domain, visible light OCT ophthalmoscope corrected for the average longitudinal chromatic aberration (LCA) of the human eye. Analysis of complex speckles from in vivo retinal images showed that achromatization resulted in a speckle autocorrelation function that was ~20% narrower in the axial direction, but unchanged in the transverse direction. In images from the improved, achromatized system, the separation between Bruch’s membrane (BM), the retinal pigment epithelium (RPE), and the outer segment tips clearly emerged across the entire 6.5 mm field-of-view, enabling segmentation and morphometry of BM and the RPE in a human subject. Finally, cross-sectional images depicted distinct inner retinal layers with high resolution. Thus, with chromatic aberration compensation, visible light OCT can achieve volume resolutions and retinal image quality that matches or exceeds ultrahigh resolution near-infrared OCT systems with no monochromatic aberration compensation. PMID:29675296
Passive Markers for Tracking Surgical Instruments in Real-Time 3-D Ultrasound Imaging
Stoll, Jeffrey; Ren, Hongliang; Dupont, Pierre E.
2013-01-01
A family of passive echogenic markers is presented by which the position and orientation of a surgical instrument can be determined in a 3-D ultrasound volume, using simple image processing. Markers are attached near the distal end of the instrument so that they appear in the ultrasound volume along with the instrument tip. They are detected and measured within the ultrasound image, thus requiring no external tracking device. This approach facilitates imaging instruments and tissue simultaneously in ultrasound-guided interventions. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing. Design principles for marker shapes are presented that ensure imaging system and measurement uniqueness constraints are met. An error analysis is included that can be used to guide marker design and which also establishes a lower bound on measurement uncertainty. Finally, examples of marker measurement and tracking algorithms are presented along with experimental validation of the concepts. PMID:22042148
Approach for scene reconstruction from the analysis of a triplet of still images
NASA Astrophysics Data System (ADS)
Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle
1997-03-01
Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a major challenge for future interactive audiovisual services based on 3D content manipulation, such as virtual visits, 3D teleconferencing and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built using a fusion criterion that takes into account depth coherency, visibility constraints and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the gray-level image is applied to extract the object masks. First, an edge detection segments the luminance image into regions and a multimodal thresholding method selects depth classes from the depth map. The regions are then merged and labelled with the depth class numbers using a coherence test on depth values, according to the proportion of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.
Segmentation of white rat sperm image
NASA Astrophysics Data System (ADS)
Bai, Weiguo; Liu, Jianguo; Chen, Guoyuan
2011-11-01
The segmentation of sperm images strongly influences the analysis of sperm morphology, which plays a significant role in research on animal infertility and reproduction. To overcome the low contrast and heavy noise pollution of microscope images and obtain better segmentation results, this paper presents a multi-scale gradient operator combined with a multi-structuring element for micro-spermatozoa images of the white rat: the multi-scale gradient operator smooths image noise, while the multi-structuring element retains more shape details of the sperms. The Otsu method is then used to segment the modified gradient image, whose processed gray scale is strong for sperms and weak for the background, converting it into a binary sperm image. Because the resulting binary image contains impurities whose shapes differ from those of sperms, a form factor is used to filter out objects whose form factor value exceeds a selected critical value, retaining the rest. This yields the final binary image of the segmented sperms. The experiment shows this method's great advantage in the segmentation of micro-spermatozoa images.
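A condensed sketch of this pipeline follows, assuming the form factor 4πA/P² (1 for a circle, low for elongated shapes such as sperm). The structuring-element sizes and the cutoff value are illustrative choices, not the paper's parameters.

```python
# Multi-scale morphological gradient, Otsu thresholding, then shape filtering
# by form factor to discard compact, non-sperm-like impurities.
import numpy as np
from skimage import filters, measure, morphology

def multiscale_gradient(img, scales=(1, 2, 3)):
    grads = [morphology.dilation(img, morphology.disk(s)) -
             morphology.erosion(img, morphology.disk(s)) for s in scales]
    return np.mean(grads, axis=0)        # averaging across scales smooths noise

img = np.random.rand(256, 256)           # stand-in micrograph
grad = multiscale_gradient(img)
binary = grad > filters.threshold_otsu(grad)   # Otsu on the modified gradient

labels = measure.label(binary)
out = np.zeros_like(binary)
for region in measure.regionprops(labels):
    form_factor = 4 * np.pi * region.area / max(region.perimeter, 1) ** 2
    if form_factor < 0.5:                # keep elongated, sperm-like objects
        out[labels == region.label] = True
```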
Employing image processing techniques for cancer detection using microarray images.
Dehghan Khalilabad, Nastaran; Hassanpour, Hamid
2017-02-01
Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases: image processing, data mining, and disease detection. The image processing phase performs operations such as refining image rotation, gridding (locating genes), and extracting raw data from images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, cancerous cells are recognized from the extracted data. To evaluate the performance of the proposed system, microarray data sets for breast cancer, myeloid leukemia and lymphoma from the Stanford Microarray Database are employed. The results indicate that the proposed system is able to identify the type of cancer from the data sets with accuracies of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
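The gridding step can be sketched with projection profiles: spot rows and columns appear as valleys in the summed intensity along each axis. This is one common approach, shown under stated assumptions; real systems add rotation refinement and per-spot quantification around this idea, and the minimum valley spacing used here is illustrative.

```python
# Locate grid lines in a microarray image from valleys of the intensity
# projections, then enumerate one cell per gene spot.
import numpy as np
from scipy.signal import find_peaks

def grid_lines(image, axis):
    profile = image.sum(axis=axis)          # projection perpendicular to the grid
    valleys, _ = find_peaks(-profile, distance=8)   # gaps between spots
    return valleys

img = np.random.rand(512, 512)              # stand-in scanned array image
row_cuts = grid_lines(img, axis=1)          # horizontal grid lines
col_cuts = grid_lines(img, axis=0)          # vertical grid lines
cells = [(r0, r1, c0, c1)
         for r0, r1 in zip(row_cuts[:-1], row_cuts[1:])
         for c0, c1 in zip(col_cuts[:-1], col_cuts[1:])]  # one cell per spot
```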
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arinilhaq; Widita, Rena
2014-09-30
Optical coherence tomography is often used in medical image acquisition for retinal diagnosis because it is easy to use and inexpensive. Unfortunately, this type of examination produces only a two-dimensional retinal image at the point of acquisition. Therefore, this study developed a method that combines and reconstructs 2-dimensional retinal images into three-dimensional images to display the volumetric macula accurately. The system is built in three main stages: data acquisition, data extraction and 3-dimensional reconstruction. At the data acquisition step, optical coherence tomography produced six *.jpg images for each patient, which were then extracted with MATLAB 2010a software into six one-dimensional arrays. The six arrays are combined into a 3-dimensional matrix using a kriging interpolation method with SURFER9, resulting in 3-dimensional graphics of the macula. Finally, the system provides three-dimensional color graphs based on the data distribution of the normal macula. The reconstruction system produces three-dimensional images with a size of 481 × 481 × h (retinal thickness) pixels.
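The reconstruction idea can be sketched as scattered-data interpolation: thickness samples taken along the B-scans are interpolated onto a regular 481 × 481 grid to form the macular surface. The study used kriging in SURFER; the sketch below substitutes scipy's griddata as a readily available alternative (a library such as pykrige would give true kriging), and all sample values are synthetic.

```python
# Interpolate scattered retinal-thickness samples onto a regular grid to
# obtain a 3D macular surface (kriging replaced by cubic interpolation here).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 481, 600), rng.uniform(0, 481, 600)      # sample positions
h = 250 + 40 * np.exp(-((x - 240)**2 + (y - 240)**2) / 8e3)    # stand-in thickness (um)

gx, gy = np.mgrid[0:481, 0:481]
surface = griddata((x, y), h, (gx, gy), method="cubic")        # 481 x 481 height map
```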
NASA Astrophysics Data System (ADS)
Hirose, Misa; Toyota, Saori; Tsumura, Norimichi
2018-02-01
In this research, we evaluate the visibility of age spots and freckles as blood volume changes, using both simulated spectral reflectance distributions and actual facial color images, and compare the results. First, we generate three types of spatial distribution of age spots and freckles in patch-like images based on simulated spectral reflectance. The spectral reflectance is simulated using Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image with varying blood volume. We acquire the concentration distributions of the melanin, hemoglobin and shading components by applying independent component analysis to a facial color image, and reproduce images using the obtained melanin and shading concentrations together with the modified hemoglobin concentration. Finally, we evaluate the visibility of the pigmentations in both the simulated spectral reflectance distributions and the facial color images. For the simulated spectral reflectance distributions, we found that visibility decreases as blood volume increases. However, the results for the facial color images show that a specific blood volume reduces the visibility of the actual pigmentations.
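A minimal sketch of the component-separation and re-synthesis steps follows, assuming ICA on log-transformed RGB pixels yields melanin- and hemoglobin-like sources (shading is handled separately in practice, and which source corresponds to which chromophore must be identified after fitting). All data here are synthetic.

```python
# ICA on optical-density-like pixels, then re-synthesis with scaled
# "hemoglobin" source to mimic a blood-volume change.
import numpy as np
from sklearn.decomposition import FastICA

img = np.random.rand(128, 128, 3) * 0.8 + 0.1     # stand-in facial color image
logrgb = -np.log(img.reshape(-1, 3))              # optical-density-like domain

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(logrgb)               # columns ~ melanin, hemoglobin
melanin_map = sources[:, 0].reshape(128, 128)
hemoglobin_map = sources[:, 1].reshape(128, 128)

sources_mod = sources.copy()
sources_mod[:, 1] *= 1.5                          # +50% blood volume, illustrative
img_mod = np.exp(-ica.inverse_transform(sources_mod)).reshape(128, 128, 3)
```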
Research on hyperspectral dynamic scene and image sequence simulation
NASA Astrophysics Data System (ADS)
Sun, Dandan; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei
2016-10-01
This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithms. Because of its high spectral resolution, strong band continuity, anti-interference capability and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce large amounts of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiative physics model and material characteristic parameters, this paper proposes a method for generating digital scenes. By building multiple sensor models for different bands and bandwidths, hyperspectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, were simulated. The final dynamic scenes are realistic and render in real time, at frame rates up to 100 Hz. By saving all the scene gray data from the same viewpoint, an image sequence is obtained. The analysis results show that, whether in the infrared or the visible band, the grayscale variations of the simulated hyperspectral images are consistent with theoretical analysis.
Montaux-Lambert, Antoine; Mercère, Pascal; Primot, Jérôme
2015-11-02
An interferogram conditioning procedure, for subsequent phase retrieval by Fourier demodulation, is presented here as a fast iterative approach aiming at fulfilling the classical boundary conditions imposed by Fourier transform techniques. Interference fringe patterns with typical edge discontinuities were simulated in order to reveal the edge artifacts that classically appear in traditional Fourier analysis, and were consecutively used to demonstrate the correction efficiency of the proposed conditioning technique. Optimization of the algorithm parameters is also presented and discussed. Finally, the procedure was applied to grating-based interferometric measurements performed in the hard X-ray regime. The proposed algorithm enables nearly edge-artifact-free retrieval of the phase derivatives. A similar enhancement of the retrieved absorption and fringe visibility images is also achieved.
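The conditioning idea can be sketched as making the fringe pattern periodic-friendly before the FFT: the sketch below uses fixed mirror padding plus a smooth taper on the added margins, whereas the paper's iterative scheme refines the extension adaptively. Pad width and fringe parameters are illustrative.

```python
# Pad and taper a fringe pattern so Fourier demodulation does not suffer
# from edge discontinuities, then take the 2D spectrum.
import numpy as np

def condition(fringes, pad=64):
    ext = np.pad(fringes, pad, mode="reflect")     # enforce boundary continuity
    w = np.hanning(2 * pad)
    ramp = np.ones(ext.shape[0])
    ramp[:pad], ramp[-pad:] = w[:pad], w[pad:]     # taper only the added margins
    return ext * ramp[:, None] * ramp[None, :]

x = np.linspace(0, 50 * np.pi, 512)
fringes = 1 + 0.5 * np.cos(x[None, :] + 0.3 * x[:, None])   # carrier fringes
spec = np.fft.fftshift(np.fft.fft2(condition(fringes)))
# ...isolate the carrier sideband, inverse FFT, and take the phase to demodulate
```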
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar; Mohammadi, Mohammad
2017-05-01
A combination of Finite Difference Time Domain (FDTD) and Monte Carlo (MC) methods is proposed for simulation and analysis of ZnO microscintillators grown in polycarbonate membrane. A planar 10 keV X-ray source irradiating the detector is simulated by MC method, which provides the amount of absorbed X-ray energy in the assembly. The transport of generated UV scintillation light and its propagation in the detector was studied by the FDTD method. Detector responses to different probable scintillation sites and under different energies of X-ray source from 10 to 25 keV are reported. Finally, the tapered geometry for the scintillators is proposed, which shows enhanced spatial resolution in comparison to cylindrical geometry for imaging applications.
NASA Astrophysics Data System (ADS)
Gea, S.; Zulfahmi, Z.; Yunus, D.; Andriayani, A.; Hutapea, Y. A.
2018-03-01
Cellulose nanofibrils were obtained from oil palm empty fruit bunch using steam explosion and hydrolysed with a 10% solution of HCl. Steam explosion coupled with acid hydrolysis pretreatment of the oil palm empty fruit bunch was very effective for depolymerization and defibrillation of the fibre, producing fibres of nanoscale dimensions. The structure of the steam-exploded fibres was analysed by Fourier transform infrared (FT-IR) spectroscopy; the thermal stability of the cellulose was assessed by TGA, and fibre dimensions were measured using the image analysis software ImageJ. Characterization of the fibres by TEM and SEM showed that fibre diameter decreases with the mechano-chemical treatment, with a final nanofibril size of 20-30 nm. FT-IR and TGA data confirmed the removal of hemicellulose and lignin during the chemical treatment.
Quantitative imaging of protein targets in the human brain with PET
NASA Astrophysics Data System (ADS)
Gunn, Roger N.; Slifstein, Mark; Searle, Graham E.; Price, Julie C.
2015-11-01
PET imaging of proteins in the human brain with high affinity radiolabelled molecules has a history stretching back over 30 years. During this period the portfolio of protein targets that can be imaged has increased significantly through successes in radioligand discovery and development. This portfolio now spans six major categories of proteins; G-protein coupled receptors, membrane transporters, ligand gated ion channels, enzymes, misfolded proteins and tryptophan-rich sensory proteins. In parallel to these achievements in radiochemical sciences there have also been significant advances in the quantitative analysis and interpretation of the imaging data including the development of methods for image registration, image segmentation, tracer compartmental modeling, reference tissue kinetic analysis and partial volume correction. In this review, we analyze the activity of the field around each of the protein targets in order to give a perspective on the historical focus and the possible future trajectory of the field. The important neurobiology and pharmacology is introduced for each of the six protein classes and we present established radioligands for each that have successfully transitioned to quantitative imaging in humans. We present a standard quantitative analysis workflow for these radioligands which takes the dynamic PET data, associated blood and anatomical MRI data as the inputs to a series of image processing and bio-mathematical modeling steps before outputting the outcome measure of interest on either a regional or parametric image basis. The quantitative outcome measures are then used in a range of different imaging studies including tracer discovery and development studies, cross sectional studies, classification studies, intervention studies and longitudinal studies. Finally we consider some of the confounds, challenges and subtleties that arise in practice when trying to quantify and interpret PET neuroimaging data including motion artifacts, partial volume effects, age effects, image registration and normalization, input functions and metabolites, parametric imaging, receptor internalization and genetic factors.
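As an illustration of the kinetic-modeling step named in this workflow, the sketch below fits a one-tissue compartment model, dC_t/dt = K1·C_p − k2·C_t, to a synthetic regional time-activity curve given an arterial input function. All values are synthetic; a real analysis would use measured input functions, metabolite correction, and established modeling software.

```python
# Fit a one-tissue compartment model to a time-activity curve and derive a
# distribution-volume outcome measure.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 60, 61)                 # minutes
Cp = 10 * t * np.exp(-t / 4)               # stand-in plasma input function

def one_tissue(t, K1, k2):
    dt = t[1] - t[0]
    Ct = np.zeros_like(t)
    for i in range(1, len(t)):             # simple Euler integration
        Ct[i] = Ct[i - 1] + dt * (K1 * Cp[i - 1] - k2 * Ct[i - 1])
    return Ct

tac = one_tissue(t, 0.5, 0.1) + np.random.normal(0, 0.2, t.size)  # noisy TAC
(K1, k2), _ = curve_fit(one_tissue, t, tac, p0=(0.3, 0.05))
VT = K1 / k2                               # distribution volume, a common outcome
```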
A Time of Flight Fast Neutron Imaging System Design Study
NASA Astrophysics Data System (ADS)
Canion, Bonnie; Glenn, Andrew; Sheets, Steven; Wurtz, Ron; Nakae, Les; Hausladen, Paul; McConchie, Seth; Blackston, Matthew; Fabris, Lorenzo; Newby, Jason
2017-09-01
LLNL and ORNL are designing an active/passive fast neutron imaging system that is flexible to non-ideal detector positioning. In fieldable imager applications such as safeguards, arms control treaty verification, and emergency response, it is often not possible to move the inspection object. In particular, we are interested in scenarios in which inspectors do not have access to all sides of an inspection object, due to interfering objects or walls. This paper will present the results of a simulation-based design parameter study that will determine the optimum system design parameters for a fieldable system performing time-of-flight imaging analysis. The imaging analysis is based on the use of an associated particle imaging deuterium-tritium (API DT) neutron generator to obtain the time of flight of radiation induced within an inspection object. This design study will investigate the optimum design parameters for such a system (e.g., detector size, ideal placement), as well as the upper and lower feasible design parameters within which the system can produce results in a reasonable amount of time (e.g., minimum/maximum detector efficiency, detector standoff). Ideally, the final prototype from this project will be capable of using full-access techniques, such as transmission imaging, when measurement circumstances allow, with the additional capability of producing results at reduced accessibility.
Thermal error analysis and compensation for digital image/volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing
2018-02-01
Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.
Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee
2013-01-01
Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications. PMID:24320536
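A compact fuzzy c-means sketch of the kind used for fibroglandular/fat classification is given below (standard FCM updates with fuzzifier m = 2, on one intensity feature); in the study's pipeline it would run on the CLIC bias-corrected intensities. The intensity distributions and membership cutoff are synthetic and illustrative.

```python
# Generic two-cluster fuzzy c-means on voxel intensities, then a crisp
# gland/fat labeling and a volumetric density estimate.
import numpy as np

def fcm(x, c=2, m=2.0, iters=50):
    u = np.random.dirichlet(np.ones(c), x.size).T   # c x n memberships
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)          # weighted cluster means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)                           # normalize memberships
    return u, centers

voxels = np.concatenate([np.random.normal(300, 30, 4000),   # fat-like
                         np.random.normal(120, 25, 1000)])  # gland-like
u, centers = fcm(voxels)
gland = u[np.argmin(centers)] > 0.5                  # fuzzy -> crisp labels
density = gland.mean()                               # volumetric density estimate
```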
Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.
Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué
2018-02-15
We present a novel method for characterizing in near real-time the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Due to the simplicity of the principle of operation, this method has the potential of circumventing potential biases of current real-time particle analyzers (e.g. Time of Flight analysis), while offering a cost effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we have conducted benchmark experiments whereby aerodynamic particle size distributions are obtained from several commercially-available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) that are closely in line with those obtained from Time of Flight analysis and cascade impactors suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.
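The core conversion behind the method is a worked one-liner: in the Stokes regime, a measured settling velocity v_s maps to an aerodynamic diameter via v_s = ρ₀·g·d_a²/(18·μ), so d_a = sqrt(18·μ·v_s/(ρ₀·g)). The velocities below are illustrative, not the paper's data.

```python
# Convert tracked settling velocities to aerodynamic diameters (Stokes regime,
# unit-density convention); MMAD/GSD then follow from the d_a distribution.
import numpy as np

mu = 1.81e-5      # air dynamic viscosity, Pa*s
rho0 = 1000.0     # unit density, kg/m^3 (aerodynamic-diameter convention)
g = 9.81          # m/s^2

v_settle = np.array([2.7e-4, 1.1e-3, 4.3e-3])          # measured velocities, m/s
d_a = np.sqrt(18 * mu * v_settle / (rho0 * g)) * 1e6   # aerodynamic diameter, um
# e.g. v_s = 1.1e-3 m/s gives d_a of roughly 6 um
```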
NASA Astrophysics Data System (ADS)
Daffara, C.; Parisotto, S.; Mariotti, P. I.
2015-06-01
Cultural heritage conservation is discovering how valuable thermal analysis is as a tool to improve restoration, thanks to its ability to inspect hidden details. In this work a novel dual-mode imaging approach, based on the integration of thermography and thermal quasi-reflectography (TQR) in the mid-IR, is demonstrated for effective mapping of surface materials and of sub-surface detachments in mural painting. The tool was validated through a unique application: the "Monocromo" by Leonardo da Vinci in Italy. The dual-mode acquisition provided two spatially aligned datasets: the TQR image and the thermal sequence. The main steps of the workflow were: 1) TQR analysis to map surface features; 2) emissivity estimation from TQR; 3) projection of the TQR frame onto a reference orthophoto and TQR mosaicking; 4) thermography analysis to map detachments; 5) use of TQR to solve spatial referencing and mosaicking for the thermally processed frames. Referencing thermal images in the visible is a difficult aspect of the thermography technique that the dual-mode approach solves effectively. We finally obtained the TQR and thermal maps spatially referenced to the mural painting, providing the restorer a valuable tool for the restoration of the detachments.
Image analysis and mathematical modelling for the supervision of the dough fermentation process
NASA Astrophysics Data System (ADS)
Zettel, Viktoria; Paquet-Durand, Olivier; Hecker, Florian; Hitzmann, Bernd
2016-10-01
The fermentation (proof) process of dough is one of the quality-determining steps in the production of baked goods. Besides the fluffiness, whose foundations are laid during fermentation, the flavour of the final product is strongly influenced during this production stage. However, until now no on-line measurement system has been available to supervise this important process step. In this investigation the potential of an image analysis system is evaluated that enables the determination of the volume of fermenting dough pieces. The camera moves around the fermenting pieces and collects images of the objects from different angles (360° range). Using image analysis algorithms, the volume increase of individual dough pieces is determined. Based on a detailed mathematical description of the volume increase, built on the Bernoulli equation, the carbon dioxide production rate of the yeast cells and the diffusion of carbon dioxide, the fermentation process is supervised. Important process parameters, like the carbon dioxide production rate of the yeast cells and the dough viscosity, can be estimated after just 300 s of proofing. The mean percentage error for forecasting the further evolution of the relative volume of the dough pieces is just 2.3%. Therefore, a forecast of the further evolution can be performed and used for fault detection.
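One way to sketch the volume step: treating each silhouette as approximately rotationally symmetric, the volume follows from the disk method, V = Σ π(w/2)²·Δz over image rows, averaged over viewing angles. This is a simplifying assumption for illustration; the paper's multi-angle reconstruction is not reproduced here, and the pixel scale below is invented.

```python
# Estimate dough volume from one binary silhouette via the disk method.
import numpy as np

def volume_from_silhouette(mask, pixel_mm=0.2):
    widths = mask.sum(axis=1) * pixel_mm                  # width per row, mm
    return np.pi * ((widths / 2) ** 2).sum() * pixel_mm   # mm^3

y, x = np.mgrid[-100:100, -100:100]
mask = (x / 80.0) ** 2 + (y / 60.0) ** 2 <= 1   # stand-in elliptical silhouette
v = volume_from_silhouette(mask)
# the relative volume v(t)/v(0) over time is the quantity fed to the model
```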
Simple Colorimetric Sensor for Trinitrotoluene Testing
NASA Astrophysics Data System (ADS)
Samanman, S.; Masoh, N.; Salah, Y.; Srisawat, S.; Wattanayon, R.; Wangsirikul, P.; Phumivanichakit, K.
2017-02-01
A simple colorimetric sensor for trinitrotoluene (TNT) determination, using a commercial scanner for image capture, was designed. The sensor is based on the chemical reaction between TNT and a sodium hydroxide reagent, which produces a color change within 96-well plates that is finally recorded using the scanner. The intensity of the color change increases with TNT concentration, so the TNT concentration can easily be quantified by digital image analysis using the free ImageJ software. Under optimum conditions, the sensor provided a linear dynamic range between 0.20 and 1.00 mg mL-1 (r = 0.9921) with a limit of detection of 0.10 ± 0.01 mg mL-1. The relative standard deviation of the sensitivity over eight experiments was 3.8%. When applied to the analysis of TNT in two soil extract samples, the concentrations found ranged from non-detectable to 0.26 ± 0.04 mg mL-1. The recovery values obtained (93-95%) were acceptable for the soil samples tested.
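The quantification step reduces to a linear calibration of mean well intensity against concentration, with the limit of detection commonly taken as 3·sd/slope. The sketch below uses invented calibration numbers, not the paper's data.

```python
# Linear calibration curve from well intensities, LOD, and back-calculation
# of an unknown sample concentration.
import numpy as np

conc = np.array([0.2, 0.4, 0.6, 0.8, 1.0])             # mg/mL standards
intensity = np.array([21.0, 39.5, 62.1, 80.2, 99.7])   # mean ImageJ intensities

slope, intercept = np.polyfit(conc, intensity, 1)
residual_sd = np.std(intensity - (slope * conc + intercept), ddof=2)
lod = 3 * residual_sd / slope                          # limit of detection, mg/mL

unknown = (55.0 - intercept) / slope                   # back-calculate a sample
```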
Framework for SEM contour analysis
NASA Astrophysics Data System (ADS)
Schneider, L.; Farys, V.; Serret, E.; Fenouillet-Beranger, C.
2017-03-01
SEM images provide valuable information about patterning capability. Geometrical properties such as Critical Dimension (CD) can be extracted from them and are used to calibrate OPC models, thus making OPC more robust and reliable. However, there is currently a shortage of appropriate metrology tools to inspect complex two-dimensional patterns in the same way as one would work with simple one-dimensional patterns. In this article we present a full framework for the analysis of SEM images. It has been proven to be fast, reliable and robust for every type of structure, and particularly for two-dimensional structures. To achieve this result, several innovative solutions have been developed and will be presented in the following pages. Firstly, we will present a new noise filter which is used to reduce noise on SEM images, followed by an efficient topography identifier, and finally we will describe the use of a topological skeleton as a measurement tool that can extend CD measurements on all kinds of patterns.
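The skeleton-as-ruler idea can be sketched as follows: skeletonize the binarized pattern and read a local CD as twice the distance from each skeleton pixel to the pattern edge, which works for arbitrary two-dimensional shapes. The threshold and test shape are illustrative, not part of the paper's framework.

```python
# Measure local critical dimension along a topological skeleton using the
# Euclidean distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

pattern = np.zeros((128, 128), bool)
pattern[40:88, 20:108] = True            # stand-in binarized SEM structure

dist = distance_transform_edt(pattern)   # distance to background
skel = skeletonize(pattern)
cd_map = 2 * dist[skel]                  # local width along the skeleton
print(cd_map.min(), cd_map.mean())       # necking shows up as low minima
```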
Study on fracture identification of shale reservoir based on electrical imaging logging
NASA Astrophysics Data System (ADS)
Yu, Zhou; Lai, Fuqiang; Xu, Lei; Liu, Lin; Yu, Tong; Chen, Junyu; Zhu, Yuantong
2017-05-01
In recent years, shale gas exploration has made important progress and achieved major breakthroughs, in which the study of mud shale fractures is extremely important: fracture development plays an important role in the development of gas reservoirs. Based on core observation and the analysis of laboratory thin sections and related materials, this paper classifies the lithology of the shale reservoirs of well XX in the Zhanhua Depression. Based on the response of mudstone fractures in logging curves, the relationship between fracture development and logging response is established; conventional logging and electrical imaging logging are then combined to identify fractures, and finally the fracture types in the area are determined and the fracture parameters quantified. From the analysis of well XX in the Zhanhua Depression, it is concluded that high-angle fractures dominate the study area and that microfractures are well developed. The shape of the fractures can be clearly seen with imaging logging technology, allowing their type to be determined.
Design of a high-speed electrochemical scanning tunneling microscope.
Yanson, Y I; Schenkel, F; Rost, M J
2013-02-01
In this paper, we present a bottom-up approach to designing and constructing a high-speed electrochemical scanning tunneling microscope (EC-STM). Using finite element analysis (FEA) calculations of the frequency response of the whole mechanical loop of the STM, we analyzed several geometries to find the most stable one that could facilitate fast scanning. To test the FEA results, we conducted measurements of the vibration amplitudes using a prototype STM setup. Based on the FEA analysis and the measurement results, we identified the potentially most disturbing vibration modes that could impair fast scanning. By modifying the design of some parts of the EC-STM, we reduced the amplitudes as well as increased the resonance frequencies of these modes. Additionally, we designed and constructed an electrochemical flow-cell that allows STM imaging in a flowing electrolyte, and built a bi-potentiostat to achieve electrochemical potential control during the measurements. Finally, we present STM images acquired during high-speed imaging in air as well as in an electrochemical environment using our newly-developed EC-STM.
Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf
2010-07-01
Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches to automatic 3D liver segmentation on computed tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied for normal and fat-accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.
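A hedged sketch of the probability-map step follows: a multiclass linear discriminant model trained on multi-weighting MR intensities yields per-voxel liver-tissue probabilities that can seed the subsequent region growing. Training labels, feature means, and volume sizes below are synthetic.

```python
# LDA on per-voxel multi-channel MR intensities, producing a probability map.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# features: one intensity per MR weighting (e.g. T1, T2, PD) per voxel
liver = rng.normal([1.0, 0.4, 0.7], 0.1, (500, 3))
other = rng.normal([0.5, 0.8, 0.3], 0.1, (500, 3))
X = np.vstack([liver, other])
y = np.r_[np.ones(500), np.zeros(500)]

lda = LinearDiscriminantAnalysis().fit(X, y)
volume = rng.normal(0.6, 0.25, (32, 32, 32, 3))        # stand-in MR volume
probs = lda.predict_proba(volume.reshape(-1, 3))[:, 1]
prob_map = probs.reshape(32, 32, 32)                   # seeds for region growing
```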
FPGA Implementation of the Coupled Filtering Method and the Affine Warping Method.
Zhang, Chen; Liang, Tianzhu; Mok, Philip K T; Yu, Weichuan
2017-07-01
In ultrasound image analysis, speckle tracking methods are widely applied to study the elasticity of body tissue. However, "feature-motion decorrelation" still remains a challenge for speckle tracking methods. Recently, a coupled filtering method and an affine warping method were proposed to accurately estimate strain values when the tissue deformation is large. The major drawback of these methods is their high computational complexity; even the graphics processing unit (GPU)-based program requires a long time to finish the analysis. In this paper, we propose field-programmable gate array (FPGA)-based implementations of both methods for further acceleration. The capability of FPGAs for handling the different image processing components of these methods is discussed. A fast and memory-saving image warping approach is proposed. The algorithms are reformulated to build a highly efficient pipeline on the FPGA. The final implementations on a Xilinx Virtex-7 FPGA are at least 13 times faster than the GPU implementation on an NVIDIA graphics card (GeForce GTX 580).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drukker, Karen, E-mail: kdrukker@uchicago.edu; Giger, Maryellen L.; Li, Hui
2014-03-15
Purpose: To investigate whether biologic image composition of mammographic lesions can improve upon existing mammographic quantitative image analysis (QIA) in estimating the probability of malignancy. Methods: The study population consisted of 45 breast lesions imaged with dual-energy mammography prior to breast biopsy, with final diagnoses of 10 invasive ductal carcinomas, 5 ductal carcinomas in situ, 11 fibroadenomas, and 19 other benign diagnoses. The analysis was threefold: (1) the raw low-energy mammographic images were analyzed with an established in-house QIA method ("QIA alone"); (2) the three-compartment breast (3CB) composition measures of water, lipid, and protein thickness, derived from the dual-energy mammography, were assessed ("3CB alone"); and (3) information from QIA and 3CB was combined ("QIA + 3CB"). Analysis was initiated from radiologist-indicated lesion centers and was otherwise fully automated. The steps of the QIA and 3CB methods were lesion segmentation, characterization, and subsequent classification for malignancy in leave-one-case-out cross-validation. Performance assessment included box plots, Bland–Altman plots, and receiver operating characteristic (ROC) analysis. Results: The area under the ROC curve (AUC) for distinguishing between benign and malignant lesions (invasive and DCIS) was 0.81 (standard error 0.07) for the "QIA alone" method, 0.72 (0.07) for the "3CB alone" method, and 0.86 (0.04) for "QIA + 3CB" combined. The difference in AUC was 0.043 between "QIA + 3CB" and "QIA alone" but failed to reach statistical significance (95% confidence interval [-0.17 to +0.26]). Conclusions: In this pilot study analyzing the new 3CB imaging modality, knowledge of the composition of breast lesions and their periphery appeared additive in combination with existing mammographic QIA methods for the distinction between different benign and malignant lesion types.
Image-Based Macro-Micro Finite Element Models of a Canine Femur with Implant Design Implications
NASA Astrophysics Data System (ADS)
Ghosh, Somnath; Krishnan, Ganapathi; Dyce, Jonathan
2006-06-01
In this paper, a comprehensive model of a bone-cement-implant assembly is developed for a canine cemented femoral prosthesis system. Various steps in this development entail profiling the canine femur contours by computed tomography (CT) scanning, computer aided design (CAD) reconstruction of the canine femur from CT images, CAD modeling of the implant from implant blue prints and CAD modeling of the interface cement. Finite element analysis of the macroscopic assembly is conducted for stress analysis in individual components of the system, accounting for variation in density and material properties in the porous bone material. A sensitivity analysis is conducted with the macroscopic model to investigate the effect of implant design variables on the stress distribution in the assembly. Subsequently, rigorous microstructural analysis of the bone incorporating the morphological intricacies is conducted. Various steps in this development include acquisition of the bone microstructural data from histological serial sectioning, stacking of sections to obtain 3D renderings of void distributions, microstructural characterization and determination of properties and, finally, microstructural stress analysis using a 3D Voronoi cell finite element method. Generation of the simulated microstructure and analysis by the 3D Voronoi cell finite element model provides a new way of modeling complex microstructures and correlating to morphological characteristics. An inverse calculation of the material parameters of bone by combining macroscopic experiments with microstructural characterization and analysis provides a new approach to evaluating properties without having to do experiments at this scale. Finally, the microstructural stresses in the femur are computed using the 3D VCFEM to study the stress distribution at the scale of the bone porosity. Significant difference is observed between the macroscopic stresses and the peak microscopic stresses at different locations.
NASA Astrophysics Data System (ADS)
Esbrand, C.; Royle, G.; Griffiths, J.; Speller, R.
2009-07-01
The integration of technology with healthcare has undoubtedly propelled the medical imaging sector well into the twenty-first century. The concept of digital imaging introduced during the 1970s has since paved the way for established imaging techniques, of which digital mammography, phase contrast imaging and CT imaging are just a few examples. This paper presents a prototype intelligent digital mammography system designed and developed by a European consortium. The final system, the I-ImaS system, utilises CMOS monolithic active pixel sensor (MAPS) technology promoting on-chip data processing, enabling data processing and image acquisition to be achieved simultaneously; consequently, statistical analysis of tissue is achievable in real time for the purpose of x-ray beam modulation via a feedback mechanism during the image acquisition procedure. The imager implements a dual array of twenty 520 pixel × 40 pixel CMOS MAPS sensing devices with a 32 μm pixel size, each individually coupled to a 100 μm thick thallium-doped structured CsI scintillator. This paper presents the first intelligent images of real excised breast tissue obtained from the prototype system, where the x-ray exposure was modulated via statistical information extracted from the breast tissue itself. Conventional images were experimentally acquired and the statistical analysis of the data was done off-line, resulting in the production of simulated real-time intelligently optimised images. The results obtained indicate that real-time image optimisation using statistical information extracted from the breast as a feedback mechanism is beneficial and foreseeable in the near future.
Method to acquire regions of fruit, branch and leaf from image of red apple in orchard
NASA Astrophysics Data System (ADS)
Lv, Jidong; Xu, Liming
2017-07-01
This work proposes a method to acquire the fruit, branch and leaf regions from images of red apples in an orchard. To acquire the fruit image, the R-G difference image is extracted from the RGB image and processed by erosion, hole filling, small-region removal, dilation and opening, in that order; the fruit image is finally acquired by threshold segmentation. To acquire the leaf image, the fruit image is subtracted from the RGB image before extracting the 2G-R-B image; the leaf image is then acquired by small-region removal and threshold segmentation. To acquire the branch image, dynamic threshold segmentation is applied to the R-G image, and the segmented image is added to the fruit image; this combined fruit image, together with the leaf image, is subtracted from the RGB image. Finally, the branch image is acquired by opening, small-region removal and threshold segmentation after extracting the R-G image from the subtracted image. Compared with previous methods, this method acquires more complete fruit, leaf and branch images from red apple images.
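The fruit step can be sketched as follows: red apples stand out in the R-G difference image, which is cleaned morphologically and thresholded. The structuring-element size, the use of Otsu for the threshold, and the area cutoff are illustrative choices, not the paper's parameters.

```python
# Fruit-region extraction from the R-G difference image: morphology,
# thresholding, hole filling, and small-region removal.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

rgb = (np.random.rand(240, 320, 3) * 255).astype(float)   # stand-in orchard image
rg = rgb[..., 0] - rgb[..., 1]                             # R-G difference

rg = ndimage.grey_erosion(rg, size=3)                      # erosion
rg = ndimage.grey_dilation(rg, size=3)                     # dilation
binary = rg > threshold_otsu(rg)                           # threshold segmentation
binary = ndimage.binary_fill_holes(binary)                 # hole filling
labels, n = ndimage.label(binary)
sizes = ndimage.sum(binary, labels, range(1, n + 1))
fruit = np.isin(labels, 1 + np.flatnonzero(sizes > 50))    # small-region removal
```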
CytoSpectre: a tool for spectral analysis of oriented structures on cellular and subcellular levels.
Kartasalo, Kimmo; Pölönen, Risto-Pekka; Ojala, Marisa; Rasku, Jyrki; Lekkala, Jukka; Aalto-Setälä, Katriina; Kallio, Pasi
2015-10-26
Orientation and the degree of isotropy are important in many biological systems such as the sarcomeres of cardiomyocytes and other fibrillar structures of the cytoskeleton. Image-based analysis of such structures is often limited to qualitative evaluation by human experts, hampering the throughput, repeatability and reliability of the analyses. Software tools are not readily available for this purpose and the existing methods typically rely at least partly on manual operation. We developed CytoSpectre, an automated tool based on spectral analysis, allowing the quantification of orientation and also size distributions of structures in microscopy images. CytoSpectre utilizes the Fourier transform to estimate the power spectrum of an image and, based on the spectrum, computes parameter values describing, among others, the mean orientation, isotropy and size of target structures. The analysis can be further tuned to focus on targets of particular size at cellular or subcellular scales. The software can be operated via a graphical user interface without any programming expertise. We analyzed the performance of CytoSpectre by extensive simulations using artificial images, by benchmarking against FibrilTool and by comparisons with manual measurements performed on real images by a panel of human experts. The software was found to be tolerant of noise and blurring and superior to FibrilTool when analyzing realistic targets with degraded image quality. The analysis of real images indicated generally good agreement between computational and manual results while also revealing notable expert-to-expert variation. Moreover, the experiment showed that CytoSpectre can handle images of different cell types obtained using different microscopy techniques. Finally, we studied the effect of mechanical stretching on cardiomyocytes to demonstrate the software in an actual experiment and observed changes in cellular orientation in response to stretching. CytoSpectre, a versatile, easy-to-use software tool for spectral analysis of microscopy images, was developed. The tool is compatible with most 2D images and can be used to analyze targets at different scales. We expect the tool to be useful in diverse applications dealing with structures whose orientation and size distributions are of interest. While designed for the biological field, the software could also be useful in non-biological applications.
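The spectral-orientation idea can be sketched in a few lines: integrate the 2D power spectrum over radius as a function of angle; the peak of the angular distribution gives the dominant structure orientation and its spread reflects isotropy. This is a generic illustration of the principle, not CytoSpectre's actual estimators, and the isotropy measure used here is simplistic.

```python
# Orientation distribution from the 2D FFT power spectrum of an image.
import numpy as np

img = np.sin(2 * np.pi * 0.1 * np.arange(256))[None, :]   # vertical stripes
img = np.tile(img, (256, 1)) + np.random.normal(0, 0.1, (256, 256))

P = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
y, x = np.mgrid[-128:128, -128:128]
theta = np.degrees(np.arctan2(y, x)) % 180                 # orientation bins

bins = np.arange(0, 181, 2)
angular, _ = np.histogram(theta.ravel(), bins=bins, weights=P.ravel())
mean_orientation = bins[np.argmax(angular)]                # dominant direction
isotropy = angular.min() / angular.max()                   # 1 = fully isotropic
```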
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lavoie, Caroline; Higgins, Jane; Bissonnette, Jean-Pierre
2012-12-01
Purpose: To compare the relative accuracy of 2 image guided radiation therapy methods using carina vs spine as landmarks and then to identify which landmark is superior relative to tumor coverage. Methods and Materials: For 98 lung patients, 2596 daily image-guidance cone-beam computed tomography scans were analyzed. Tattoos were used for initial patient alignment; then, spine and carina registrations were performed independently. A separate analysis assessed the adequacy of gross tumor volume, internal target volume, and planning target volume coverage on cone-beam computed tomography using the initial, middle, and final fractions of radiation therapy. Coverage was recorded for primary tumormore » (T), nodes (N), and combined target (T+N). Three scenarios were compared: tattoos alignment, spine registration, and carina registration. Results: Spine and carina registrations identified setup errors {>=}5 mm in 35% and 46% of fractions, respectively. The mean vector difference between spine and carina matching had a magnitude of 3.3 mm. Spine and carina improved combined target coverage, compared with tattoos, in 50% and 34% (spine) to 54% and 46% (carina) of the first and final fractions, respectively. Carina matching showed greater combined target coverage in 17% and 23% of fractions for the first and final fractions, respectively; with spine matching, this was only observed in 4% (first) and 6% (final) of fractions. Carina matching provided superior nodes coverage at the end of radiation compared with spine matching (P=.0006), without compromising primary tumor coverage. Conclusion: Frequent patient setup errors occur in locally advanced lung cancer patients. Spine and carina registrations improved combined target coverage throughout the treatment course, but carina matching provided superior combined target coverage.« less
Aung, Wint Yan; Massoumzadeh, Parinaz; Najmi, Safa; Salter, Amber; Heaps, Jodi; Benzinger, Tammie L S; Mar, Soe
2018-01-01
There are no clinical features or biomarkers that can reliably differentiate acute disseminated encephalomyelitis from multiple sclerosis at the first demyelination attack. Consequently, a final diagnosis is sometimes delayed by months and years of follow-up. Early treatment for multiple sclerosis is recommended to reduce long-term disability. Therefore, we intend to explore neuroimaging biomarkers that can reliably distinguish between the two diagnoses. We reviewed prospectively collected clinical, standard MRI and diffusion tensor imaging data from 12 pediatric patients who presented with acute demyelination with and without encephalopathy. Patients were followed for an average of 6.5 years to determine the accuracy of final diagnosis. Final diagnosis was determined using 2013 International Pediatric MS Study Group criteria. Control subjects consisted of four age-matched healthy individuals for each patient. The study population consisted of six patients with central nervous system demyelination with encephalopathy with a presumed diagnosis of acute disseminated encephalomyelitis and six without encephalopathy with a presumed diagnosis of multiple sclerosis or clinically isolated syndrome at high risk for multiple sclerosis. During follow-up, two patients with initial diagnosis of acute disseminated encephalomyelitis were later diagnosed with multiple sclerosis. Diffusion tensor imaging region of interest analysis of baseline scans showed differences between final diagnosis of multiple sclerosis and acute disseminated encephalomyelitis patients, whereby low fractional anisotropy and high radial diffusivity occurred in multiple sclerosis patients compared with acute disseminated encephalomyelitis patients and the age-matched controls. Fractional anisotropy and radial diffusivity measures may have the potential to serve as biomarkers for distinguishing acute disseminated encephalomyelitis from multiple sclerosis at the onset. Copyright © 2017 Elsevier Inc. All rights reserved.
Aron, Miles; Browning, Richard; Carugo, Dario; Sezgin, Erdinc; Bernardino de la Serna, Jorge; Eggeling, Christian; Stride, Eleanor
2017-05-12
Spectral imaging with polarity-sensitive fluorescent probes enables the quantification of cell and model membrane physical properties, including local hydration, fluidity, and lateral lipid packing, usually characterized by the generalized polarization (GP) parameter. With the development of commercial microscopes equipped with spectral detectors, spectral imaging has become a convenient and powerful technique for measuring GP and other membrane properties. The existing tools for spectral image processing, however, are insufficient for processing the large data sets afforded by this technological advancement and are unsuitable for processing images acquired with rapidly internalized fluorescent probes. Here we present a MATLAB spectral imaging toolbox that aims to overcome these limitations. In addition to common operations, such as the calculation of distributions of GP values, generation of pseudo-colored GP maps, and spectral analysis, a key highlight of this tool is reliable membrane segmentation for probes that are rapidly internalized. Furthermore, handling of hyperstacks, 3D reconstruction, and batch processing facilitates the analysis of data sets generated by time series, z-stack, and area scan microscope operations. Finally, the object size distribution is determined, which can provide insight into the mechanisms underlying changes in membrane properties and is desirable for, e.g., studies involving model membranes and surfactant-coated particles. Analysis is demonstrated for cell membranes, cell-derived vesicles, model membranes, and microbubbles with the environmentally sensitive probes Laurdan, carboxyl-modified Laurdan (C-Laurdan), Di-4-ANEPPDHQ, and Di-4-AN(F)EPPTEA (FE), for quantification of the local lateral density of lipids or lipid packing. The Spectral Imaging Toolbox is a powerful tool for the segmentation and processing of large spectral imaging datasets, with a reliable method for membrane segmentation and no programming ability required. The Spectral Imaging Toolbox can be downloaded from https://uk.mathworks.com/matlabcentral/fileexchange/62617-spectral-imaging-toolbox .
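The GP parameter at the heart of the toolbox is a per-pixel ratiometric quantity computed from two spectral channels. A minimal NumPy sketch follows; the Laurdan channel centers (~440 nm ordered, ~490 nm disordered) are assumptions for illustration, not settings taken from the toolbox:

```python
import numpy as np

def gp_map(i_ordered, i_disordered, eps=1e-9):
    """Pixel-wise generalized polarization: GP = (I_o - I_d)/(I_o + I_d).

    i_ordered / i_disordered: channel images, e.g. near 440 nm and
    490 nm for Laurdan (assumed wavelengths). Values fall in [-1, 1]."""
    i_o = i_ordered.astype(float)
    i_d = i_disordered.astype(float)
    return (i_o - i_d) / (i_o + i_d + eps)
```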
Preprocessing of 2-Dimensional Gel Electrophoresis Images Applied to Proteomic Analysis: A Review.
Goez, Manuel Mauricio; Torres-Madroñero, Maria Constanza; Röthlisberger, Sarah; Delgado-Trejos, Edilson
2018-02-01
Various methods and specialized software programs are available for processing two-dimensional gel electrophoresis (2-DGE) images. However, due to the anomalies present in these images, a reliable, automated, and highly reproducible system for 2-DGE image analysis has still not been achieved. The most common anomalies found in 2-DGE images include vertical and horizontal streaking, fuzzy spots, and background noise, which greatly complicate computational analysis. In this paper, we review the preprocessing techniques applied to 2-DGE images for noise reduction, intensity normalization, and background correction. We also present a quantitative comparison of non-linear filtering techniques applied to synthetic gel images, analyzing the performance of the filters under specific conditions. Synthetic proteins were modeled as two-dimensional Gaussian distributions with adjustable parameters for changing the size, intensity, and degradation. Three types of noise were added to the images: Gaussian, Rayleigh, and exponential, with signal-to-noise ratios (SNRs) ranging from 8 to 20 decibels (dB). We compared the performance of wavelet, contourlet, total variation (TV), and wavelet-total variation (WTTV) techniques using the parameters SNR and spot efficiency. In terms of spot efficiency, contourlet and TV were more sensitive to noise than wavelet and WTTV. Wavelet worked best for images with SNRs from 10 to 20 dB, whereas WTTV performed better at high noise levels. Wavelet also presented the best performance, in terms of SNR, with any level of Gaussian noise and low levels (20 to 14 dB) of Rayleigh and exponential noise. Finally, the performance of the non-linear filtering techniques was evaluated using a real 2-DGE image with previously identified proteins marked. Wavelet achieved the best detection rate for the real image. Copyright © 2018 Beijing Institute of Genomics, Chinese Academy of Sciences and Genetics Society of China. Production and hosting by Elsevier B.V. All rights reserved.
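The synthetic-gel setup described above can be sketched directly: build a 2-D Gaussian "spot" with adjustable size and intensity and add noise at a requested SNR. The sketch below assumes the usual power-ratio SNR convention in dB; all parameter values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_spot(size=64, amp=200.0, sigma=5.0):
    """2-D Gaussian 'protein spot' with adjustable size and intensity."""
    y, x = np.mgrid[:size, :size] - size // 2
    return amp * np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))

def add_gaussian_noise(img, snr_db):
    """Add white Gaussian noise so the result has the requested SNR (dB),
    defined here as 10*log10(signal power / noise power)."""
    p_signal = np.mean(img ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return img + rng.normal(0.0, np.sqrt(p_noise), img.shape)

noisy = add_gaussian_noise(synthetic_spot(), snr_db=12)
```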
A methodology to event reconstruction from trace images.
Milliet, Quentin; Delémont, Olivier; Sapin, Eric; Margot, Pierre
2015-03-01
The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and opportunities to consider the images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology to event reconstruction using images. This formal methodology was conceptualised from practical experiences and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence for which the results from each step rely on the previous step. However, the methodology is not linear, but it is a cyclic, iterative progression for obtaining knowledge about an event. The preliminary analysis is a pre-evaluation phase, wherein potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending image use as evidence and, more generally, as clues in investigation and crime reconstruction processes. Copyright © 2015 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
2013-01-01
Background Activity of disease in patients with multiple sclerosis (MS) is monitored by detecting and delineating hyper-intense lesions on MRI scans. The Minimum Area Contour Change (MACC) algorithm was created with two main goals: a) to improve inter-operator agreement on outlining regions of interest (ROIs) and b) to automatically propagate longitudinal ROIs from the baseline scan to a follow-up scan. Methods The MACC algorithm first identifies an outer bound for the solution path, forms a high number of iso-contour curves based on equally spaced contour values, and then selects the best contour value to outline the lesion. The MACC software was tested on a set of 17 FLAIR MRI images evaluated by a pair of human experts and a longitudinal dataset of 12 pairs of T2-weighted Fluid Attenuated Inversion Recovery (FLAIR) images that had lesion analysis ROIs drawn by a single expert operator. Results In the tests where two human experts evaluated the same MRI images, the MACC program demonstrated that it could markedly reduce inter-operator outline error. In the longitudinal part of the study, the MACC program created ROIs on follow-up scans that were in close agreement with the original expert's ROIs. Finally, in a post-hoc analysis of 424 follow-up scans, 91% of propagated MACC ROIs were accepted by an expert, and only 9% of the final accepted ROIs had to be created or edited by the expert. Conclusion When used with an expert operator's verification of automatically created ROIs, MACC can be used to improve inter-operator agreement and decrease analysis time, which should improve data collected and analyzed in multicenter clinical trials. PMID:24004511
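The iso-contour selection step admits a compact sketch. The version below (scikit-image assumed; the outer-bound computation is omitted and the selection criterion is a simplified reading of the "minimum area contour change" idea) scans equally spaced levels and keeps the one whose enclosed area changes least between neighbouring levels:

```python
import numpy as np
from skimage import measure

def macc_level(img, n_levels=50):
    """Pick the iso-contour level whose enclosed area changes least
    between neighbouring levels (simplified MACC-style criterion)."""
    levels = np.linspace(img.min(), img.max(), n_levels + 2)[1:-1]
    areas = []
    for lev in levels:
        contours = measure.find_contours(img, lev)
        if not contours:
            areas.append(0.0)
            continue
        c = max(contours, key=len)              # largest contour at this level
        x, y = c[:, 1], c[:, 0]
        areas.append(0.5 * abs(np.dot(x, np.roll(y, 1)) -
                               np.dot(y, np.roll(x, 1))))  # shoelace area
    change = np.abs(np.diff(np.asarray(areas)))
    return levels[1:][np.argmin(change)]        # most stable level
```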
Qumseya, Bashar J; Wang, Haibo; Badie, Nicole; Uzomba, Rosemary N; Parasa, Sravanthi; White, Donna L; Wolfsen, Herbert; Sharma, Prateek; Wallace, Michael B
2013-12-01
US guidelines recommend surveillance of patients with Barrett's esophagus (BE) to detect dysplasia. BE conventionally is monitored via white-light endoscopy (WLE) and a collection of random biopsy specimens. However, this approach does not definitively or consistently detect areas of dysplasia. Advanced imaging technologies can increase the detection of dysplasia and cancer. We investigated whether these imaging technologies can increase the diagnostic yield for the detection of neoplasia in patients with BE, compared with WLE and analysis of random biopsy specimens. We performed a systematic review, using Medline and Embase, to identify relevant peer-review studies. Fourteen studies were included in the final analysis, with a total of 843 patients. Our metameter (estimate) of interest was the paired-risk difference (RD), defined as the difference in yield of the detection of dysplasia or cancer using advanced imaging vs WLE. The estimated paired-RD and 95% confidence interval (CI) were obtained using random-effects models. Heterogeneity was assessed by means of the Q statistic and the I(2) statistic. An exploratory meta-regression was performed to look for associations between the metameter and potential confounders or modifiers. Overall, advanced imaging techniques increased the diagnostic yield for detection of dysplasia or cancer by 34% (95% CI, 20%-56%; P < .0001). A subgroup analysis showed that virtual chromoendoscopy significantly increased the diagnostic yield (RD, 0.34; 95% CI, 0.14-0.56; P < .0001). The RD for chromoendoscopy was 0.35 (95% CI, 0.13-0.56; P = .0001). There was no significant difference between virtual chromoendoscopy and chromoendoscopy, based on Student t test analysis (P = .45). Based on a meta-analysis, advanced imaging techniques such as chromoendoscopy or virtual chromoendoscopy significantly increase the diagnostic yield for identification of dysplasia or cancer in patients with BE. Copyright © 2013 AGA Institute. Published by Elsevier Inc. All rights reserved.
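For readers who want to reproduce the pooling step, a generic DerSimonian-Laird random-effects estimator for risk differences is sketched below. This is a textbook implementation, not the authors' code, and it ignores the paired structure of the design for brevity:

```python
import numpy as np

def dl_pooled_rd(rd, var):
    """DerSimonian-Laird random-effects pooling of per-study risk
    differences `rd` with within-study variances `var`."""
    rd, var = np.asarray(rd, float), np.asarray(var, float)
    w = 1.0 / var                                    # fixed-effect weights
    rd_fixed = np.sum(w * rd) / np.sum(w)
    q = np.sum(w * (rd - rd_fixed) ** 2)             # Cochran's Q
    k = len(rd)
    tau2 = max(0.0, (q - (k - 1)) /
               (np.sum(w) - np.sum(w ** 2) / np.sum(w)))  # between-study var
    w_re = 1.0 / (var + tau2)                        # random-effects weights
    pooled = np.sum(w_re * rd) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), q, i2
```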
Pixel-based image fusion with false color mapping
NASA Astrophysics Data System (ADS)
Zhao, Wei; Mao, Shiyi
2003-06-01
In this paper, we propose a pixel-based image fusion algorithm that combines gray-level image fusion with false color mapping. The algorithm integrates two gray-level images from different sensor modalities or different frequencies and produces a fused false-color image. The resulting image has higher information content than either of the original images, and objects in the fused color image are easier to recognize. The algorithm has three steps: first, obtain the fused gray-level image of the two original images; second, compute the generalized high-boost filtering images between the fused gray-level image and the two source images, respectively; third, generate the fused false-color image. We use the hybrid averaging-and-selection fusion method to obtain the fused gray-level image. The fused gray-level image provides better detail than the two original images and reduces noise at the same time, but it cannot contain all the detail information of the two source images. Moreover, details in a gray-level image cannot be discerned as easily as in a color image, so a color fused image is necessary. In order to create color variation and enhance details in the final fused image, we produce three generalized high-boost filtering images and display them through the red, green, and blue channels, respectively, producing the final fused color image. The method was used to fuse two SAR images acquired over the San Francisco area (California, USA). The result shows that the fused false-color image enhances the visibility of certain details, and the resolution of the final false-color image is the same as that of the input images.
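A minimal NumPy sketch of the three-step scheme follows. The hybrid averaging/selection rule and the particular high-boost combinations routed to each channel are assumptions for illustration; the paper's exact definitions may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def false_color_fusion(a, b, k=1.2, sigma=2.0):
    """Three-step sketch: gray-level fusion, generalized high-boost
    filtering, and RGB assignment (combinations are assumptions)."""
    a, b = a.astype(float), b.astype(float)
    # Step 1: hybrid averaging/selection gray-level fusion.
    diff = np.abs(a - b)
    fused = np.where(diff > diff.mean(), np.maximum(a, b), 0.5 * (a + b))
    # Step 2: generalized high-boost images, k*fused - low-pass(source).
    hb_a = k * fused - gaussian_filter(a, sigma)
    hb_b = k * fused - gaussian_filter(b, sigma)
    hb_f = k * fused - gaussian_filter(fused, sigma)
    # Step 3: route the three high-boost images to the R, G, B channels.
    rgb = np.stack([hb_a, hb_f, hb_b], axis=-1)
    rgb -= rgb.min()
    return rgb / (rgb.max() + 1e-9)
```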
Analysis of Cultural Heritage by Accelerator Techniques and Analytical Imaging
NASA Astrophysics Data System (ADS)
Ide-Ektessabi, Ari; Toque, Jay Arre; Murayama, Yusuke
2011-12-01
In this paper we present the results of an experimental investigation using two very important accelerator techniques: (1) synchrotron radiation XRF and XAFS; and (2) accelerator mass spectrometry and multispectral analytical imaging for the investigation of cultural heritage. We also introduce a complementary approach to the investigation of artworks that is noninvasive and nondestructive and can be applied in situ. Four major projects are discussed to illustrate the potential applications of these accelerator and analytical imaging techniques: (1) investigation of Mongolian textiles (Genghis Khan and Kublai Khan period) using XRF, AMS and electron microscopy; (2) XRF studies of pigments collected from Korean Buddhist paintings; (3) creation of a database of elemental composition and spectral reflectance of more than 1000 Japanese pigments that have been used for traditional Japanese paintings; and (4) visible light-near infrared spectroscopy and multispectral imaging of degraded malachite and azurite. The XRF measurements of the Japanese and Korean pigments could be used to complement the results of pigment identification by analytical imaging through spectral reflectance reconstruction. On the other hand, analysis of the Mongolian textiles revealed that they were produced between the 12th and 13th centuries. Elemental analysis of the samples showed that they contained traces of gold, copper, iron and titanium. Based on the age and trace elements in the samples, it was concluded that the textiles were produced during the height of power of the Mongol empire, which makes them a valuable cultural heritage. Finally, the analysis of the degraded and discolored malachite and azurite demonstrates how multispectral analytical imaging can complement the results of high-energy-based techniques.
Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel
2015-01-01
Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions that meet the requirements of lower complexity and larger average gray value are selected to calculate the final illuminant direction, according to the error function between the measured intensity and the calculated intensity and the constraint function for an infinite light source model. Once the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; and (2) we clip the values at both ends of the face-image histogram, determine the range of gray levels, and stretch that range into the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of a 3D face or reflective surface model. Experimental results using the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.
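The edge level percentage used to rank the sixteen regions can be sketched directly; the Sobel operator, the 4x4 grid, and the edge threshold below are assumptions, since the abstract does not fix them:

```python
import numpy as np
from scipy.ndimage import sobel

def edge_level_percentage(img, grid=4, thresh=30.0):
    """Fraction of edge pixels in each of grid*grid local regions
    (Sobel gradient and threshold are assumed choices)."""
    f = img.astype(float)
    mag = np.hypot(sobel(f, 0), sobel(f, 1))   # gradient magnitude
    h, w = f.shape
    out = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            block = mag[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            out[i, j] = np.mean(block > thresh)
    return out
```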
Sabater-Galindo, Marta; Sabater-Hernández, Daniel; Ruiz de Maya, Salvador; Gastelurrutia, Miguel Angel; Martínez-Martínez, Fernando; Benrimoj, Shalom I
2017-06-01
Professional pharmaceutical services may impact patients' health behaviour and influence patients' perceptions of the pharmacist image. The Health Belief Model predicts health-related behaviours using patients' beliefs. However, health beliefs (HBs) could transcend predicting health behaviour and may have an impact on patients' perceptions of the pharmacist image. The objective of this study was to develop and test a model that relates patients' HBs to the patient's perception of the image of the pharmacist, and to assess whether the provision of pharmacy services (intervention group, IG) influences this perception compared with usual care (control group). A qualitative study was undertaken and a questionnaire was created for the development of the model. The content, dimensions, validity and reliability of the questionnaire were pre-tested qualitatively and in a pilot mail survey. The reliability and validity of the proposed model were tested using confirmatory factor analysis (CFA). Structural equation modelling (SEM) was used to explain relationships between dimensions of the final model and to analyse differences between groups. As a result, a final model was developed. CFA concluded that the model was valid and reliable (goodness-of-fit indices: χ²(80) = 125.726, p = .001, RMSEA = .04, SRMR = .04, GFI = .997, NFI = .93, CFI = .974). SEM indicated that 'Perceived benefits' were significantly associated with 'Perceived pharmacist image' in the whole sample. Differences were found in the IG, with 'Self-efficacy' also significantly influencing 'Perceived pharmacist image'. A model of patients' HBs related to their image of the pharmacist was developed and tested. When pharmacists deliver professional services, these services modify some patients' HBs that in turn influence public perception of the pharmacist.
General tensor discriminant analysis and gabor features for gait recognition.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2007-10-01
The traditional image representations are not suited to conventional classification methods, such as linear discriminant analysis (LDA), because of the undersample problem (USP): the dimensionality of the feature space is much higher than the number of training samples. Motivated by the successes of two-dimensional LDA (2DLDA) for face recognition, we develop a general tensor discriminant analysis (GTDA) as a preprocessing step for LDA. The benefits of GTDA compared with existing preprocessing methods, e.g., principal component analysis (PCA) and 2DLDA, include 1) the USP is reduced in subsequent classification by, for example, LDA; 2) the discriminative information in the training tensors is preserved; and 3) GTDA provides stable recognition rates because the alternating projection optimization algorithm used to obtain a solution of GTDA converges, while that of 2DLDA does not. We use human gait recognition to validate the proposed GTDA. The averaged gait images are utilized for gait representation. Given the popularity of Gabor-function-based image decompositions for image understanding and object recognition, we develop three different Gabor-function-based image representations: 1) the GaborD representation is the sum of Gabor filter responses over directions, 2) GaborS is the sum of Gabor filter responses over scales, and 3) GaborSD is the sum of Gabor filter responses over scales and directions. The GaborD, GaborS and GaborSD representations are applied to the problem of recognizing people from their averaged gait images. A large number of experiments were carried out to evaluate the effectiveness (recognition rate) of gait recognition based on first obtaining a Gabor, GaborD, GaborS or GaborSD image representation, then using GTDA to extract features and finally using LDA for classification. The proposed methods achieved good performance for gait recognition based on image sequences from the USF HumanID Database. Experimental comparisons are made with nine state-of-the-art classification methods in gait recognition.
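Of the three representations, GaborD is the simplest to sketch: sum the Gabor response magnitudes over a set of directions at one scale. The scikit-image call below is a generic stand-in; the frequency and number of directions are arbitrary choices, not the paper's settings (GaborS sums over frequencies analogously):

```python
import numpy as np
from skimage.filters import gabor

def gabor_d(image, frequency=0.1, n_dirs=8):
    """GaborD-style representation: sum of Gabor response magnitudes
    over n_dirs directions at a fixed scale (frequency)."""
    acc = np.zeros(image.shape, float)
    for theta in np.linspace(0, np.pi, n_dirs, endpoint=False):
        real, imag = gabor(image, frequency=frequency, theta=theta)
        acc += np.hypot(real, imag)    # per-direction response magnitude
    return acc
```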
Connectome imaging for mapping human brain pathways
Shi, Y; Toga, A W
2017-01-01
With the fast advance of connectome imaging techniques, we have the opportunity of mapping the human brain pathways in vivo at unprecedented resolution. In this article we review the current developments of diffusion magnetic resonance imaging (MRI) for the reconstruction of anatomical pathways in connectome studies. We first introduce the background of diffusion MRI with an emphasis on the technical advances and challenges in state-of-the-art multi-shell acquisition schemes used in the Human Connectome Project. Characterization of the microstructural environment in the human brain is discussed from the tensor model to the general fiber orientation distribution (FOD) models that can resolve crossing fibers in each voxel of the image. Using FOD-based tractography, we describe novel methods for fiber bundle reconstruction and graph-based connectivity analysis. Building upon these novel developments, there have already been successful applications of connectome imaging techniques in reconstructing challenging brain pathways. Examples including retinofugal and brainstem pathways will be reviewed. Finally, we discuss future directions in connectome imaging and its interaction with other aspects of brain imaging research. PMID:28461700
Infrared/microwave (IR/MW) micromirror array beam combiner design and analysis.
Tian, Yi; Lv, Lijun; Jiang, Liwei; Wang, Xin; Li, Yanhong; Yu, Haiming; Feng, Xiaochen; Li, Qi; Zhang, Li; Li, Zhuo
2013-08-01
We investigated the design method of an infrared (IR)/microwave (MW) micromirror-array type of beam combiner. The micromirrors are microscopic in size, comparable to MW wavelengths, so the MW does not react at these dimensions, whereas the much shorter optical wavelengths are reflected by them. Hence, the MW multilayered substrate was simplified and designed using transmission-line theory. The beam combiner used an IR wavefront-division imaging technique to reflect the IR radiation image to the unit under test (UUT)'s pupil in a parallel light path. In addition, the boresight error detected by phase monopulse radar was analyzed using the method of moments (MoM) with multilevel fast multipole method (MLFMM) acceleration. The boresight error introduced by the finite size of the beam combiner was less than 1°. Finally, to verify the wavefront-division imaging technique, a prototype micromirror array was fabricated and IR images were tested. The IR images obtained by the thermal imager verified the correctness of the wavefront-division imaging technique.
A Dedicated Environmental Remote Sensing Facility for the Columbia Earth Institute
NASA Technical Reports Server (NTRS)
Weissel, Jeffrey K.; Small, Christopher
1999-01-01
This paper presents a final technical report on a dedicated environmental remote sensing facility for the Columbia Earth Institute. The above-referenced award enabled the Lamont-Doherty Earth Observatory to establish a state-of-the-art remote sensing image analysis and data visualization facility to serve the research and educational needs of students and staff at Lamont and the Columbia Earth Institute.
Understanding sunscreen SPF performance using cross-polarized UVA reflectance photography.
Crowther, J M
2018-04-01
Objective methods for understanding sunscreen behaviour in vitro, before products are applied to the skin, have failed to keep pace with the ever-increasing demand for higher SPF scores, where products absorb increasingly similar levels of UV. A novel method for visualizing the spreading and location of SPF ingredients, based on cross-polarized UVA reflectance photography, is described here; it gives new insights into the formation of the final film morphology and how it correlates with in vivo SPF efficacy for a set of test products. High-resolution UVA-based images of sunscreen films spread onto PMMA plates were captured using a modified commercial SLR camera in a custom imaging system. Visual grading and image analysis were used to describe the overall UVA absorbance and streakiness of the resultant films, and the data were compared with both in vivo and calculated in vitro SPF scores for the products. Differences were observed between the products in terms of how they spread during application. A strong correlation was observed between the evenness of the resultant film, as determined from the photographs, and the final in vivo SPF scores. Cross-polarized UVA reflectance photography has been demonstrated to be a valuable new method for assessing sunscreen distribution after spreading and for differentiating products based on film morphology, as well as correlating strongly with final in vivo behaviour. © 2017 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Castro, P; Huerga, C; Chamorro, P; Garayoa, J; Roch, M; Pérez, L
2018-04-17
The goals of the study are to characterize imaging properties of 2D PET images reconstructed with the iterative ordered-subset expectation maximization (OSEM) algorithm and to propose a new method for the generation of synthetic images. The noise is analyzed in terms of its magnitude, spatial correlation, and spectral distribution through the standard deviation, autocorrelation function, and noise power spectrum (NPS), respectively. Their variations with position and activity level are also analyzed. This noise analysis is based on phantom images acquired from uniform ¹⁸F distributions. Experimental recovery coefficients of hot spheres in different backgrounds are employed to study the spatial resolution of the system through the point spread function (PSF). The NPS and PSF functions provide the baseline for the proposed simulation method: convolution with the PSF as kernel and noise addition from the NPS. The noise spectral analysis shows that the main contribution is of random nature. It is also proven that attenuation correction does not alter noise texture but does modify its magnitude. Finally, synthetic images of 2 phantoms, one of them an anatomical brain, are quantitatively compared with experimental images, showing good agreement in terms of pixel values and pixel correlations. Thus, the contrast-to-noise ratio for the biggest sphere in the NEMA IEC phantom is 10.7 for the synthetic image and 8.8 for the experimental image. The properties of the analyzed OSEM-PET images can be described by the NPS and PSF functions. Synthetic images, even anatomical ones, are successfully generated by the proposed method based on the NPS and PSF. Copyright © 2018 Sociedad Española de Medicina Nuclear e Imagen Molecular. Publicado por Elsevier España, S.L.U. All rights reserved.
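The simulation recipe (PSF convolution plus NPS-shaped noise) fits in a few lines. In the sketch below a Gaussian stands in for the measured PSF, and white noise is coloured by the square root of the NPS in the frequency domain; the normalization and the DC-centred NPS layout are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

def synthesize(phantom, psf_sigma, nps):
    """Blur a noise-free phantom with a Gaussian-approximated PSF, then
    add correlated noise whose power spectrum follows `nps` (a 2-D
    array of the same shape, assumed DC-centred; scaling is nominal)."""
    blurred = gaussian_filter(phantom, psf_sigma)   # PSF convolution
    white = rng.standard_normal(phantom.shape)
    shaped = np.fft.ifft2(np.fft.fft2(white) *
                          np.sqrt(np.fft.ifftshift(nps))).real
    return blurred + shaped
```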
SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES.
Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga
2013-01-01
High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.
Segmentation of the pectoral muscle in breast MR images using structure tensor and deformable model
NASA Astrophysics Data System (ADS)
Lee, Myungeun; Kim, Jong Hyo
2012-02-01
Recently, breast MR images have come to be used in a wider clinical area, including diagnosis, treatment planning, and treatment response evaluation, which requires quantitative analysis and breast tissue segmentation. Although several methods have been proposed for segmenting MR images, robustly segmenting breast tissues from surrounding structures across a wide range of anatomical diversity remains challenging. Therefore, in this paper, we propose a practical and general-purpose approach for segmenting the pectoral muscle boundary based on the structure tensor and a deformable model. The segmentation workflow comprises four key steps: preprocessing, detection of the region of interest (ROI) within the breast region, segmenting the pectoral muscle, and finally extracting and refining the pectoral muscle boundary. Experimental results show that the proposed method can segment the pectoral muscle robustly in diverse patient cases. In addition, the proposed method will enable quantitative research applications for various breast images.
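The structure tensor driving the method is simple to compute: smooth the outer products of the image gradient and read off the dominant local orientation. A generic sketch (smoothing scale assumed) follows:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_orientation(img, sigma=3.0):
    """Smoothed 2-D structure tensor; returns the dominant local
    orientation and a coherence measure in [0, 1]."""
    f = img.astype(float)
    gx, gy = sobel(f, 1), sobel(f, 0)
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    orientation = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
    coherence = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2) / (jxx + jyy + 1e-9)
    return orientation, coherence
```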
Phase Diversity Applied to Sunspot Observations
NASA Astrophysics Data System (ADS)
Tritschler, A.; Schmidt, W.; Knolker, M.
We present preliminary results of a multi-colour phase diversity experiment carried out with the Multichannel Filter System of the Vacuum Tower Telescope at the Observatorio del Teide on Tenerife. We apply phase-diversity imaging to a time sequence of sunspot filtergrams taken in three continuum bands and correct each image for seeing effects. A newly developed phase-diversity device, allowing the projection of both the focused and the defocused image onto a single CCD chip, was used in one of the wavelength channels. With the information about the wavefront obtained by the image-reconstruction algorithm, the other two bands can be restored as well. The processed and restored data set will then be used to derive the temperature and proper motion of the umbral dots. Data analysis is still under way, and final results will be given in a forthcoming article.
Visualization of time-varying MRI data for MS lesion analysis
NASA Astrophysics Data System (ADS)
Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella
2001-05-01
Conventional methods to diagnose and follow treatment of multiple sclerosis require radiologists and technicians to compare current images with older images of a particular patient on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color) or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.
Algorithms for image recovery calculation in extended single-shot phase-shifting digital holography
NASA Astrophysics Data System (ADS)
Hasegawa, Shin-ya; Hirata, Ryo
2018-04-01
The single-shot phase-shifting method of image recovery using an inclined reference wave has the advantages of reducing the effects of vibration, operating in real time, and affording low-cost sensing. This method requires relatively low reference angles compared with the conventional method, which uses a phase shift across three or four pixels. We propose an extended single-shot phase-shifting technique that uses a multiple-step phase-shifting algorithm over a corresponding number of pixels equal to the period of an interference fringe. We have verified the theory underlying this recovery method by means of Fourier spectral analysis, and its effectiveness by evaluating the visibility of the image using a high-resolution pattern. Finally, we have demonstrated high-contrast image recovery experimentally using a resolution chart. This method can be used in a variety of applications, such as color holographic interferometry.
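For context, the classical N-step (N >= 3) synchronous-detection formula that such multiple-step algorithms build on can be written compactly; the sketch below is the generic textbook retrieval, not the paper's extended variant:

```python
import numpy as np

def n_step_phase(frames):
    """Standard N-step phase-shifting retrieval (N >= 3): frames[k] is
    the intensity at phase shift 2*pi*k/N. Returns the wrapped phase."""
    stack = np.asarray(frames, float)
    n = stack.shape[0]
    k = np.arange(n).reshape(-1, *([1] * (stack.ndim - 1)))
    num = np.sum(stack * np.sin(2 * np.pi * k / n), axis=0)
    den = np.sum(stack * np.cos(2 * np.pi * k / n), axis=0)
    return -np.arctan2(num, den)     # wrapped phase in (-pi, pi]
```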
Volume curtaining: a focus+context effect for multimodal volume visualization
NASA Astrophysics Data System (ADS)
Fairfield, Adam J.; Plasencia, Jonathan; Jang, Yun; Theodore, Nicholas; Crawford, Neil R.; Frakes, David H.; Maciejewski, Ross
2014-03-01
In surgical preparation, physicians will often utilize multimodal imaging scans to capture complementary information to improve diagnosis and to drive patient-specific treatment. These imaging scans may consist of data from magnetic resonance imaging (MR), computed tomography (CT), or other various sources. The challenge in using these different modalities is that the physician must mentally map the two modalities together during the diagnosis and planning phase. Furthermore, the different imaging modalities will be generated at various resolutions as well as slightly different orientations due to patient placement during scans. In this work, we present an interactive system for multimodal data fusion, analysis and visualization. Developed with partners from neurological clinics, this work discusses initial system requirements and physician feedback at the various stages of component development. Finally, we present a novel focus+context technique for the interactive exploration of coregistered multi-modal data.
Automatic analysis and quantification of fluorescently labeled synapses in microscope images
NASA Astrophysics Data System (ADS)
Yona, Shai; Katsman, Alex; Orenbuch, Ayelet; Gitler, Daniel; Yitzhaky, Yitzhak
2011-09-01
The purpose of this work is to classify and quantify synapses and their properties in cultures of mouse hippocampus, from images acquired by a fluorescence microscope. Quantification features include the number of synapses, their intensity, and their size characteristics. The images obtained by the microscope contain hundreds to several thousands of synapses with various elliptic-like shape features and intensities. These images also include other features, such as glia cells and other biological objects beyond the focus plane, which reduce the visibility of the synapses and interfere with the segmentation process. The proposed method comprises several steps: background subtraction; identification of suspected synapse centers as local maxima of small neighborhoods; evaluation of the tendency of objects to be synapses according to intensity properties in their larger neighborhoods; classification of detected synapses into categories such as bulks or single synapses; and finally, delimiting the borders of each synapse.
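The first two steps map onto standard scikit-image operations; the sketch below is a generic stand-in with assumed spot size and thresholds, not the authors' implementation:

```python
import numpy as np
from skimage.morphology import white_tophat, disk
from skimage.feature import peak_local_max

def synapse_candidates(img, spot_radius=4, min_distance=3, rel_thresh=0.2):
    """Background subtraction via white top-hat, then local-maximum
    detection as candidate synapse centers (parameters assumed).
    `footprint` is the keyword in scikit-image >= 0.19."""
    flat = white_tophat(img, footprint=disk(spot_radius))
    peaks = peak_local_max(flat, min_distance=min_distance,
                           threshold_rel=rel_thresh)
    return flat, peaks     # peaks: (row, col) candidate coordinates
```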
Choi, Moon Hyung; Oh, Soon Nam; Rha, Sung Eun; Choi, Joon-Il; Lee, Sung Hak; Jang, Hong Seok; Kim, Jun-Gi; Grimm, Robert; Son, Yohan
2016-07-01
To investigate the usefulness of apparent diffusion coefficient (ADC) values derived from histogram analysis of the whole rectal cancer as a quantitative parameter to evaluate pathologic complete response (pCR) on preoperative magnetic resonance imaging (MRI). We enrolled a total of 86 consecutive patients who had undergone surgery for rectal cancer after neoadjuvant chemoradiotherapy (CRT) at our institution between July 2012 and November 2014. Two radiologists who were blinded to the final pathological results reviewed post-CRT MRI to evaluate tumor stage. Quantitative image analysis was performed using T2 -weighted and diffusion-weighted images independently by two radiologists using dedicated software that performed histogram analysis to assess the distribution of ADC in the whole tumor. After surgery, 16 patients were confirmed to have achieved pCR (18.6%). All parameters from pre- and post-CRT ADC histogram showed good or excellent agreement between two readers. The minimum, 10th, 25th, 50th, and 75th percentile and mean ADC from post-CRT ADC histogram were significantly higher in the pCR group than in the non-pCR group for both readers. The 25th percentile value from ADC histogram in post-CRT MRI had the best diagnostic performance for detecting pCR, with an area under the receiver operating characteristic curve of 0.796. Low percentile values derived from the ADC histogram analysis of rectal cancer on MRI after CRT showed a significant difference between pCR and non-pCR groups, demonstrating the utility of the ADC value as a quantitative and objective marker to evaluate complete pathologic response to preoperative CRT in rectal cancer. J. Magn. Reson. Imaging 2016;44:212-220. © 2015 Wiley Periodicals, Inc.
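Once the whole-tumor ADC voxels have been collected, the histogram features reduce to percentile calls; a minimal sketch (voxel extraction assumed done elsewhere):

```python
import numpy as np

def adc_histogram_features(adc_values):
    """Percentile features from the whole-tumor ADC distribution; the
    25th percentile was the best pCR discriminator in the study above."""
    v = np.asarray(adc_values, float)
    p = np.percentile(v, [0, 10, 25, 50, 75])
    return {"min": p[0], "p10": p[1], "p25": p[2],
            "median": p[3], "p75": p[4], "mean": v.mean()}
```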
Wang, Wei; Heitschmidt, Gerald W; Windham, William R; Feldner, Peggy; Ni, Xinzhi; Chu, Xuan
2015-01-01
The feasibility of using a visible/near-infrared hyperspectral imaging system with a wavelength range between 400 and 1000 nm to detect and differentiate different levels of aflatoxin B1 (AFB1) artificially titrated on the maize kernel surface was examined. To reduce the color effects of maize kernels, image analysis was limited to a subset of the original spectra (600 to 1000 nm). Residual staining from the AFB1 on the kernel surface was selected as the region of interest for analysis. Principal component analysis (PCA) was applied to reduce the dimensionality of the hyperspectral image data, and then a stepwise factorial discriminant analysis (FDA) was performed on the latent PCA variables. The results indicated that discriminant factor F2 can be used to separate control samples from all of the other groups of kernels inoculated with AFB1, whereas discriminant factor F1 can be used to identify maize kernels with levels of AFB1 as low as 10 ppb. An overall classification accuracy of 98% was achieved. Finally, the peaks of the β coefficients of the discriminant factors F1 and F2 were analyzed and several key wavelengths identified for differentiating maize kernels with and without AFB1, as well as those with differing levels of AFB1 inoculation. The results indicated that Vis/NIR hyperspectral imaging technology combined with PCA-FDA is a practical method to detect and differentiate different levels of AFB1 artificially inoculated on the maize kernel surface, and suggested the potential to detect and differentiate naturally occurring toxins in maize kernels. © 2014 Institute of Food Technologists®
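A reduce-then-discriminate pipeline of this kind is compactly expressed with scikit-learn. LDA below stands in for the stepwise FDA, and the component count and data layout (mean ROI spectra as rows) are assumptions:

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: (n_kernels, n_bands) mean ROI spectra over 600-1000 nm;
# y: AFB1 level labels (e.g., 0, 10, 100 ppb). Both are assumed inputs.
model = make_pipeline(PCA(n_components=10),
                      LinearDiscriminantAnalysis())
# model.fit(X_train, y_train)
# accuracy = model.score(X_test, y_test)
```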
Live Cell in Vitro and in Vivo Imaging Applications: Accelerating Drug Discovery
Isherwood, Beverley; Timpson, Paul; McGhee, Ewan J; Anderson, Kurt I; Canel, Marta; Serrels, Alan; Brunton, Valerie G; Carragher, Neil O
2011-01-01
Dynamic regulation of specific molecular processes and cellular phenotypes in live cell systems reveal unique insights into cell fate and drug pharmacology that are not gained from traditional fixed endpoint assays. Recent advances in microscopic imaging platform technology combined with the development of novel optical biosensors and sophisticated image analysis solutions have increased the scope of live cell imaging applications in drug discovery. We highlight recent literature examples where live cell imaging has uncovered novel insight into biological mechanism or drug mode-of-action. We survey distinct types of optical biosensors and associated analytical methods for monitoring molecular dynamics, in vitro and in vivo. We describe the recent expansion of live cell imaging into automated target validation and drug screening activities through the development of dedicated brightfield and fluorescence kinetic imaging platforms. We provide specific examples of how temporal profiling of phenotypic response signatures using such kinetic imaging platforms can increase the value of in vitro high-content screening. Finally, we offer a prospective view of how further application and development of live cell imaging technology and reagents can accelerate preclinical lead optimization cycles and enhance the in vitro to in vivo translation of drug candidates. PMID:24310493
Difficulties Encountered by Final-Year Male Nursing Students in Their Internship Programmes
Al-Momani, Mohammed Mahmoud
2017-01-01
Background The cultural norms of the Kingdom of Saudi Arabia do not encourage men to choose nursing as a career. Understanding male nursing students’ experiences of their clinical exposure to the nursing profession throughout their internship might increase their retention. This study explored the experiences of final-year male nursing students as they transitioned to the role of registered nurse. Methods A qualitative descriptive research design with an inductive content-analysis approach was used. The experiences of 22 final-year male nursing students from three public hospitals in a major city of Saudi Arabia were explored. The data were collected using focus-group interviews and documentary analysis in March 2015 and May 2015. Results Content analysis revealed three major themes: the societal and cultural image of male nurses, male students’ engagement in nursing practice, and restructuring the internship programmes’ policies to suit male students’ needs. Conclusion The findings reveal issues that mainly stem from negative social views of nursing as a male profession. Considering the students’ social and cultural needs during their internship programme will facilitate their transition into the role of registered nurse and their retention in the nursing profession. PMID:28951687
Single-pixel imaging by Hadamard transform and its application for hyperspectral imaging
NASA Astrophysics Data System (ADS)
Mizutani, Yasuhiro; Shibuya, Kyuki; Taguchi, Hiroki; Iwata, Tetsuo; Takaya, Yasuhiro; Yasui, Takeshi
2016-10-01
In this paper, we report a comparison of single-pixel imaging using the Hadamard transform (HT) and ghost imaging (GI) from the viewpoint of visibility under weak-light conditions. To compare the two methods, we discuss image quality based on experimental results and numerical analysis. In the HT method, images are detected by illuminating Hadamard-pattern masks and recovered by an orthogonal transform. The GI method, by contrast, detects images by illuminating random patterns and performing a correlation measurement. To compare the two methods at weak light intensity, we controlled the illumination intensity of a DMD projector to a signal-to-noise ratio of about 0.1. Although the processing speed of the HT method was faster than that of GI, the GI method has an advantage for detection under weak-light conditions. An essential difference between the HT and GI methods is discussed in terms of the reconstruction process. Finally, we also show a typical application of single-pixel imaging: hyperspectral imaging using dual optical frequency combs. The optical setup consists of two fiber lasers, a spatial light modulator for generating pattern illumination, and a single-pixel detector. We successfully detected hyperspectral images over the range from 1545 to 1555 nm at 0.01 nm resolution.
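At its core, HT single-pixel imaging measures inner products of the scene with Hadamard patterns and inverts them via orthogonality (H times H-transpose equals N times the identity). The noiseless sketch below ignores the practical step of splitting the +/-1 patterns into two non-negative masks:

```python
import numpy as np
from scipy.linalg import hadamard

n = 32                              # image is n*n; n*n must be a power of 2
h = hadamard(n * n)                 # rows are +1/-1 Hadamard patterns
scene = np.random.rand(n * n)       # stand-in for the object

measurements = h @ scene            # one bucket reading per pattern
recovered = (h.T @ measurements) / (n * n)   # exact inverse (noiseless)
assert np.allclose(recovered, scene)
```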
Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren
2018-03-14
Image fusion techniques can integrate information from different imaging modalities into a composite image that is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of proteins, and genome expression. A fusion method for GFP and phase contrast images based on the complex shearlet transform (CST) is proposed in this paper. First, the GFP image is converted to the IHS model and its intensity component is obtained. Second, the CST is performed on the intensity component and the phase contrast image to acquire the low-frequency and high-frequency subbands. Then, the high-frequency subbands are merged by the absolute-maximum rule, while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally, the fused image is obtained by performing the inverse CST on the merged subbands and conducting IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
[Functional magnetic resonance imaging in psychiatry and psychotherapy].
Derntl, B; Habel, U; Schneider, F
2010-01-01
Owing to technical improvements, functional magnetic resonance imaging (fMRI) has become the most popular and versatile imaging method in psychiatric research. The scope of this manuscript is to briefly introduce the basics of MR physics, the blood oxygenation level-dependent (BOLD) contrast, and the principles of MR study design and functional data analysis. The presentation of exemplary studies on emotion recognition and empathy in schizophrenia patients highlights the importance of MR methods in psychiatry. Finally, we demonstrate insights into new developments that will further boost MR techniques in clinical research and help to gain more insight into the dysfunctional neural networks underlying cognitive and emotional deficits in psychiatric patients. Moreover, some techniques, such as neurofeedback, seem promising for the evaluation of therapy effects on a behavioral and neural level.
Biological applications of phase-contrast electron microscopy.
Nagayama, Kuniaki
2014-01-01
Here, I review the principles and applications of phase-contrast electron microscopy using phase plates. First, I develop the principle of phase contrast based on a minimal model of microscopy, introducing a double Fourier-transform process to mathematically formulate the image formation. Next, I explain four phase-contrast (PC) schemes, defocus PC, Zernike PC, Hilbert differential contrast, and schlieren optics, as image-filtering processes in the context of the minimal model, with particular emphases on the Zernike PC and corresponding Zernike phase plates. Finally, I review applications of Zernike PC cryo-electron microscopy to biological systems such as protein molecules, virus particles, and cells, including single-particle analysis to delineate three-dimensional (3D) structures of protein and virus particles and cryo-electron tomography to reconstruct 3D images of complex protein systems and cells.
A computational framework to characterize and compare the geometry of coronary networks.
Bulant, C A; Blanco, P J; Lima, T P; Assunção, A N; Liberato, G; Parga, J R; Ávila, L F R; Pereira, A C; Feijóo, R A; Lemos, P A
2017-03-01
This work presents a computational framework to perform a systematic and comprehensive assessment of the morphometry of coronary arteries from in vivo medical images. The methodology embraces image segmentation, arterial vessel representation, characterization and comparison, data storage, and finally analysis. Validation is performed using a sample of 48 patients. Data mining of morphometric information from several coronary arteries is presented. The results agree with medical reports in terms of basic geometric and anatomical variables. Concerning geometric descriptors, inter-artery and intra-artery correlations are studied. The data reported here can be useful for the construction and setup of blood flow models of the coronary circulation. Finally, as an application example, a similarity criterion to assess vasculature likelihood based on geometric features is presented and used to test geometric similarity among sibling patients. The results indicate that likelihood, measured through geometric descriptors, is stronger between siblings than between non-related patients. Copyright © 2016 John Wiley & Sons, Ltd.
A maximally stable extremal region based scene text localization method
NASA Astrophysics Data System (ADS)
Xiao, Chengqiu; Ji, Lixin; Gao, Chao; Li, Shaomei
2015-07-01
Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. Firstly, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSER) as basic character candidates. Secondly, these candidates are filtered using the properties of the fitting ellipse and the distribution properties of characters to exclude most non-characters. Finally, a new extremal-region projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieves higher precision and recall rates than the latest published algorithms.
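Extracting MSER character candidates and applying simple geometric filters takes a few lines with OpenCV. The pruning thresholds below are placeholders rather than the paper's values, and the image path is hypothetical:

```python
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical path
mser = cv2.MSER_create()                     # default stability parameters
regions, bboxes = mser.detectRegions(img)

# Crude geometric pruning as a stand-in for the paper's filters.
chars = [box for box in bboxes               # box = (x, y, w, h)
         if 0.1 < box[2] / float(box[3]) < 10 and box[3] > 8]
```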
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. Armando Oliu, Final Inspection Team lead for the Shuttle program, speaks to reporters about the aid the Image Analysis Lab is giving the FBI in a kidnapping case. Oliu oversees the image lab, which is using an advanced SGI TP9500 data management system to review the tape of the kidnapping in progress in Sarasota, Fla. KSC installed the new $3.2 million system in preparation for Return to Flight of the Space Shuttle fleet. The lab is studying the Sarasota kidnapping video to provide any new information possible to law enforcement officers. KSC is joining NASA's Marshall Space Flight Center in Alabama in reviewing the tape.
NASA Technical Reports Server (NTRS)
Morey, W. W.
1983-01-01
The objective of the hot section viewing program is to develop a prototype optical system for viewing the interior of a gas turbine combustor during high-temperature, high-pressure operation, in order to produce a visual record of some causes of premature hot section failures. The program began by identifying and analyzing system designs that would provide the clearest images while being able to survive the hostile environment inside the combustion chamber. Different illumination methods and computer techniques for image enhancement and analysis were examined during a preliminary test phase. In the final phase of the program, the prototype system was designed and fabricated and is currently being tested on a high-pressure combustor rig.
Least significant qubit algorithm for quantum images
NASA Astrophysics Data System (ADS)
Sang, Jianzhi; Wang, Shen; Li, Qiong
2016-11-01
To study the feasibility of the classical least significant bit (LSB) image information hiding algorithm on a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). Firstly, by designing a three-qubit comparator and unitary operators, the soundness and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information hiding algorithm is proposed, which embeds the secret qubits into the least significant qubits of the RGB channels of the quantum cover image. A quantum circuit for the LSQb information hiding algorithm is also illustrated. Furthermore, the secret-extraction algorithm and circuit are illustrated using controlled-swap gates. The two merits of our algorithm are: (1) it is completely blind, and (2) when extracting the secret binary qubits, it does not need any quantum measurement operation or any other help from a classical computer. Finally, simulation and comparative analysis show the performance of our algorithm.
Nonlinear image registration with bidirectional metric and reciprocal regularization
Ying, Shihui; Li, Dan; Xiao, Bin; Peng, Yaxin; Du, Shaoyi; Xu, Meifeng
2017-01-01
Nonlinear registration is an important technique for aligning two different images and is widely applied in medical image analysis. In this paper, we develop a novel nonlinear registration framework based on diffeomorphic demons, where a reciprocal regularizer is introduced to enforce that the deformation between two images is an exact diffeomorphism. In detail, we first adopt a bidirectional metric to improve the symmetry of the energy functional, whose variables are two reciprocal deformations. Second, we relax these two deformations into two independent variables and introduce a reciprocal regularizer to ensure that the deformations form an exact diffeomorphism. Then, we utilize an alternating iterative strategy to decouple the model into two minimization subproblems, where a new closed form for the approximate velocity of the deformation is calculated. Finally, we compare our proposed algorithm on two data sets of real brain MR images with two related conventional methods. The results validate that our proposed method improves the accuracy and robustness of registration, and that the resulting bidirectional deformations are indeed reciprocal. PMID:28231342
MRI of articular cartilage at microscopic resolution
Xia, Y.
2013-01-01
This review briefly summarises some of the definitive studies of articular cartilage by microscopic MRI (µMRI) that were conducted at the highest spatial resolutions. The article has four major sections. The first section introduces the cartilage tissue, MRI and µMRI, and the concept of image contrast in MRI. The second section describes the characteristic profiles of three relaxation times (T1, T2 and T1ρ) and self-diffusion in healthy articular cartilage. The third section discusses several factors that can influence the visualisation of articular cartilage and the detection of cartilage lesions by MRI and µMRI. These factors include image resolution, image analysis strategies, visualisation of the total tissue, topographical variations of the tissue properties, surface fibril ambiguity, deformation of the articular cartilage, and cartilage lesions. The final section justifies the value of multidisciplinary imaging that correlates MRI with other technical modalities, such as optical imaging. Rather than an exhaustive review capturing all activities in the literature, the studies cited here are merely illustrative. PMID:23610697
NASA Astrophysics Data System (ADS)
Ibragimov, Bulat; Toesca, Diego; Chang, Daniel; Koong, Albert; Xing, Lei
2017-12-01
Automated segmentation of the portal vein (PV) for liver radiotherapy planning is a challenging task due to potentially low vasculature contrast, complex PV anatomy, and image artifacts originating from fiducial markers and vasculature stents. In this paper, we propose a novel framework for automated segmentation of the PV from computed tomography (CT) images. We apply convolutional neural networks (CNNs) to learn the consistent appearance patterns of the PV using a training set of CT images with reference annotations and then enhance the PV in previously unseen CT images. Markov random fields (MRFs) were further used to smooth the results of the CNN enhancement and remove isolated mis-segmented regions. Finally, the CNN-MRF-based enhancement was augmented with PV centerline detection that relied on PV anatomical properties such as tubularity and branch composition. The framework was validated on a clinical database with 72 CT images of patients scheduled for liver stereotactic body radiation therapy. The obtained segmentation accuracy was DSC = 0.83.
Mitigating fringing in discrete frequency infrared imaging using time-delayed integration
Ran, Shihao; Berisha, Sebastian; Mankar, Rupali; Shih, Wei-Chuan; Mayerich, David
2018-01-01
Infrared (IR) spectroscopic microscopes provide the potential for label-free quantitative molecular imaging of biological samples, which can be used to aid in histology, forensics, and pharmaceutical analysis. Most IR imaging systems use broadband illumination combined with a spectrometer to separate the signal into spectral components. This technique is currently too slow for many biomedical applications such as clinical diagnosis, primarily due to the limited availability of bright mid-infrared sources and sensitive MCT detectors. There has been a recent push to increase throughput using coherent light sources, such as synchrotron radiation and quantum cascade lasers. While these sources provide a significant increase in intensity, their coherence introduces fringing artifacts in the final image. We demonstrate that applying time-delayed integration in one dimension can dramatically reduce fringing artifacts with minimal alterations to the standard infrared imaging pipeline. The proposed technique also offers the potential for less expensive focal plane array detectors, since linear arrays can be more readily incorporated into the proposed framework. PMID:29552416
NASA Astrophysics Data System (ADS)
Fu, Z.; Qin, Q.; Wu, C.; Chang, Y.; Luo, B.
2017-09-01
Owing to their different imaging principles, matching visible and thermal infrared images still presents challenges and difficulties. Inspired by the complementary spatial and frequency information of geometric structural features, a robust descriptor is proposed for matching visible and thermal infrared images. We first divide the area around each point of interest into two spatial regions: the larger region is described with a histogram of oriented magnitudes, which captures 2-D structural shape information, and the smaller region with an edge orientation histogram, which captures its spatial distribution. The two vectors are then normalized and combined into a higher-dimensional feature vector. Finally, the proposed descriptor is obtained by applying principal component analysis (PCA) to reduce the dimensionality of the combined vector, making the descriptor more robust. Experimental results showed that the proposed method significantly increases the number of correct matches, confirming the benefit of combining spatial and frequency structural information.
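The final assembly step described above (normalize, concatenate, project with PCA) can be sketched as follows; the histogram helper names and the output dimensionality are assumptions, since the paper's exact definitions are not reproduced here:

    import numpy as np
    from sklearn.decomposition import PCA

    def build_descriptor(hom_large, eoh_small, pca):
        # Normalize each regional histogram, concatenate them into one
        # higher-dimensional vector, and reduce it with a pre-fitted PCA.
        v1 = hom_large / (np.linalg.norm(hom_large) + 1e-12)
        v2 = eoh_small / (np.linalg.norm(eoh_small) + 1e-12)
        combined = np.concatenate([v1, v2])
        return pca.transform(combined[None, :])[0]

    # The PCA would be fitted offline on combined vectors from training
    # patches, e.g.: pca = PCA(n_components=64).fit(training_matrix)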
Sentinel-2 ArcGIS Tool for Environmental Monitoring
NASA Astrophysics Data System (ADS)
Plesoianu, Alin; Cosmin Sandric, Ionut; Anca, Paula; Vasile, Alexandru; Calugaru, Andreea; Vasile, Cristian; Zavate, Lucian
2017-04-01
This paper addresses one of the biggest challenges regarding Sentinel-2 data: the need for an efficient tool to access and process the large collection of available images. Consequently, developing a tool that automates Sentinel-2 data analysis is the most immediate need. We developed a series of tools for the automation of Sentinel-2 data download and processing for vegetation health monitoring. The tools automatically perform the following operations: downloading image tiles from ESA's Scientific Hub or other vendors (e.g., Amazon), pre-processing the images to extract the 10-m bands, creating image composites, applying a series of vegetation indices (NDVI, OSAVI, etc.) and performing change detection analyses on different temporal data sets. All of these tools run dynamically on the ArcGIS Platform, without creating intermediate datasets (rasters, layers), as the images are processed on-the-fly to avoid data duplication. Finally, they allow complete integration with the ArcGIS environment and workflows.
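The two vegetation indices named above have standard closed forms; a minimal numpy sketch, assuming the Sentinel-2 10-m bands B8 (near-infrared) and B4 (red) supplied as reflectance arrays:

    import numpy as np

    def ndvi(nir, red):
        # Normalized Difference Vegetation Index.
        return (nir - red) / (nir + red + 1e-12)

    def osavi(nir, red):
        # Optimized Soil-Adjusted Vegetation Index; 0.16 is the usual
        # soil-adjustment constant.
        return (nir - red) / (nir + red + 0.16)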
Vehicle license plate recognition in dense fog based on improved atmospheric scattering model
NASA Astrophysics Data System (ADS)
Tang, Chunming; Lin, Jun; Chen, Chunkai; Dong, Yancheng
2018-04-01
An effective method based on an improved atmospheric scattering model is proposed in this paper to handle the problem of vehicle license plate location and recognition in dense fog. Dense fog detection is performed first by top-hat transformation and vertical edge detection, and the moving vehicle image is separated from the traffic video image. After the vehicle image is decomposed into structure and texture layers, the glow layer is separated from the structure layer to obtain the background layer. Mean-pooling followed by bicubic interpolation then predicts the atmospheric light map of the background layer, while the transmission of the background layer is estimated from the grayed glow layer, whose gray values are remapped linearly. Then, according to the improved atmospheric scattering model, the final restored image is obtained by fusing the restored background layer with the optimized texture layer. License plate location is performed next by a series of morphological operations, connected domain analysis and various validations. Character extraction is achieved by projection. Finally, an offline-trained pattern classifier of hybrid discriminative restricted Boltzmann machines (HDRBM) is applied to recognize the characters. Experimental results on comprehensive data sets demonstrate that the proposed method achieves high recognition accuracy and works robustly in dense-fog traffic environments around the clock.
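The restoration step rests on the atmospheric scattering model I = J·t + A·(1 − t). Setting the paper's layer-specific refinements aside, inverting the standard model looks like the following sketch; the floor on t is a common safeguard and an assumption here:

    import numpy as np

    def restore(hazy, airlight, transmission, t_min=0.1):
        # Invert I = J * t + A * (1 - t)  =>  J = (I - A) / t + A.
        # Clamping t avoids amplifying noise where transmission is tiny.
        t = np.maximum(transmission, t_min)
        return (hazy - airlight) / t + airlight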
Using the auxiliary camera for system calibration of 3D measurement by digital speckle
NASA Astrophysics Data System (ADS)
Xue, Junpeng; Su, Xianyu; Zhang, Qican
2014-06-01
The study of 3D shape measurement by digital speckle temporal sequence correlation has drawn considerable attention for its advantages; however, the measurement mainly recovers the depth (z) coordinate, while the horizontal physical coordinates (x, y) are usually expressed as image pixel coordinates. In this paper, a new approach for system calibration is proposed. With an auxiliary camera, we set up a temporary binocular vision system, which is used to calibrate the horizontal coordinates (in mm) while the temporal sequence reference speckle sets are calibrated. First, the binocular vision system is calibrated using the traditional method. Then, digital speckles are projected on the reference plane, which is moved by equal distances in the depth direction, and temporal sequence speckle images are acquired with the camera as reference sets. When the reference plane is in the first and final positions, a crossed fringe pattern is projected onto the plane. The pixel coordinates of the control points are extracted from the images by Fourier analysis, and their physical coordinates are calculated by the binocular vision system. The physical coordinates corresponding to each pixel of the images are then calculated by an interpolation algorithm. Finally, the x and y corresponding to an arbitrary depth value z are obtained from the geometric formula. Experiments prove that our method can quickly and flexibly measure the 3D shape of an object as a point cloud.
Dynamics and Morphology of Saturn’s North Polar Region During Cassini’s Final Year
NASA Astrophysics Data System (ADS)
Blalock, John J.; Sayanagi, Kunio M.; Ingersoll, Andrew P.; Dyudina, Ulyana A.; Ewald, Shawn; McCabe, Ryan M.; Gunnarson, Jacob; Garland, Justin; Gallego, Angelina
2017-10-01
We present an analysis of Saturn’s north polar region utilizing Cassini ISS images captured in visible and near-infrared wavelengths during late 2016 and 2017, including images captured during Cassini’s Grand Finale orbits. To measure the wind field in the region, we utilize the two-dimensional correlation imaging velocimetry (CIV) technique. We also calculate the relative vorticity and divergence from the wind field. To detect changes in the dynamics, we compare measurements of the wind, relative vorticity, and divergence in 2012 and 2013 with those from 2016/2017. We also compare cloud reflectivity between 2012/2013 and 2016/2017 in images that show the north pole under similar illumination conditions. To detect changes in cloud reflectivity, we utilize a Minnaert correction to calculate the zonal mean reflectivity as a function of latitude. Furthermore, we compare the winds and cloud reflectivity at several wavelengths in order to look for changes occurring at different altitudes. Our results indicate that while the dynamics of the north polar region have remained relatively stable, there have been significant morphology changes that have resulted in dramatic color changes. We hypothesize that these changes are a result of the seasonal cycle and linked to the increased production of photochemical hazes in the atmosphere. Our work has been supported by NASA PATM NNX14AK07G, NSF AAG 1212216, and NASA NESSF NNX15AQ70H.
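The Minnaert correction mentioned above is a standard photometric limb-darkening normalization. In one common form (an assumption here; the abstract does not give the formula), the corrected reflectivity is

    (I/F)_{corr} = (I/F)\,\mu_0^{-k}\,\mu^{1-k}, \qquad \mu_0 = \cos i, \quad \mu = \cos e,

where i and e are the solar incidence and emission angles and k is the Minnaert exponent (k = 1 recovers a Lambertian surface).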
Region-based multifocus image fusion for the precise acquisition of Pap smear images.
Tello-Mijares, Santiago; Bescós, Jesús
2018-05-01
A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolaou (Pap smear) images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of original pixel information while achieving negligible visibility of fusion artifacts. The method starts by identifying the best-focused image of the sequence; it then performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimal modification of the values of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
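The per-region focus evaluation at the heart of the method can be illustrated with a generic sharpness measure; the variance of the Laplacian used below is a common stand-in, not necessarily the paper's metric:

    import numpy as np
    from scipy import ndimage

    def region_focus(image, region_mask):
        # Sharpness of one segmented region: variance of the Laplacian.
        lap = ndimage.laplace(image.astype(float))
        return lap[region_mask].var()

    def best_focused(images, region_mask):
        # Index of the stack image in which this region is sharpest;
        # the fused output copies the region from that image.
        return int(np.argmax([region_focus(im, region_mask) for im in images]))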
Pandžić, Elvis; Abu-Arish, Asmahan; Whan, Renee M; Hanrahan, John W; Wiseman, Paul W
2018-02-16
Molecular, vesicular and organellar flows are of fundamental importance for the delivery of nutrients and essential components used in cellular functions such as motility and division. With recent advances in fluorescence/super-resolution microscopy modalities, we can resolve the movements of these objects at higher spatio-temporal resolutions and with better sensitivity. Previously, spatio-temporal image correlation spectroscopy has been applied to map molecular flows by correlation analysis of fluorescence fluctuations in image series. However, an underlying assumption of this approach is that the sampled time windows contain one dominant flowing component. Although this was true for most of the cases analyzed earlier, in some situations two or more different flowing populations can be present in the same spatio-temporal window. We introduce an approach, termed velocity landscape correlation (VLC), which detects and extracts multiple flow components present in a sampled image region via an extension of the correlation analysis of fluorescence intensity fluctuations. We first demonstrate theoretically how this approach works and test the performance of the method with a range of computer-simulated image series with varying flow dynamics. Finally, we apply VLC to study variable fluxing of STIM1 proteins on microtubules connected to the plasma membrane of Cystic Fibrosis Bronchial Epithelial (CFBE) cells. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun
2018-03-01
In this paper, a novel image encryption algorithm based on the synchronization of physical random bits generated by a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and its security analysis is performed. In both the transmitter and receiver, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. First, the preprocessing method is used to eliminate the correlation between adjacent pixels. Second, physical random bits with verified randomness are generated based on chaos in the CCSRL system and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and random bit stream are used in the encryption algorithm to change the positions and the values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and is thus an excellent candidate for secure image communication applications.
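The confusion and diffusion steps have a generic shape worth making concrete. In the sketch below the permutation and keystream stand in for the control matrix and random bit stream that the paper derives from the laser chaos; numpy's PRNG appears purely for illustration:

    import numpy as np

    def encrypt(img, perm, keystream):
        # img: uint8 image; perm: permutation of pixel indices
        # (confusion); keystream: uint8 array of img.size (diffusion).
        flat = img.ravel()
        confused = flat[perm]            # change pixel positions
        diffused = confused ^ keystream  # change pixel values
        return diffused.reshape(img.shape)

    # Illustrative stand-ins for the physically generated randomness:
    # rng = np.random.default_rng(1)
    # perm = rng.permutation(img.size)
    # keystream = rng.integers(0, 256, img.size, dtype=np.uint8)

Decryption reverses the two steps: XOR with the same keystream, then apply the inverse permutation.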
NASA/ASEE Summer Faculty Fellowship Program, 1990, Volume 1
NASA Technical Reports Server (NTRS)
Bannerot, Richard B. (Editor); Goldstein, Stanley H. (Editor)
1990-01-01
The 1990 Johnson Space Center (JSC) NASA/American Society for Engineering Education (ASEE) Summer Faculty Fellowship Program was conducted by the University of Houston-University Park and JSC. A compilation of the final reports on the research projects is presented. The topics covered include: the Space Station; the Space Shuttle; exobiology; cell biology; culture techniques; control systems design; laser induced fluorescence; spacecraft reliability analysis; reduced gravity; biotechnology; microgravity applications; regenerative life support systems; imaging techniques; cardiovascular system; physiological effects; extravehicular mobility units; mathematical models; bioreactors; computerized simulation; microgravity simulation; and dynamic structural analysis.
A new user-friendly visual environment for breast MRI data analysis.
Antonios, Danelakis; Dimitrios, Verganelakis A; Theoharis, Theoharis
2013-06-01
In this paper a novel, user-friendly visual environment for breast MRI data analysis (BreDAn) is presented. Given planar MRI images before and after IV contrast medium injection, BreDAn generates kinematic graphs and color maps of signal increase and decrease, and finally detects high-risk breast areas. The advantage of BreDAn, which has been validated and tested successfully, is the automation of the radiodiagnostic process in an accurate and reliable manner. It can potentially reduce radiologists' workload. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Space and Earth Science Data Compression Workshop
NASA Technical Reports Server (NTRS)
Tilton, James C. (Editor)
1991-01-01
The workshop explored opportunities for data compression to enhance the collection and analysis of space and Earth science data. The focus was on scientists' data requirements, as well as constraints imposed by the data collection, transmission, distribution, and archival systems. The workshop consisted of several invited papers; two described information systems for space and Earth science data, four depicted analysis scenarios for extracting information of scientific interest from data collected by Earth orbiting and deep space platforms, and a final one was a general tutorial on image data compression.
An edge-directed interpolation method for fetal spine MR images.
Yu, Shaode; Zhang, Rui; Wu, Shibin; Hu, Jiani; Xie, Yaoqin
2013-10-10
Fetal spinal magnetic resonance imaging (MRI) is a prenatal routine for proper assessment of fetal development, especially when suspected spinal malformations occur and ultrasound fails to provide details. Limited by hardware, fetal spine MR images suffer from low resolution. High-resolution MR images can directly enhance readability and improve diagnostic accuracy. Image interpolation to higher resolution is required in clinical situations, yet many methods fail to preserve edge structures. Edges carry important structural information about objects in visual scenes, helping doctors detect suspicious findings, classify malformations and make correct diagnoses. Effective interpolation with well-preserved edge structures is still challenging. In this paper, we propose an edge-directed interpolation (EDI) method and apply it to a group of fetal spine MR images to evaluate its feasibility and performance. The method takes edge information from the Canny edge detector to guide subsequent pixel modification. First, low-resolution (LR) images of the fetal spine are interpolated into high-resolution (HR) images by the targeted factor with the bilinear method. Then edge information from the LR and HR images is fed into a twofold strategy to sharpen or soften edge structures. Finally, an HR image with well-preserved edge structures is generated. The HR images obtained from the proposed method are validated and compared with those from four other EDI methods. Performance is evaluated with six metrics, and subjective analysis of visual quality is based on regions of interest (ROI). All five EDI methods are able to generate HR images with enriched details. In the quantitative analysis of the six metrics, the proposed method outperforms the other four in signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM) and mutual information (MI), with seconds-level time consumption (TC). Visual analysis of the ROIs shows that the proposed method maintains better consistency of edge structures with the original images. The proposed method classifies edge orientations into four categories and preserves structures well. It generates convincing HR images with fine details and is suitable for real-time situations. The iterative curvature-based interpolation (ICBI) method may produce crisper edges, while the other three methods are sensitive to noise and artifacts.
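Of the six metrics named above, PSNR and SSIM are the most commonly reported; scikit-image ships standard implementations, sketched here (the paper's own metric implementations are not shown):

    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate(reference, interpolated):
        # Compare an interpolated HR image against a reference image.
        rng = reference.max() - reference.min()
        psnr = peak_signal_noise_ratio(reference, interpolated, data_range=rng)
        ssim = structural_similarity(reference, interpolated, data_range=rng)
        return psnr, ssim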
Resolution analysis of archive films for the purpose of their optimal digitization and distribution
NASA Astrophysics Data System (ADS)
Fliegel, Karel; Vítek, Stanislav; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek
2017-09-01
With the recent high demand for ultra-high-definition (UHD) content to be screened in high-end digital movie theaters and in the home environment, film archives full of movies in high definition and above are in the scope of UHD content providers. Movies captured with traditional film technology represent a virtually unlimited source of UHD content. The goal of maintaining complete image information also bears on the choice of scanning resolution and of the spatial resolution for further distribution. It might seem that scanning the film material at the highest possible resolution using state-of-the-art film scanners, and distributing it at this resolution, is the right choice. The information content of the digitized images is, however, limited, and various degradations reduce it further. Digital distribution of the content at the highest image resolution might therefore be unnecessary or uneconomical. In other cases, the highest possible resolution is inevitable if we want to preserve fine scene details or film grain structure for archiving purposes. This paper deals with the image detail content analysis of archive film records. The resolution limit in the captured scene image and the factors which lower the final resolution are discussed. Methods are proposed to determine the spatial detail of the film picture based on analysis of its digitized image data. These procedures allow recommendations to be derived for the optimal distribution of digitized video content intended for various display devices with lower resolutions. The obtained results are illustrated on a spatial downsampling use case, and a performance evaluation of the proposed techniques is presented.
Characterizing pigments with hyperspectral imaging variable false-color composites
NASA Astrophysics Data System (ADS)
Hayem-Ghez, Anita; Ravaud, Elisabeth; Boust, Clotilde; Bastian, Gilles; Menu, Michel; Brodie-Linder, Nancy
2015-11-01
Hyperspectral imaging has been used for pigment characterization on paintings for the last 10 years. It is a noninvasive technique which combines the power of spectrophotometry with that of imaging technologies. We have access to a visible and near-infrared hyperspectral camera, ranging from 400 to 1000 nm in 80-160 spectral bands. To treat the large amount of data that this imaging technique generates, one can use statistical tools such as principal component analysis (PCA). To characterize pigments, researchers mostly use PCA, convex geometry algorithms and the comparison of resulting clusters to database spectra within a specific tolerance (like the Spectral Angle Mapper tool in the dedicated software ENVI). Our approach originates from false-color photography and aims at providing a simple tool to identify pigments by imaging spectroscopy. It can be considered a quick first analysis to see the principal pigments of a painting, before using a more complete multivariate statistical tool. We study pigment spectra for each hue (blue, green, red and yellow) to identify the wavelengths that maximize spectral differences. The case of red pigments is the most interesting because our methodology can discriminate the red pigments very well, including red lakes, which are always difficult to identify. For the yellow and blue categories, it represents a clear advance over IRFC photography for pigment discrimination. We apply our methodology to study the pigments of a painting by Eustache Le Sueur, a French painter of the seventeenth century. We compare the results to other noninvasive analyses such as X-ray fluorescence and optical microscopy. Finally, we draw conclusions about the advantages and limits of the variable false-color image method using hyperspectral imaging.
Automatic multiresolution age-related macular degeneration detection from fundus images
NASA Astrophysics Data System (ADS)
Garnier, Mickaël.; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida
2014-03-01
Age-related macular degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly, so early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow early detection. Most automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using multiresolution texture analysis. We analyze the texture at several scales using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described with the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse of dimensionality, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality and captured by different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost, in agreement with medical experts, as well as robustness to both image quality and fundus camera model.
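The feature pipeline (wavelet decomposition, LBP codes, histogram per level) can be sketched as follows; plain uniform LBP is used as a simpler stand-in for the completed LBP model, and the wavelet choice and parameters are assumptions:

    import numpy as np
    import pywt
    from skimage.feature import local_binary_pattern

    def texture_features(gray, levels=3, P=8, R=1.0):
        # At each wavelet level, keep the approximation image, compute
        # uniform LBP codes, and histogram them; concatenate the
        # per-level histograms into one feature vector.
        feats = []
        current = gray.astype(float)
        for _ in range(levels):
            current, _ = pywt.dwt2(current, 'haar')
            codes = local_binary_pattern(current, P, R, method='uniform')
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2),
                                   density=True)
            feats.append(hist)
        return np.concatenate(feats)

The resulting vectors would then be projected with sklearn's LinearDiscriminantAnalysis for dimension reduction and classification.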
Jones, A Kyle; Heintz, Philip; Geiser, William; Goldman, Lee; Jerjian, Khachig; Martin, Melissa; Peck, Donald; Pfeiffer, Douglas; Ranger, Nicole; Yorkston, John
2015-11-01
Quality control (QC) in medical imaging is an ongoing process and not just a series of infrequent evaluations of medical imaging equipment. The QC process involves designing and implementing a QC program, collecting and analyzing data, investigating results that are outside the acceptance levels for the QC program, and taking corrective action to bring these results back to an acceptable level. The QC process involves key personnel in the imaging department, including the radiologist, radiologic technologist, and the qualified medical physicist (QMP). The QMP performs detailed equipment evaluations and helps with oversight of the QC program, while the radiologic technologist is responsible for the day-to-day operation of the QC program. The continued need for ongoing QC in digital radiography has been highlighted in the scientific literature. The charge of this task group was to recommend consistency tests designed to be performed by a medical physicist or a radiologic technologist under the direction of a medical physicist to identify problems with an imaging system that need further evaluation by a medical physicist, including a fault tree to define actions that need to be taken when certain fault conditions are identified. The focus of this final report is the ongoing QC process, including rejected image analysis, exposure analysis, and artifact identification. These QC tasks are vital for the optimal operation of a department performing digital radiography.
Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility
NASA Astrophysics Data System (ADS)
Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.
2017-12-01
The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. The simulations are also used for active-learning purposes: they help students understand how physical processes unfold and what kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On the one hand, using a multi-view sensor setup based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position plus time) reconstruction of the dynamic scene is obtained through the composition of several 3D models derived from dense image matching. The final textured 4D model allows a completed experiment to be revisited dynamically and interactively at any time. On the other hand, a digital image correlation (DIC) technique has been used to track surface point displacements in the image sequence obtained from the camera facing the simulation facility. While the 4D model provides a qualitative description and documentation of the running experiment, the DIC analysis outputs quantitative information such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.
Reyes, D R; Halter, M; Hwang, J
2015-07-01
The characterization of internal structures in a polymeric microfluidic device, especially in a final product, requires a different set of optical metrology tools than those traditionally used for microelectronic devices. We demonstrate that optical coherence tomography (OCT) imaging is a promising technique for characterizing the internal structures of poly(methyl methacrylate) devices, whose subsurface structures often cannot be imaged by conventional wide-field optical microscopy. The channels in the devices were imaged with OCT and analyzed with an in-house ImageJ macro to quantify their structural details. The dimensional values obtained with OCT were compared with laser-scanning confocal microscopy images of channels filled with a fluorophore solution. Attempts were also made to measure the channel dimensions using confocal reflectance and interferometry microscopy, but artefacts present in the images precluded quantitative analysis. OCT provided the most accurate estimates of channel height, based on an analysis of optical micrographs obtained after destructively slicing the channel with a microtome. OCT may be a promising technique for the future of three-dimensional metrology of critical internal structures in lab-on-a-chip devices because scans can be performed rapidly and noninvasively prior to their use. © 2015 The Authors. Journal of Microscopy © 2015 Royal Microscopical Society.
Learning deep similarity in fundus photography
NASA Astrophysics Data System (ADS)
Chudzik, Piotr; Al-Diri, Bashir; Caliva, Francesco; Ometto, Giovanni; Hunter, Andrew
2017-02-01
Similarity learning is one of the most fundamental tasks in image analysis. The ability to extract similar images in the medical domain, as part of content-based image retrieval (CBIR) systems, has been researched for many years. The vast majority of methods used in CBIR systems are based on hand-crafted feature descriptors. Approximating a similarity mapping for medical images is difficult due to the wide variety of pixel-level structures of interest. In fundus photography (FP) analysis, a subtle difference in, e.g., lesion and vessel shape and size can result in a different diagnosis. In this work, we demonstrate how to learn a similarity function for image patches derived directly from FP image data, without the need for manually designed feature descriptors. We use a convolutional neural network (CNN) with a novel architecture adapted for similarity learning to accomplish this task. Furthermore, we explore and study multiple CNN architectures. We show that our method can approximate the similarity between FP patches more efficiently and accurately than state-of-the-art feature descriptors, including SIFT and SURF, using a publicly available dataset. Finally, we observe that our approach, which is purely data-driven, learns that features such as vessel calibre and orientation are important discriminative factors, which resembles the way humans reason about similarity. To the best of the authors' knowledge, this is the first attempt to approximate a visual similarity mapping in FP.
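A patch-similarity network of the kind described can be sketched in a few lines of PyTorch; the architecture below is a generic illustration, not the authors' adapted design:

    import torch
    import torch.nn as nn

    class PatchEmbedder(nn.Module):
        # Tiny embedding CNN for grayscale fundus patches.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 64))

        def forward(self, x):
            return self.net(x)

    def similarity(model, patch_a, patch_b):
        # Cosine similarity between the embeddings of two patch batches.
        return nn.functional.cosine_similarity(model(patch_a), model(patch_b))

Training would push embeddings of similar pairs together and dissimilar pairs apart, e.g. with a contrastive or triplet loss.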
Choi, Woo June; Pepple, Kathryn L; Wang, Ruikang K
2018-05-24
In preclinical vision research, cell grading in small animal models is essential for the quantitative evaluation of intraocular inflammation. Here, we present a new and practical optical coherence tomography (OCT) image analysis method for the automated detection and counting of aqueous cells in the anterior chamber (AC) of a rodent model of uveitis. Anterior segment OCT (AS-OCT) images are acquired with a 100 kHz swept-source OCT (SS-OCT) system. The proposed method consists of two steps. In the first step, each OCT image is despeckled and binarized. After removing AS structures from the binary image, area thresholding is applied to isolate cell-like objects, and potential cell candidates are selected based on their best fit to roundness. The second step performs cell counting within the whole AC, in which an additional cell tracking analysis is conducted on successive OCT images to eliminate redundancy in the counting. Finally, 3-D cell grading with the proposed method is demonstrated in longitudinal OCT imaging of a mouse model of anterior uveitis in vivo. (Graphical abstract: rendering of the anterior segment (orange) of the mouse eye and automatically counted anterior chamber cells (green); the inset is a top view of the rendering, showing the cell distribution across the anterior chamber.) This article is protected by copyright. All rights reserved.
Ranjit, Suman; Dobrinskikh, Evgenia; Montford, John; Dvornikov, Alexander; Lehman, Allison; Orlicky, David J.; Nemenoff, Raphael; Gratton, Enrico; Levi, Moshe; Furgeson, Seth
2017-01-01
All forms of progressive renal disease converge on a final pathway of tubulointerstitial fibrosis and glomerulosclerosis. Renal fibrosis is usually quantified by histological staining, a process that is time-consuming and pathologist-dependent. The work described here develops a fast and operator-independent method to measure fibrosis. To study renal fibrosis, the unilateral ureteral obstruction (UUO) model was chosen: mice develop a time-dependent increase in fibrosis in the obstructed kidney, while the contralateral kidneys serve as controls. After UUO, kidneys were analyzed at three time points: 7 days, 14 days, and 21 days. Fibrosis was investigated using FLIM (fluorescence lifetime imaging) and SHG (second harmonic generation) in the deep tissue imaging microscope called DIVER (Deep Imaging via Enhanced photon Recovery). This microscope was developed for deep tissue, SHG and THG (third harmonic generation) imaging and has extraordinary sensitivity to harmonic generation. The SHG data suggest the presence of more fibrillar collagen in the diseased kidneys. The combination of short-wavelength FLIM and SHG analysis results in a robust analysis procedure independent of observer interpretation and lets us define a criterion to quantify the extent of fibrosis directly from the image. The progression of fibrosis in the UUO model has been studied using this new FLIM-SHG technique, which shows remarkable improvement in the quantification of fibrosis compared with standard histological techniques. PMID:27555119
Image-derived input function with factor analysis and a-priori information.
Simončič, Urban; Zanotti-Fregonara, Paolo
2015-02-01
Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) the factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, the whole-blood TAC was estimated from the postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of the IDIFs was assessed against full arterial sampling by comparing the areas under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood areas under the curve were accurately estimated (mean error 1.0±4.3%). The relative Logan-VT error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate VT estimates than Logan analysis. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can thus be used for accurate quantification of Logan-VT values.
(Machine-)Learning to analyze in vivo microscopy: Support vector machines.
Wang, Michael F Z; Fernandez-Gonzalez, Rodrigo
2017-11-01
The development of new microscopy techniques for super-resolved, long-term monitoring of cellular and subcellular dynamics in living organisms is revealing new fundamental aspects of tissue development and repair. However, new microscopy approaches present several challenges. In addition to unprecedented requirements for data storage, the analysis of high-resolution, time-lapse images is too complex to be done manually. Machine learning techniques are ideally suited for the (semi-)automated analysis of multidimensional image data. In particular, support vector machines (SVMs) have emerged as an efficient method to analyze microscopy images obtained from animals. Here, we discuss the use of SVMs to analyze in vivo microscopy data. We introduce the mathematical framework behind SVMs, and we describe the metrics used by SVMs and other machine learning approaches to classify image data. We discuss the influence of different SVM parameters in the context of an algorithm for cell segmentation and tracking. Finally, we describe how the application of SVMs has been critical to study protein localization in yeast screens, for lineage tracing in C. elegans, and to determine the developmental stage of Drosophila embryos to investigate gene expression dynamics. We propose that SVMs will become central tools in the analysis of the complex image data that novel microscopy modalities have made possible. This article is part of a Special Issue entitled: Biophysics in Canada, edited by Lewis Kay, John Baenziger, Albert Berghuis and Peter Tieleman. Copyright © 2017 Elsevier B.V. All rights reserved.
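As a concrete illustration of the classification step, here is a soft-margin RBF SVM from scikit-learn applied to hypothetical per-pixel feature vectors; the features and labels are random stand-ins, not data from the review:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.random((200, 5))                 # stand-in feature vectors
    y = (X[:, 0] + X[:, 1] > 1).astype(int)  # stand-in labels (1 = cell edge)

    clf = make_pipeline(StandardScaler(),
                        SVC(kernel='rbf', C=1.0, gamma='scale'))
    clf.fit(X, y)
    predictions = clf.predict(rng.random((10, 5)))  # classify new pixels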
Lamey, M; Carlone, M; Alasti, H; Bissonnette, J P; Borg, J; Breen, S; Coolens, C; Heaton, R; Islam, M; van Proojen, M; Sharpe, M; Stanescu, T; Jaffray, D
2012-07-01
An online Magnetic Resonance guided Radiation Therapy (MRgRT) system is under development. The system is comprised of an MRI with the capability of travel between and into HDR brachytherapy and external beam radiation therapy vaults. The system will provide on-line MR images immediately prior to radiation therapy. The MR images will be registered to a planning image and used for image guidance. With the intention of system safety we have performed a failure modes and effects analysis. A process tree of the facility function was developed. Using the process tree as well as an initial design of the facility as guidelines possible failure modes were identified, for each of these failure modes root causes were identified. For each possible failure the assignment of severity, detectability and occurrence scores was performed. Finally suggestions were developed to reduce the possibility of an event. The process tree consists of nine main inputs and each of these main inputs consisted of 5 - 10 sub inputs and tertiary inputs were also defined. The process tree ensures that the overall safety of the system has been considered. Several possible failure modes were identified and were relevant to the design, construction, commissioning and operating phases of the facility. The utility of the analysis can be seen in that it has spawned projects prior to installation and has lead to suggestions in the design of the facility. © 2012 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Ceco, Ema; Önnerud, Hans; Menning, Dennis; Gilljam, John L.; Bââth, Petra; Östmark, Henric
2014-05-01
The following paper presents a realistic forensic capability test of an imaging Raman spectroscopy demonstrator system developed at FOI, the Swedish Defence Research Agency. The system uses a 532 nm laser to irradiate a surface of 25×25 mm. The backscattered radiation from the surface is collected by an 8" telescope with a subsequent optical system, and is finally imaged onto an ICCD camera. We present here an explosives trace analysis study of samples collected from a realistic scenario after a detonation. A left-behind 5 kg IED, based on ammonium nitrate with a TNT (2,4,6-trinitrotoluene) booster, was detonated in a plastic garbage bin. Aluminum sample plates were mounted vertically on a holder approximately 6 m from the point of detonation. Minutes after the detonation, the samples were analyzed with stand-off imaging Raman spectroscopy from a distance of 10 m. Trace amounts of the secondary explosive (ammonium nitrate) could be detected with an analysis time of 1 min. Measurement results also indicated detection of residues from the booster (TNT). The sample plates were subsequently swabbed and analyzed with HPLC and GC-MS to confirm the results from the stand-off imaging Raman system. The presented findings indicate that it is possible to determine the type of explosive used in an IED from a distance, within minutes after the attack, and without tampering with physical evidence at the crime scene.
Fully automated muscle quality assessment by Gabor filtering of second harmonic generation images
NASA Astrophysics Data System (ADS)
Paesen, Rik; Smolders, Sophie; Vega, José Manolo de Hoyos; Eijnde, Bert O.; Hansen, Dominique; Ameloot, Marcel
2016-02-01
Although structural changes on the sarcomere level of skeletal muscle are known to occur due to various pathologies, rigorous studies of the reduced sarcomere quality remain scarce. This can possibly be explained by the lack of an objective tool for analyzing and comparing sarcomere images across biological conditions. Recent developments in second harmonic generation (SHG) microscopy and increasing insight into the interpretation of sarcomere SHG intensity profiles have made SHG microscopy a valuable tool to study microstructural properties of sarcomeres. Typically, sarcomere integrity is analyzed by fitting a set of manually selected, one-dimensional SHG intensity profiles with a supramolecular SHG model. To circumvent this tedious manual selection step, we developed a fully automated image analysis procedure to map the sarcomere disorder for the entire image at once. The algorithm relies on a single-frequency wavelet-based Gabor approach and includes a newly developed normalization procedure allowing for unambiguous data interpretation. The method was validated by showing the correlation between the sarcomere disorder, quantified by the M-band size obtained from manually selected profiles, and the normalized Gabor value ranging from 0 to 1 for decreasing disorder. Finally, to elucidate the applicability of our newly developed protocol, Gabor analysis was used to study the effect of experimental autoimmune encephalomyelitis on the sarcomere regularity. We believe that the technique developed in this work holds great promise for high-throughput, unbiased, and automated image analysis to study sarcomere integrity by SHG microscopy.
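The Gabor step itself is available off the shelf; a sketch with scikit-image follows, where the kernel frequency is tuned to an assumed sarcomere period (the paper's normalization to a 0-1 disorder score is not reproduced):

    import numpy as np
    from skimage.filters import gabor

    def sarcomere_gabor_response(image, sarcomere_period_px, theta=0.0):
        # Magnitude response of a Gabor filter tuned to the expected
        # striation frequency; it is high where the pattern is regular.
        freq = 1.0 / sarcomere_period_px   # cycles per pixel
        real, imag = gabor(image.astype(float), frequency=freq, theta=theta)
        return np.hypot(real, imag)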
Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W; Gautier, Virginie W
2015-01-01
We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip. PMID:26485569
Yang, Yongkun; Han, Gaohua; Xu, Wansong
2016-01-01
This study was designed to assess the diagnostic value of 99Tcm-2-(2-methyl-5-nitro-1H-imidazol-1-yl)ethyl dihydrogen phosphate (99Tcm-MNLS) hypoxia imaging and its performance in evaluating radiotherapy efficacy in patients with non-small-cell lung cancer (NSCLC). A total of 61 patients with NSCLC were selected for this study. All patients were injected with 99Tcm-MNLS within 1 week prior to radiotherapy and again 3 months after radiotherapy. Qualitative and semiquantitative analysis results were obtained from the hypoxia imaging. Meanwhile, the effect of radiotherapy on patients with NSCLC was evaluated according to the solid tumor curative effect evaluation standard. SPSS 19.0 statistical software was used for the statistical analysis. There was no significant difference in age or sex between the NSCLC patient group and the benign patient group (P > 0.05). 99Tcm-MNLS was selectively concentrated in tumor tissues, with clear imaging at 24 hours. Both qualitative and semiquantitative analyses indicated that the sensitivity and specificity of 99Tcm-MNLS hypoxia imaging in diagnosing NSCLC were 93.8% and 84.6%, and 72.9% and 100%, respectively. Moreover, the receiver operating characteristic curve provided evidence that 99Tcm-MNLS hypoxia imaging is a powerful diagnostic tool for distinguishing malignant lung cancer from benign lesions. As suggested by the 24-hour imaging, the tumor-to-normal ratios of patients in the 99Tcm-MNLS high-intake and low-intake groups declined by 24.7% and 14.4% after radiotherapy, respectively; the difference between the two groups was significant (P < 0.05). 99Tcm-MNLS hypoxia imaging is therefore reliable both for diagnosing NSCLC and for evaluating the therapeutic effect of radiotherapy in patients with NSCLC.
Cest Analysis: Automated Change Detection from Very-High Remote Sensing Images
NASA Astrophysics Data System (ADS)
Ehlers, M.; Klonus, S.; Jarmer, T.; Sofina, N.; Michel, U.; Reinartz, P.; Sirmacek, B.
2012-08-01
Fast detection, visualization and assessment of change in areas of crisis or catastrophe are important requirements for the coordination and planning of relief. With the availability of new satellite and airborne sensors of very high spatial resolution (e.g., WorldView, GeoEye), new remote sensing data are available for better detection, delineation and visualization of change. For automated change detection, a large number of algorithms have been proposed and developed. Previous studies make evident, however, that to date no single algorithm is a reliable change detector for all possible scenarios. This paper introduces the Combined Edge Segment Texture (CEST) analysis, a decision-tree-based cooperative suite of algorithms for automated change detection that is especially designed for the new generation of satellites with very high spatial resolution. The method incorporates frequency-based filtering, texture analysis, and image segmentation techniques. For the frequency analysis, different band-pass filters can be applied to identify the frequency information relevant for change detection. After transforming the multitemporal images via a fast Fourier transform (FFT) and applying the most suitable band-pass filter, different methods are available to extract changed structures: differencing and correlation in the frequency domain, and correlation and edge detection in the spatial domain. The best results are obtained using edge extraction. For the texture analysis, different 'Haralick' parameters can be calculated (e.g., energy, correlation, contrast, inverse distance moment), with 'energy' so far providing the most accurate results. These algorithms are combined with a prior segmentation of the image data as well as with morphological operations for a final binary change result. A rule-based combination (CEST) of the change algorithms is applied to calculate the probability of change at a particular location. CEST was tested with high-resolution satellite images of the crisis areas of Darfur (Sudan). The CEST results are compared with a number of standard algorithms for automated change detection, such as image differencing, image ratioing, principal component analysis, the delta cue technique and post-classification change detection. The new combined method shows superior results, with accuracy improvements averaging between 15% and 45%.
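The frequency-domain branch of such a detector can be sketched with numpy: band-pass both image epochs with an ideal annular filter, then difference the filtered results. The cut-off fractions of the Nyquist frequency are illustrative, not the paper's values:

    import numpy as np

    def bandpass_difference(img_t1, img_t2, low=0.05, high=0.35):
        rows, cols = img_t1.shape
        fy = np.fft.fftfreq(rows)[:, None]
        fx = np.fft.fftfreq(cols)[None, :]
        radius = np.hypot(fy, fx)
        mask = (radius >= low) & (radius <= high)

        def bandpass(img):
            # Keep only the annulus of spatial frequencies in the mask.
            return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

        return np.abs(bandpass(img_t1.astype(float))
                      - bandpass(img_t2.astype(float)))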
NASA Astrophysics Data System (ADS)
Han, Hao; Zhang, Hao; Wei, Xinzhou; Moore, William; Liang, Zhengrong
2016-03-01
In this paper, we propose a low-dose computed tomography (LdCT) image reconstruction method aided by prior knowledge learned from previous high-quality or normal-dose CT (NdCT) scans. The well-established statistical penalized weighted least squares (PWLS) algorithm is adopted for image reconstruction, with the penalty term formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan is first segmented into different tissue types by a feature vector quantization (FVQ) approach. Then, for each tissue type, a set of tissue-specific coefficients for the gMRF penalty is statistically learned from the NdCT image via multiple linear regression analysis. We also propose a scheme to adaptively select the order of the gMRF model for coefficient prediction. The tissue-specific gMRF patterns learned from the NdCT image are finally used to form an adaptive MRF penalty for the PWLS reconstruction of the LdCT image. The proposed texture-adaptive PWLS image reconstruction algorithm is shown to be more effective at preserving image textures than the conventional PWLS algorithm, and we further demonstrate the gain of high-order MRF modeling for texture-preserved LdCT PWLS image reconstruction.
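For reference, PWLS reconstruction is conventionally written as the following minimization (a standard form, not quoted from this paper), where y is the measured sinogram, A the system matrix, W a diagonal statistical weighting matrix, and R the penalty, here the tissue-adaptive gMRF term, with strength β:

    \hat{x} = \arg\min_{x \ge 0} \; (y - Ax)^{\mathsf{T}} W (y - Ax) + \beta R(x)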
Material Identification and Quantification in Spectral X-ray Micro-CT
NASA Astrophysics Data System (ADS)
Holmes, Thomas Wesley
The identification and quantification of all the voxels within a reconstructed microCT image was made possible by comparing the attenuation profile of an unknown voxel with precalculated signatures of known materials. This was accomplished through simulations with the MCNP6 general-purpose radiation-transport package, which modeled a CdTe detector array consisting of 200 elements able to differentiate between 100 separate energy bins over the entire range of the emitted 110 kVp tungsten x-ray spectrum. The information from each of the separate energy bins was used to create a single reconstructed image, which was then grouped back together to produce a final image in which each voxel had a corresponding attenuation profile. A library of known attenuation profiles was created for each of the materials expected to be within an object with otherwise unknown parameters. A least squares analysis was performed, comparing each voxel's attenuation profile in the unknown object against every possible combination of library attenuation profiles. Combinations whose results failed predetermined thresholds were removed. For the remaining combinations, a voting system based on statistical evaluations of the fits was designed to select the most appropriate material combination for the input voxel. This was performed over all of the voxels in the reconstructed image, and a final material map was produced. The material locations were then quantified by fitting an equation to the responses from several different densities of the same material and recording the response of the base library. This entire process, called the All Combinations Library Least Squares (ACLLS) analysis, was used to test several different models. These models investigated a range of densities of the x-ray contrast agents gold and gadolinium, which are used in many medical applications, as well as a range of bone densities to test the suitability of ACLLS for bone density estimation. A final test used a model with five different materials present within the object, consisting of two separate features with mixtures of three materials, one of gold, iodine and water and another of gadolinium, iodine and water. The remaining four features were all mixtures of water with bone, gold, gadolinium, and iodine. All of the material mixtures were successfully identified and quantified by the ACLLS analysis package within an acceptable statistical range. The ACLLS method has proven itself a viable analysis tool for determining both the physical locations and the amounts of all the materials present within a given object. This tool could be implemented in the future to further assist medical practitioners in diagnosing a subject by reducing ambiguities in an image and providing a quantifiable solution for all of the voxels.
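The core per-voxel fit can be illustrated with a non-negative least squares solve against a signature library; the thresholding and voting stages of ACLLS are not reproduced in this sketch:

    import numpy as np
    from scipy.optimize import nnls

    def unmix_voxel(voxel_profile, library):
        # voxel_profile: attenuation per energy bin (length n_bins);
        # library: (n_bins, n_materials) matrix of known signatures.
        # Returns non-negative per-material weights and the residual
        # norm used to judge the quality of the fit.
        weights, residual = nnls(library, voxel_profile)
        return weights, residual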
Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun
2017-01-01
To address the inaccuracy of estimating the point spread function (PSF) of the ideal original image in traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation from low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit the visual interpretation of agricultural crops. The PSF of the high-resolution (HR) image is unknown in reality; analysis of the relationship between the PSF of the HR image and the PSF of the LR image is therefore important for estimating the PSF of the HR image from multiple LR images. In this study, a linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which improves the accuracy of the PSF estimation from LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapping areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher-quality reconstructed images than the blind SR method and the bicubic interpolation method. PMID:28208837
Pakpour, Amir H; Zeidi, Isa Mohammadi; Ziaeiha, Masoumeh; Burri, Andrea
2014-01-01
The aim of the present study was to investigate the psychometric properties of a translated and culturally adapted Iranian version of the Female Genital Self-Image Scale (FGSIS-I) in a sample of college women. Further, the relationship between women's self-image, body appreciation, sexual functioning, and gynecological exam behavior was explored. A sample of 1,877 female students from five different universities across Qazvin and Tehran completed the Female Sexual Function Index (FSFI), the Body Appreciation Scale (BAS), the Rosenberg Self-Esteem Scale (RSES), the FGSIS-I, and a gynecological exam behavior questionnaire. Good to excellent internal consistency reliability, test-retest reliability, and convergent and construct validity were found. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) both provided a two-factor structure for the FGSIS-I. The validity of the FGSIS-I in predicting gynecological exam behavior of college women was tested using structural equation modeling (SEM). The final model accounted for 33% of the variance in gynecological exam behavior (p < 0.01). In conclusion, the FGSIS-I was found to be a highly valid and reliable instrument to assess female genital self-image in Iranian women.
Algorithms and programming tools for image processing on the MPP:3
NASA Technical Reports Server (NTRS)
Reeves, Anthony P.
1987-01-01
This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction. Work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed. These algorithms permit different sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given. These include a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.
NASA Astrophysics Data System (ADS)
Zhao, Yan-Ru; Yu, Ke-Qiang; Li, Xiaoli; He, Yong
2016-12-01
Infected petals are often regarded as the source of the spread of the fungus Sclerotinia sclerotiorum throughout the growing process of rapeseed (Brassica napus L.) plants. This research aimed to detect fungal infection of rapeseed petals by applying hyperspectral imaging in the spectral region of 874-1734 nm coupled with chemometrics. Reflectance was extracted from regions of interest (ROIs) in the hyperspectral image of each sample. First, principal component analysis (PCA) was applied to conduct a cluster analysis with the first several principal components (PCs). Then, two methods, X-loadings of PCA and the random frog (RF) algorithm, were used and compared for optimal waveband selection. Least squares-support vector machine (LS-SVM) methodology was employed to establish discriminative models based on the optimal and full wavebands. Finally, the area under the receiver operating characteristic curve (AUC) was utilized to evaluate the classification performance of these LS-SVM models. It was found that the LS-SVM based on the combination of all optimal wavebands had the best performance, with an AUC of 0.929. These results are promising and demonstrate the potential of applying hyperspectral imaging to fungus infection detection on rapeseed petals.
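As a rough sketch of the band-selection-then-classify pipeline described above, the snippet below ranks wavebands by their aggregate absolute PCA X-loadings and trains a kernel classifier on the selected bands. scikit-learn's SVC stands in for the paper's LS-SVM, and the placeholder reflectance matrix, labels, and band counts are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def select_bands_by_loadings(X, n_bands=10, n_components=3):
    """Rank wavebands by their aggregate |X-loading| on the leading PCs."""
    pca = PCA(n_components=n_components).fit(X)
    weight = np.abs(pca.components_).sum(axis=0)
    return np.sort(np.argsort(weight)[-n_bands:])

# Placeholder data: mean ROI reflectance per sample over 256 wavebands,
# with labels 0 = healthy petal, 1 = infected petal.
X = np.random.rand(120, 256)
y = np.random.randint(0, 2, 120)

bands = select_bands_by_loadings(X)
clf = SVC(kernel="rbf", probability=True).fit(X[:, bands], y)  # LS-SVM stand-in
```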
Automatic thermographic image defect detection of composites
NASA Astrophysics Data System (ADS)
Luo, Bin; Liebenberg, Bjorn; Raymont, Jeff; Santospirito, SP
2011-05-01
Detecting defects, and especially reliably measuring defect sizes, are critical objectives in automatic NDT defect detection applications. In this work, the Sentence software is proposed for the analysis of pulsed thermography and near IR images of composite materials. Furthermore, the Sentence software delivers an end-to-end, user friendly platform for engineers to perform complete manual inspections, as well as tools that allow senior engineers to develop inspection templates and profiles, reducing the requisite thermographic skill level of the operating engineer. Finally, the Sentence software can also offer complete independence of operator decisions by the fully automated "Beep on Defect" detection functionality. The end-to-end automatic inspection system includes sub-systems for defining a panel profile, generating an inspection plan, controlling a robot-arm and capturing thermographic images to detect defects. A statistical model has been built to analyze the entire image, evaluate grey-scale ranges, import sentencing criteria and automatically detect impact damage defects. A full width half maximum algorithm has been used to quantify the flaw sizes. The identified defects are imported into the sentencing engine which then sentences (automatically compares analysis results against acceptance criteria) the inspection by comparing the most significant defect or group of defects against the inspection standards.
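The flaw-sizing step above relies on a full width at half maximum (FWHM) measurement. The following sketch computes the FWHM of a 1-D grey-level profile taken across an indication, with linear interpolation at the half-maximum crossings; the profile and pixel pitch are assumptions, not values from the Sentence software.

```python
import numpy as np

def fwhm(profile, pixel_pitch_mm=1.0):
    """Full width at half maximum of a single-peaked 1-D profile."""
    profile = np.asarray(profile, dtype=float)
    half = (profile.max() + profile.min()) / 2.0
    above = np.where(profile >= half)[0]
    if above.size < 2:
        return 0.0
    left, right = above[0], above[-1]

    def cross(i, j):
        # Linear interpolation of the half-maximum crossing between samples i, j.
        return i + (half - profile[i]) / (profile[j] - profile[i]) * (j - i)

    lo = cross(left - 1, left) if left > 0 else float(left)
    hi = cross(right + 1, right) if right < profile.size - 1 else float(right)
    return (hi - lo) * pixel_pitch_mm
```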
Automated image analysis reveals the dynamic 3-dimensional organization of multi-ciliary arrays
Galati, Domenico F.; Abuin, David S.; Tauber, Gabriel A.; Pham, Andrew T.; Pearson, Chad G.
2016-01-01
Multi-ciliated cells (MCCs) use polarized fields of undulating cilia (ciliary array) to produce fluid flow that is essential for many biological processes. Cilia are positioned by microtubule scaffolds called basal bodies (BBs) that are arranged within a spatially complex 3-dimensional geometry (3D). Here, we develop a robust and automated computational image analysis routine to quantify 3D BB organization in the ciliate, Tetrahymena thermophila. Using this routine, we generate the first morphologically constrained 3D reconstructions of Tetrahymena cells and elucidate rules that govern the kinetics of MCC organization. We demonstrate the interplay between BB duplication and cell size expansion through the cell cycle. In mutant cells, we identify a potential BB surveillance mechanism that balances large gaps in BB spacing by increasing the frequency of closely spaced BBs in other regions of the cell. Finally, by taking advantage of a mutant predisposed to BB disorganization, we locate the spatial domains that are most prone to disorganization by environmental stimuli. Collectively, our analyses reveal the importance of quantitative image analysis to understand the principles that guide the 3D organization of MCCs. PMID:26700722
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
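A condensed sketch of the decision-map idea used in the final fusion stage is given below: the Sum-Modified-Laplacian (SML) scores local focus in each source image, and the sharper block wins. The DTCWT-based initial fusion and the consistency-verification step are omitted, and the block size is an assumption.

```python
import numpy as np

def modified_laplacian(img):
    """|2p - left - right| + |2p - up - down| per pixel (step = 1)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    return (np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]) +
            np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]))

def fuse_by_sml(img_a, img_b, block=8):
    """Block-wise decision map: copy each block from the sharper source."""
    ml_a, ml_b = modified_laplacian(img_a), modified_laplacian(img_b)
    fused = np.empty_like(img_a, dtype=float)
    for i in range(0, img_a.shape[0], block):
        for j in range(0, img_a.shape[1], block):
            s = (slice(i, i + block), slice(j, j + block))
            # SML = sum of modified-Laplacian responses over the block.
            fused[s] = img_a[s] if ml_a[s].sum() >= ml_b[s].sum() else img_b[s]
    return fused
```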
Reconstruction of color images via Haar wavelet based on digital micromirror device
NASA Astrophysics Data System (ADS)
Liu, Xingjiong; He, Weiji; Gu, Guohua
2015-10-01
A digital micromirror device (DMD) is introduced to form a Haar wavelet basis, projected onto the color target image by means of structured illumination with red, green and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector that has no spatial resolution, converted into voltage signals and then transferred to a PC [1]. To achieve synchronization, several synchronization processes are added during data acquisition. In the data collection process, according to the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. Monochrome grayscale images are obtained under red, green and blue structured illumination, respectively, by using the inverse Haar wavelet transform. A color fusion algorithm is applied to the three monochrome grayscale images to obtain the final color image. According to the imaging principle, the experimental demonstration device was assembled. The letter "K" and the X-rite Color Checker Passport were projected and reconstructed as target images, and the final reconstructed color images have good quality. By using Haar wavelet reconstruction, this method reduces the sampling rate considerably. It provides color information without compromising the resolution of the final image.
Breast histopathology image segmentation using spatio-colour-texture based graph partition method.
Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N
2016-06-01
This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for segmentation of nuclear arrangement in tubules with a lumen, or in solid islands without a lumen, from digitized Hematoxylin-Eosin stained breast histology images, in order to automate histology breast image analysis and assist pathologists. We propose a new similarity-based superpixel generation method and integrate it with texton representation to form the spatio-colour-texture map of a breast histology image. A new weighted-distance-based similarity measure is then used for graph generation, and the final segmentation is obtained using the normalized cuts method. Extensive experiments show that the proposed algorithm can segment nuclear arrangement in normal as well as malignant ducts in breast histology tissue images. For evaluation of the proposed method, a ground-truth image database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed breast histology image segmentation was performed. It shows that the proposed method outperforms other methods. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
A complete passive blind image copy-move forensics scheme based on compound statistics features.
Peng, Fei; Nie, Yun-ying; Long, Min
2011-10-10
Since most sensor pattern noise based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly confines the application circumstances. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. First, a color image is transformed into a grayscale one, and a wavelet-transform-based de-noising filter is used to extract the sensor pattern noise. The variance of the pattern noise, the signal-to-noise ratio between the de-noised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image are chosen as features, and non-overlapping sliding window operations are applied to the images to divide them into different sub-blocks. Finally, the tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling and blurring. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
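The feature-extraction step described above can be sketched as follows: the sensor pattern noise is taken as the residual between the grayscale image and a wavelet-denoised version of it, and per-block statistics such as variance and information entropy are then computed. The wavelet, decomposition level, and thresholding rule are assumptions; the remaining features (signal-to-noise ratio, average energy gradient) would be computed analogously.

```python
import numpy as np
import pywt

def pattern_noise(gray):
    """Residual between the image and its wavelet-denoised version."""
    coeffs = pywt.wavedec2(gray.astype(float), "db4", level=2)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(gray.size))         # universal threshold
    den = [coeffs[0]] + [tuple(pywt.threshold(c, thr, mode="soft") for c in lvl)
                         for lvl in coeffs[1:]]
    denoised = pywt.waverec2(den, "db4")[:gray.shape[0], :gray.shape[1]]
    return gray - denoised

def block_features(block):
    """Variance and entropy of one non-overlapping sub-block."""
    hist, _ = np.histogram(block, bins=64)
    p = hist[hist > 0] / hist.sum()
    return {"variance": float(np.var(block)),
            "entropy": float(-np.sum(p * np.log2(p)))}
```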
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Yoshida, Keiichiro; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu
2014-03-01
We investigate a method to estimate the spectral images of the reduced scattering coefficients and absorption coefficients of in vivo exposed brain tissue in the visible to near-infrared range (500-760 nm), based on diffuse reflectance spectroscopy using a digital RGB camera. In the proposed method, multi-spectral reflectance images of the in vivo exposed brain are reconstructed from the digital red, green, and blue images using the Wiener estimation algorithm. Monte Carlo simulation-based multiple regression analysis of the absorbance spectra is then used to specify the absorption and scattering parameters of brain tissue. In this analysis, the concentrations of oxygenated and deoxygenated hemoglobin are estimated as the absorption parameters, whereas the scattering amplitude a and the scattering power b in the expression μs' = aλ^(-b) are estimated as the scattering parameters. The spectra of the absorption and reduced scattering coefficients are reconstructed from these parameters, and finally, the spectral images of the absorption and reduced scattering coefficients are estimated. The estimated images of the absorption coefficients were dominated by the spectral characteristics of hemoglobin. The estimated spectral images of the reduced scattering coefficients showed a broad scattering spectrum, exhibiting larger magnitude at shorter wavelengths, corresponding to the typical spectrum of brain tissue published in the literature. In vivo experiments with the exposed brains of rats during cortical spreading depression (CSD) confirmed the ability of the method to evaluate both hemodynamics and changes in tissue morphology due to electrical depolarization.
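A minimal numpy sketch of the Wiener estimation step, which maps camera RGB values to multispectral reflectance, is shown below. It assumes training pairs of spectra and their RGB responses are available, e.g. from a color chart, and neglects the noise term for brevity.

```python
import numpy as np

def wiener_matrix(spectra, rgb):
    """spectra: (n, n_wavelengths); rgb: (n, 3). Returns W with s ≈ W @ v."""
    R_sv = spectra.T @ rgb / len(rgb)   # cross-correlation  E[s v^T]
    R_vv = rgb.T @ rgb / len(rgb)       # autocorrelation    E[v v^T]
    return R_sv @ np.linalg.inv(R_vv)

def estimate_spectral_image(rgb_image, W):
    """Apply the Wiener matrix pixel-wise to an (H, W, 3) RGB image."""
    h, w, _ = rgb_image.shape
    return (rgb_image.reshape(-1, 3) @ W.T).reshape(h, w, -1)
```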
Speckle noise reduction in SAR images ship detection
NASA Astrophysics Data System (ADS)
Yuan, Ji; Wu, Bin; Yuan, Yuan; Huang, Qingqing; Chen, Jingbo; Ren, Lin
2012-09-01
At present, there are two types of method to detect ships in SAR images. One detects ships directly; the other is indirect, first detecting ship wakes and then seeking ships around the wakes. Both types are affected by speckle noise. In order to improve the accuracy of ship detection and obtain accurate ship and wake parameters from SAR images, such as ship length, width, area, wake angle and ship outline, it is essential to remove speckle noise from SAR images before the data are used in ship detection. The choice of speckle noise reduction filter depends on the requirements of the particular application. Some common filters are widely used for speckle noise reduction, such as the mean, median, Lee, enhanced Lee, Kuan, Frost, enhanced Frost and Gamma filters, but these filters show disadvantages in SAR image ship detection because of the variety of ship types. Therefore, the wavelet transform and multi-resolution analysis were used to localize an SAR ocean image into different frequency components or useful subbands, and to effectively reduce the speckle in the subbands according to the local statistics within the bands. Finally, an analysis of the statistical results is presented, which demonstrates the advantages and disadvantages of using wavelet shrinkage techniques over standard speckle filters.
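A simple homomorphic wavelet-shrinkage despeckling sketch in the spirit of the approach above: the multiplicative speckle becomes approximately additive in the log domain, the detail subbands are soft-thresholded, and the result is mapped back. The wavelet, level count, and universal threshold are assumptions rather than the paper's exact choices.

```python
import numpy as np
import pywt

def despeckle(sar, wavelet="sym4", levels=3):
    """Log-transform, shrink detail subbands, and map back to intensity."""
    log_img = np.log1p(sar.astype(float))
    coeffs = pywt.wavedec2(log_img, wavelet, level=levels)
    # Robust noise estimate from the finest diagonal subband.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(log_img.size))  # universal threshold
    out = [coeffs[0]] + [tuple(pywt.threshold(c, thr, mode="soft") for c in lvl)
                         for lvl in coeffs[1:]]
    rec = pywt.waverec2(out, wavelet)[: sar.shape[0], : sar.shape[1]]
    return np.expm1(rec)
```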
Hyperspectral image segmentation using a cooperative nonparametric approach
NASA Astrophysics Data System (ADS)
Taher, Akar; Chehdi, Kacem; Cariou, Claude
2013-10-01
In this paper a new unsupervised nonparametric cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel, and the intermediate classification results are evaluated and fused to obtain the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation, namely the fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm, to segment each band of the image. The originality of the approach lies firstly in its local adaptation to the type of regions in an image (textured, non-textured), and secondly in the introduction of several levels of evaluation and validation of intermediate segmentation results before obtaining the final partitioning of the image. For the management of similar or conflicting results issued from the two classification methods, we gradually introduce various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. This approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. It was then evaluated on two real applications, using a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain), respectively. The correct classification rate (CCR) for the first application is over 97%, and the average correct classification rate (ACCR) for the second application is over 99%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mooney, K; Zhao, T; Green, O
Purpose: To assess the performance of the deformable image registration algorithm used for MRI-guided adaptive radiation therapy using image feature analysis. Methods: MR images were collected from five patients treated on the MRIdian (ViewRay, Inc., Oakwood Village, OH), a three-head Cobalt-60 therapy machine with a 0.35 T MR system. The images were acquired immediately prior to treatment with a uniform 1.5 mm resolution. The treatment sites were as follows: head/neck, lung, breast, stomach, and bladder. Deformable image registration was performed using the ViewRay software between the first-fraction MRI and the final-fraction MRI, and the Dice similarity coefficient (DSC) for the skin contours was reported. The SIFT and Harris feature detection and matching algorithms identified point features in each image separately, then found matching features in the other image. The target registration error (TRE) was defined as the vector distance between matched features on the two image sets. Each deformation was evaluated based on a comparison of average TRE and DSC. Results: Image feature analysis produced between 2000 and 9500 points for evaluation on the patient images. The average (± standard deviation) TRE for all patients was 3.3 mm (±3.1 mm), and the passing rate of TRE < 3 mm was 60% on the images. The head/neck patient had the best average TRE (1.9 mm ± 2.3 mm) and the best passing rate (80%). The lung patient had the worst average TRE (4.8 mm ± 3.3 mm) and the worst passing rate (37.2%). DSC was not significantly correlated with either TRE (p=0.63) or passing rate (p=0.55). Conclusions: Feature matching provides a quantitative assessment of deformable image registration, with a large number of data points for analysis. The TRE of matched features can be used to evaluate the registration of many objects throughout the volume, whereas DSC mainly provides a measure of gross overlap. We have a research agreement with ViewRay Inc.
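The feature-based evaluation described above can be sketched with OpenCV: SIFT keypoints are matched between the final-fraction image and the deformably registered first-fraction image, and the TRE is the distance between matched locations. The ratio-test threshold is an assumption, and pixel distances are scaled by the stated 1.5 mm resolution to obtain millimetres.

```python
import cv2
import numpy as np

PIXEL_MM = 1.5  # stated in-plane resolution of the acquired MR images

def target_registration_errors(img_a, img_b, ratio=0.75):
    """TRE (mm) between SIFT features matched across two uint8 grayscale images."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    tre_px = [np.linalg.norm(np.subtract(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt))
              for m, n in matches if m.distance < ratio * n.distance]  # Lowe's test
    return np.asarray(tre_px) * PIXEL_MM

# Passing rate at the 3 mm criterion:
# tre = target_registration_errors(a, b); rate = float(np.mean(tre < 3.0))
```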
2015-07-08
This single frame from a four-frame movie shows New Horizons' final deep search for hazardous material around Pluto, obtained on July 1, 2015. These data allow a highly sensitive search for any new moons. The images were taken with the spacecraft's Long Range Reconnaissance Imager (LORRI) over a 100-minute period, and were the final observations in the series of dedicated searches for hazards in the Pluto system which began on May 11. The images show all five known satellites of Pluto moving in their orbits around the dwarf planet, but analysis of these data has so far not revealed the existence of any additional moons. This means that any undiscovered Plutonian moons further than a few thousand miles from Pluto must be smaller than about 1 mile (1.6 kilometers) in diameter, if their surfaces have similar brightness to Pluto's big moon Charon. For comparison, Pluto's faintest known moon, Styx, which is conspicuous in the lower left quadrant of these images, is about 4 miles (7 kilometers) across, assuming the same surface brightness. The absence of additional moons, and also the absence of detectable rings in the hazard search data, imply that the spacecraft is very unlikely to be damaged by collisions with rings, or dust particles ejected from moons, during its high-speed passage through the Pluto system. The four movie frames were taken at 16:28, 16:38, 17:52, and 18:04 UTC on July 1, from a range of 9.4 million miles (15.2 million kilometers). Each frame is a mosaic of four sets of overlapping images, with a total exposure time of 120 seconds. The images have been heavily processed to remove the glare of Pluto and Charon, and the dense background of stars, though blemishes remain at the locations of many of the brighter stars. The "tails" extending to the right or downward from Pluto and Charon are camera artifacts caused by the extreme overexposure of both objects. Pluto and its five moons Charon, Styx, Nix, Kerberos and Hydra are identified by their initials, and their orbits around the center of gravity of the system (which is located just outside Pluto itself) are also shown. http://photojournal.jpl.nasa.gov/catalog/PIA19701
NASA Technical Reports Server (NTRS)
Giddings, L.; Boston, S.
1976-01-01
A method for digitizing zone maps is presented, starting with colored images and producing a final one-channel digitized tape. This method automates the work previously done interactively on the Image-100 and Data Analysis System computers of the Johnson Space Center (JSC) Earth Observations Division (EOD). A color-coded map was digitized through color filters on a scanner to form a digital tape in LARSYS-2 or JSC Universal format. The taped image was classified by the EOD LARSYS program on the basis of training fields included in the image. Numerical values were assigned to all pixels in a given class, and the resulting coded zone map was written on a LARSYS or Universal tape. A unique spatial filter option permitted zones to be made homogeneous and edges of zones to be abrupt transitions from one zone to the next. A zoom option allowed the output image to have arbitrary dimensions in terms of number of lines and number of samples on a line. Printouts of the computer program are given and the images that were digitized are shown.
[Identification of green tea brand based on hyperspectral imaging technology].
Zhang, Hai-Liang; Liu, Xiao-Li; Zhu, Feng-Le; He, Yong
2014-05-01
Hyperspectral imaging technology was developed to identify different brands of famous green tea based on the fusion of PCA information and image information. First, 512 spectral images of six brands of famous green tea in the 380-1023 nm wavelength range were collected, and principal component analysis (PCA) was performed with the goal of selecting two characteristic bands (545 and 611 nm) that could potentially be used for a classification system. Then, 12 gray-level co-occurrence matrix (GLCM) features based on statistical moments (i.e., mean, covariance, homogeneity, energy, contrast, correlation, entropy, inverse gap, contrast, difference from the second order, and autocorrelation) were extracted from each characteristic band image. Finally, the integration of the 12 texture features and three PCA spectral characteristics for each green tea sample was used as the input of an LS-SVM. Experimental results showed a discrimination rate of 100% in the prediction set. Receiver operating characteristic (ROC) curve assessment was used to evaluate the LS-SVM classification algorithm. Overall, the results sufficiently demonstrate that hyperspectral imaging technology can be used to classify green tea.
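A sketch of the GLCM texture-feature step using scikit-image is given below; only the properties exposed by graycoprops are computed here, and the 8-bit quantization, distances, and angles are assumptions. The paper's remaining statistical-moment features would be appended analogously.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(band_image, distances=(1,), angles=(0.0, np.pi / 2)):
    """Texture features from one characteristic-band image."""
    img8 = np.uint8(255 * (band_image - band_image.min()) /
                    (np.ptp(band_image) + 1e-12))       # quantize to 8 bits
    glcm = graycomatrix(img8, distances, angles, levels=256,
                        symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation", "dissimilarity")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```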
NASA Astrophysics Data System (ADS)
Berkels, Benjamin; Wirth, Benedikt
2017-09-01
Nowadays, modern electron microscopes deliver images at atomic scale. The precise atomic structure encodes information about material properties. Thus, an important ingredient in the image analysis is to locate the centers of the atoms shown in micrographs as precisely as possible. Here, we consider scanning transmission electron microscopy (STEM), which acquires data in a rastering pattern, pixel by pixel. Due to this rastering combined with the magnification to atomic scale, movements of the specimen even at the nanometer scale lead to random image distortions that make precise atom localization difficult. Given a series of STEM images, we derive a Bayesian method that jointly estimates the distortion in each image and reconstructs the underlying atomic grid of the material by fitting the atom bumps with suitable bump functions. The resulting highly non-convex minimization problems are solved numerically with a trust region approach. Existence of minimizers and the model behavior for faster and faster rastering are investigated using variational techniques. The performance of the method is finally evaluated on both synthetic and real experimental data.
A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.
Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F
2018-03-01
Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high dynamic range or floating-point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that current parallel algorithms already perform poorly with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to compute the final max-tree efficiently and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance on both simulated and actual 2D images and 3D volumes. Execution times improve on the fastest sequential algorithm, and speed-up increases with thread count up to 64 threads.
Emerging Themes in Image Informatics and Molecular Analysis for Digital Pathology.
Bhargava, Rohit; Madabhushi, Anant
2016-07-11
Pathology is essential for research in disease and development, as well as for clinical decision making. For more than 100 years, pathology practice has involved analyzing images of stained, thin tissue sections by a trained human using an optical microscope. Technological advances are now driving major changes in this paradigm toward digital pathology (DP). The digital transformation of pathology goes beyond recording, archiving, and retrieving images, providing new computational tools to inform better decision making for precision medicine. First, we discuss some emerging innovations in both computational image analytics and imaging instrumentation in DP. Second, we discuss molecular contrast in pathology. Molecular DP has traditionally been an extension of pathology with molecularly specific dyes. Label-free, spectroscopic images are rapidly emerging as another important information source, and we describe the benefits and potential of this evolution. Third, we describe multimodal DP, which is enabled by computational algorithms and combines the best characteristics of structural and molecular pathology. Finally, we provide examples of application areas in telepathology, education, and precision medicine. We conclude by discussing challenges and emerging opportunities in this area.
Ship Detection in Optical Satellite Image Based on RX Method and PCAnet
NASA Astrophysics Data System (ADS)
Shao, Xiu; Li, Huali; Lin, Hui; Kang, Xudong; Lu, Ting
2017-12-01
In this paper, we present a novel method for ship detection in optical satellite images based on the Reed-Xiaoli (RX) method and the principal component analysis network (PCAnet). The proposed method consists of the following three steps. First, the spatially adjacent pixels in the optical image are arranged into a vector, transforming the optical image into a 3D cube image. Through this process, the contextual information of spatially adjacent pixels can be integrated to magnify the discrimination between ships and background. Second, the RX anomaly detection method is adopted to preliminarily extract ship candidates from the produced 3D cube image. Finally, real ships are confirmed among the ship candidates by applying the PCAnet and a support vector machine (SVM). Specifically, the PCAnet is a simple deep learning network exploited to perform feature extraction, while the SVM is applied to achieve feature pooling and decision making. Experimental results demonstrate that our approach is effective in discriminating between ships and false alarms, and has good ship detection performance.
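The RX step above reduces to a Mahalanobis distance of each pixel vector from global background statistics, as in this compact sketch; the covariance regularization and the quantile used for candidate selection are assumptions.

```python
import numpy as np

def rx_scores(cube):
    """cube: (H, W, B) image of B-dimensional neighbourhood vectors."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    Xc = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)   # regularized covariance
    # Mahalanobis distance of every pixel vector from the background statistics.
    scores = np.einsum("ij,jk,ik->i", Xc, np.linalg.inv(cov), Xc)
    return scores.reshape(h, w)

# Ship candidates: pixels whose score exceeds a chosen quantile, e.g.
# scores = rx_scores(cube); mask = scores > np.quantile(scores, 0.999)
```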
Molecular constituents of colorectal cancer metastatic to the liver by imaging infrared spectroscopy
NASA Astrophysics Data System (ADS)
Coe, James V.; Chen, Zhaomin; Li, Ran; Nystrom, Steven V.; Butke, Ryan; Miller, Barrie; Hitchcock, Charles L.; Allen, Heather C.; Povoski, Stephen P.; Martin, Edward W.
2015-03-01
Infrared (IR) imaging spectroscopy of human liver tissue slices has been used to identify and characterize a liver metastasis of colorectal origin, which was surgically removed from a consenting patient and frozen without formalin fixation or dehydration procedures, so that lipids and water remained in the tissue. First, a k-means clustering analysis using metrics from the IR spectra identified groups within the image. The groups were identified as tumor or nontumor regions by comparison with an H&E stain of the same sample made after IR imaging. Then, calibrant IR spectra of protein, several fats, glycogen, and polyvinyl alcohol were isolated by differencing spectra from different regions or groups in the image space. Finally, inner products (or scores) of the IR spectra at each pixel in the image with each of the various calibrants were calculated, showing how the calibrant molecules vary in tumor and nontumor regions. In this particular case, glycogen and protein changes enable separation of tumor and nontumor regions, as shown with a contour plot of the glycogen scores versus the protein scores.
A symmetrical image encryption scheme in wavelet and time domain
NASA Astrophysics Data System (ADS)
Luo, Yuling; Du, Minghui; Liu, Junxiu
2015-02-01
There has been increasing concern about effective storage and secure transmission of multimedia information over the Internet. A great variety of encryption schemes have been proposed to ensure information security during transmission, but most current approaches are designed to diffuse the data only in the spatial domain, which reduces storage efficiency. A lightweight image encryption strategy based on chaos is proposed in this paper. The encryption process is designed in the transform domain. The original image is decomposed into approximation and detail components using the integer wavelet transform (IWT); then, as the more important component of the image, the approximation coefficients are diffused by secret keys generated from a spatiotemporal chaotic system, followed by the inverse IWT to construct the diffused image; finally, a plain permutation is performed on the diffused image using the logistic map in order to further reduce the correlation between adjacent pixels. Experimental results and performance analysis demonstrate that the proposed scheme is an efficient, secure and robust encryption mechanism that realizes effective coding compression to satisfy desirable storage requirements.
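A toy sketch of the two chaos-driven steps named above follows: a key-dependent keystream diffuses the byte-mapped approximation coefficients, and a logistic-map ordering permutes pixels. The spatiotemporal system is reduced to a single logistic map for brevity, and all parameters and key values are assumptions.

```python
import numpy as np

def logistic_sequence(x0, n, r=3.99, burn_in=100):
    """Iterate x <- r*x*(1-x) and return n post-burn-in values in (0, 1)."""
    x, seq = x0, np.empty(n)
    for i in range(burn_in + n):
        x = r * x * (1.0 - x)
        if i >= burn_in:
            seq[i - burn_in] = x
    return seq

def diffuse(coeff_bytes, key=0.654321):
    """XOR byte-mapped approximation coefficients with a chaotic keystream."""
    ks = np.floor(logistic_sequence(key, coeff_bytes.size) * 256).astype(np.uint8)
    return coeff_bytes ^ ks.reshape(coeff_bytes.shape)

def permute(img, key=0.123456):
    """Scramble pixel order by sorting a chaotic sequence."""
    order = np.argsort(logistic_sequence(key, img.size))
    return img.ravel()[order].reshape(img.shape)
```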
2004-02-04
KENNEDY SPACE CENTER, FLA. - Reporters are eager to hear from Armando Oliu about the aid the Image Analysis Lab is giving the FBI in a kidnapping case. Oliu, Final Inspection Team lead for the Shuttle program, oversees the lab that is using an advanced SGI® TP9500 data management system to review the tape of the kidnapping in progress in Sarasota, Fla. KSC installed the new $3.2 million system in preparation for Return to Flight of the Space Shuttle fleet. The lab is studying the Sarasota kidnapping video to provide any new information possible to law enforcement officers. KSC is joining NASA’s Marshall Space Flight Center in Alabama in reviewing the tape.
NASA Astrophysics Data System (ADS)
Argyropoulou, Evangelia
2015-04-01
The current study focused on the seafloor morphology of the North Aegean Basin in Greece, through Object Based Image Analysis (OBIA) using a Digital Elevation Model. The goal was the automatic extraction of morphologic and morphotectonic features, culminating in fault surface extraction. An Object Based Image Analysis approach was developed based on the bathymetric data, and the features extracted on morphological criteria were compared with the corresponding landforms derived through tectonic analysis. A digital elevation model of 150 m spatial resolution was used. First, slope, profile curvature, and percentile were extracted from this bathymetry grid. The OBIA approach was developed within the eCognition environment. Four segmentation levels were created, with "level 4" as the target; at level 4, the final classes of geomorphological features were classified: discontinuities, fault-like features and fault surfaces. At previous levels, additional landforms such as the continental platform and continental slope were also classified. The results of the developed approach were evaluated by two methods. First, classification stability measures were computed within eCognition. Then, a qualitative and quantitative comparison of the results was made against a reference tectonic map that had been created manually from the analysis of seismic profiles. The results of this comparison were satisfactory, confirming the soundness of the developed OBIA approach.
NASA Astrophysics Data System (ADS)
Kuehnel, C.; Hennemuth, A.; Oeltze, S.; Boskamp, T.; Peitgen, H.-O.
2008-03-01
Diagnostic support in the field of coronary artery disease (CAD) is very complex due to the numerous symptoms and studies that lead to the final diagnosis. CTA and MRI are on their way to replacing invasive catheter angiography. Thus, there is a need for sophisticated software tools that present the different analysis results and correlate the anatomical and dynamic image information. We introduce a new software assistant for the combined visualization of results from CTA and MR images, in which a dedicated concept for the structured presentation of original data, segmentation results, and individual findings is realized. To this end, we define a comprehensive class hierarchy and assign suitable interaction functions. User guidance is coupled as closely as possible with the available data, supporting a straightforward workflow design. The analysis results are extracted from two previously developed software assistants, providing coronary artery analysis and measurements, function analysis, and late enhancement data investigation. As an extension, we introduce a finding concept that directly relates suspicious positions to the underlying data. An affine registration of CT and MR data in combination with the AHA 17-segment model enables the coupling of local findings to positions in all data sets. Furthermore, sophisticated visualization in 2D and 3D and interactive bull's eye plots facilitate the correlation of coronary stenoses and physiology. The software has been evaluated on 20 patient data sets.
Thermal imaging for cold air flow visualisation and analysis
NASA Astrophysics Data System (ADS)
Grudzielanek, M.; Pflitsch, A.; Cermak, J.
2012-04-01
In this work we present first applications of a thermal imaging system for animated visualisation and analysis of cold air flow in field studies. The development of mobile thermal imaging systems has advanced very quickly in recent decades. The surface temperature of objects, detected via long-wave infrared radiation, supports conclusions in a variety of research problems. Modern thermal imaging systems allow infrared picture sequences and subsequent data analysis; the systems are no longer exclusively imaging methods as in the past. Thus, the monitoring and analysis of dynamic processes has become possible. We measured the cold air flow on a sloping grassland area with standard methods (sonic anemometers and temperature loggers) plus a thermal imaging system measuring in the range from 7.5 to 14 µm. To analyse the cold air with the thermal measurements, we collected the surface infrared temperatures at a projection screen located in the cold air flow direction, opposite the infrared (IR) camera. The intentions of using a thermal imaging system in our work were: 1. to get a general idea of its practicability for our problem, 2. to assess the value of the more extensive and detailed data sets, and 3. to optimise visualisation. The results were very promising. Through time-lapse movies of the image sequences, processes of cold air flow, such as flow waves, turbulence and general flow speed, can be identified directly. Vertical temperature gradients and near-ground inversions can be visualised very well. Time-lapse movies will be presented. The extensive data collection permits a higher spatial resolution than standard methods, so that cold air flow attributes can be explored in much more detail. Time series are extracted from the IR data, analysed statistically, and compared to data obtained using traditional systems. Finally, we assess the usefulness of the additional measurement of cold air flow with thermal imaging systems.
Guided color consistency optimization for image mosaicking
NASA Astrophysics Data System (ADS)
Xie, Renping; Xia, Menghan; Yao, Jian; Li, Li
2018-01-01
This paper studies the problem of color consistency correction for sequential images with diverse color characteristics. Existing algorithms try to adjust all images to minimize the color differences among them under a unified energy framework; however, the results are prone to present a consistent but unnatural appearance when the color differences between images are large and diverse. In our approach, this problem is addressed effectively by providing a guided initial solution for the global consistency optimization, which avoids converging to a meaningless integrated solution. First, to obtain reliable intensity correspondences in the overlapping regions between image pairs, we propose a histogram extreme point matching algorithm that is robust to image geometric misalignment to some extent. In the absence of extra reference information, the guided initial solution is learned from the major tone of the original images by searching for an image subset to serve as the reference, whose color characteristics are transferred to the others via the paths of a graph analysis. Thus, the final results of the global adjustment take on a consistent color similar to the appearance of the reference image subset. Several groups of convincing experiments on both a synthetic dataset and challenging real ones sufficiently demonstrate that the proposed approach achieves results as good as or better than the state-of-the-art approaches.
NASA Astrophysics Data System (ADS)
Chen, Q. G.; Zhu, H. H.; Xu, Y.; Lin, B.; Chen, H.
2015-08-01
A quantitative method to discriminate caries lesions for a fluorescence imaging system is proposed in this paper. An autofluorescence spectral investigation of 39 tooth samples, classified by International Caries Detection and Assessment System levels, was performed at 405 nm excitation. The major differences between the caries lesions were concentrated in the relative spectral intensity range of 565-750 nm. A spectral parameter, defined as the ratio of the 565-750 nm waveband to the whole spectral range, was calculated. The image component ratio R/(G + B) of the color components was statistically computed by considering the characteristics of our fluorescence color imaging system (e.g. autofluorescence, optical filter, and spectral sensitivity). Results showed that the spectral parameter and the image component ratio present a linear relation. Therefore, the image component ratio was graded as <0.66, 0.66-1.06, 1.06-1.62, and >1.62 to quantitatively classify sound, early decay, established decay, and severe decay tissues, respectively. Finally, fluorescence images of caries were experimentally obtained, and the corresponding image component ratio distribution was compared with the classification result. A method to determine numerical grades of caries using a fluorescence imaging system was thus proposed; it can be applied to similar imaging systems.
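The grading rule reported above translates directly into code: the mean R/(G+B) component ratio over a tooth region selects one of the four classes. The region mask and the small epsilon guarding the division are assumptions.

```python
import numpy as np

GRADES = ((0.66, "sound"), (1.06, "early decay"),
          (1.62, "established decay"), (np.inf, "severe decay"))

def grade_region(rgb_region):
    """Classify a tooth region from its mean image component ratio R/(G+B)."""
    r, g, b = (rgb_region[..., i].astype(float) for i in range(3))
    ratio = np.mean(r / (g + b + 1e-6))   # image component ratio R/(G+B)
    return next(label for upper, label in GRADES if ratio < upper)
```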
Delakis, Ioannis; Hammad, Omer; Kitney, Richard I
2007-07-07
Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting.
Computer-aided diagnosis of early knee osteoarthritis based on MRI T2 mapping.
Wu, Yixiao; Yang, Ran; Jia, Sen; Li, Zhanjun; Zhou, Zhiyang; Lou, Ting
2014-01-01
This work was aimed at studying a method for computer-aided diagnosis of early knee osteoarthritis (OA). Based on the technique of magnetic resonance imaging (MRI) T2 mapping, through computer image processing, feature extraction, calculation, and analysis via construction of a classifier, an effective computer-aided diagnosis method for knee OA was created to assist doctors in the accurate, timely and convenient detection of potential risk of OA. To evaluate this method, a total of 1380 data points from the MRI images of 46 knee joint samples were collected. These data were modeled through linear regression on an offline general platform using the ImageJ software, and a map of the physical parameter T2 was reconstructed. After image processing, the T2 values of ten regions in the WORMS (Whole-Organ Magnetic Resonance Imaging Score) areas of the articular cartilage were extracted as eigenvalues for data mining. Then, an RBF (radial basis function) network classifier was built to classify and identify the collected data. The classifier exhibited a final identification accuracy of 75%, indicating a good result for assisting diagnosis. Since the knee OA classifier constituted by a weights-directly-determined RBF neural network requires no iteration, our results demonstrated that the optimal weights, appropriate center and variance could be obtained through simple procedures. Furthermore, the accuracy for both the training samples and the testing samples from the normal group reached 100%. Finally, the classifier was superior in both time efficiency and classification performance to frequently used classifiers based on iterative learning. Thus it is suitable for use as an aid to computer-aided diagnosis of early knee OA.
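A sketch of a weights-directly-determined RBF classifier like the one described: with fixed centres and a common variance, the output weights follow from a single pseudoinverse rather than iterative training. The centre-selection strategy, sigma, and the 0.5 decision threshold are assumptions.

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian radial-basis design matrix Phi[i, j] = exp(-|x_i - c_j|^2 / 2s^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf(X, y, n_centers=20, sigma=1.0, seed=0):
    """Weights determined directly by a pseudoinverse -- no iterative training."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    W = np.linalg.pinv(rbf_design(X, centers, sigma)) @ y
    return centers, W

def predict(X, centers, W, sigma=1.0):
    return rbf_design(X, centers, sigma) @ W  # threshold at 0.5 for 0/1 labels
```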
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Paramagul, Chintana
2004-05-01
Automated registration of multiple mammograms for CAD depends on accurate nipple identification. We developed two new image analysis techniques based on geometric and texture convergence analyses to improve the performance of our previously developed nipple identification method. A gradient-based algorithm is used to automatically track the breast boundary. The nipple search region along the boundary is then defined by geometric convergence analysis of the breast shape. Three nipple candidates are identified by detecting the changes along the gray level profiles inside and outside the boundary and the changes in the boundary direction. A texture orientation-field analysis method is developed to estimate the fourth nipple candidate based on the convergence of the tissue texture pattern towards the nipple. The final nipple location is determined from the four nipple candidates by a confidence analysis. Our training and test data sets consisted of 419 and 368 randomly selected mammograms, respectively. The nipple location identified on each image by an experienced radiologist was used as the ground truth. For 118 of the training and 70 of the test images, the radiologist could not positively identify the nipple, but provided an estimate of its location. These were referred to as invisible nipple images. In the training data set, 89.37% (269/301) of the visible nipples and 81.36% (96/118) of the invisible nipples could be detected within 1 cm of the truth. In the test data set, 92.28% (275/298) of the visible nipples and 67.14% (47/70) of the invisible nipples were identified within 1 cm of the truth. In comparison, our previous nipple identification method without using the two convergence analysis techniques detected 82.39% (248/301), 77.12% (91/118), 89.93% (268/298) and 54.29% (38/70) of the nipples within 1 cm of the truth for the visible and invisible nipples in the training and test sets, respectively. The results indicate that the nipple on mammograms can be detected accurately. This will be an important step towards automatic multiple image analysis for CAD techniques.
Rexhepaj, Elton; Brennan, Donal J; Holloway, Peter; Kay, Elaine W; McCann, Amanda H; Landberg, Goran; Duffy, Michael J; Jirstrom, Karin; Gallagher, William M
2008-01-01
Manual interpretation of immunohistochemistry (IHC) is a subjective, time-consuming and variable process, with an inherent intra-observer and inter-observer variability. Automated image analysis approaches offer the possibility of developing rapid, uniform indicators of IHC staining. In the present article we describe the development of a novel approach for automatically quantifying oestrogen receptor (ER) and progesterone receptor (PR) protein expression assessed by IHC in primary breast cancer. Two cohorts of breast cancer patients (n = 743) were used in the study. Digital images of breast cancer tissue microarrays were captured using the Aperio ScanScope XT slide scanner (Aperio Technologies, Vista, CA, USA). Image analysis algorithms were developed using MatLab 7 (MathWorks, Apple Hill Drive, MA, USA). A fully automated nuclear algorithm was developed to discriminate tumour from normal tissue and to quantify ER and PR expression in both cohorts. Random forest clustering was employed to identify optimum thresholds for survival analysis. The accuracy of the nuclear algorithm was initially confirmed by a histopathologist, who validated the output in 18 representative images. In these 18 samples, an excellent correlation was evident between the results obtained by manual and automated analysis (Spearman's rho = 0.9, P < 0.001). Optimum thresholds for survival analysis were identified using random forest clustering. This revealed 7% positive tumour cells as the optimum threshold for the ER and 5% positive tumour cells for the PR. Moreover, a 7% cutoff level for the ER predicted a better response to tamoxifen than the currently used 10% threshold. Finally, linear regression was employed to demonstrate a more homogeneous pattern of expression for the ER (R = 0.860) than for the PR (R = 0.681). In summary, we present data on the automated quantification of the ER and the PR in 743 primary breast tumours using a novel unsupervised image analysis algorithm. This novel approach provides a useful tool for the quantification of biomarkers on tissue specimens, as well as for objective identification of appropriate cutoff thresholds for biomarker positivity. It also offers the potential to identify proteins with a homogeneous pattern of expression.
Quantum computation in the analysis of hyperspectral data
NASA Astrophysics Data System (ADS)
Gomez, Richard B.; Ghoshal, Debabrata; Jayanna, Anil
2004-08-01
Recent research on the topic of quantum computation provides us with quantum algorithms offering higher efficiency and speedup compared to their classical counterparts. In this paper, it is our intent to provide the results of our investigation of several applications of such quantum algorithms, especially Grover's search algorithm, to the analysis of hyperspectral data. We found many parallels with Grover's method in existing data processing work that makes use of classical spectral matching algorithms. Our efforts also included the study of several methods dealing with hyperspectral image analysis in which classical computation methods involving large data sets could be replaced with quantum computation methods. The crux of the problem in computation involving a hyperspectral image data cube is to convert the large amount of data in high-dimensional space into real information. Currently, using the classical model, different time-consuming methods and steps are necessary to analyze these data, including animation, the minimum noise fraction transform, the pixel purity index algorithm, N-dimensional scatter plots, and identification of endmember spectra. If a quantum model of computation involving hyperspectral image data can be developed and formalized, it is highly likely that information retrieval from hyperspectral image data cubes would be a much easier process and the final information content would be much more meaningful and timely. In that case, dimensionality would not be a curse, but a blessing.
Járvás, Gábor; Varga, Tamás; Szigeti, Márton; Hajba, László; Fürjes, Péter; Rajta, István; Guttman, András
2018-02-01
As a continuation of our previously published work, this paper presents a detailed evaluation of a microfabricated cell capture device utilizing a doubly tilted micropillar array. The device was fabricated using a novel hybrid technology based on the combination of proton beam writing and conventional lithography techniques. Tilted pillars offer unique flow characteristics and support enhanced fluidic interaction for improved immunoaffinity-based cell capture. The performance of the microdevice was evaluated by an in-house developed single-cell tracking system based on image sequence analysis. Individual cell tracking allowed an in-depth analysis of the cell-chip surface interaction mechanism from a hydrodynamic point of view. Simulation results were validated using the hybrid device and the optimized surface functionalization procedure. Finally, the cell capture capability of this new-generation microdevice was demonstrated by efficiently arresting cells from an HT29 cell-line suspension. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Wang, Juan; Nishikawa, Robert M; Yang, Yongyi
2017-07-01
Mammograms acquired with full-field digital mammography (FFDM) systems are provided in both "for-processing" and "for-presentation" image formats. For-presentation images are traditionally intended for visual assessment by the radiologists. In this study, we investigate the feasibility of using for-presentation images in computerized analysis and diagnosis of microcalcification (MC) lesions. We make use of a set of 188 matched mammogram image pairs of MC lesions from 95 cases (biopsy proven), in which both for-presentation and for-processing images are provided for each lesion. We then analyze and characterize the MC lesions from for-presentation images and compare them with their counterparts in for-processing images. Specifically, we consider three important aspects in computer-aided diagnosis (CAD) of MC lesions. First, we quantify each MC lesion with a set of 10 image features of clustered MCs and 12 textural features of the lesion area. Second, we assess the detectability of individual MCs in each lesion from the for-presentation images by a commonly used difference-of-Gaussians (DoG) detector. Finally, we study the diagnostic accuracy in discriminating between benign and malignant MC lesions from the for-presentation images by a pretrained support vector machine (SVM) classifier. To accommodate the underlying background suppression and image enhancement in for-presentation images, a normalization procedure is applied. The quantitative image features of MC lesions from for-presentation images are highly consistent with those from for-processing images. The values of Pearson's correlation coefficient between features from the two formats range from 0.824 to 0.961 for the 10 MC image features, and from 0.871 to 0.963 for the 12 textural features. In detection of individual MCs, the FROC curve from for-presentation is similar to that from for-processing. In particular, at a sensitivity level of 80%, the average number of false-positives (FPs) per image region is 9.55 for both for-presentation and for-processing images. Finally, for classifying MC lesions as malignant or benign, the area under the ROC curve is 0.769 in for-presentation, compared to 0.761 in for-processing (P = 0.436). The quantitative results demonstrate that MC lesions in for-presentation images are highly consistent with those in for-processing images in terms of image features, detectability of individual MCs, and classification accuracy between malignant and benign lesions. These results indicate that for-presentation images can be compatible with for-processing images for use in CAD algorithms for MC lesions. © 2017 American Association of Physicists in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitrović, Uroš; Pernuš, Franjo; Likar, Boštjan
Purpose: Three-dimensional to two-dimensional (3D–2D) image registration is a key to fusion and simultaneous visualization of valuable information contained in 3D pre-interventional and 2D intra-interventional images with the final goal of image guidance of a procedure. In this paper, the authors focus on 3D–2D image registration within the context of intracranial endovascular image-guided interventions (EIGIs), where the 3D and 2D images are generally acquired with the same C-arm system. The accuracy and robustness of any 3D–2D registration method, to be used in a clinical setting, is influenced by (1) the method itself, (2) uncertainty of the initial pose of the 3D image from which registration starts, (3) uncertainty of the C-arm's geometry and pose, and (4) the number of 2D intra-interventional images used for registration, which is generally one and at most two. The study of these influences requires rigorous and objective validation of any 3D–2D registration method against a highly accurate reference or "gold standard" registration, performed on clinical image datasets acquired in the context of the intervention. Methods: The registration process is split into two sequential, i.e., initial and final, registration stages. The initial stage is either machine-based or template matching. The latter aims to reduce possibly large in-plane translation errors by matching a projection of the 3D vessel model and the 2D image. In the final registration stage, four state-of-the-art intrinsic image-based 3D–2D registration methods, which involve simultaneous refinement of rigid-body and C-arm parameters, are evaluated. For objective validation, the authors acquired an image database of 15 patients undergoing cerebral EIGI, for which accurate gold standard registrations were established by fiducial marker coregistration. Results: Based on target registration error, the obtained success rates of 3D to a single 2D image registration after initial machine-based and template matching and final registration involving C-arm calibration were 36%, 73%, and 93%, respectively, while registration accuracy of 0.59 mm was the best after final registration. By compensating in-plane translation errors by initial template matching, the success rates achieved after the final stage improved consistently for all methods, especially if C-arm calibration was performed simultaneously with the 3D–2D image registration. Conclusions: Because the tested methods perform simultaneous C-arm calibration and 3D–2D registration based solely on anatomical information, they have a high potential for automation and thus for immediate integration into the current interventional workflow. One of the authors' main contributions is also the comprehensive and representative validation performed under realistic conditions as encountered during cerebral EIGI.
Ferritin heavy chain as a molecular imaging reporter gene in glioma xenografts.
Cheng, Sen; Mi, Ruifang; Xu, Yu; Jin, Guishan; Zhang, Junwen; Zhou, Yiqiang; Chen, Zhengguang; Liu, Fusheng
2017-06-01
The development of glioma therapy in clinical practice (e.g., gene therapy) calls for efficient visualization and tracking of glioma cells in vivo. Human ferritin heavy chain (hFTH) is a novel reporter gene for magnetic resonance imaging (MRI). This study proposes hFTH as a reporter gene for MR molecular imaging in glioma xenografts. Rat C6 glioma cells were infected with packaged lentivirus carrying the hFTH and EGFP genes and obtained by fluorescence-activated cell sorting. The iron-loading ability was analyzed with a total iron reagent kit. Glioma nude mouse models were established subcutaneously and intracranially. In vivo tumor bioluminescence was then performed via the IVIS spectrum imaging system, and MR imaging analysis was carried out on a 7T animal MRI scanner. Finally, the expression of hFTH was analyzed by western blotting and histological analysis. Stable glioma cells carrying the hFTH and EGFP reporter genes were successfully obtained. The intracellular iron concentration was increased without impairing the cell proliferation rate. Glioma cells overexpressing hFTH showed significantly decreased signal intensity on T2-weighted MRI both in vitro and in vivo. EGFP fluorescent imaging could also be detected in the subcutaneous and intracranial glioma xenografts. Moreover, the expression of the transferrin receptor was significantly increased in glioma cells carrying the hFTH reporter gene. Our study illustrates that hFTH efficiently generates cellular MR imaging contrast in glioma by regulating the expression of the transferrin receptor. This might be a useful reporter gene for cell tracking and MR molecular imaging in glioma diagnosis, gene therapy and tumor metastasis studies.
R, GeethaRamani; Balasubramanian, Lakshmi
2018-07-01
Macula segmentation and fovea localization are among the primary tasks in retinal analysis, as these structures are responsible for detailed vision. Existing approaches required segmentation of other retinal structures, viz. the optic disc and blood vessels, for this purpose. This work avoids knowledge of other retinal structures and applies data mining techniques to segment the macula. An unsupervised clustering algorithm is exploited for this purpose. The selection of initial cluster centres has a great impact on the performance of clustering algorithms; hence, a heuristic-based clustering in which the initial centres are selected based on measures describing the statistical distribution of the data is incorporated in the proposed methodology. The initial phase of the proposed framework includes image cropping, green channel extraction, contrast enhancement and application of mathematical closing. Then, the pre-processed image is subjected to the heuristic-based clustering, yielding a binary map. The binary image is post-processed to eliminate unwanted components. Finally, the component that possesses the minimum intensity is taken as the macula, and its centre constitutes the fovea. The proposed approach outperforms existing works, reporting that 100% of HRF, 100% of DRIVE, 96.92% of DIARETDB0, 97.75% of DIARETDB1, 98.81% of HEI-MED, 90% of STARE and 99.33% of MESSIDOR images satisfy the 1R criterion, a standard adopted for evaluating the performance of macula and fovea identification. The proposed system thus helps ophthalmologists in identifying the macula, thereby facilitating detection of any abnormality present within the macula region. Copyright © 2018 Elsevier B.V. All rights reserved.
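A minimal sketch of the pipeline above in Python with OpenCV and scikit-learn; the CLAHE enhancement, percentile-based initial centres, and three-cluster setup are assumptions standing in for the paper's exact heuristics:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def segment_macula(rgb):
    g = rgb[:, :, 1]                                   # green channel extraction
    g = cv2.createCLAHE(clipLimit=2.0).apply(g)        # contrast enhancement (assumed CLAHE)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
    g = cv2.morphologyEx(g, cv2.MORPH_CLOSE, kernel)   # mathematical closing
    data = g.reshape(-1, 1).astype(np.float64)
    # Heuristic initial centres from the intensity distribution (assumed
    # percentiles; the macula is expected near the dark end).
    centres = np.percentile(data, [2, 50, 98]).reshape(-1, 1)
    km = KMeans(n_clusters=3, init=centres, n_init=1).fit(data)
    dark = int(np.argmin(km.cluster_centers_))
    binary = (km.labels_.reshape(g.shape) == dark).astype(np.uint8)
    # Post-processing: keep the connected component with minimum mean intensity.
    n, cc = cv2.connectedComponents(binary)
    means = [g[cc == i].mean() for i in range(1, n)]
    macula = cc == 1 + int(np.argmin(means))
    fovea = np.column_stack(np.nonzero(macula)).mean(axis=0)  # centre = fovea
    return macula, fovea
```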
Nanomaterial characterization through image treatment, 3D reconstruction and AI techniques
NASA Astrophysics Data System (ADS)
Lopez de Uralde Huarte, Juan Jose
Nanotechnology is not only the science of the future, but indeed the science of today. It is used in all sectors, from health to energy, including information technologies and transport. For the present investigation, we have taken carbon black as a use case. This nanomaterial is mixed with a wide variety of materials to improve their properties, such as abrasion resistance, wear in tires and plastics, or tinting strength in pigments. Nowadays, indirect methods of analysis, like oil absorption or nitrogen adsorption, are the most common techniques in the nanomaterial industry. These procedures measure the change in physical state while adding oil or nitrogen; in this way, the surface area is estimated and related to the properties of the material. Nevertheless, we have chosen to improve the existing direct methods, which consist of analysing microscopy images of nanomaterials. We have made progress in the image processing treatments and in the extracted features; in fact, some of them outperform features already described in the literature. In addition, we have applied machine learning, for the first time in the literature, to the categorization of aggregates. In this way, we automatically identify their morphology, which determines the final properties of the material they are mixed with. Finally, we have presented an aggregate reconstruction genetic algorithm that, with only two orthogonal images, provides more information than a tomography, which needs many images. To summarize, we have improved the state of the art in direct analysis techniques, allowing in the near future the replacement of the current indirect techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallawi, A; Farrell, T; Diamond, K
2014-08-15
Automated atlas-based segmentation has recently been evaluated for use in planning prostate cancer radiotherapy. In the typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on the final segmentation accuracy. Prostate length (PL), right femoral head diameter (RFHD), and left femoral head diameter (LFHD) were measured in CT images of 20 patients. Each subject was then taken as the target image to which all remaining 19 images were affinely registered. For each pair of registered images, the overlap between prostate and femoral head contours was quantified using the Dice Similarity Coefficient (DSC). Finally, we designed an atlas selection strategy that computed the ratio of PL (prostate segmentation), RFHD (right femur segmentation), and LFHD (left femur segmentation) between the target subject and each subject in the atlas database. The five atlas subjects yielding ratios nearest to one were then selected for further analysis. RFHD and LFHD were excellent parameters for atlas selection, achieving a mean femoral head DSC of 0.82 ± 0.06. PL had a moderate ability to select the most similar prostate, with a mean DSC of 0.63 ± 0.18. The DSCs obtained with the proposed selection method were slightly lower than the maxima established using brute force, but this does not include the potential improvements expected with deformable registration. Atlas selection based on PL for the prostate and on femoral diameter for the femoral heads provides reasonable segmentation accuracy.
Copper Decoration of Carbon Nanotubes and High Resolution Electron Microscopy
NASA Astrophysics Data System (ADS)
Probst, Camille
A new process for decorating carbon nanotubes with copper was developed for the fabrication of aluminum-nanotube nanocomposites. The process consists of three stages: oxidation, activation and electroless copper plating of the nanotubes. The oxidation step was required to create chemical functionality on the nanotubes, essential for the activation step. Then, catalytic tin-palladium nanoparticles were deposited on the tubes. Finally, during the electroless copper plating, copper particles with sizes between 20 and 60 nm were uniformly deposited on the nanotube surfaces. The reproducibility of the process was shown by using another type of carbon nanotube. The fabrication of aluminum-nanotube nanocomposites was tested by aluminum vacuum infiltration. Although the infiltration of carbon nanotubes did not produce the expected results, an interesting electron microscopy sample was discovered during the process development: the activated carbon nanotubes. Secondly, scanning transmission electron microscopy (STEM) imaging in the SEM was analysed. The images were obtained with a new detector on a field emission scanning electron microscope (Hitachi S-4700). Various parameters were analysed using two different samples: the activated carbon nanotubes (previously obtained) and gold-palladium nanodeposits. The influence of working distance, accelerating voltage and the sample used on the spatial resolution of images obtained with SMART (Scanning Microscope Assessment and Resolution Testing) was analysed. An optimum working distance for the best spatial resolution, depending on the sample analysed, was found for imaging in STEM mode. Finally, the relation between probe size and the spatial resolution of backscattered electron (BSE) images was studied. An image synthesis method was developed to generate BSE images from backscattered electron coefficients obtained with the CASINO software. The spatial resolution of the images was determined using SMART. The analysis showed that using a probe size smaller than the size of the observed object (sample features) does not improve the spatial resolution. In addition, the effects of the accelerating voltage, the current intensity, and the sample geometry and composition were analysed.
Gieraerts, Christopher; Demaerel, Philippe; Van Damme, Philip; Wilms, Guido
2013-01-01
We present a case in which mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes syndrome mimicked the clinical and radiological signs of herpes simplex encephalitis. In a patient with subacute encephalopathy, computed tomography and magnetic resonance imaging showed lesions in both temporal lobes extending to both insular regions, with sparing of the lentiform nuclei, and in both posterior straight and cingulate gyri. The final diagnosis of mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes syndrome was based on biochemical investigations of cerebrospinal fluid, electromyography, muscle biopsy, and genetic analysis. On diffusion-weighted imaging, diffusion restriction was present in some parts of the lesions but not throughout the entire lesions. We suggest that this could be an important sign in the differential diagnosis with herpes simplex encephalitis.
A novel visual saliency analysis model based on dynamic multiple feature combination strategy
NASA Astrophysics Data System (ADS)
Lv, Jing; Ye, Qi; Lv, Wen; Zhang, Libao
2017-06-01
The human visual system can quickly focus on a small number of salient objects. This process is known as visual saliency analysis, and the salient objects are called the focus of attention (FOA). The visual saliency analysis mechanism can be used to extract salient regions and analyze the saliency of objects in an image, which saves time and avoids unnecessary consumption of computing resources. In this paper, a novel visual saliency analysis model based on a dynamic multiple-feature combination strategy is introduced. In the proposed model, we first generate multi-scale feature maps of intensity, color and orientation features using Gaussian pyramids and the center-surround difference. Then, we evaluate the contribution of each feature map to the saliency map according to the area of its salient regions and their average intensity, and attach different weights to different features according to their importance. Finally, we choose the largest salient region generated by the region-growing method to perform the evaluation. Experimental results show that the proposed model can not only achieve higher accuracy in saliency map computation compared with traditional saliency analysis models, but also extract salient regions with arbitrary shapes, which is of great value for image analysis and understanding.
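For the intensity channel, the multi-scale map generation and centre-surround step might look as follows; the scale choices are assumptions in the spirit of Itti-style models, and the dynamic weighting described above is omitted:

```python
import cv2
import numpy as np

def intensity_saliency(gray, levels=6):
    pyr = [gray.astype(np.float32)]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))       # Gaussian pyramid
    h, w = gray.shape
    sal = np.zeros((h, w), np.float32)
    for c in (2, 3):                           # centre scales (assumed)
        for delta in (2, 3):                   # centre-surround gaps (assumed)
            s = c + delta
            if s >= len(pyr):
                continue
            fine = cv2.resize(pyr[c], (w, h))
            coarse = cv2.resize(pyr[s], (w, h))
            sal += np.abs(fine - coarse)       # center-surround difference
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)
```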
Implementing and validating pan-sharpening algorithms in open-source software
NASA Astrophysics Data System (ADS)
Pesántez-Cobos, Paúl; Cánovas-García, Fulgencio; Alonso-Sarría, Francisco
2017-10-01
Several approaches have been used in remote sensing to integrate images with different spectral and spatial resolutions in order to obtain fused enhanced images. The objective of this research is three-fold: to implement in R three image fusion techniques (High Pass Filter, Principal Component Analysis and Gram-Schmidt); to apply these techniques to merging multispectral and panchromatic images from five different images with different spatial resolutions; and finally, to evaluate the results using the universal image quality index (Q index) and the ERGAS index. As regards qualitative analysis, Landsat-7 and Landsat-8 show greater colour distortion with the three pansharpening methods, although the results for the other images were better. The Q index revealed that HPF fusion performs better for the QuickBird, IKONOS and Landsat-7 images, followed by GS fusion, whereas in the case of the Landsat-8 and Natmur-08 images, the results were more even. Regarding the ERGAS spatial index, the PCA algorithm performed better for the QuickBird, IKONOS, Landsat-7 and Natmur-08 images, followed closely by the GS algorithm. Only for the Landsat-8 image did the GS fusion present the best result. In the evaluation of the spectral components, HPF results tended to be better and PCA results worse; the opposite was the case with the spatial components. Better quantitative results are obtained for the Landsat-7 and Landsat-8 images with the three fusion methods than for the QuickBird, IKONOS and Natmur-08 images. This contrasts with the qualitative evaluation, reflecting the importance of separating the two evaluation approaches (qualitative and quantitative). Significant disagreement may arise when different methodologies are used to assess the quality of an image fusion. Moreover, it is not possible to designate, a priori, a given algorithm as the best, not only because of the different characteristics of the sensors, but also because of the different atmospheric conditions or peculiarities of the different study areas, among other reasons.
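Although the study implements the methods in R, the High Pass Filter fusion reduces to injecting the panchromatic high-frequency detail into each upsampled multispectral band. A sketch of that one technique; the box-filter size and the std-ratio injection gain are assumptions, and shapes are assumed to divide evenly:

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hpf_pansharpen(ms, pan, box=5):
    """ms: (bands, h, w) multispectral; pan: (H, W) panchromatic, H = scale * h."""
    pan = pan.astype(np.float64)
    scale = pan.shape[0] / ms.shape[1]
    high = pan - uniform_filter(pan, size=box)   # high-pass detail of the PAN image
    fused = []
    for band in ms.astype(np.float64):
        up = zoom(band, scale, order=1)          # resample the band to the PAN grid
        gain = band.std() / pan.std()            # assumed injection weight
        fused.append(up + gain * high)
    return np.stack(fused)
```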
Representation of scientific methodology in secondary science textbooks
NASA Astrophysics Data System (ADS)
Binns, Ian C.
The purpose of this investigation was to assess the representation of scientific methodology in secondary science textbooks. More specifically, this study looked at how textbooks introduced scientific methodology and to what degree the examples from the rest of the textbook, the investigations, and the images were consistent with the text's description of scientific methodology, if at all. The sample included eight secondary science textbooks from two publishers, McGraw-Hill/Glencoe and Harcourt/Holt, Rinehart & Winston. Data consisted of all student text and teacher text that referred to scientific methodology. Second, all investigations in the textbooks were analyzed. Finally, any images that depicted scientists working were also collected and analyzed. The text analysis and activity analysis used the ethnographic content analysis approach developed by Altheide (1996). The rubrics used for the text analysis and activity analysis were initially guided by the Benchmarks (AAAS, 1993), the NSES (NRC, 1996), and the nature of science literature. Preliminary analyses helped to refine each of the rubrics and grounded them in the data. Image analysis used stereotypes identified in the DAST literature. Findings indicated that all eight textbooks presented mixed views of scientific methodology in their initial descriptions. Five textbooks placed more emphasis on the traditional view and three placed more emphasis on the broad view. Results also revealed that the initial descriptions, examples, investigations, and images all emphasized the broad view for Glencoe Biology and the traditional view for Chemistry: Matter and Change. The initial descriptions, examples, investigations, and images in the other six textbooks were not consistent. Overall, the textbook with the most appropriate depiction of scientific methodology was Glencoe Biology and the textbook with the least appropriate depiction of scientific methodology was Physics: Principles and Problems. These findings suggest that compared to earlier investigations, textbooks have begun to improve in how they represent scientific methodology. However, there is still much room for improvement. Future research needs to consider how textbooks impact teachers' and students' understandings of scientific methodology.
A neural network detection model of spilled oil based on the texture analysis of SAR image
NASA Astrophysics Data System (ADS)
An, Jubai; Zhu, Lisong
2006-01-01
A Radial Basis Function Neural Network (RBFNN) model is investigated for the detection of spilled oil based on texture analysis of SAR imagery. In this paper, to take advantage of the abundant texture information in SAR imagery, texture features are extracted by both the wavelet transform and the Gray Level Co-occurrence Matrix. The RBFNN model is fed with a vector of these texture features, and is trained and tested on the sample data set of feature vectors. Finally, a SAR image is classified by this model. The classification results for a spilled-oil SAR image show that the classification accuracy for oil spill is 86.2% for the RBFNN model using both wavelet texture and gray-level texture, while it is 78.0% for the same RBFNN model using only wavelet texture as input. The model using both the wavelet transform and the Gray Level Co-occurrence Matrix is thus more effective than the one using wavelet texture alone; furthermore, it preserves complicated proximity relationships and delivers good classification performance.
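The grey-level co-occurrence part of such a feature vector can be sketched with scikit-image; the distances, angles, and chosen properties are assumptions, and the wavelet features and RBF network itself are omitted:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch):
    """patch: uint8 SAR image window; returns a small texture feature vector."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.array([graycoprops(glcm, p).mean() for p in props])
```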
NASA Astrophysics Data System (ADS)
Khalifa, Intissar; Ejbali, Ridha; Zaied, Mourad
2018-04-01
To survive the competition, companies always think about having the best employees. The selection depends on the answers to the interviewer's questions and the behavior of the candidate during the interview session. The study of this behavior is usually based on a psychological analysis of the movements accompanying the answers and discussions. Few techniques have been proposed to date to automatically analyze a candidate's non-verbal behavior. This paper is part of a work-psychology recognition system; it concentrates on spontaneous hand gestures, which according to psychologists are very significant in interviews. We propose a motion history representation of the hand based on a hybrid approach that merges optical flow and history motion images. The optical flow technique is used first to detect hand motions in each frame of a video sequence. Second, we use history motion images (HMI) to accumulate the output of the optical flow in order to finally obtain a good representation of the hand's local movement in a global temporal template.
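A minimal sketch of the hybrid idea with OpenCV: Farneback optical flow gates the moving pixels, and a decaying accumulator plays the role of the history motion image. The magnitude threshold and decay constant are assumptions:

```python
import cv2
import numpy as np

def hand_motion_history(frames, mag_thresh=1.0, decay=0.9):
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    hmi = np.zeros(prev.shape[:2], np.float32)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)            # motion magnitude per pixel
        moving = (mag > mag_thresh).astype(np.float32)
        hmi = np.maximum(moving, decay * hmi)         # accumulate with decay
        prev = gray
    return hmi                                        # global temporal template
```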
A method of directly extracting multiwave angle-domain common-image gathers
NASA Astrophysics Data System (ADS)
Han, Jianguang; Wang, Yun
2017-10-01
Angle-domain common-image gathers (ADCIGs) provide an effective way to perform migration velocity analysis and amplitude-versus-angle analysis in oil and gas seismic exploration. On the basis of multi-component Gaussian beam prestack depth migration (GB-PSDM), an alternative method of directly extracting multiwave ADCIGs is presented in this paper. We first introduce multi-component GB-PSDM, where a wavefield separation is performed to obtain the separated PP- and PS-wave seismic records before migration imaging of multiwave seismic data. Then, the principle of extracting PP- and PS-ADCIGs using GB-PSDM is presented. The propagation angle can be obtained from the real-valued travel time of the Gaussian beam in the course of GB-PSDM, which can be used to calculate the incidence and reflection angles. Two kinds of ADCIGs can be extracted for the PS-wave, one based on the P-wave incidence angle and the other on the S-wave reflection angle. In this paper, we use the incident angle to plot the ADCIGs for both PP- and PS-waves. Finally, tests on synthetic examples show that the method introduced here is accurate and effective.
Classification and Recognition of Tomb Information in Hyperspectral Image
NASA Astrophysics Data System (ADS)
Gu, M.; Lyu, S.; Hou, M.; Ma, S.; Gao, Z.; Bai, S.; Zhou, P.
2018-04-01
There are a large number of materials with important historical information in ancient tombs. However, in many cases, these substances have become obscure and indistinguishable to the human naked eye or a true-colour camera. In order to classify and identify materials in ancient tombs effectively, this paper applies hyperspectral imaging technology to the archaeological study of an ancient tomb in Shanxi province. Firstly, the feature bands containing the main information at the bottom of the ancient tomb are selected by a Principal Component Analysis (PCA) transformation to reduce the data dimensionality. Then, image classification is performed using a Support Vector Machine (SVM) on the feature bands. Finally, the material at the bottom of the ancient tomb is identified by spectral analysis and spectral matching. The results show that SVM based on feature bands can not only ensure the classification accuracy, but also shorten the data processing time and improve the classification efficiency. In the material identification, it was found that what had been identified as the same material in visible light is actually two different substances. This result provides a new reference and research idea for archaeological work.
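The PCA-then-SVM chain maps naturally onto scikit-learn; a sketch under assumed array shapes, with placeholder labels, and without the spectral-matching step:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def classify_cube(cube, train_mask, train_labels, n_components=10):
    """cube: (h, w, bands) hyperspectral image; train_mask: boolean (h, w)."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(np.float64)
    feats = PCA(n_components=n_components).fit_transform(flat)  # feature bands
    clf = SVC(kernel="rbf").fit(feats[train_mask.ravel()], train_labels)
    return clf.predict(feats).reshape(h, w)                     # class map
```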
Statistical characterization of portal images and noise from portal imaging systems.
González-López, Antonio; Morales-Sánchez, Juan; Verdú-Monedero, Rafael; Larrey-Ruiz, Jorge
2013-06-01
In this paper, we consider the statistical characteristics of so-called portal images, which are acquired prior to radiotherapy treatment, as well as the noise present in portal imaging systems, in order to analyze whether the well-known noise and image features of other image modalities, such as natural images, can be found in the portal imaging modality. The study is carried out in the spatial image domain, in the Fourier domain, and finally in the wavelet domain. The probability density of the noise in the spatial image domain, the power spectral densities of the image and noise, and the marginal, joint, and conditional statistical distributions of the wavelet coefficients are estimated. Moreover, the statistical dependencies between noise and signal are investigated. The obtained results are compared with practical and useful references, such as the characteristics of natural images and white noise. Finally, we discuss the implications of the obtained results for several noise reduction methods that operate in the wavelet domain.
Color image lossy compression based on blind evaluation and prediction of noise characteristics
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with adaptive lossy JPEG compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on predicting the noise and blur parameters from an analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will be subjected to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode; however, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large set of real-life color images acquired by digital cameras and are shown to provide a more than two-fold increase in average CR compared to the SHQ mode without introducing visible distortions with respect to SHQ-compressed images.
Fundamental techniques for resolution enhancement of average subsampled images
NASA Astrophysics Data System (ADS)
Shen, Day-Fann; Chiu, Chui-Wen
2012-07-01
Although single-image resolution enhancement, otherwise known as super-resolution, is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated, but still exists, in its LR version after the fundamental averaging-and-subsampling process. As a result, we propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as the pre- and post-process, respectively, around an interpolation algorithm, to partially restore the attenuated information in a super-resolution (SR) enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods, including bilinear, bicubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple to implement and generally applicable to all average-subsampled LR images. MLF and ICP, separately or together, can be integrated into most interpolation methods that attempt to restore the original HR contents. Finally, the idea of MLF and ICP can also be applied to average-subsampled one-dimensional signals.
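The abstract does not give the MLF kernel or the ICP rule, so the following sketch substitutes a plain Laplacian boost before bicubic interpolation and a mean/std rescaling afterwards, purely to illustrate the pre/post structure around an interpolator:

```python
import cv2
import numpy as np

def mlf_icp_upscale(lr, factor=2, alpha=0.5):
    lr = lr.astype(np.float32)
    lap = cv2.Laplacian(lr, cv2.CV_32F)
    pre = lr - alpha * lap                          # pre-process: Laplacian boost (assumed form)
    hr = cv2.resize(pre, None, fx=factor, fy=factor,
                    interpolation=cv2.INTER_CUBIC)  # any interpolation method fits here
    # Post-process: pull intensity statistics back toward the input (assumed ICP rule).
    hr = (hr - hr.mean()) / (hr.std() + 1e-6) * lr.std() + lr.mean()
    return np.clip(hr, 0, 255)
```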
NASA Astrophysics Data System (ADS)
Lewis, Adam D.; Katta, Nitesh; McElroy, Austin; Milner, Thomas; Fish, Scott; Beaman, Joseph
2018-04-01
Optical coherence tomography (OCT) has shown promise as a process sensor in selective laser sintering (SLS) due to its ability to yield depth-resolved data not attainable with conventional sensors. However, OCT images of nylon 12 powder and nylon 12 components fabricated via SLS contain artifacts that have not been previously investigated in the literature. A better understanding of light interactions with SLS powder and components is foundational for further research expanding the utility of OCT imaging in SLS and other additive manufacturing (AM) sensing applications. Specifically, in this work, nylon powder and sintered parts were imaged in air and in an index-matching liquid. Subsequent image analysis revealed the cause of "signal-tail" OCT image artifacts to be a combination of both inter- and intraparticle multiple scattering and reflections. Then, the OCT imaging depth of nylon 12 powder and the contrast-to-noise ratio of a sintered part were improved through the use of an index-matching liquid. Finally, polymer crystals were identified as the main source of intraparticle scattering in nylon 12 powder. Implications of these results for future research utilizing OCT in SLS are also given.
NASA Astrophysics Data System (ADS)
Raza, Shan-e.-Ahmed; Marjan, M. Q.; Arif, Muhammad; Butt, Farhana; Sultan, Faisal; Rajpoot, Nasir M.
2015-03-01
One of the main factors for high workload in pulmonary pathology in developing countries is the relatively large proportion of tuberculosis (TB) cases, which can be detected with high throughput using automated approaches. TB is caused by Mycobacterium tuberculosis, which appears as thin, rod-shaped acid-fast bacilli (AFB) in Ziehl-Neelsen (ZN) stained sputum smear samples. In this paper, we present an algorithm for automatic detection of AFB in digitized images of ZN stained sputum smear samples under a light microscope. A key component of the proposed algorithm is the enhancement of the raw input image using a novel anisotropic tubular filter (ATF), which suppresses the background noise while simultaneously enhancing strong anisotropic features of AFBs present in the image. The resulting image is then segmented using color features and candidate AFBs are identified. Finally, a support vector machine classifier using morphological features from candidate AFBs decides whether a given image is AFB positive or not. We demonstrate the effectiveness of the proposed ATF method with two different feature sets by showing that the proposed image analysis pipeline results in higher accuracy and F1-score than the same pipeline with standard median filtering for image enhancement.
Blind subjects construct conscious mental images of visual scenes encoded in musical form.
Cronly-Dillon, J; Persaud, K C; Blore, R
2000-01-01
Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form. PMID:11413637
NASA Astrophysics Data System (ADS)
Montoya, Gustavo; Valecillos, María; Romero, Carlos; Gonzáles, Dosinda
2009-11-01
In the present research, a digital image processing-based automated algorithm was developed to determine the phase height, holdup, and statistical distribution of drop size in a two-phase water-air system, using pipes with 0°, 10°, and 90° of inclination. Digital images were acquired with a high-speed camera (up to 4500 fps), using equipment that consists of three acrylic pipes with diameters of 1.905, 3.175, and 4.445 cm. Each pipe is arranged in two sections of 8 m in length. Various flow patterns were visualized for different superficial velocities of water and air. Finally, using the image processing program designed in Matlab/Simulink, the captured images were processed to establish the parameters mentioned above. The image processing algorithm is based on frequency-domain analysis of the source pictures, which allows finding the phase boundary as the edge between water and air through a Sobel filter that extracts the high-frequency components of the image. The drop size was found using the calculation of the Feret diameter. Three flow patterns were observed: annular, ST, and ST&MI.
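The interface-detection core (Sobel filtering followed by a per-column edge pick) is compact; a sketch, with the strongest-edge rule as an assumption:

```python
import cv2
import numpy as np

def phase_height(gray):
    edges = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical intensity gradient
    interface_rows = np.abs(edges).argmax(axis=0)       # strongest edge per column
    return interface_rows                               # water-air boundary height
```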
DEEP U BAND AND R IMAGING OF GOODS-SOUTH: OBSERVATIONS, DATA REDUCTION AND FIRST RESULTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nonino, M.; Cristiani, S.; Vanzella, E.
2009-08-01
We present deep imaging in the U band covering an area of 630 arcmin² centered on the southern field of the Great Observatories Origins Deep Survey (GOODS). The data were obtained with the VIMOS instrument at the European Southern Observatory (ESO) Very Large Telescope. The final images reach a magnitude limit U_lim ≈ 29.8 (AB, 1σ, in a 1'' radius aperture), and have good image quality, with full width at half-maximum ≈0.8''. They are significantly deeper than previous U-band images available for the GOODS fields, and better match the sensitivity of other multiwavelength GOODS photometry. The deeper U-band data yield significantly improved photometric redshifts, especially in key redshift ranges such as 2 < z < 4, and deeper color-selected galaxy samples, e.g., Lyman break galaxies at z ≈ 3. We also present the co-addition of archival ESO VIMOS R-band data, with R_lim ≈ 29 (AB, 1σ, 1'' radius aperture), and image quality ≈0.75''. We discuss the strategies for the observations and data reduction, and present the first results from the analysis of the co-added images.
Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2018-04-01
This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After these calculations, groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted focusing on pairwise comparisons between original and enhanced images. The final-enhanced images generally have the best diagnostic quality and give more details about the visibility of vessels and structures in capsule endoscopy images.
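On one channel, the scheme reduces to overlapped 2x2 group averages followed by a half sum with the original; a minimal sketch, where the 2x2 moving average stands in for the half-unit bilinear weighting:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def half_unit_enhance(channel):
    channel = channel.astype(np.float32)
    prelim = uniform_filter(channel, size=2)   # overlapped 2x2 group averages
    return 0.5 * (channel + prelim)            # half sum of original and preliminary

def enhance_rgb(rgb):
    # Process each RGB component directly, without any color-space conversion.
    return np.stack([half_unit_enhance(rgb[..., c]) for c in range(3)], axis=-1)
```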
Image superresolution of cytology images using wavelet based patch search
NASA Astrophysics Data System (ADS)
Vargas, Carlos; García-Arteaga, Juan D.; Romero, Eduardo
2015-01-01
Telecytology is a new research area that holds the potential of significantly reducing the number of deaths due to cervical cancer in developing countries. This work presents a novel super-resolution technique that couples high and low frequency information in order to reduce the bandwidth consumption of cervical image transmission. The proposed approach starts by decomposing into wavelets the high resolution images and transmitting only the lower frequency coefficients. The transmitted coefficients are used to reconstruct an image of the original size. Additional details are added by iteratively replacing patches of the wavelet reconstructed image with equivalent high resolution patches from a previously acquired image database. Finally, the original transmitted low frequency coefficients are used to correct the final image. Results show a higher signal to noise ratio in the proposed method over simply discarding high frequency wavelet coefficients or replacing directly down-sampled patches from the image-database.
Person-independent facial expression analysis by fusing multiscale cell features
NASA Astrophysics Data System (ADS)
Zhou, Lubing; Wang, Han
2013-03-01
Automatic facial expression recognition is an interesting and challenging task. To achieve satisfactory accuracy, deriving a robust facial representation is especially important. A novel appearance-based feature, the multiscale cell local intensity increasing pattern (MC-LIIP), is presented to represent facial images and conduct person-independent facial expression analysis. The LIIP uses a decimal number to encode the texture or intensity distribution around each pixel via pixel-to-pixel intensity comparison. To boost noise resistance, MC-LIIP carries out the comparison computation on the average values of scalable cells instead of individual pixels. The facial descriptor fuses region-based histograms of MC-LIIP features from various scales, so as to encode not only the textural microstructures but also the macrostructures of facial images. Finally, a support vector machine classifier is applied for expression recognition. Experimental results on the CK+ and Karolinska Directed Emotional Faces databases show the superiority of the proposed method.
Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco
2013-01-01
A virtual instrumentation (VI) system called the VI localized corrosion image analyzer (LCIA), based on LabVIEW 2010, was developed, allowing rapid, automatic, and subjective-error-free determination of the number of pits on large corroded specimens. The VI LCIA synchronously controls the digital microscope image acquisition and its analysis, finally producing a map file containing the coordinates of the detected zones on the investigated specimen that probably contain pits. The pit area, traverse length, and density are also determined by the VI using binary large object (blob) analysis. The resulting map file can then be used by a scanning vibrating electrode technique (SVET) system for a rapid (one-pass) "true/false" SVET check of the probable zones only, passing through the pit centers and thus avoiding a scan of the entire specimen. A complete SVET scan over the zones already proved "true" can then determine the corrosion rate in any of the zones.
Optimal wavelength band clustering for multispectral iris recognition.
Gong, Yazhuo; Zhang, David; Shi, Pengfei; Yan, Jingqi
2012-07-01
This work explores the possibility of clustering spectral wavelengths based on the maximum dissimilarity of iris textures. The eventual goal is to determine how many bands of spectral wavelengths will be enough for iris multispectral fusion and to find these bands that will provide higher performance of iris multispectral recognition. A multispectral acquisition system was first designed for imaging the iris at narrow spectral bands in the range of 420 to 940 nm. Next, a set of 60 human iris images that correspond to the right and left eyes of 30 different subjects were acquired for an analysis. Finally, we determined that 3 clusters were enough to represent the 10 feature bands of spectral wavelengths using the agglomerative clustering based on two-dimensional principal component analysis. The experimental results suggest (1) the number, center, and composition of clusters of spectral wavelengths and (2) the higher performance of iris multispectral recognition based on a three wavelengths-bands fusion.
MALDI-MS analysis and imaging of small molecule metabolites with 1,5-diaminonaphthalene (DAN).
Korte, Andrew R; Lee, Young Jin
2014-08-01
1,5-Diaminonaphthalene (DAN) has previously been reported as an effective matrix for matrix-assisted laser desorption ionization-mass spectrometry of phospholipids. In the current work, we investigate the use of DAN as a matrix for small-metabolite analysis in negative ion mode. DAN was found to provide ionization superior to the compared matrices for MW < ~400 Da; however, 9-aminoacridine (9-AA) was found to be superior for a uridine diphosphate standard (MW 566 Da). DAN was also found to provide a more representative profile of a natural phospholipid mixture than 9-AA. Finally, DAN and 9-AA were applied for imaging of metabolites directly from corn leaf sections. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Improved wheal detection from skin prick test images
NASA Astrophysics Data System (ADS)
Bulan, Orhan
2014-03-01
Skin prick test is a commonly used method for diagnosis of allergic diseases (e.g., pollen allergy, food allergy, etc.) in allergy clinics. The results of this test are the erythema and wheal provoked on the skin where the test is applied. The sensitivity of the patient to a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection from these images is an important step for precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region by detecting calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. We finally perform morphological operations on the contrast-enhanced image to detect the wheal on the image plane. Our experiments performed on images acquired from 36 different patients show the efficiency of the proposed method for wheal detection from skin prick test images captured in an uncontrolled environment.
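The contrast-enhancement core (discard luminance, then PCA on the chroma channels) can be sketched as follows; the Otsu threshold and opening kernel are assumptions standing in for the paper's exact morphology step:

```python
import cv2
import numpy as np

def wheal_mask(bgr):
    ycc = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    chroma = ycc[..., 1:].reshape(-1, 2)            # keep Cr and Cb, drop luminance
    chroma -= chroma.mean(axis=0)
    _, _, vt = np.linalg.svd(chroma, full_matrices=False)
    proj = (chroma @ vt[0]).reshape(bgr.shape[:2])  # first principal component
    enhanced = cv2.normalize(proj, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove small blobs
```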
Shahbeig, Saleh; Pourghassem, Hossein
2013-01-01
Optic disc or optic nerve (ON) head extraction in retinal images has widespread applications in retinal disease diagnosis and human identification in biometric systems. This paper introduces a fast and automatic algorithm for detecting and extracting the ON region accurately from retinal images without using blood-vessel information. In this algorithm, to compensate for destructive illumination changes and to enhance the contrast of the retinal images, we estimate the background illumination and apply an adaptive correction function to the curvelet transform coefficients of the retinal images. In other words, we eliminate the confounding factors and pave the way for exact extraction of the ON region. Then, we detect the ON region in the retinal images using morphological operators based on geodesic transformations, by applying a proper adaptive correction function to the reconstructed image's curvelet transform coefficients together with a novel, powerful criterion. Finally, using local thresholding on the detected area of the retinal images, we extract the ON region. The proposed algorithm is evaluated on the available images of the DRIVE and STARE databases. The experimental results indicate that the proposed algorithm achieves accuracy rates of 100% and 97.53% for ON extraction on the DRIVE and STARE databases, respectively.
Effect of masking phase-only holograms on the quality of reconstructed images.
Deng, Yuanbo; Chu, Daping
2016-04-20
A phase-only hologram modulates the phase of the incident light and diffracts it efficiently with low energy loss because of the minimum absorption. Much research attention has been focused on how to generate phase-only holograms, and little work has been done to understand the effect and limitation of their partial implementation, possibly due to physical defects and constraints, in particular as in the practical situations where a phase-only hologram is confined or needs to be sliced or tiled. The present study simulates the effect of masking phase-only holograms on the quality of reconstructed images in three different scenarios with different filling factors, filling positions, and illumination intensity profiles. Quantitative analysis confirms that the width of the image point spread function becomes wider and the image quality decreases, as expected, when the filling factor decreases, and the image quality remains the same for different filling positions as well. The width of the image point spread function as derived from different filling factors shows a consistent behavior to that as measured directly from the reconstructed image, especially as the filling factor becomes small. Finally, mask profiles of different shapes and intensity distributions are shown to have more complicated effects on the image point spread function, which in turn affects the quality and textures of the reconstructed image.
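The masking scenario is easy to reproduce numerically: build a naive Fourier phase-only hologram, zero part of its aperture, and reconstruct. A sketch, where the centred square mask, square image, and uniform illumination are assumptions:

```python
import numpy as np

def masked_reconstruction(target, filling_factor=0.5):
    phase = np.angle(np.fft.fft2(target))            # naive phase-only hologram
    holo = np.fft.fftshift(np.exp(1j * phase))       # centre the aperture
    n = target.shape[0]
    side = int(n * np.sqrt(filling_factor))          # square mask, area = filling factor
    mask = np.zeros_like(holo)
    lo = (n - side) // 2
    mask[lo:lo + side, lo:lo + side] = 1.0           # confine the hologram
    return np.abs(np.fft.ifft2(np.fft.ifftshift(holo * mask)))
```

Sweeping `filling_factor` toward zero in this toy model reproduces the qualitative trend reported above: the point spread function widens and the reconstructed image quality drops.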
Seong, Nak Jong; Kim, Bohyoung; Lee, Sungmin; Park, Hee Sun; Kim, Hyuk Jung; Woo, Hyunsik; Kang, Heung-Sik; Lee, Kyoung Ho
2014-07-01
The purpose of this study was to simulate a mobile consultation in patients with an inconclusive diagnosis of appendicitis made by on-call radiologists, as well as to measure the diagnostic confidence and performance of the mobile consultation. Two off-site abdominal radiologists used a smartphone to interpret the CT images of 68 patients (including 29 patients with confirmed appendicitis) for whom the preliminary CT reports by 25 in-house on-call radiologists were inconclusive. The smartphone readings were compared with the preliminary reports by the on-call radiologists and with the original final reports by in-house abdominal radiologists. Heat maps, kappa statistics, Wilcoxon signed-rank tests, and ROC curves were used for data analysis. The heat maps and kappa statistics showed that the smartphone readings were more similar to the final reports than to the preliminary reports. In diagnosing or ruling out appendicitis, the smartphone readings were more confident than the preliminary reports (p ≤ 0.01) and did not differ significantly in diagnostic confidence from the final reports (p ≥ 0.19). The AUCs of the smartphone readings (0.91 and 0.92) did not differ significantly from those of the preliminary (0.85) or final (0.97) reports (p ≥ 0.09). With the given study sample, the diagnostic performance of the off-site smartphone readings did not differ significantly from that of the in-house preliminary reports. However, the smartphone readings provided higher diagnostic confidence than the preliminary reports.
Magnetic resonance imaging-ultrasound fusion biopsy for prediction of final prostate pathology.
Le, Jesse D; Stephenson, Samuel; Brugger, Michelle; Lu, David Y; Lieu, Patricia; Sonn, Geoffrey A; Natarajan, Shyam; Dorey, Frederick J; Huang, Jiaoti; Margolis, Daniel J A; Reiter, Robert E; Marks, Leonard S
2014-11-01
We explored the impact of magnetic resonance imaging-ultrasound fusion prostate biopsy on the prediction of final surgical pathology. A total of 54 consecutive men undergoing radical prostatectomy at UCLA after fusion biopsy were included in this prospective, institutional review board approved pilot study. Using magnetic resonance imaging-ultrasound fusion, tissue was obtained from a 12-point systematic grid (mapping biopsy) and from regions of interest detected by multiparametric magnetic resonance imaging (targeted biopsy). A single radiologist read all magnetic resonance imaging, and a single pathologist independently rereviewed all biopsy and whole mount pathology, blinded to prior interpretation and matched specimen. Gleason score concordance between biopsy and prostatectomy was the primary end point. Mean patient age was 62 years and median prostate specific antigen was 6.2 ng/ml. Final Gleason score at prostatectomy was 6 (13%), 7 (70%) and 8-9 (17%). A tertiary pattern was detected in 17 (31%) men. Of 45 high suspicion (image grade 4-5) magnetic resonance imaging targets 32 (71%) contained prostate cancer. The per core cancer detection rate was 20% by systematic mapping biopsy and 42% by targeted biopsy. The highest Gleason pattern at prostatectomy was detected by systematic mapping biopsy in 54%, targeted biopsy in 54% and a combination in 81% of cases. Overall 17% of cases were upgraded from fusion biopsy to final pathology and 1 (2%) was downgraded. The combination of targeted biopsy and systematic mapping biopsy was needed to obtain the best predictive accuracy. In this pilot study magnetic resonance imaging-ultrasound fusion biopsy allowed for the prediction of final prostate pathology with greater accuracy than that reported previously using conventional methods (81% vs 40% to 65%). If confirmed, these results will have important clinical implications. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
ESO imaging survey: optical deep public survey
NASA Astrophysics Data System (ADS)
Mignano, A.; Miralles, J.-M.; da Costa, L.; Olsen, L. F.; Prandoni, I.; Arnouts, S.; Benoist, C.; Madejsky, R.; Slijkhuis, R.; Zaggia, S.
2007-02-01
This paper presents new five-passband (UBVRI) optical wide-field imaging data accumulated as part of the DEEP Public Survey (DPS), carried out by the ESO Imaging Survey (EIS) project. Out of the 3 square degrees originally proposed, the survey covers 2.75 square degrees in at least one band (normally R), and 1.00 square degree in all five passbands. The median seeing, as measured in the final stacked images, is 0.97 arcsec, ranging from 0.75 arcsec to 2.0 arcsec. The median limiting magnitudes (AB system, 2'' aperture, 5σ detection limit) are U_AB = 25.65, B_AB = 25.54, V_AB = 25.18, R_AB = 24.8 and I_AB = 24.12 mag, consistent with those proposed in the original survey design. The paper describes the observations and data reduction using the EIS Data Reduction System and its associated EIS/MVM library. The quality of the individual images was inspected, bad images were discarded, and the remaining ones were used to produce final image stacks in each passband, from which sources have been extracted. The scientific quality of these final images and associated catalogs was assessed qualitatively by visual inspection and quantitatively by comparing statistical measures derived from these data with those of other authors as well as model predictions, and by direct comparison with the results obtained from the reduction of the same dataset using an independent (hands-on) software system. Finally, to illustrate one application of this survey, the results of a preliminary effort to identify sub-mJy radio sources are reported. To the limiting magnitudes reached in the R and I passbands, the success rate ranges from 66% to 81% (depending on the field). These data are publicly available at CDS. Based on observations carried out at the European Southern Observatory, La Silla, Chile under program Nos. 164.O-0561, 169.A-0725, and 267.A-5729. Appendices A, B and C are only available in electronic form at http://www.aanda.org
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. Armando Oliu, Final Inspection Team lead for the Shuttle program, speaks to reporters about the aid the Image Analysis Lab is giving the FBI in a kidnapping case. Behind him at right is Mike Rein, External Affairs division chief. Oliu oversees the image lab that is using an advanced SGI TP9500 data management system to review the tape of the kidnapping in progress in Sarasota, Fla. KSC installed the new $3.2 million system in preparation for Return to Flight of the Space Shuttle fleet. The lab is studying the Sarasota kidnapping video to provide any new information possible to law enforcement officers. KSC is joining NASA's Marshall Space Flight Center in Alabama in reviewing the tape.
Analytical and experimental design and analysis of an optimal processor for image registration
NASA Technical Reports Server (NTRS)
Mcgillem, C. D. (Principal Investigator); Svedlow, M.; Anuta, P. E.
1976-01-01
The author has identified the following significant results. A quantitative measure of the registration processor accuracy in terms of the variance of the registration error was derived. With the appropriate assumptions, the variance was shown to be inversely proportional to the square of the effective bandwidth times the signal to noise ratio. The final expressions were presented to emphasize both the form and simplicity of their representation. In the situation where relative spatial distortions exist between images to be registered, expressions were derived for estimating the loss in output signal to noise ratio due to these spatial distortions. These results are in terms of a reduction factor.
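The stated proportionality can be written compactly; a hedged reconstruction of the result, where the symbol names are assumed: σ²_e denotes the registration error variance and B_e the effective bandwidth.

```latex
\sigma_e^{2} \;\propto\; \frac{1}{B_e^{2}\,\mathrm{SNR}}
```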
Calipso's Mission Design: Sun-Glint Avoidance Strategies
NASA Technical Reports Server (NTRS)
Mailhe, Laurie M.; Schiff, Conrad; Stadler, John H.
2004-01-01
CALIPSO will fly in formation with the Aqua spacecraft to obtain coincident images of a portion of the Aqua/MODIS swath. Since MODIS pixels suffering sun-glint degradation are not processed, it is essential that CALIPSO only co-image the glint-free portion of the MODIS instrument swath. This paper presents sun-glint avoidance strategies for the CALIPSO mission. First, we introduce the Aqua sun-glint geometry and its relation to the CALIPSO-Aqua formation flying parameters. Then, we detail our implementation of the computation and perform a cross-track trade-space analysis. Finally, we analyze the impact of the sun-glint avoidance strategy on the spacecraft power and delta-V budgets over the mission lifetime.
Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.
Han, Youkyung; Oh, Jaehong
2018-05-17
For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded up robust features for VHR multi-temporal images, has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in an approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.
Hattab, Georges; Schlüter, Jan-Philip; Becker, Anke; Nattkemper, Tim W.
2017-01-01
In order to understand gene function in bacterial life cycles, time-lapse bioimaging is applied in combination with different marker protocols in so-called microfluidics chambers (i.e., a multi-well plate). In one experiment, a series of T images is recorded for one visual field, with a pixel resolution of 60 nm/px. Any (semi-)automatic analysis of the data is hampered by strong image noise, low contrast and, last but not least, considerable irregular shifts during acquisition. Image registration corrects such shifts, enabling the next steps of the analysis (e.g., feature extraction or tracking). Image alignment faces two obstacles in this microscopic context: (a) highly dynamic structural changes in the sample (i.e., colony growth) and (b) a data set-specific sample environment that makes the application of landmark-based alignments almost impossible. We present a computational image registration solution, which we refer to as ViCAR: (Vi)sual (C)ues based (A)daptive (R)egistration, for such microfluidics experiments, consisting of (1) the detection of particular polygons (outlined and segmented ones, referred to as visual cues), (2) the adaptive retrieval of three coordinates throughout different sets of frames, and finally (3) an image registration based on the relation of these points, correcting both rotation and translation. We tested ViCAR with different data sets and found that it provides an effective spatial alignment, thereby paving the way to extracting temporal features pertinent to each resulting bacterial colony. By using ViCAR, we achieved an image registration with 99.9% image closeness, based on an average rmsd of 4×10⁻² pixels, and superior results compared to a state-of-the-art algorithm. PMID:28620411